GB2457215A - Automatic 3D Modelling - Google Patents


Info

Publication number
GB2457215A
Authority
GB
United Kingdom
Prior art keywords
building
points
lidar
roof
models
Prior art date
Legal status
Withdrawn
Application number
GB0704368A
Other versions
GB0704368D0 (en)
Inventor
Nikolaos Kokkas
Current Assignee
Individual
Original Assignee
Individual
Priority date
Filing date
Publication date
Application filed by Individual filed Critical Individual
Priority claimed from application GB0704368A
Published as GB0704368D0 and GB2457215A
Legal status: Withdrawn

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00: Three-dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/05: Geographic models


Abstract

Automated 3D city modelling is achieved by fusing airborne optical data with LiDAR point clouds or DSMs (Digital Surface Models). A building hypothesis is generated by fusing LiDAR data or DSMs with stereo matched points. Building detection and generation of the building footprint may be achieved using a plane fitting algorithm on the LiDAR point cloud or DSM, with conditions based on the roof slope and a minimum building size. The initial building footprint may subsequently be generalized using a simplification algorithm that enhances the orthogonality between the individual linear segments. Final refinement of the building outline may be performed for each linear segment using filtered stereo matched points with a least squares estimation. Roof reconstruction may be performed by implementing a least squares plane fitting algorithm on LiDAR or DSM data, restricted by the building outline, the minimum size of the planes and a maximum height tolerance between adjacent points. Subsequently, neighbouring planes may be merged using Boolean operations to generate polyhedral models.

Description

Patent Specification Title
Geodata fusion for automated 3D City Modelling
Background of the Invention
1. Field of the invention
More than 50% of the world population lives in urban/suburban areas, so detailed and up-to-date building information is of great importance to every resident, government agencies, and private companies. Public government agencies as well as private companies spend millions of dollars each year obtaining aerial photographs and other forms of remotely sensed data.
The large amount of airborne data acquired every year requires automated solutions for post-processing in order to extract 3D City Models in a consistent and cost-effective way.
Currently the cost of producing detailed 3D City Models of urban areas is estimated at £6,000-£10,000 per square kilometre, which, even for current state-of-the-art commercial systems, requires 4-5 working days for several operators. In order to improve the situation, fully automated solutions for extracting 3D City Models in a consistent and cost-effective way are required.
3D City Modelling is of primary importance in many applications, including urban planning, telecommunication network planning and vehicle navigation, which are of increasing importance in urban areas. Unfortunately, manual reconstruction using photogrammetric techniques is time consuming and not a cost-effective solution. Automatic 3D City Modelling from multiple aerial images is, however, a difficult problem due to the presence of shadows, occluded regions introduced by vegetation, and the perspective geometry of the scene. Most automatic approaches have to account for the complexity of roof structures and thus supply a general approach that can model most of the buildings present in a scene. At the same time, simplification of the roof shape is needed to provide a fast and reliable solution. Currently most approaches are not designed to reconstruct roof details such as dormer windows, chimneys and small building recesses, except where LiDAR data with very high point density are available. In contrast, the invention presents an automatic process that is efficient in a variety of different situations. The invention produces reliable results with data collected from different sensors and for projects with different specifications regarding geometric accuracy and the desired level of detail.
Most of the alternative approaches overcome these difficulties by introducing some kind of external knowledge, either as models of buildings (gable, hip and flat), which reduces the generality of the approach, or as constraints on primitives extracted from aerial images.
The invention uses a unique geodata fusion process between LiDAR or Digital Surface Models (DSMs) and point features extracted from aerial photography for automated 3D City Modelling, combining the advantages of the increased vertical accuracy usually provided by LiDAR data with the precise location of the building outlines and roof details derived from aerial images.
2. Description of related art
Automatic building reconstruction from aerial images is a difficult problem due to occlusions, introduced mainly by vegetation, scene perspective and the complexity of the roof structures. In the context of 3D City Modelling from multiple aerial images the proposed methods can be categorized as data-based or model-based. In the data-based methods (Heuel et al., 2000; Ameri and Fritsch, 2000; Scholze et al., 2002) the building reconstruction is performed without any assumption about the structure of the roof and therefore no restrictions are introduced in the method. In contrast, the model-based methods (Willuhn and Van Gool, 2005) use models of buildings to restrict the set of possible shapes. This external knowledge enables the user to overcome missed detections due to occlusions, as well as over-detection.
In the data-based methods it is very common to extract primitives from the imagery to assist in the reconstruction process. In Scholze et al. (2002) 3D segments are extracted, edges are extracted in Heuel et al. (2000), and planar patches are used in Ameri and Fritsch (2000); heuristic rules have been introduced to address the lack of generality inherent in these strategies (Fischer et al., 1998). Despite the promising results, both approaches are still limited to simple forms and thus cannot handle all the shapes found in urban or suburban areas, owing to the limited number of models. Increasing the library of models would result in increased complexity and a less robust method (Taillandier and Deriche, 2004).
Most methods for building reconstruction using only aerial photography divide the task into a two-phase approach: a detection phase first extracts the locations of individual buildings, and a reconstruction phase then follows.
The main objective of the building reconstruction solely using LiDAR point clouds is to extract surfaces from the dataset. In general these methods can be divided into two categories. The first includes the methods that directly derive the surface parameters in a parameter space by clustering the point cloud, which can be a very effective and robust approach when planes or other simple shapes are extracted. The second category includes methods that segment a point cloud based on criteria, like proximity of points or similarity of locally estimated surfaces (Vosselman et al. 2004).
Several approaches have been presented for building extraction from laser altimeter data.
Maas and Vosselman (1999) extracted the parameters of the standard gable roof type using invariant moment analysis. The method was based on the intersection of planes fitted into a TIN model, which had the ability to determine even more complex buildings. Merging of TIN meshes was used by Gorte (2002) in order to compose the surfaces of the polyhedral building models. In this method the initial planar surfaces are created by the TIN mesh and then adjacent planar patches are merged if their plane equations are similar. The merging process is based on a similarity measure that is computed for each pair of neighbouring surfaces, and continues until there are no more similar adjacent surfaces.
Additional cues were used by Wang (1998), who implemented a Laplacian of Gaussian edge detector to extract edges from a DSM produced from LiDAR data. Moment analysis was used to describe edge properties, while shape and morphological parameters were used to distinguish building edges from other features.
One of the most frequent methods for plane extraction, used for polyhedral modeling, is the 3D Hough transform. The 3D Hough transform is an extension of the (2D) Hough transform used for the extraction of line segments in imagery.
One of the major problems related to the building extraction process solely from LiDAR point clouds is the discrimination between buildings and vegetation. In addition, most approaches require very high density point clouds to work effectively, and even then the accuracy of the derived building outlines as well as the roof details is inferior to features extracted from aerial photography. In contrast, the invention produces reliable and accurate 3D City Models even with low density LiDAR data that can be supplemented with DSMs extracted from aerial imagery.
Statement of Invention
From the previous section it is evident that the efficiency of the alternative approaches is strongly related to the data at hand. In contrast, the invention is insensitive to the quality of the data and the complexity of the urban areas. The invention's process consists of four major stages: automatic feature extraction from the aerial imagery, building detection from LiDAR data or DSMs, adjustment of the building outlines and small roof details using a geodata fusion process, and finally the building reconstruction stage.
For the process of feature extraction, the locations of the edges constituting the vertical walls and additional roof features are extracted initially from the LiDAR point cloud or DSM and subsequently refined based on the point features derived from the aerial imagery. The points extracted from stereo pairs of aerial images are matched in the stereo model space and then projected into the object space. The information from the point features will not only refine and improve the accuracy of the building outlines but also provide information for smaller roof details that are not modelled correctly when only coarse LiDAR data or a DSM is used. A major difference of the invention is that the adjustment of the building outline and related features is performed using a least squares adjustment and not the Hough transform used in Chen et al. (2004).
The invention classifies the vegetation using LiDAR data or a DSM by employing a process that scans the surface and matches tree shapes against a library of several tree types.
The main difference of the invention compared to previous approaches is the improved robustness gained by using additional information from aerial images and a library of several tree types. The classification of low features above the ground is performed after the classification of the ground points, whereby low features that are not related to buildings are filtered using a range of relative heights above the ground surface.
The process of building detection and generating the building hypothesis is based on the detection of planar patches in LiDAR point clouds or in a DSM. The initial building hypothesis is subsequently refined by merging the linear features extracted from the aerial images. The adjustment of the building outline is performed for each linear segment using a least squares estimation.
During the roof reconstruction process the invention utilizes the adjusted building boundaries, accurately representing the outline of the roof face, to subsequently yield the precise location of the vertical walls. In addition, small roof details are efficiently reconstructed by merging the linear features with the 3D surface patches extracted from the LiDAR point cloud. Subsequently, neighbouring planes are merged using Boolean operations to generate solid features. Figure 1 summarises the key stages of the invention.
Advantages
The invention is based on state-of-the-art processes which automate production and minimise manual interaction even in the most complicated urban areas. It produces highly detailed models comparable only to traditional manual techniques, while achieving a 7-8 times faster production rate at 1/3 of the cost. The solution adapts to customer requirements depending on the level of detail and accuracy they require. Similar advantages are gained for producing alternative geospatial products that require 3D City Models, such as True Orthophotos: distortion-free and highly accurate processed aerial images, which form the basic geographic layer for every GIS-related application.
* Disruptive technology - an automated solution for 3D city modelling, minimising manual interaction and preserving reliability even for the most complicated building structures.
* Highly detailed and accurate models, comparable only to 3D models derived by traditional, time-consuming manual methods.
* Increased productivity, minimising time requirements and cost: production of 3D building models in a matter of hours per square kilometre with minimal manual input, a 7-8 times increase in production speed at between 1/2 and 1/3 of the cost.
* Scalable and flexible product characteristics: compatible with different raw data sources and adaptable to customer requirements.
* The unique advantages of the 3D City Modelling solution improve cost and time efficiency for alternative geospatial products, such as True Orthophotos, that require City Models.
Introduction to the drawings
(27 Drawing sheets)
Figure 1 shows a diagram indicating the overall process of the invention.
Figure 2 shows extracted edges from aerial photography using the Sobel edge operator.
Figure 3 shows extracted stereo matched points from aerial images defining the building outline and surrounding features.
Figure 4 shows extracted conjugate points from aerial photography overlaid on a Digital Surface Model.
Figure 5 Extraction of conjugate points from aerial photography for adjusting the building outlines.
Figure 6 Diagram indicating the invention's process for the building detection stage.
Figure 7 Perspective scene of the combined LiDAR data, visualized as a colour-coded shaded relief map.
Figure 8 Shape of the generic tree models used to scan over the entire LiDAR point cloud or the Digital Surface Model.
Figure 9 Results from the tree detection using the two tree models over LiDAR point clouds or DSMs. Top view of the study area (left), TIN model with tree points superimposed (right).
Figure 10 Example of undetected individual trees present in the area.
Figure 11 Diagram illustrating the iterative selection of new points at the ground surface.
Figure 12 Resulting ground surface points (orange points), classified using the iterative selection process.
Figure 13 Classified points (red) representing the ground surface, overlaid on the shaded relief map of the LiDAR data, with unclassified ground regions.
Figure 14 Classified features and low vegetation with height in the range of 0-2.5m, superimposed on the TIN model.
Figure 15 Remaining unclassified tree crowns and points representing building roof tops.
Figure 16 Top view of the initial building detection: laser points representing buildings superimposed on the colour-coded shaded relief map.
Figure 17 Remaining unclassified points (white features) representing individual trees and roof details, overlaid on a TIN model.
Figure 18 Perspective view of the initial building detection: building points superimposed on the colour-coded shaded relief map.
Figure 19 Digital Surface Model of the initially detected buildings, with surrounding features in the background.
Figure 20 Polygon layer of the building hypothesis, produced from the raster to vector conversion process.
Figure 21 Selected unclassified points located within the building hypothesis.
Figure 22 Incorrectly classified building points (highlighted in blue ellipses), superimposed on a rectified true colour composite.
Figure 23 Diagram indicating the overall process for the geodata fusion stage and adjusting the building outlines.
Figure 24 Void areas introduced by the classified ground and low feature points (brown points).
Figure 25 Perspective scene of reconstructed roof planes (red boundary), with building points superimposed over the ground TIN.
Figure 26 Extruded roof planes on the ground surface (grey lines) representing the vertical facades of the buildings.
Figure 27 Resulting building outline from the procedure of topology generation and spatial cleaning.
Figure 28 Oversimplified building outline as a function of the increased linear tolerance relative to the size of the building footprint.
Figure 29 Simplified building outline (green boundary) compared with the initial building footprint (blue boundary), superimposed on a TIN model.
Figure 30 Generated buffer zones of width 25cm at increments of 25cm around the linear segments constituting the building outline.
Figure 31 Filtered conjugate points derived from the stereo matching process (red points), based on incremental buffer regions around the simplified building outline (green polyline).
Figure 32 Individual linear segments (blue lines) after the least squares adjustment using the stereo matched points (red).
Figure 33 Initial simplified building outline (green boundary) versus adjusted building footprint, superimposed on a TIN model.
Figure 34 Diagram indicating the overall workflow for the building reconstruction process.
Figure 35 Projected building footprints at ground level.
Figure 36 Visualization of the reconstructed planes (hidden lines excluded), showing successful reconstruction of small roof details (ventilation equipment, dormers etc.).
Figure 37 Perspective view with the reconstructed roofs and vertical building facades (hidden lines excluded).
Figure 38 Example of the implementation of Constructive Solid Geometry to automatically merge adjacent polyhedral facets.
Figure 39 Perspective scene of the final building reconstruction for the entire study area. Rendered scene with global illumination and Phong shading.
Figure 40 Resulting solid building models (rendered models, smooth shading) from the implementation of Constructive Solid Geometry on the planar facets.
Figure 41 Roof details in the 3D City Models.
Figure 42 Deficiencies introduced as small intrusions during the plane merging function.
Figure 43 Results from the implementation of the invention over central London. 3D City Models were automatically textured by draping aerial photography.
Figure 44 Results over Heerbrugg, Switzerland (Leica facilities): 3D City Models automatically derived using the invention's process.
Detailed description of invention
1. Extracting features from optical data
The process is initially concerned with extracting conjugate points from the available stereo pair of images. There are three main steps for extracting the desired features from the optical data:
* Select the most appropriate stereo pair, if multiple overlapping images exist, for the study area
* Apply an edge detector and add the extracted features to the radiometric values of the initial image
* Optimize the stereo matching algorithm for extracting conjugate points in urban areas

2. Selecting appropriate stereo pair
The selection of the most appropriate stereo model may seem a relatively unimportant step but, in the case of airborne digital sensors such as the ADS40, it becomes a vital parameter.
The reason is that the sensor collects 4 bands of data at three look angles, resulting in different combinations of stereo pairs. One of the parameters that must be taken into account is the importance of the base-to-height ratio for the vertical accuracy of the extracted conjugate points. Therefore, considering only the base-to-height ratio as a factor, a straightforward solution would be to use the 28° forward and 14° backward looking panchromatic bands.
At this stage, there are two major issues that should also be taken into consideration. That is, the possibility of the study area being located outside the overlapping region of the two look angles and also the occlusions introduced from the relief displacement. The latter issue is the most critical, because the stereo matched points are subsequently used for adjusting the building outline and therefore, having conjugate points representing the planimetric position of all the building façades is more important than having the highest vertical accuracy possible.
It is evident that a check mechanism should be introduced in the process to determine the location of the study area on the block of imagery. The proposed method incorporates the generalized collinearity equations for back-projecting a polygon enclosing the study area into the image space and checking whether it is within the extent of the image. The generalized collinearity equations are given as:

$$x = -f\,\frac{r_{11}(X - X_s) + r_{12}(Y - Y_s) + r_{13}(Z - Z_s)}{r_{31}(X - X_s) + r_{32}(Y - Y_s) + r_{33}(Z - Z_s)} \qquad y = -f\,\frac{r_{21}(X - X_s) + r_{22}(Y - Y_s) + r_{23}(Z - Z_s)}{r_{31}(X - X_s) + r_{32}(Y - Y_s) + r_{33}(Z - Z_s)}$$

where $(X_s, Y_s, Z_s)$ is the position of the perspective centre, $r_{ij}$ are the elements of the rotation matrix between object and image space, and $f$ is the focal length.
This check mechanism doesn't require any substantial user interaction since in most cases the polygon, enclosing the study region, is previously defined from the project specifications.
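The following minimal Python sketch illustrates this check under simplifying assumptions (a frame-centred image coordinate system and a single, time-invariant exterior orientation; the function and parameter names are illustrative, not taken from the patent):

import numpy as np

def back_project(points_xyz, focal, perspective_centre, rotation):
    # Collinearity: rotate object-space offsets into the image frame,
    # then divide by depth and scale by the focal length.
    d = np.asarray(points_xyz, dtype=float) - np.asarray(perspective_centre, dtype=float)
    u = d @ np.asarray(rotation).T
    x = -focal * u[:, 0] / u[:, 2]
    y = -focal * u[:, 1] / u[:, 2]
    return np.column_stack([x, y])

def polygon_within_image(image_xy, half_width, half_height):
    # Accept the look angle only if every back-projected vertex of the
    # study-area polygon falls inside the image extent.
    x, y = image_xy[:, 0], image_xy[:, 1]
    return bool(np.all((np.abs(x) <= half_width) & (np.abs(y) <= half_height)))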
From the discussion so far it is evident that the optimum selection is to have a stereo pair of the nadir and 14° backward looking panchromatic bands in order to have a small convergence angle and thus decreased occluded regions.
3. Applying edge detection algorithm
The selected edge detector implemented for extracting edge features was the Sobel edge detection operator. In principle this operator uses the first derivative of a continuous function, which is approximated by the following equation (Schenk, 1999):
$$\frac{\partial f}{\partial x} \approx \frac{f(x+1,\,y) - f(x,\,y)}{(x+1) - x} = f(x+1,\,y) - f(x,\,y) \qquad \frac{\partial f}{\partial y} \approx \frac{f(x,\,y+1) - f(x,\,y)}{(y+1) - y} = f(x,\,y+1) - f(x,\,y)$$

The above equation leads to the concept of the image gradient for a discrete digital representation. Nevertheless, in order to be more computationally efficient, the algorithm uses a kernel window of size 3x3 to scan the entire digital image. The defined window has the following form:
A B C
D E F
G H I
With the utilization of the above kernel window the updated value for the central pixel is calculated based on the following equation.
$$S = \sqrt{X^2 + Y^2}$$

where

$$X = (C + 2F + I) - (A + 2D + G) \qquad Y = (A + 2B + C) - (G + 2H + I)$$

Although there is a variety of edge detection operators, Sobel is one of the most widely used operators with reliable results. The extracted edges from the implementation of the Sobel operator with a 3x3 kernel size are illustrated in figure 2.
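A minimal Python sketch of this computation over a whole image, assuming a 2-D grayscale array (the vectorised slicing is an implementation choice, not from the patent):

import numpy as np

def sobel_magnitude(img):
    # Gradient magnitude S = sqrt(X^2 + Y^2) using the 3x3 neighbourhood
    # A..I described above; edge padding keeps the output size.
    p = np.pad(img.astype(float), 1, mode="edge")
    A = p[:-2, :-2]; B = p[:-2, 1:-1]; C = p[:-2, 2:]
    D = p[1:-1, :-2]; F = p[1:-1, 2:]
    G = p[2:, :-2]; H = p[2:, 1:-1]; I = p[2:, 2:]
    X = (C + 2 * F + I) - (A + 2 * D + G)   # horizontal gradient
    Y = (A + 2 * B + C) - (G + 2 * H + I)   # vertical gradient
    return np.sqrt(X ** 2 + Y ** 2)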
The motivation for applying an edge detector is to increase the probability of the stereo matching algorithm producing as many conjugate points as possible along the enhanced linear features.
Subsequently the extracted edges are merged with the initial aerial imagery in order to enhance the discrimination of the linear features present in the optical data. This process has no impact on the geometric properties of the image, since it influences only the radiometric values of the pixels coinciding with the detected edges.
4. Extracting conjugate points
In order to implement feature based matching, the image features must initially be extracted.
There are several well-known operators for feature point extraction. Examples include the Moravec operator, the Dreschler operator and the Förstner operator.
After the features are extracted, the attributes of the features are compared between two images. The feature pair having the attributes with the best fit is recognized as a match. For estimating the best fit of the extracted features, least squares correlation techniques can also be employed. Least squares correlation uses the least squares estimation to derive parameters in order to optimize the match between the interest points. It accounts for both gray scale and geometric differences, making it especially useful when ground features on one image look somewhat different on the other image (differences which occur when the surface terrain is quite steep or when the viewing angles are quite different).
Because of the large amount of image data, constraints such as epipolar geometry and image pyramid are usually adopted in order to reduce the computation time and to increase the reliability. The image pyramid is a data structure consisting of the same image represented several times, at a decreasing spatial resolution each time. Each level of the pyramid contains the image at a particular resolution. The matching process is performed at each level of resolution. The search is first performed at the lowest resolution level and subsequently at each higher level of the image pyramid.
The invention uses an area based stereo matching algorithm that calculates the cross correlation coefficient between the template window and the search window, in order to identify and match conjugate points according to the following formula:

$$\rho = \frac{\sum_{i,j}\,[g_1(c_1, r_1) - \bar{g}_1]\,[g_2(c_2, r_2) - \bar{g}_2]}{\sqrt{\sum_{i,j}\,[g_1(c_1, r_1) - \bar{g}_1]^2 \,\sum_{i,j}\,[g_2(c_2, r_2) - \bar{g}_2]^2}}$$

with

$$\bar{g}_1 = \frac{1}{n}\sum_{i,j} g_1(c_1, r_1) \qquad \bar{g}_2 = \frac{1}{n}\sum_{i,j} g_2(c_2, r_2)$$

where $\rho$ is the correlation coefficient, $g(c, r)$ is the DN value of the pixel $(c, r)$, $c_1, r_1$ are the pixel coordinates on the left image, $n$ is the total number of pixels in the window and $i, j$ are the pixel indices into the correlation window. Based on the cross correlation formula the invention optimizes three basic parameters in order to provide a reliable solution in urban areas. These parameters include the size of the search window, the size of the correlation (template) window and the correlation coefficient limit.
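A minimal Python sketch of this matching step, using the parameter values discussed below (search lengths of 20 and 5 pixels, a 3x3 template and a 0.80 limit); border handling and epipolar resampling are simplified assumptions:

import numpy as np

def ncc(template, window):
    # Normalised cross-correlation coefficient (the rho defined above).
    t = template.astype(float) - template.mean()
    w = window.astype(float) - window.mean()
    denom = np.sqrt((t ** 2).sum() * (w ** 2).sum())
    return 0.0 if denom == 0 else float((t * w).sum() / denom)

def match_point(left, right, c, r, search_x=20, search_y=5, half=1, limit=0.80):
    # Search a +/-search_x by +/-search_y area in the right image for the
    # best match of the 3x3 template centred at (c, r) in the left image;
    # candidates below the correlation coefficient limit are rejected.
    tpl = left[r - half:r + half + 1, c - half:c + half + 1]
    best_rho, best_pos = limit, None
    for dr in range(-search_y, search_y + 1):
        for dc in range(-search_x, search_x + 1):
            rr, cc = r + dr, c + dc
            if rr - half < 0 or cc - half < 0:
                continue
            win = right[rr - half:rr + half + 1, cc - half:cc + half + 1]
            if win.shape != tpl.shape:
                continue
            rho = ncc(tpl, win)
            if rho > best_rho:
                best_rho, best_pos = rho, (cc, rr)
    return best_pos, best_rho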
The search window defines the search area along the X and Y directions, where the X direction is equivalent to the epipolar lines (in aerial photos), in order to locate the conjugate points (in the right image) of the previously extracted interest points. The search length in the X direction is directly related to the amount of relief displacement present in the scene and thus, in urban cases, a value of 20 pixels is adequate. The search length in the Y direction is related to the geometric configuration of the stereo pair. In the normal case of a stereo pair consisting of aerial images, the epipolar lines pass over the same scan line in both images. Deviations from the normal case, or inaccurate results from the aerial triangulation, result in an increase in the search length along the Y direction. These deviations from the normal case are also increased in the case of pushbroom sensors. The invention utilizes a search length of 5 pixels in the Y direction, which provides adequate results over urban areas.
The correlation or template window defines the size of the area to be matched in the left image. This area defines the density of the interest points and subsequently the density of the stereo matched points. For regions containing a high degree of topographic relief this window should be small, to increase the density of the DSM. The invention uses a correlation window of size 3x3 in order to extract as many conjugate points as possible for the urban area.
The correlation coefficient limit defines the correlation coefficient threshold used to determine whether or not two points are considered possible matches. This parameter should be balanced between the desired reliability and density of the extracted points. The invention uses a value of 0.80 and it's apparent from the results that this value could be treated as a general guideline. At this point it should be noted that for the specific process, reliability is not as important as density is. The reason is that even mismatches in the extracted conjugate points will be removed in the step of data fusion, before adjusting the building outlines. Therefore having an adequate amount of conjugate points, describing most of the building footprints, is an important consideration.
Summarizing the proposed parameters for the automatic stereo matching: the search window is defined as 20 and 5 pixels along the X and Y directions respectively, the selected correlation window size is 3x3 and the correlation coefficient limit is 0.80.
Adaptive change of the previous parameters is not implemented between the levels of the image pyramids since alteration of the optimized parameters could decrease the performance over urban areas. In addition the stereo matching algorithm is applied only inside the polygon representing the study area in order to reduce the computation requirements. The results from the stereo matching are conjugate points in a 3D shapefile format and therefore no interpolation or DEM filtering is applied in the extracted features.
The extracted 3D points are illustrated in figures 3 and 4, superimposed on a Digital Surface Model created from the LiDAR data.
From figure 4 it is evident that the majority of the building outlines are adequately described by the extracted points. In general the results are acceptable and can subsequently be used for the next stages of adjusting the initial building footprints derived from the LiDAR point cloud. The stereo matching algorithm can additionally estimate the quality of the extracted points, which provides a rough indication of the overall performance. These quality statistics are in the form of a percentage per category, computed from external information such as GCPs available for the study area or the tie points calculated during the aerial triangulation.
These quality statistics are used for automating the stereo matching algorithm and minimizing any user interaction, as described in section 5.
5. Summarised process for feature extraction from optical data

Feature extraction from aerial imagery
Data: Multiple stereo pairs of aerial images, polygons of the areas of interest
Result: Stereo matched points
Begin
5.1 Select panchromatic bands for all look angles
    foreach interest area polygon do
        Back-project on image space and calculate image co-ordinates
        If image co-ordinates are within the 14° backward view then
            Select 14° backward look angle for stereo matching
        else
            Select 28° forward look angle for stereo matching
        end
        Select nadir look angle for stereo matching
    end
5.2 foreach selected image of the stereo pair do
        Apply Sobel edge operator with kernel size 3x3
        Merge extracted edges and change the DN values of the initial images
    end
5.3 foreach enhanced stereo pair do
        Back-project polygons of interest areas on image space
        Set parameters for stereo matcher:
            Search window size (x,y) = 20, 5
            Correlation window (x,y) = 3x3
            Correlation coefficient limit = 0.80
        Apply stereo matching on overlapping areas inside interest polygons
        Calculate quality statistics
        If "suspicious percentage" < 20% then
            Accept results and export 3D points in a shapefile format
        else
            Increase correlation coefficient limit by 0.02 and repeat from 5.3
        end
        Continue loop from point 5.3 until the "suspicious percentage" falls below 20%
    end
End

6. Classification of LiDAR data and building detection
This stage was briefly outlined in the overall diagram (figure 1), but consists of multiple steps. Figure 6 provides a summary of the individual processes implemented to detect buildings from LiDAR data or from a DSM.
The invention doesn't employ any information derived from the multispectral imagery, not even for delineating tree canopies. This choice for tree detection particularly contradicts the general notion of using multispectral information (calculating NDVI) that many researchers have proposed. The first reason for not applying classification techniques to the multispectral data is that the invention would then be restricted to multispectral imagery, without the ability to be applied to scanned aerial photographs. A textural classifier would be more appropriate at this point, since it can be applied in both cases. Despite the potential use of a textural classifier, there is a major disadvantage: both textural classifiers and multispectral classification are strongly dependent upon the seasonal conditions, in other words whether the tree canopies are in a leaf-on or leaf-off condition. In a leaf-off condition it is obvious that both methods will fail to delineate tree canopies, since there are neither textural differences nor high radiometric responses in the near infrared bands.
The disadvantages mentioned above are evident not only when both datasets have been acquired in the same period with the trees in a leaf-off condition, but must also be considered when the optical and LiDAR data have been acquired in different seasons.
Therefore the invention is designed taking the above considerations into account, and produces reliable results whether the tree canopies in the LiDAR data are in a leaf-on or leaf-off condition.
7. Combining LiDAR data from different flight paths
At this stage the individual LiDAR point clouds available from the two flight paths are merged together. This procedure is performed in order to increase the density of the point cloud. The advantage of a denser point cloud is the ability to reconstruct even smaller roof details during the plane fitting procedure, which results in a more detailed polyhedral model. The results of combining the two LiDAR point clouds are illustrated in figure 7 as a colour-coded, shaded relief image.
8. Delineating tree canopies from the LiDAR point cloud
This stage is implemented in order to apply an initial delineation of the tree canopies. The primary purpose is to perform a rough detection of large trees without necessarily detecting all types of vegetation present in the study area. For the tree detection, two generic tree models are utilized to scan the entire point cloud for similar structures. The structure of the tree models is indicated in figure 8. Once the scan has been completed using the generic tree models, the invention labels the clusters of points that match the models as belonging to the high vegetation category.
There are three parameters associated with the specific procedure that should be optimized in order to achieve optimum results. These parameters specify the shape of the tree models and include the minimum and maximum height as well as the width variation in percentage.
The width variation factor determines the width of the tree model as a function of the height.
Although one might consider that this stage requires pre-existing knowledge of the tree heights in the scene, this is not a prerequisite, since the minimum and maximum range can be defined as broadly as possible in order to encompass most of the potential sizes present in the scene. The algorithm essentially scans the point cloud using successive increments between the defined tree height ranges, matching clusters of points using a similarity criterion to determine potential matches. As noted before, for each height the width is adjusted accordingly as a percentage of the search height. Hence increasing the height range has a negative impact only on the computational requirements, since the search range is increased.
The invention utilizes a minimum tree height of 2m, since features lower than 2.5m will be filtered in the subsequent steps. The maximum height for both tree models is set equal to 40m in order to encompass a broad range of tree heights. The width percentage is defined as 30%. These two tree models with the previous parameters can yield satisfactory results in many different situations and can be treated as default parameters in the design of the process. The results illustrated in figure 9 provide an indication of the effectiveness of the method, with most of the tree canopies successfully detected in the study area.
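A minimal Python sketch of testing one candidate point cluster against a generic conical tree model with these parameters; the 0.5m stem tolerance and the cluster representation are assumptions for illustration, not the patented matcher:

import numpy as np

def matches_tree_model(cluster_xyz, min_h=2.0, max_h=40.0, width_pct=0.30):
    # Crude similarity test against a generic conical tree model: total
    # height inside the allowed range, and the horizontal spread at each
    # level no wider than width_pct of the height above the cluster base.
    z = cluster_xyz[:, 2]
    base, height = z.min(), z.max() - z.min()
    if not (min_h <= height <= max_h):
        return False
    axis = cluster_xyz[:, :2].mean(axis=0)          # vertical crown axis
    radius = np.linalg.norm(cluster_xyz[:, :2] - axis, axis=1)
    allowed = width_pct * (z - base) + 0.5          # 0.5 m tolerance at the stem
    return bool(np.all(radius <= allowed))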
Nevertheless, there are occasions where the two tree models are inadequate to detect every individual tree present in the study area (figure 10). This is a consequence of the limited number of tree models used and the non-optimization of the width percentage parameter. The width percentage is a critical factor, but selecting the appropriate percentage is a tedious process that is very difficult to automate. Instead the process suggests the use of 30% as a general guideline.
Despite the existence of undetected individual trees, the invention successfully filters tree canopies, which is the crucial aspect of this stage. The main reason is that even if individual trees remain in the LiDAR point cloud, they can be filtered by applying a minimum plane size criterion during the building detection process. If, on the other hand, tree canopies were not detected, they could introduce significant problems during building detection.
9. Classifying ground surface from LiDAR data
This stage of classifying LiDAR points belonging to the ground surface is an important step that is required before the building detection process, as well as for classifying points with a specified relative height. The classified ground surface is also crucial for assigning elevation to the projected building outlines during the building reconstruction process.
The invention employs a process that detects ground points by iteratively building a triangulated surface model. There are four parameters that must be optimized during this procedure: the maximum building size, the maximum terrain angle, and the maximum iteration angle and distance values. The maximum building size controls the number of initial points selected for the generation of the initial TIN model. According to this value the algorithm will assume that any area smaller than the user-defined value contains at least one ground point, and will select the point with the lowest elevation within this area. The other factor that affects the initial selection of ground points for the initial TIN is the maximum terrain angle, which is used to restrict the selection of initial points if the slope between them exceeds the defined value. For the specific study area a maximum terrain angle of 60° was defined.
After the generation of the initial TIN, the process iteratively adds new points to the existing TIN model. The iterative selection of new ground points uses the maximum iteration angle and maximum iteration distance as conditions. The iteration angle is the maximum angle between a candidate point, its projection on the triangle plane and the closest triangle node. The iteration distance prevents abrupt vertical changes when the triangles of the TIN are large (figure 11).
These two parameters prevent buildings from being classified as ground surface.
The method utilized a maximum iteration angle equal to 5° and a maximum iteration distance of 1m. The resulting ground surface from the filtered points is illustrated in figure 12.
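A minimal Python sketch of such a progressive densification, assuming numpy/scipy; the angle test against the nearest accepted ground point is a simplification of the triangle-based criterion described above:

import numpy as np
from scipy.interpolate import LinearNDInterpolator
from scipy.spatial import cKDTree

def densify_ground(candidates, seeds, max_angle_deg=5.0, max_dist=1.0, rounds=10):
    # Iteratively grow the ground class: a candidate joins the ground
    # surface when its height above the current TIN is below max_dist and
    # the angle it subtends to the nearest accepted ground point is below
    # max_angle_deg. Arrays are N x 3 (x, y, z).
    ground = seeds.copy()
    tan_max = np.tan(np.radians(max_angle_deg))
    for _ in range(rounds):
        surface = LinearNDInterpolator(ground[:, :2], ground[:, 2])
        tree = cKDTree(ground[:, :2])
        dz = np.abs(candidates[:, 2] - surface(candidates[:, :2]))
        d_xy, _ = tree.query(candidates[:, :2])
        ok = np.isfinite(dz) & (dz < max_dist) & (dz < tan_max * np.maximum(d_xy, 1e-6))
        if not ok.any():
            break
        ground = np.vstack([ground, candidates[ok]])
        candidates = candidates[~ok]
    return ground, candidates   # classified ground and remaining points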
Classifying the ground surface requires a certain amount of user interaction, which can be minimized if assumptions are integrated in the process. The proposed algorithm for estimating the maximum building size employs the value of 300m as a rule of thumb. The advantage of using a large value is that it is suitable for smaller as well as for industrial buildings, because the algorithm can populate the TIN with the iterative approach as depicted in figure 12. In contrast, the maximum terrain angle is more crucial and at the same time very difficult to estimate automatically. The difficulty arises because the value must be balanced in such a way as to take into account the topography present in the scene while at the same time excluding buildings from further calculation. In most cases this requires the user to have prior knowledge of the topography and the technical characteristics of the project (flight path, density, altitude of sensor) or, as an alternative, to experiment with the data and optimize the value. In an attempt to simplify the problem the invention assumes that the steepness of the slopes present at the building facades is only related to the density of the LiDAR point cloud or the DSM. This assumption is an oversimplification, because in fact the slopes of the buildings are related to the altitude of the sensor, the look angle in relation to the building height and other variables, but it serves the purpose of approaching an automated solution.
Therefore the value of 60° for the maximum terrain angle determined from the test sites is treated as representative for any LiDAR point cloud with a density of 4 points/m2. Point clouds with coarser or higher density are adjusted accordingly using a linear relationship. The density of the point cloud can also be determined automatically by calculating a "density map". The density map is a popular function among GIS packages that uses a kernel window of size 1 by 1m to scan the point cloud and create a raster image of the density. Each pixel of the raster image represents the number of enclosed points. The overall density is then calculated from the average of the floating point values in each pixel.
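A minimal Python sketch of the density map and the linear scaling of the terrain angle (the 80° cap comes from the summarised process in section 15):

import numpy as np

def density_map(xy, cell=1.0):
    # Count points per 1 m x 1 m cell; the mean cell value is the overall
    # density D used to scale the maximum terrain angle.
    x, y = xy[:, 0], xy[:, 1]
    grid, _, _ = np.histogram2d(
        x, y,
        bins=[np.arange(x.min(), x.max() + cell, cell),
              np.arange(y.min(), y.max() + cell, cell)])
    return grid, float(grid.mean())

# Linear scaling of the maximum terrain angle, capped at 80 degrees:
# grid, d = density_map(points[:, :2])
# ta = min(60.0 * d / 4.0, 80.0)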
The maximum iteration angle and iteration distance can be treated as rules of thumb, since the values are very small and will avoid detecting buildings in almost every situation, regardless of the density of the LiDAR point cloud, while still populating the initial TIN model. Because the selected values are relatively small, there are still remaining unclassified ground surfaces, as indicated in figure 13.
Despite the presence of ground points that remain unclassified, this is not a major concern, since it will be resolved in the following stage. Figure 13 illustrates the efficiency of the method, since parked cars were excluded from the detected ground surface.
10. Classifying low vegetation and background features from LiDAR data or DSMs
Features not related to the building entities, such as parked cars and low vegetation, can be effectively removed by applying a height range above the ground surface. As a representative value, the range of 0-2.5m is selected, which can be used for most applications.
This step will also classify the points representing the ground surface that weren't classified in the previous stage and exclude them from further calculation.
This process uses the initial ground model to create a temporary TIN model and then compares the unclassified points against it to estimate their height above the initial TIN model. Features and low vegetation with height equal to or less than 2.5 metres are filtered. The results of the above procedure are illustrated in figure 14.
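A minimal Python sketch of this relative-height filter, assuming scipy's TIN-style linear interpolation over the classified ground points:

import numpy as np
from scipy.interpolate import LinearNDInterpolator

def split_low_features(points, ground, max_height=2.5):
    # Points at most max_height above the ground TIN are classified as low
    # features (cars, low vegetation, missed ground) and filtered out.
    surface = LinearNDInterpolator(ground[:, :2], ground[:, 2])
    height = points[:, 2] - surface(points[:, :2])
    low = np.isfinite(height) & (height <= max_height)
    return points[low], points[~low]   # low features, remaining points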
11. Classifying buildings from the LiDAR point cloud or DSM
With the completion of the previous stage, the only remaining points should correspond to the buildings present in the scene. Nevertheless, as discussed previously, the tree detection stage will in most cases not be able to filter all the individual trees. As a consequence, the unclassified points at this stage include remaining individual tree crowns with height greater than 2.5m, as indicated in figure 15.
The building classification process is based on a plane fitting method applied inside the void areas introduced by the classified ground points. It should be noted that, before the building detection procedure, the proposed method merges the ground points with the low vegetation points (height up to 2.5m) in order to minimize void regions not related to building entities. Several conditions are introduced at this stage to optimize the performance of the process. The minimum building parameter is used to avoid performing the plane fitting in holes with an area smaller than the defined minimum building criterion. This value should be large enough to bypass individual remaining trees (figure 15) while still taking into account the smaller cottage-style houses present in the scene.
The invention implements a minimum building size of 40m2 with acceptable results; this can be treated as a representative value for many different situations, since it is small enough to detect a variety of building sizes. This value assumes that the tree segmentation step has successfully filtered tree canopies from the study area and only individual trees remain among the unclassified LiDAR points. Another criterion used during the selection of void areas on the ground is the maximum building size. Although this condition is used to avoid extremely large gaps in the ground surface, in the workflow every large void area is related to a building entity. Therefore the invention utilizes a maximum building size of 30000m2.
During the plane fitting on the unclassified points shown in figure 15, there are four conditions that must be addressed. The minimum detail parameter defines the minimum size of the desired plane. This parameter can be automatically determined with respect to the density of the LiDAR point cloud. Considering that a plane requires at least three LiDAR points, the minimum detail can be defined as three times the average spacing calculated from the density map described previously.
Another important parameter is the maximum roof angle. This parameter is useful for discriminating between gable or hip type roofs and remaining tree crowns with an area greater than 40m2. The valid assumption at this point is that in most cases tree crowns will introduce steeper plane angles (depending on the tree type) than the roof planes. The invention uses a maximum roof angle of 60°, since small cottage-style houses usually exist in urban and rural areas. The elevation tolerance is utilized as a restriction for the least squares plane fitting process. That is, only clusters of points within the specified height tolerance will be used for estimating the least squares location of the individual planes. The elevation tolerance is defined as equal to the relative vertical accuracy of the Airborne Laser Scanner or the DSM. The relative vertical accuracy is also related to the pulse rate and, in the absence of such knowledge, it can be substituted by the expected minimum vertical differences of the points. The minimum vertical difference in a LiDAR point cloud can safely be assumed to be equal to the horizontal spacing between the LiDAR points. From the above procedure, an initial building detection is performed as indicated in figures 16 and 18.
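A minimal Python sketch of the least squares plane fit and the acceptance conditions; the elevation tolerance of 0.25m stands in for the sensor-specific value discussed above:

import numpy as np

def fit_plane(points):
    # Least squares plane z = a*x + b*y + c through an N x 3 point array;
    # returns the coefficients, the residuals and the slope in degrees.
    A = np.column_stack([points[:, 0], points[:, 1], np.ones(len(points))])
    coeffs, *_ = np.linalg.lstsq(A, points[:, 2], rcond=None)
    residuals = points[:, 2] - A @ coeffs
    slope = np.degrees(np.arctan(np.hypot(coeffs[0], coeffs[1])))
    return coeffs, residuals, slope

def accept_roof_plane(points, max_roof_angle=60.0, elev_tol=0.25):
    # Conditions from the text: at least three points, slope below the
    # maximum roof angle, residuals inside the elevation tolerance.
    if len(points) < 3:
        return False
    _, residuals, slope = fit_plane(points)
    return slope <= max_roof_angle and bool(np.all(np.abs(residuals) <= elev_tol))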
12. Generating DSM from the initial building classification
From the initial classification obtained, vegetation, ground surface, features with height up to 2.5m and buildings are successfully segmented. The main issue introduced at this stage is the LiDAR or DSM points representing roof details with an area smaller than 40m2. Recall that for classifying buildings a restriction was used that excluded points inside void regions with an area smaller than 40m2. The restriction was used so that the process does not take into account small remaining trees that haven't previously been classified. Therefore, there are still a few remaining unclassified points representing small roof details, as indicated in figure 17.
In order to assign the unclassified building points, a building vector hypothesis should be created to filter the desired points. The first stage toward the creation of the vector building hypothesis is to generate a raster DSM using only the detected building class. The selected spacing was equal to 0.5m and the resulting DSM represented only the buildings of the study area, with every other feature assigned to the background (figure 19).
13. Reclassification and raster to vector conversion of the DSM
Before the raster to vector conversion, the raster DSM has to be reclassified into a binary image. During the reclassification, pixels representing building entities are assigned to the foreground (value 1), while the surrounding features remain in the background (value 0).
The raster to vector conversion is performed using a smoothing weight of 2, without any gap closure parameters and with a void area closure size of 3 pixels. The resulting polygon layer from the raster to vector conversion is depicted in figure 20.
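A minimal Python sketch of the reclassification and vectorisation, assuming rasterio and shapely are available; smoothing and gap closure are omitted, and the nodata value is illustrative:

import numpy as np
import rasterio.features
from shapely.geometry import shape

def buildings_to_polygons(dsm, nodata=-9999.0):
    # Reclassify: building cells become foreground (1), everything else
    # background (0); then vectorise the foreground into polygons.
    binary = (dsm != nodata).astype(np.uint8)
    return [shape(geom)
            for geom, value in rasterio.features.shapes(binary)
            if value == 1]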
The polygon layer is subsequently used to filter the unclassified LIDAR points, as described in the following section.
14. Filtering and merging unclassified LiDAR or DSM points into the building class
The filtering is performed in a GIS environment by overlaying the unclassified LiDAR point cloud on the building polygon hypothesis. Only the LiDAR points located within the polygon layer are selected and subsequently merged with the initial building detection. A subset of the study area is depicted in figure 21, with the selected points superimposed on the DSM.
This is the final step of the invention for the building detection procedure.
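A minimal Python sketch of this point-in-polygon selection with shapely; prepared geometries are an implementation choice for speed:

from shapely.geometry import Point
from shapely.prepared import prep

def points_in_hypothesis(points_xy, building_polygons):
    # Keep only the unclassified points that fall inside a building
    # hypothesis polygon; these are merged into the building class.
    prepared = [prep(poly) for poly in building_polygons]
    return [xy for xy in points_xy
            if any(p.contains(Point(xy)) for p in prepared)]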
15. Summarised process for the building extraction procedure

Building detection from LiDAR point clouds or DSMs
Data: LiDAR data from multiple flight paths or DSM, polygon of the study area
Result: Classified LiDAR or DSM points representing only buildings
Begin
15.1 foreach study area polygon do
        Calculate density map from LiDAR or DSM data and average density value (D)
        If multiple point clouds overlap then
            Merge points from multiple flight paths
        else
            Select point cloud where (D) is higher
        end
    end
    foreach selected point cloud do
15.2    Set parameters for scanning point cloud with generic tree models:
            Minimum tree height = 2m
            Maximum tree height = 40m
            Width percentage = 30%
        Delineate trees and remove detected points from selected point cloud
15.3    Set parameters for classifying ground:
            Maximum building size = 300m
            Maximum terrain angle (Ta) = "user defined"
            If maximum terrain angle (Ta) = "null" then
                Calculate Ta = 60° * D / 4
                If (Ta) > 80° then
                    assign Ta = 80°
                else
                    maintain calculated (Ta)
                end
            else
                Maintain user defined (Ta)
            end
            Maximum iteration angle = 5°
            Maximum iteration distance = 1m
        Detect ground surface and create ground TIN model
15.4    Classify points up to 2.5m above ground TIN as low features
        Merge points classified as low features with the ground surface
        If study region has area > 1km2 then
            Split region into four subset areas
        else
            Retain existing study area
        end
    end
    foreach subset point cloud do
        Calculate average point spacing from the density map (Sm)
15.5    Set parameters for building detection:
            Minimum building size = 40m2
            Maximum building size = 30000m2
            Minimum detail = 4 * Sm
            Maximum roof angle = "user defined"
            If maximum roof angle = "null" then
                Set maximum roof angle = 60°
            else
                Utilize user defined value
            end
            Elevation tolerance = Sm
        Perform building detection and assign points to building class
    end
    Unite individual subsets to create a single classified point cloud
15.6 Create DSM using only the building class
15.7 Reclassify DSM into a binary image
     Perform raster to vector conversion and create building polygons
15.8 Overlay unclassified laser points and select remaining building points
     Merge selected points with initial building detection
End

The invention includes an additional stage of splitting the entire LiDAR point cloud into subsets if the study area is larger than 1km2. This stage takes place before the building classification and is incorporated in the algorithm in order to avoid RAM overloading on the workstation. The problem arises because during the building classification process the fitted planes are stored temporarily, and in general the entire process of building detection is very computationally intensive. Therefore, splitting the study area and running the procedure separately can effectively reduce the computation requirements.
16. Geodata fusion for optimizing the building footprint
This section describes the proposed method for the initial generation of the building outlines and subsequently the data fusion stage with the stereo matched points, for refining and adjusting the initial building outlines. The diagram in figure 23 indicates the overall workflow of the process.
17. Roof reconstruction for extracting initial building footprints
With the roof structures adequately represented, an initial building reconstruction is performed. The building reconstruction process employed is very similar to the building detection algorithm, but simpler, because parameters such as the maximum roof angle are not needed. The algorithm uses the void regions introduced by the merged ground points and low features, which contain the classified building points. These void areas (figure 24) represent the initial starting points for the plane detection algorithm.
Based on the classified building points, the process iteratively fits planes to the LiDAR or DSM points using a least squares estimation. The basic difference from the building detection algorithm is that the process is restricted to the classified points, without the need to differentiate between building and vegetation. Furthermore, instead of the planes being stored temporarily as at the building detection stage, the planes are visualized in the user interface dialog window. The complexity and the time requirements of the reconstruction are a function of the specified minimum size of plane that the process will try to detect, the size of the roofs and the density of the LiDAR or DSM point cloud.
An additional parameter is the possible merging of planes within a specified vertical separation. This option is useful for applying a certain level of generalization to the reconstructed roofs and potentially avoids multiple superimposed planes representing the same roof surface. An example of a reconstructed roof is indicated in figure 25.
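A minimal Python sketch of such a merge step: planes are represented by their fitted coefficients and greedily merged when their gradients agree and their offsets fall within the vertical separation (the tolerances are illustrative, not values from the patent):

import numpy as np

def similar(p, q, slope_tol=0.05, offset_tol=0.30):
    # Planes z = a*x + b*y + c are candidates for merging when their
    # gradients (a, b) agree and the offset difference is within the
    # specified vertical separation.
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    return bool(np.allclose(p[:2], q[:2], atol=slope_tol)) and abs(p[2] - q[2]) < offset_tol

def merge_planes(planes, **tol):
    # Greedy pairwise merging; the merged plane is approximated here by
    # averaging coefficients (a refit over the union of the supporting
    # points would be the more rigorous choice).
    merged = []
    for p in planes:
        for i, q in enumerate(merged):
            if similar(p, q, **tol):
                merged[i] = (np.asarray(p, dtype=float) + q) / 2.0
                break
        else:
            merged.append(np.asarray(p, dtype=float))
    return merged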
The next step is to extrude the roof planes on the ground surface and create the vertical facades of the buildings as illustrated in figure 26.
18. Spatial cleaning and generating topology for the 2D building footprints
After the initial 3D building reconstruction, the vertical facades of the buildings are transformed into two-dimensional polygons for subsequent use in a GIS environment.
The individual polygons introduced by the building reconstruction are not consistent, since adjacent planes are not automatically merged (left side of figure 27). In order to overcome this problem and create a single polygon for each building, a spatial cleaning process is utilized. The spatial cleaning process is the fundamental function of topology generation.
Topology organizes the spatial relationships between features in a set of feature classes, using specific topological rules that constrain the features' topological relationships.
Once the participating feature classes have been added to the topology and the rules defined, the topology is validated. Two basic topology rules are used during the spatial cleaning. The first rule requires that the interior of polygons in the feature class don't overlap.
The polygons can share edges or vertices. This rule is used when an area can't belong to two or more polygons. The second rule requires that polygons not have voids within themselves or between adjacent polygons. Polygons can share edges, vertices, or interior areas.
There are two parameters that are used to validate the second rule, which include the dangle length and fuzzy tolerance.
The dangle length removes dangling arcs that are shorter than the specified dangle tolerance. A dangling arc is an arc having the same polygon on both its left and right sides and having at least one node that doesn't connect to any other arc. It often occurs where a polygon does not close properly (undershoot) or where arcs don't connect properly. The fuzzy tolerance defines small distances used to resolve inexact intersection locations, and it defines the resolution of a coverage resulting from the spatial clean operation.
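A minimal Python sketch of the dissolve step with shapely: a small positive/negative buffer stands in for the fuzzy tolerance, and interior rings are dropped to satisfy the no-voids rule (the tolerance value is illustrative):

from shapely.geometry import MultiPolygon, Polygon
from shapely.ops import unary_union

def spatial_clean(facade_polygons, fuzzy_tol=0.05):
    # Dissolve the per-plane footprints into one outline per building:
    # the outward/inward buffer snaps near-coincident edges, and
    # rebuilding each part from its exterior ring removes interior voids.
    merged = unary_union([poly.buffer(fuzzy_tol) for poly in facade_polygons]).buffer(-fuzzy_tol)
    parts = list(merged.geoms) if isinstance(merged, MultiPolygon) else [merged]
    return [Polygon(part.exterior) for part in parts]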
19. Generalization and simplification of the building outline
This important stage is implemented so that the number of vertices describing the building footprint is decreased, thereby simplifying the building outline. The main assumption at this stage is that all the buildings in the scene are described by orthogonal boundaries, and therefore the simplification preserves and enhances the orthogonality between the linear segments of the building footprint.
Two parameters regulate the simplification process: the linear tolerance and the minimum size of the polygon. In general, straight lines will be enhanced so that all angles near 90 degrees become exactly 90 degrees. Based on the given tolerance, isolated small intrusions will be either filled up or widened, and isolated small extrusions will be filtered out. Any building or group of connected buildings with a total area smaller than the minimum area will be excluded from the result.
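As a rough sketch of this kind of orthogonality-preserving simplification (the snapping heuristic below, which quantizes each edge of the simplified ring to the building's dominant axes, is only an assumed stand-in for the algorithm described here):

import math
import numpy as np
from shapely.geometry import Polygon

def simplify_orthogonal(footprint, linear_tolerance, min_area=10.0):
    # Exclude buildings below the minimum area.
    if footprint.area < min_area:
        return None
    # Remove small intrusions/extrusions (Douglas-Peucker).
    ring = footprint.simplify(linear_tolerance)
    coords = np.asarray(ring.exterior.coords)[:-1]      # open ring
    edges = np.roll(coords, -1, axis=0) - coords
    # Dominant orientation taken from the longest edge.
    k = np.argmax(np.hypot(edges[:, 0], edges[:, 1]))
    theta = math.atan2(edges[k, 1], edges[k, 0])
    c, s = math.cos(-theta), math.sin(-theta)
    R = np.array([[c, -s], [s, c]])
    local = coords @ R.T
    # Snap each edge to the nearest axis: near-horizontal edges get a
    # common y, near-vertical edges a common x, forcing 90-degree corners.
    for i in range(len(local)):
        j = (i + 1) % len(local)
        dx, dy = local[j] - local[i]
        if abs(dx) >= abs(dy):
            local[i, 1] = local[j, 1] = 0.5 * (local[i, 1] + local[j, 1])
        else:
            local[i, 0] = local[j, 0] = 0.5 * (local[i, 0] + local[j, 0])
    world = local @ R   # inverse rotation back to world coordinates
    return Polygon(world).buffer(0)   # buffer(0) cleans degenerate edges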
Despite the efficiency of the generalization algorithm in enhancing orthogonality, as indicated in figure 29, there is a potential risk of oversimplifying the footprint. An example of oversimplified building outlines is depicted in figure 28.
The oversimplification is directly related to the specified linear tolerance; the results should therefore be evaluated by the operator in order to avoid oversimplification in the majority of the buildings.
20. Filtering stereo matched points with simplified building outlines

Apart from the few occasions where the building boundary is oversimplified, in most cases it can be considered a good approximation of the optimum building outline, and it is subsequently used to filter the stereo matched points.
Initially this stage requires the polygon layer to be converted into a polyline feature class, which is subsequently used to create buffer regions around the generalized footprint. The size of the buffer regions is directly related to the expected planimetric accuracy of the generalized footprint. The buffer distance also defines the maximum possible planimetric correction that can be applied to the simplified footprint. This parameter therefore depends on many different variables, such as the laser footprint size on the ground, the density of points, the building height, and the position and direction of the flight path. Even if the above characteristics were known, it could still be difficult to automatically determine a reliable value for the buffer size.
Instead, the invention utilizes an iterative search process to determine the buffer size for each individual linear segment. The method initially breaks the polylines into their constituent linear segments. Each linear segment is then treated independently, and buffer zones 25cm wide are created at increments of 25cm around the linear segment (figure 30).
The assumption at this point is that the generalized building footprint will have a similar direction to the actual footprint, and therefore the two building outlines will be nearly parallel. For each 25cm-wide buffer zone, the process counts the number of stereo matched points that lie within the zone. If more than 5 points are located inside the zone, the process is completed and the buffer size is defined for that linear segment. If fewer points are present, the process creates the next buffer zone, in the range of 25-50cm from the position of the linear segment, and the counting is repeated.
Because the stereo matching process may not yield conjugate points for every segment of the building outline, the invention employs a maximum search range that acts as a termination criterion. This termination function is useful in order to minimize the possibility of adjusting the simplified footprint using stereo matched points not related to the actual building outline. The maximum search range can be estimated as approximately 2-3 times the point spacing of the LiDAR point cloud or DSM, as calculated from the density map.
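A compact sketch of this iterative buffer search (Python with shapely; the function name and the point representation are illustrative assumptions):

from shapely.geometry import LineString, Point

def estimate_buffer_size(segment, matched_points, point_spacing,
                         ring_width=0.25, min_points=5):
    # Grow the buffer around the linear segment in 25cm steps until at
    # least `min_points` stereo matched points fall inside, or the
    # maximum search range (about 3x the point spacing) is reached.
    max_range = 3.0 * point_spacing
    distance = ring_width
    while distance <= max_range:
        zone = segment.buffer(distance)
        inside = [p for p in matched_points if zone.contains(p)]
        if len(inside) >= min_points:
            return distance, inside
        distance += ring_width
    return None, []   # termination criterion hit: leave the segment unadjusted

# Example usage on a synthetic segment and point set:
seg = LineString([(0.0, 0.0), (10.0, 0.0)])
pts = [Point(1.0, 0.1), Point(2.0, -0.2), Point(4.0, 0.1),
       Point(6.0, 0.15), Point(8.0, -0.1)]
size, kept = estimate_buffer_size(seg, pts, point_spacing=1.0)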
Figure 31 depicts the filtered stereo matched points selected with the procedure described above.
21. Adjustment and refinement of the building outline

The final adjustment of the building footprint is performed using the filtered stereo matched points. The footprint adjustment treats the individual linear segments with a least squares adjustment. Using the linear segments instead of the entire outline has the advantage of avoiding distortions from the adjustment, thereby preserving the orthogonality between the lines obtained from the previous stage. The least squares adjustment is restricted to taking into account only filtered points within the buffer distance, using smooth curvature estimation. The process of adjusting the linear elements is illustrated in figure 32.
After the least squares adjustment, each individual linear segment is extended in both directions until it intersects a neighbouring line segment. This operation is implemented to create topologically correct closed polylines representing the building footprint. Figure 33 indicates the differences between the adjusted outline and the simplified footprint.
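The per-segment adjustment and the subsequent extension to intersections can be sketched as follows (an ordinary total least squares line fit is substituted here for the smooth-curvature-constrained estimation described above; all names are illustrative):

import numpy as np

def fit_line(points):
    # Total least squares line through a set of 2D points: returns the
    # centroid and the unit direction (first principal axis).
    pts = np.asarray(points, dtype=float)
    centroid = pts.mean(axis=0)
    _, _, vt = np.linalg.svd(pts - centroid)
    return centroid, vt[0]

def line_intersection(p1, d1, p2, d2):
    # Intersection of the parametric lines p1 + t*d1 and p2 + s*d2.
    # (Parallel consecutive lines would need special handling.)
    t, _ = np.linalg.solve(np.column_stack([d1, -d2]), p2 - p1)
    return p1 + t * d1

def adjust_footprint(points_per_segment):
    # Fit one line per linear segment (using its filtered stereo matched
    # points), then extend consecutive lines until they intersect,
    # yielding the corner vertices of the closed adjusted footprint.
    lines = [fit_line(p) for p in points_per_segment]
    corners = []
    for i in range(len(lines)):
        (p1, d1), (p2, d2) = lines[i], lines[(i + 1) % len(lines)]
        corners.append(line_intersection(p1, d1, p2, d2))
    return np.asarray(corners)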
22. Summarised process for the Geodata fusion stage

Geodata fusion for adjusting the building outline
Data: classified LiDAR or DSM points representing buildings
Result: adjusted building footprint
Begin
    Calculate average point spacing from the density map (Sm)
    21.1 foreach void area larger than 10m2 on the classified ground points do
        Check if classified building points are present
        If classified building points exist then
            Set parameters for plane fitting algorithm
                Minimum plane size (Ps) = 4*Sm
                Elevation tolerance = Sm
                Merge planes (increase in tolerance) = Sm + (Sm/4)
            Perform plane fitting and visualize roof planes
        else
            Exclude void area from further processing
        end
    end
    foreach roof plane do
        Extrude plane boundary on the ground surface
        Transform vertical facades into 2D polygons
        21.2 Set topology rules for spatial cleaning
            Dangle length tolerance = "user defined"
            Fuzzy tolerance = "user defined"
        If either "dangle length" or "fuzzy tolerance" = "null" then
            Dangle length tolerance = 2*(Ps)
            Fuzzy tolerance = 2*(Ps)
        else
            Utilize user defined values
        end
        Perform spatial cleaning and produce unified building polygons
        Convert vector polygons to vector polylines
        foreach vector polyline do
            21.3 Set parameters for simplification algorithm
                Linear tolerance = "user defined"
                Minimum area = 10m2
            If linear tolerance = "null" then
                Obtain lengths of the constituent linear segments
                Calculate average length value (Al)
                Select linear segments (Ls) with length < Al
                Linear tolerance = average of (Ls) lengths
            else
                Utilize user defined values
            end
            Create simplified building footprints
        end
        foreach simplified polyline do
            21.4 Break polylines into the constituent linear segments
            Overlay stereo matched points
            foreach linear segment do
                Estimate buffer size
                    Set maximum search size (Ms) = 3*(Sm)
                    Generate 25cm buffer zone rings until (Ms) is reached
                    Obtain number of points (Np) within buffer zone 1
                    If (Np) within buffer zone 1 (0-25cm) < 5 then
                        Repeat for the next zone (25-50cm) until (Np) > 5
                    else
                        Specify buffer zone size = 25cm
                    end
                Create buffer zones and filter stereo matched points
                21.5 Apply least squares adjustment using filtered points
                Extend linear segments until intersected
                Create closed building footprint
            end
        end
    end
End

The process is very robust, minimizing user interaction, since most of the critical parameters can be defined automatically. The invention employs automated estimation for two of the most critical parameters: the linear tolerance, during the simplification of the footprint, and the buffer size for filtering the stereo matched points.
The robust estimation of the linear tolerance for the footprint simplification initially obtains the lengths of the linear segments between each node. The lengths can be retrieved automatically from the geodatabase created from the generated topology.
The process then estimates the average length of the linear segments. Based on the average length, the algorithm next selects only the short linear segments, with length below the average value. On the assumption that these lines will contain the unnecessary intrusions, the algorithm estimates the linear tolerance as the average length of the selected short linear elements.
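This rule translates directly into a few lines (a sketch; `segment_lengths` is assumed to come from the geodatabase, and the function name is illustrative):

import numpy as np

def estimate_linear_tolerance(segment_lengths):
    # Average length of the below-average ("short") segments, which are
    # assumed to carry the unnecessary intrusions and extrusions.
    lengths = np.asarray(segment_lengths, dtype=float)
    short = lengths[lengths < lengths.mean()]
    return float(short.mean()) if short.size else float(lengths.mean())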
23. Building reconstruction for generating final polyhedral models

This section describes the procedure of the building reconstruction for the creation of the final polyhedral building models. The building reconstruction is based on fitting planes to the LiDAR or DSM point cloud, while Boolean logic is implemented for merging adjacent planes to complete the polyhedral models. The diagram in figure 34 illustrates the overall workflow.
24. Combining building footprint with LiDAR point cloud

This stage is implemented in order to merge the two-dimensional building outlines with the classified LiDAR or DSM point cloud. Combining the building outlines with the classified building points restricts the algorithm during the roof plane reconstruction and provides adequate information for the planimetric position of the vertical building facades.
The building footprints are orthographically projected onto the ground surface (using the classified ground points) in order to assign elevation values to the vertices of the polylines (figure 35). The vertex density depends on the density of the LiDAR data.
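One possible sketch of this projection step, assigning each footprint vertex the elevation of its nearest classified ground point (scipy's KD-tree and the function name are assumed implementation choices, not specified by the invention):

import numpy as np
from scipy.spatial import cKDTree

def project_footprint(vertices_xy, ground_points):
    # vertices_xy: (N, 2) footprint vertices; ground_points: (M, 3)
    # classified ground points. Returns (N, 3) vertices with elevations
    # taken from the planimetrically nearest ground point.
    tree = cKDTree(ground_points[:, :2])
    _, idx = tree.query(vertices_xy)
    return np.column_stack([vertices_xy, ground_points[idx, 2]])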
25. Final building reconstruction

The building reconstruction process is based on the same process described in section 16, with one crucial difference. At this stage the starting locations are not the void regions introduced in the classified ground points, but the building footprints. Furthermore, the roof planes are restricted to lie within the boundaries specified by the building outlines.
This stage consists of the plane reconstruction process and the generation of the vertical walls from the building outline. For the roof reconstruction the same parameters as before are used: the minimum plane size and the elevation tolerance.
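Restricting the plane search to the footprint can be sketched, for instance, with a simple point-in-polygon filter (the name and point layout are illustrative assumptions):

import numpy as np
from shapely.geometry import Point

def clip_points_to_footprint(points, footprint):
    # Keep only the LiDAR/DSM points whose planimetric position falls
    # inside the adjusted building outline (a shapely Polygon).
    mask = [footprint.contains(Point(x, y)) for x, y, _ in points]
    return np.asarray(points)[np.asarray(mask)]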
Figure 36 illustrates the process and the extracted roof planes for a small selection of buildings.
After the successful roof reconstruction, the vertical building facades are generated by extruding the planes on the projected building footprint, as depicted in figure 37.
26. Merging adjacent roof planes utilizing Boolean functions

The final stage in the creation of polyhedral building models is the implementation of Boolean functions for merging adjacent roof planes of the same building. The Boolean merging function utilized essentially converts the "boundary representation" (individual planar facets) of the buildings into a solid feature. For this conversion to be performed, the planar facets must be transformed into volume primitives and then merged together using Constructive Solid Geometry (CSG).
CSG modelling is used widely in computer aided design (CAD) systems, since the modelling is much more intuitive and the primitives can be parameterized. In addition, CSG enables the association of the primitives with additional information, and the determination of the volumetric primitive parameters is quite robust.
The invention utilizes CSG for each separate building entity, as defined by the optimized building footprint; there is therefore no possibility of merging planes that do not belong to the same building. Within each building boundary, adjacent planes are extended and intersected if they are located within 2m of each other. This buffer region is calculated similarly to the "linear tolerance" during the spatial cleaning, as two times the minimum plane size specified for the plane fitting. The results of the Boolean merging function are depicted in figure 38. With the implementation of Boolean logic, the building reconstruction process is completed. Figure 39 illustrates the reconstructed buildings in the entire study area.
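The volume-primitive construction and Boolean union could look as follows with the trimesh library (one possible toolchain, not prescribed by the invention; trimesh's boolean operations require an external engine such as Blender or OpenSCAD to be installed):

import trimesh

def building_solid(roof_footprints, roof_heights):
    # Extrude each roof plane's footprint polygon (shapely Polygon) to
    # its height, producing one volume primitive per plane, then apply
    # a Boolean union to merge the primitives into a single solid.
    primitives = [trimesh.creation.extrude_polygon(poly, height)
                  for poly, height in zip(roof_footprints, roof_heights)]
    return trimesh.boolean.union(primitives)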
27. Summarised process for the building reconstruction stage

Building reconstruction from LiDAR or DSM data
Data: classified LiDAR or DSM point cloud, adjusted building outlines
Result: solid 3D building models
Begin
    Merge building footprints with LiDAR or DSM point cloud
    Orthographically project building outlines on the ground surface
    foreach projected building footprint do
        Set parameters for plane fitting algorithm
            Minimum plane size (Ps) = 4*Sm (spacing from density map)
            Elevation tolerance = Sm
            Merge planes (increase in tolerance) = Sm + (Sm/4)
        Fit planes and visualize results
        Create vertical facets from footprints and intersect with roof planes
        Convert planar facets to volumetric primitives
        Set parameters for Constructive Solid Geometry and Boolean merging
            Buffer distance = 2*(Ps)
        Apply Boolean union and create solid 3D building models
    end
End

The process for building reconstruction presents impressive results. The method seems to be very reliable, with most building models visually correct. In addition, the amount of reconstructed roof detail is impressive, since ventilation equipment, dormers and chimneys are recovered in many cases (figure 40). The level of detail of the 3D building models is directly related to the density of the point cloud, since no additional cues from the aerial photographs are used for the inner roof structures. In contrast, the vertical facades of the buildings are created from the refined building footprint, thereby improving the overall planimetric accuracy of the solid models.
One of the crucial steps is the implementation of Boolean functions for the Constructive Solid Geometry. In most cases the process yields reliable results, with the majority of the adjacent roof planes merged together. There are, however, a few occasions where small intrusions are introduced into the 3D models (figure 42).
28. Summary
1. The invention introduced a new, innovative process for the entire workflow of 3D city modelling, using data fusion techniques to increase the planimetric accuracy of the building models. The proposed method for the stage of feature extraction from digital airborne imagery produces reliable results, with most linear features adequately represented. Furthermore, the implementation of the feature extraction is straightforward and adaptable to data collected from different sensors.
2. The stage of building detection produced very promising results, since most of the buildings are detected independently of their size or roof type. The main advantage is the generic approach of the process, which can be implemented in a variety of situations. In addition, the method is not restricted by the leaf condition, and even undetected individual trees can be filtered during the building classification stage.
3. The adjustment of the extracted building outlines successfully improves the planimetric accuracy of the building footprint by incorporating fusion techniques with the stereo matched points. In addition, the proposed method is fairly robust, minimizing user interaction since most of the critical parameters can be defined automatically.
4. The procedure of building reconstruction presents accurate and visually impressive results. The process seems to be very reliable, having the ability to reconstruct most of the roof details. One of the crucial steps is the use of Boolean functions for implementing the Constructive Solid Geometry. In most cases the process yields reliable results, with the majority of the adjacent roof planes merged together.
GB0704368A 2007-03-07 2007-03-07 Automatic 3D Modelling Withdrawn GB2457215A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
GB0704368A GB2457215A (en) 2007-03-07 2007-03-07 Automatic 3D Modelling

Publications (2)

Publication Number Publication Date
GB0704368D0 GB0704368D0 (en) 2007-04-11
GB2457215A true GB2457215A (en) 2009-08-12

Family

ID=37966065

Family Applications (1)

Application Number Title Priority Date Filing Date
GB0704368A Withdrawn GB2457215A (en) 2007-03-07 2007-03-07 Automatic 3D Modelling

Country Status (1)

Country Link
GB (1) GB2457215A (en)

Families Citing this family (23)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107085219A (en) * 2017-04-28 2017-08-22 上海华测导航技术股份有限公司 A kind of automatic creation system of above-ground route data
CN107102339A (en) * 2017-04-28 2017-08-29 上海华测导航技术股份有限公司 A kind of automatic generation method of above-ground route data
CN107167815A (en) * 2017-04-28 2017-09-15 上海华测导航技术股份有限公司 The automatic creation system and method for a kind of highway road surface line number evidence
CN108037514A (en) * 2017-11-07 2018-05-15 国网甘肃省电力公司电力科学研究院 One kind carries out screen of trees safety detection method using laser point cloud
CN109509256B (en) * 2018-06-21 2023-07-18 华南理工大学 Automatic measurement and 3D model generation method for building structure based on laser radar
CN109190255B (en) * 2018-09-05 2023-04-07 武汉大学 Three-dimensional reconstruction method for urban three-dimensional property space
CN110888452B (en) * 2018-09-11 2023-03-17 杨扬 Obstacle avoidance method for autonomous flight of unmanned aerial vehicle power inspection
CN111932653B (en) * 2019-05-13 2023-12-15 阿里巴巴集团控股有限公司 Data processing method, device, electronic equipment and readable storage medium
CN110232389B (en) * 2019-06-13 2022-11-11 内蒙古大学 Stereoscopic vision navigation method based on invariance of green crop feature extraction
CN112241661A (en) * 2019-07-17 2021-01-19 临沂大学 Urban ground feature fine classification method combining airborne LiDAR point cloud data and aerial image
CN110705096B (en) * 2019-09-30 2023-06-02 南京丰恒嘉乐电子科技有限公司 Measuring and modeling system adapting to golf simulation software course and application method thereof
CN111815776A (en) * 2020-02-04 2020-10-23 山东水利技师学院 Three-dimensional building fine geometric reconstruction method integrating airborne and vehicle-mounted three-dimensional laser point clouds and streetscape images
CN111754484B (en) * 2020-06-23 2024-03-29 北京东方至远科技股份有限公司 House vector frame and PS point matching method based on InSAR big data and Hough detection
CN113139730B (en) * 2021-04-27 2022-03-11 浙江悦芯科技有限公司 Power equipment state evaluation method and system based on digital twin model
CN113190639B (en) * 2021-05-13 2022-12-13 重庆市勘测院 Comprehensive drawing method for residential area
CN113345089B (en) * 2021-05-31 2023-06-23 西北农林科技大学 Regularized modeling method based on power tower point cloud
CN113689567B (en) * 2021-07-23 2022-05-27 深圳市顺欣同创科技有限公司 Method for building in cloud end single oblique photography model
CN113687336A (en) * 2021-09-09 2021-11-23 北京斯年智驾科技有限公司 Radar calibration method and device, electronic equipment and medium
CN115861571B (en) * 2023-01-18 2023-04-28 武汉大学 Semantic perception triangle network model building entity reconstruction method
CN116246069B (en) * 2023-02-07 2024-01-16 北京四维远见信息技术有限公司 Method and device for self-adaptive terrain point cloud filtering, intelligent terminal and storage medium
CN116611026B (en) * 2023-05-25 2024-01-09 中国自然资源航空物探遥感中心 Aviation gamma energy spectrum data fusion processing method and system
CN117033536B (en) * 2023-10-10 2023-12-15 中国科学技术大学 Construction method of GIS-based urban combustible distribution database
CN117456115B (en) * 2023-12-26 2024-04-26 深圳大学 Method for merging adjacent three-dimensional entities

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20010000443A (en) * 2000-09-29 2001-01-05 서정헌 Media that can record computer program sources for extracting building by fusion with photogrammetric image and lidar data, and system and method thereof
KR100545358B1 (en) * 2005-11-17 2006-01-24 한진정보통신(주) Method for restoring of three dimentional building using digital map and airborne laser surveying data

Cited By (40)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11657567B2 (en) 2009-10-26 2023-05-23 Pictometry International Corp. Method for the automatic material classification and texture simulation for 3D models
US11263807B2 (en) 2009-10-26 2022-03-01 Pictometry International Corp. Method for the automatic material classification and texture simulation for 3D models
EP3693931A1 (en) * 2009-10-26 2020-08-12 Pictometry International Corp. Method for the automatic material classification and texture simulation for 3d models
CN102096072A (en) * 2011-01-06 2011-06-15 天津市星际空间地理信息工程有限公司 Method for automatically measuring urban parts
CN102074048B (en) * 2011-01-06 2012-08-29 天津市星际空间地理信息工程有限公司 Method for structuring and dispatching digital city model library
CN102096072B (en) * 2011-01-06 2013-02-13 天津市星际空间地理信息工程有限公司 Method for automatically measuring urban parts
CN102074048A (en) * 2011-01-06 2011-05-25 天津市星际空间地理信息工程有限公司 Method for structuring and dispatching digital city model library
WO2012117273A1 (en) 2011-03-01 2012-09-07 Aga Cad, Uab Parametric truss and roof modelling system, and method of its use
US9602224B1 (en) * 2011-06-16 2017-03-21 CSC Holdings, LLC Antenna placement based on LIDAR data analysis
US9860770B1 (en) 2011-06-16 2018-01-02 CSC Holdings, LLC Optimizing antenna placement based on LIDAR data
US10021576B1 (en) 2011-06-16 2018-07-10 CSC Holdings, LLC Selecting transmitter sites for line of sight service
US10271229B1 (en) 2011-06-16 2019-04-23 CSC Holdings, LLC Assessing reception of line of sight radio service
US10528811B2 (en) 2011-09-23 2020-01-07 Corelogic Solutions, Llc Building footprint extraction apparatus, method and computer program product
US9639757B2 (en) 2011-09-23 2017-05-02 Corelogic Solutions, Llc Building footprint extraction apparatus, method and computer program product
CN102607460B (en) * 2012-03-13 2014-07-30 天津工业大学 Global phase filter method applied to three-dimensional measurement
CN102607460A (en) * 2012-03-13 2012-07-25 天津工业大学 Global phase filter method applied to three-dimensional measurement
CN103914881A (en) * 2013-01-09 2014-07-09 南京财经大学 Three-dimensional model typification algorithm based on minimum spanning trees
EP2849117A1 (en) * 2013-09-16 2015-03-18 HERE Global B.V. Methods, apparatuses and computer program products for automatic, non-parametric, non-iterative three dimensional geographic modeling
US9600607B2 (en) 2013-09-16 2017-03-21 Here Global B.V. Methods, apparatuses and computer program products for automatic, non-parametric, non-iterative three dimensional geographic modeling
US10534870B2 (en) 2013-09-16 2020-01-14 Here Global B.V. Methods, apparatuses and computer program products for automatic, non-parametric, non-iterative three dimensional geographic modeling
US9613388B2 (en) 2014-01-24 2017-04-04 Here Global B.V. Methods, apparatuses and computer program products for three dimensional segmentation and textured modeling of photogrammetry surface meshes
CN103903301B (en) * 2014-03-19 2017-01-11 四川川大智胜软件股份有限公司 Urban landscape modeling method based on colored image identification
CN103903301A (en) * 2014-03-19 2014-07-02 四川川大智胜软件股份有限公司 Urban landscape modeling method based on colored image identification
CN104049245A (en) * 2014-06-13 2014-09-17 中原智慧城市设计研究院有限公司 Urban building change detection method based on LiDAR point cloud spatial difference analysis
CN105225272B (en) * 2015-09-01 2018-03-13 成都理工大学 A kind of tri-dimensional entity modelling method based on the reconstruct of more contour line triangulation networks
CN105225272A (en) * 2015-09-01 2016-01-06 成都理工大学 A kind of tri-dimensional entity modelling method based on the reconstruct of many outline lines triangulation network
CN105957146B (en) * 2016-04-29 2018-11-27 中国铁路设计集团有限公司 Linear engineering three-dimensional geological modeling method
CN105976433A (en) * 2016-04-29 2016-09-28 铁道第三勘察设计院集团有限公司 Surface-to-body attribute inheritance method
CN105976433B (en) * 2016-04-29 2018-11-27 中国铁路设计集团有限公司 It is a kind of from face to the inheritance method of body attribute
CN105957146A (en) * 2016-04-29 2016-09-21 铁道第三勘察设计院集团有限公司 Linear engineering three-dimensional geological modeling method
CN106056563A (en) * 2016-05-20 2016-10-26 首都师范大学 Airborne laser point cloud data and vehicle laser point cloud data fusion method
US10962650B2 (en) 2017-10-31 2021-03-30 United States Of America As Represented By The Administrator Of Nasa Polyhedral geofences
CN108062793A (en) * 2017-12-28 2018-05-22 百度在线网络技术(北京)有限公司 Processing method, device, equipment and storage medium at the top of object based on elevation
CN108062793B (en) * 2017-12-28 2021-06-01 百度在线网络技术(北京)有限公司 Object top processing method, device, equipment and storage medium based on elevation
CN108363983A (en) * 2018-03-06 2018-08-03 河南理工大学 A kind of Urban vegetation classification method based on unmanned plane image Yu reconstruction point cloud
CN108363983B (en) * 2018-03-06 2021-05-18 河南理工大学 Urban vegetation classification method based on unmanned aerial vehicle image and reconstructed point cloud
US11954797B2 (en) 2019-01-10 2024-04-09 State Farm Mutual Automobile Insurance Company Systems and methods for enhanced base map generation
CN109993783A (en) * 2019-03-25 2019-07-09 北京航空航天大学 A kind of roof and side optimized reconstruction method towards complex three-dimensional building object point cloud
CN109993783B (en) * 2019-03-25 2020-10-27 北京航空航天大学 Roof and side surface optimization reconstruction method for complex three-dimensional building point cloud
CN110910446A (en) * 2019-11-26 2020-03-24 北京拓维思科技有限公司 Method and device for determining building removal area and method and device for determining indoor area of building

Also Published As

Publication number Publication date
GB0704368D0 (en) 2007-04-11

Similar Documents

Publication Publication Date Title
GB2457215A (en) Automatic 3D Modelling
Yu et al. Automatic 3D building reconstruction from multi-view aerial images with deep learning
Li et al. Reconstructing building mass models from UAV images
Lee et al. Fusion of lidar and imagery for reliable building extraction
Haala et al. Extraction of buildings and trees in urban environments
Awrangjeb et al. Automatic extraction of building roofs using LIDAR data and multispectral imagery
Xu et al. Reconstruction of scaffolds from a photogrammetric point cloud of construction sites using a novel 3D local feature descriptor
Cheng et al. Integration of LiDAR data and optical multi-view images for 3D reconstruction of building roofs
Li et al. Modelling of buildings from aerial LiDAR point clouds using TINs and label maps
CN109919944B (en) Combined superpixel graph-cut optimization method for complex scene building change detection
WO2010088840A1 (en) Generating three-dimensional models from images
CN115564926B (en) Three-dimensional patch model construction method based on image building structure learning
CN111652241B (en) Building contour extraction method integrating image features and densely matched point cloud features
Maltezos et al. Automatic detection of building points from LiDAR and dense image matching point clouds
Garcia-Dorado et al. Automatic urban modeling using volumetric reconstruction with surface graph cuts
Haala et al. An integrated system for urban model generation
CN103839286A (en) True-orthophoto optimization sampling method of object semantic constraint
Guinard et al. Piecewise-planar approximation of large 3d data as graph-structured optimization
Martens et al. VOX2BIM+-A Fast and Robust Approach for Automated Indoor Point Cloud Segmentation and Building Model Generation
Belton et al. Automating post-processing of terrestrial laser scanning point clouds for road feature surveys
Tao 3D Data Acquisition and object reconstruction for AEC/CAD
Salah Filtering of remote sensing point clouds using fuzzy C-means clustering
Tripodi et al. Brightearth: Pipeline for on-the-fly 3D reconstruction of urban and rural scenes from one satellite image
Li et al. Lightweight 3D modeling of urban buildings from range data
Ahmed et al. High-quality building information models (BIMs) using geospatial datasets

Legal Events

Date Code Title Description
WAP Application withdrawn, taken to be withdrawn or refused ** after publication under section 16(1)