CN115661398A - Building extraction method, device and equipment for live-action three-dimensional model - Google Patents
- Publication number
- CN115661398A CN115661398A CN202211223721.XA CN202211223721A CN115661398A CN 115661398 A CN115661398 A CN 115661398A CN 202211223721 A CN202211223721 A CN 202211223721A CN 115661398 A CN115661398 A CN 115661398A
- Authority
- CN
- China
- Prior art keywords
- building
- dimensional model
- plane
- primitive
- live
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Landscapes
- Processing Or Creating Images (AREA)
Abstract
The invention discloses a building extraction method, device and equipment for a live-action three-dimensional model. Non-building planes such as vegetation and urban furniture are greedily eliminated from the planes detected in the model, and the building primitives deleted by mistake are then greedily recovered using the topological adjacency between primitives, starting from the incomplete building body structure. The invention realizes automatic extraction of buildings with complete structure and no malformed primitives from the live-action three-dimensional model, achieves an extremely high recall rate and a high degree of automation, and greatly reduces the workload and cost of objectification and singulation (monomerization) of live-action three-dimensional models.
Description
Technical Field
The invention belongs to the field of surveying and mapping data processing, and particularly relates to a building extraction method, device and equipment for a real-scene three-dimensional model.
Background
With the rapid development of unmanned aerial vehicle (UAV) technology and optical sensor technology, airborne LiDAR and oblique photogrammetry have enabled low-cost, rapid and accurate acquisition of wide-range three-dimensional surface information. In particular, UAV oblique photogrammetry can effectively acquire the coordinate and texture information of both the roof and the side facades of a building, and is increasingly important in the construction of three-dimensional digital cities and smart cities. The collected three-dimensional earth surface information exists in the form of point clouds after processing, and a Digital Orthophoto Map (DOM), a Digital Surface Model (DSM) and a live-action three-dimensional model can be further generated from the point clouds.
Compared with two-dimensional images (such as DSM and DOM), the live-action three-dimensional model contains detailed three-dimensional geometric and texture features. Compared with a three-dimensional point cloud, the live-action three-dimensional model has the advantages of spatial continuity and explicit adjacency; it also occupies less disk space and memory, since geometrically irrelevant points are filtered out when the model is reconstructed from the point cloud. Compared with three-dimensional models in the field of computer graphics, the live-action three-dimensional model represents wide-range, fine three-dimensional earth surface information and contains far more primitives. Live-action three-dimensional models are therefore widely used in various 3D geographic applications and spatial analyses.
A great deal of research has been carried out on ground filtering, surface feature extraction, scene segmentation and spatial clustering of DSM, DOM and point cloud data; however, research on segmenting live-action three-dimensional model data and extracting ground features from it, which matters most for three-dimensional digital cities, remains rare. In addition, the fully automatic mechanisms of real-scene modeling software such as Photomesh and ContextCapture construct one continuous, integral mesh model, so the generated live-action three-dimensional model exhibits a "one-skin" phenomenon: all ground objects are represented by a single three-dimensional mesh, making semantic query and analysis difficult and failing to meet diversified application requirements. Objectification and singulation of the live-action three-dimensional model are therefore an urgent requirement in the construction of three-dimensional digital cities, and no mature building extraction method for live-action three-dimensional models currently exists at home or abroad.
Building extraction, which has been applied to DOM and point clouds, can be classified into supervised and unsupervised approaches. The extraction precision of supervised methods (such as convolutional neural networks and graph neural networks) is usually far better than that of unsupervised methods, but supervised methods require a large number of samples, and the labeling process is extremely time-consuming and labor-intensive. Neither supervised nor unsupervised methods extract buildings completely: the extracted buildings tend to be incomplete or to have ragged boundaries. The invention therefore provides a building extraction method for live-action three-dimensional models that achieves high-precision extraction of buildings from wide-range live-action three-dimensional models, ensures the integrity and regular boundaries of the extracted buildings, and requires no manual labeling.
Disclosure of Invention
In order to overcome the defects in the prior art, the invention provides a building extraction method, a building extraction device and building extraction equipment for a live-action three-dimensional model, which can realize high-precision extraction of buildings of large-range live-action three-dimensional models, ensure the integrity and the regular boundary of the extracted buildings and do not need manual labeling. The invention is realized by the following technical scheme:
a building extraction method for a live-action three-dimensional model comprises the following steps:
Step 1, preprocessing and parsing the live-action three-dimensional model: splicing all tiles of the live-action three-dimensional model into a complete three-dimensional model and parsing it into a geometric primitive set P;
Step 2, separating the ground and non-ground primitives of the live-action three-dimensional model: splitting the primitive set P into a ground primitive set G and a non-ground primitive set NG;
Step 3, over-segmenting the non-ground primitive set NG, clustering its primitives into a set C of k clusters with uniform properties and regular boundaries;
Step 4, performing plane feature detection on the cluster set C to quickly generate a set S of l planes with regular boundaries;
Step 5, greedily eliminating non-building planes from the plane set S, i.e., removing non-building planes such as vegetation, urban furniture, trees and vehicles to the greatest possible extent. During removal, as long as the integrity of the building body structure is guaranteed, some building planes are allowed to be removed by mistake. After elimination, a building body-structure plane set S_B and a non-building plane set S_NB are obtained;
Step 6, taking the set B of all primitives contained in the greedily pruned building body-structure plane set S_B as the basis, performing greedy recovery using the topological adjacency between primitives, so as to recover the set R of primitives belonging to the building planes mistakenly eliminated in step 5, thereby obtaining the final building primitive set B* = B ∪ R that takes integrity into account;
Step 7, outputting or saving the building live-action three-dimensional model formed by the final building primitive set and the non-building live-action three-dimensional model formed by the remaining primitives.
Further, in step 2, the ground and non-ground primitives of the live-action three-dimensional model are separated by a primitive-oriented cloth simulation method.
Further, the specific implementation of step 2 is as follows:
S201: flip the live-action three-dimensional model vertically along the Z coordinate direction, i.e., negate the Z coordinates of the vertices of all primitives in the set P;
S202: simulate a cloth composed of particles falling from above the flipped live-action three-dimensional model; the initial height of all cloth particles is the highest point of the flipped primitives, the initial horizontal positions are determined by the cloth resolution and the bounding box of the live-action three-dimensional model, and the cloth falls slowly under gravity;
S203: cloth particles gradually stop moving after contacting the live-action three-dimensional model; contact between the cloth and the model is determined by ray-intersection-based collision detection, and a cloth particle is set immovable once its current vertical height is lower than its collision point with the model;
S204: finally, the shape of the static cloth approximates the terrain; the Euclidean distance from each primitive to the cloth is then computed, and the primitive is added to the non-ground set NG if the distance exceeds a set threshold, or to the ground set G if it is within the threshold.
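Steps S201–S204 can be illustrated with a minimal sketch. The following is a hypothetical 1-D toy version (the function name, step sizes and the simple neighbour-based stiffness rule are assumptions; a real primitive-oriented implementation works on a 2-D particle grid with ray-intersection collision tests):

```python
# Toy 1-D sketch of cloth-simulation ground filtering (S201-S204).
# Assumption: the "terrain" is a 1-D height profile; buildings become
# pits after inversion, and a semi-rigid cloth bridges over them.

def cloth_filter(heights, threshold=0.5, step=0.1, bend=0.05, iters=500):
    inverted = [-h for h in heights]            # S201: flip along Z
    n = len(inverted)
    cloth = [max(inverted) + 1.0] * n           # S202: cloth starts above all
    pinned = [False] * n
    for _ in range(iters):
        for i in range(n):                      # gravity + collision (S203)
            if pinned[i]:
                continue
            cloth[i] -= step
            if cloth[i] <= inverted[i]:
                cloth[i] = inverted[i]
                pinned[i] = True
        for i in range(n):                      # stiffness: a particle may not
            if pinned[i]:                       # sag far below its neighbours
                continue
            nb = [cloth[j] for j in (i - 1, i + 1) if 0 <= j < n]
            cloth[i] = max(cloth[i], max(nb) - bend)
    ground, non_ground = [], []
    for i in range(n):                          # S204: distance to the cloth
        (ground if abs(inverted[i] - cloth[i]) <= threshold
         else non_ground).append(i)
    return ground, non_ground

# Flat terrain at 0 m with a 5 m "building" at indices 2-3
ground, non_ground = cloth_filter([0.0, 0.0, 5.0, 5.0, 0.0])
```

The stiffness pass is what makes the settled cloth bridge over the inverted building, so the building primitives end up far from the cloth and are classified as non-ground.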
Further, the non-ground primitive set NG is over-segmented; a primitive-based over-segmentation method may be selected to cluster the primitives into a set C of k clusters with uniform properties and regular boundaries, comprising the following steps:
S301: comprehensively considering the spatial proximity features, surface features and color features of primitives, construct the primitive heterogeneity distance formula D(p_i, p_j) = μ1·D_s(p_i, p_j) + μ2·D_e(p_i, p_j) + μ3·D_c(p_i, p_j), where D_s, D_e and D_c are respectively the normalized spatial proximity distance, surface-feature difference distance and color difference distance between the two primitives, and μ1, μ2 and μ3 are the corresponding weight factors. The spatial proximity distance D_s is computed from the vertex coordinates of the primitives, v_i^q denoting the q-th vertex of p_i; the surface-feature difference distance D_e is computed from the normal vectors n_i and n_j of p_i and p_j; and the color difference distance D_c(p_i, p_j) between the two primitives is calculated in the CIE Lab linear color space, with c̄_i denoting the average texture color value of primitive p_i in CIE Lab space;
S302: based on the primitive heterogeneity distance formula D(·), construct the cluster heterogeneity cost function J(r_ij) for evaluating the sum of the heterogeneity costs of all clusters, where r_ij is a binary variable: r_ij = 1 indicates that primitive p_i can serve as the center primitive of a cluster, and that cluster contains all the non-center primitives satisfying r_ij = 0. The constraint on J(r_ij), expressed with an indicator function I(·), is that the number of center primitives equals k, where k denotes the expected number of clusters;
S303: based on the heterogeneity cost function J(·) and its constraints, construct and solve an energy optimization function E(r_ij), thereby over-segmenting the live-action three-dimensional model into a set C of k clusters with uniform properties and regular boundaries. A bottom-up merging-based energy minimization method may be selected for solving the energy equation. Each primitive p_j outside the center primitive set is assigned to a cluster according to the mapping function argmin_i D(p_j, cp_i), where D(p_j, cp_i) is the heterogeneity distance between primitive p_j and center primitive cp_i, so that the sum of the heterogeneity distances between each primitive and the cluster center primitive it is assigned to is minimal.
Further, in the bottom-up merging-based energy minimization method, a regularization term weighted by a parameter λ is first added to the energy optimization function E(r_ij); the initial value of the regularization parameter λ is set to the median of the minimum heterogeneity distances between each primitive and its adjacent primitives, and λ is then doubled at each iteration. Initially, all primitives are set as center primitives of their own clusters, and the center primitives are merged bottom-up continuously until the number of clusters is reduced to k.
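The heterogeneity distance of S301 can be sketched as follows. The text does not give the exact normalizations, so plausible ones are assumed here (a hypothetical cap distance d_max for spatial proximity, the angle between unit normals for the surface term, and the CIE Lab Euclidean distance scaled by 100 for the color term); the primitive records and weights are illustrative:

```python
import math

# Hypothetical sketch of D(p_i, p_j) = mu1*Ds + mu2*De + mu3*Dc (S301).
# Each "primitive" is reduced to a centroid, a unit normal and a mean
# CIE Lab colour; the normalisations below are assumptions.

def heterogeneity(p_i, p_j, mu=(0.4, 0.3, 0.3), d_max=10.0):
    # Ds: spatial proximity, centroid distance normalised by d_max
    ds = min(1.0, math.dist(p_i["centroid"], p_j["centroid"]) / d_max)
    # De: surface-feature difference from the angle between unit normals
    dot = sum(a * b for a, b in zip(p_i["normal"], p_j["normal"]))
    de = 1.0 - abs(max(-1.0, min(1.0, dot)))
    # Dc: colour difference in CIE Lab (Lab coordinates span roughly 100)
    dc = min(1.0, math.dist(p_i["lab"], p_j["lab"]) / 100.0)
    return mu[0] * ds + mu[1] * de + mu[2] * dc

flat_red  = {"centroid": (0, 0, 0), "normal": (0, 0, 1), "lab": (53, 80, 67)}
flat_red2 = {"centroid": (1, 0, 0), "normal": (0, 0, 1), "lab": (53, 80, 67)}
wall_grey = {"centroid": (9, 0, 5), "normal": (1, 0, 0), "lab": (60, 0, 0)}
```

Two nearby, coplanar, same-coloured primitives thus receive a much smaller heterogeneity distance than a distant, differently oriented, differently coloured one, which is what drives both the cluster assignment in S303 and the bottom-up merging.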
Further, the normal vector of a triangle primitive is computed from the spatial coordinates of its three vertices v_1, v_2 and v_3 as the normalized cross product n = (v_2 − v_1) × (v_3 − v_1).
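A minimal sketch of this normal computation (standard cross-product construction; the function name is illustrative):

```python
import math

# Unit normal of a triangle primitive from its three vertices:
# n = (v2 - v1) x (v3 - v1), normalised to unit length.

def triangle_normal(v1, v2, v3):
    ux, uy, uz = v2[0] - v1[0], v2[1] - v1[1], v2[2] - v1[2]
    wx, wy, wz = v3[0] - v1[0], v3[1] - v1[1], v3[2] - v1[2]
    nx, ny, nz = uy * wz - uz * wy, uz * wx - ux * wz, ux * wy - uy * wx
    length = math.sqrt(nx * nx + ny * ny + nz * nz)
    return (nx / length, ny / length, nz / length)

# A triangle lying in the XY plane has normal (0, 0, 1)
n = triangle_normal((0, 0, 0), (1, 0, 0), (0, 1, 0))
```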
The average texture color value c̄_i is computed as follows: in the Adobe RGB color space, determine the spatial extent of primitive p_i in the y direction and the number of scan lines; for each scan line, from top to bottom, intersect all edges of the primitive with the scan line and sort the resulting abscissas from left to right, where an edge crossed an odd number of times by the scan line is an entering edge and an edge crossed an even number of times is an exiting edge. The spatial coordinates of the pixels on the scan line between the entering and exiting edges are then obtained by interpolation, and the UV coordinates of all pixel points inside the primitive are computed by the barycentric coordinate method: U = (S_c·U_1 + S_b·U_2 + S_a·U_3)/S_t and V = (S_c·V_1 + S_b·V_2 + S_a·V_3)/S_t, where S_a is the area of the triangle formed by the pixel point and the primitive's vertices v_1 and v_2, S_b is the area of the triangle formed by the pixel point and v_1 and v_3, S_c is the area of the triangle formed by the pixel point and v_2 and v_3, U_1, U_2, U_3, V_1, V_2 and V_3 are the UV coordinates of v_1, v_2 and v_3, and S_t = S_a + S_b + S_c. After all UV coordinates are obtained, texture values are sampled from the texture image corresponding to the primitive using the UV coordinates, the average of all texture values is taken as the texture value of the primitive, and the result is converted from Adobe RGB to the CIE Lab color space.
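The barycentric UV interpolation described above can be sketched as follows; a 2-D illustration in which the sub-triangle areas are computed with the shoelace formula (function names and the sample triangle are illustrative):

```python
# Barycentric UV interpolation: the weight of each vertex is the area of
# the sub-triangle opposite to it (S_a with v1,v2; S_b with v1,v3;
# S_c with v2,v3), so U = (Sc*U1 + Sb*U2 + Sa*U3) / St, likewise for V.

def tri_area(p, q, r):
    """Area of a 2-D triangle via the shoelace formula."""
    return abs((q[0]-p[0]) * (r[1]-p[1]) - (r[0]-p[0]) * (q[1]-p[1])) / 2.0

def interp_uv(px, v1, v2, v3, uv1, uv2, uv3):
    sa = tri_area(px, v1, v2)   # opposite v3
    sb = tri_area(px, v1, v3)   # opposite v2
    sc = tri_area(px, v2, v3)   # opposite v1
    st = sa + sb + sc
    u = (sc * uv1[0] + sb * uv2[0] + sa * uv3[0]) / st
    v = (sc * uv1[1] + sb * uv2[1] + sa * uv3[1]) / st
    return (u, v)

# At the triangle's centroid all three weights are equal, so the
# interpolated UV is the average of the three vertex UVs.
uv = interp_uv((1.0, 1.0), (0, 0), (3, 0), (0, 3),
               (0.0, 0.0), (1.0, 0.0), (0.0, 1.0))
```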
Further, plane feature detection on the cluster set C may use a cluster-based plane feature detection method to quickly generate a set S of l planes with regular boundaries, comprising the following steps:
S401: from the cluster set C, select a cluster as the seed cluster of a plane S_m and remove the seed cluster from C, where the plane S_m is made up of a subset ψ of the clusters;
S402: compute the set of k_1 clusters adjacent to the seed cluster; for each adjacent cluster, judge according to a similarity criterion whether it has the same properties as the seed cluster. If the plane similarity criterion is satisfied, merge the adjacent cluster into the plane S_m containing the seed cluster and simultaneously remove it from the set C; if the plane similarity criterion is not satisfied, perform no operation on the cluster;
S403: take the clusters newly merged into the plane S_m during S402 one by one as new seed clusters of S_m, and iteratively perform step S402 until no cluster in ψ satisfies the plane similarity criterion;
S404: iteratively perform S401–S403 until C is empty, saving the plane S_m detected in each iteration to form the candidate plane feature set S;
S405: post-process the candidate plane feature set S by removing candidate planes containing fewer than k_2 clusters.
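The seed-and-grow loop of S401–S405 can be sketched as follows. This is a hypothetical miniature: cluster normals and an adjacency map stand in for the real geometric data, and a normal-angle test stands in for the plane similarity criterion:

```python
# Minimal sketch of cluster-based plane detection (S401-S405): seed a
# plane with one cluster, greedily merge adjacent clusters whose normals
# are close enough, repeat until no clusters remain, then drop planes
# with fewer than k2 clusters.

def detect_planes(normals, adjacency, cos_min=0.95, k2=2):
    remaining = set(normals)                     # the cluster set C
    planes = []
    while remaining:                             # S404: until C is empty
        seed = min(remaining)                    # S401: pick a seed cluster
        remaining.discard(seed)
        plane, frontier = [seed], [seed]
        while frontier:                          # S402-S403: grow the plane
            c = frontier.pop()
            for nb in adjacency.get(c, []):
                if nb in remaining:
                    dot = sum(a * b for a, b in zip(normals[c], normals[nb]))
                    if dot >= cos_min:           # plane similarity criterion
                        remaining.discard(nb)
                        plane.append(nb)
                        frontier.append(nb)
        planes.append(sorted(plane))
    return [p for p in planes if len(p) >= k2]   # S405: post-processing

normals = {"a": (0, 0, 1), "b": (0, 0, 1), "c": (1, 0, 0), "d": (1, 0, 0)}
adjacency = {"a": ["b", "c"], "b": ["a"], "c": ["a", "d"], "d": ["c"]}
planes = detect_planes(normals, adjacency)
```

Here the two horizontal clusters grow into one plane and the two vertical ones into another, even though "a" and "c" are adjacent, because the similarity criterion blocks the merge across the normal discontinuity.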
Further, greedy elimination of non-building planes from the plane set S may use the following greedy elimination methods:
S501, green-vegetation primitive elimination based on color features: compute the average of the texture values of all primitives in each plane as the texture value of the plane, and compute the excess-green-minus-excess-red index of each plane: ExG − ExR = 3g − 2.4r − b, where r, g and b are the color components of the plane. Then automatically compute the optimal elimination threshold t of ExG − ExR using the maximum between-class variance method (Otsu's method); if the ExG − ExR of a plane is greater than the threshold t, the plane is regarded as a green vegetation plane and eliminated;
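S501 can be sketched as follows; the per-plane ExG − ExR index is computed on chromatic (normalized) coordinates, and a small hand-rolled Otsu threshold separates vegetation from the rest. The plane colors and the bin count are illustrative assumptions:

```python
# Sketch of S501: ExG - ExR index (3g - 2.4r - b on chromatic
# coordinates) plus a minimal Otsu threshold over a fixed histogram.

def exg_exr(rgb):
    total = sum(rgb) or 1
    r, g, b = (c / total for c in rgb)           # chromatic coordinates
    return 3 * g - 2.4 * r - b

def otsu(values, bins=64):
    lo, hi = min(values), max(values)
    width = (hi - lo) / bins or 1.0
    hist = [0] * bins
    for v in values:
        hist[min(bins - 1, int((v - lo) / width))] += 1
    best_t, best_var = lo, -1.0
    for i in range(1, bins):
        w0, w1 = sum(hist[:i]), sum(hist[i:])
        if w0 == 0 or w1 == 0:
            continue
        m0 = sum(j * hist[j] for j in range(i)) / w0
        m1 = sum(j * hist[j] for j in range(i, bins)) / w1
        var = w0 * w1 * (m0 - m1) ** 2           # between-class variance
        if var > best_var:
            best_var, best_t = var, lo + i * width
    return best_t

planes = {"roof": (160, 80, 70), "lawn": (60, 170, 50), "road": (90, 90, 95)}
scores = {name: exg_exr(c) for name, c in planes.items()}
t = otsu(list(scores.values()))
vegetation = sorted(n for n, s in scores.items() if s > t)
```

The green "lawn" plane scores well above the reddish roof and grey road, so Otsu's threshold isolates it for elimination.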
S502, filtering out short ground features based on relative ground elevation: compute a centroid for each ground primitive in the ground primitive set. Compute the relative ground elevation of each primitive in each plane, i.e., the difference between the elevation of the primitive's centroid and the average elevation of the centroids of its k_3 nearest ground primitives. Take the maximum relative ground elevation of all primitives in a plane as the relative ground elevation value of the plane; if this value is less than a threshold k_4, the plane is regarded as a low-object plane and rejected. At this point, almost all remaining primitives belong to buildings;
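S502 can be sketched as follows; a brute-force nearest-neighbour search stands in for the KDTree spatial index used in the embodiment, and the sample centroids and thresholds are illustrative:

```python
# Sketch of S502: a plane's relative ground elevation is the maximum,
# over its primitives, of (primitive centroid elevation minus the mean
# elevation of the k3 nearest ground-primitive centroids); planes below
# k4 metres are rejected as low objects.

def relative_elevation(centroid, ground_centroids, k3=3):
    cx, cy, cz = centroid
    near = sorted(ground_centroids,
                  key=lambda g: (g[0] - cx) ** 2 + (g[1] - cy) ** 2)[:k3]
    return cz - sum(g[2] for g in near) / len(near)

def filter_low_planes(planes, ground_centroids, k3=3, k4=2.0):
    kept = {}
    for name, centroids in planes.items():
        rel = max(relative_elevation(c, ground_centroids, k3)
                  for c in centroids)
        if rel >= k4:                 # below k4: low object, rejected
            kept[name] = rel
    return kept

ground = [(0, 0, 0.0), (1, 0, 0.1), (0, 1, -0.1), (5, 5, 0.0)]
planes = {"roof": [(0.5, 0.5, 8.0), (0.6, 0.5, 8.2)],
          "car":  [(5.0, 5.1, 1.4)]}
kept = filter_low_planes(planes, ground)
```

With k4 = 2 m, the 8 m roof plane survives while the 1.4 m "car" plane is rejected as a low ground object.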
further, in the greedy restoration process by using the topological adjacency relation among the primitives, a stack-based depth-first search algorithm can be adopted to search all the topologically reachable primitives of each primitive. In addition, to preventThe non-building elements are recovered by error in a large range due to the existence of a very small part of non-building planes, and can be corrected in advanceAnd (3) uniformly partitioning the topological relation in space, and traversing all the topologically reachable primitives of the primitives on the partitions where the primitives are located so as to prevent the non-building primitives from being recovered by excessive errors.
The invention also provides a building extraction device for the live-action three-dimensional model, which comprises the following modules:
the live-action three-dimensional model parsing module: used for inputting the live-action three-dimensional model, splicing all of its tiles into a complete three-dimensional model, and parsing it into a geometric primitive set;
a ground filtering module: used for inputting ground-filtering parameters and the primitive set, and separating the input primitive set into a ground primitive set and a non-ground primitive set;
an over-segmentation module: used for inputting over-segmentation parameters and the primitive set, and clustering the input primitive set into a set of clusters with uniform properties and regular boundaries;
a plane feature detection module: used for inputting plane-feature-detection parameters and a cluster or primitive set, and clustering the input cluster set or primitive set into a set of planes with regular boundaries;
a non-building plane greedy elimination module: used for inputting greedy-elimination parameters and the non-ground primitive set, and greedily eliminating non-building primitives such as green vegetation and urban furniture to obtain a set representing the building body structure and a non-building plane set;
a building primitive greedy recovery module: used for acquiring greedy-recovery parameters, the set representing the building body structure and the non-building plane set, performing greedy recovery on the building body-structure set based on the topological adjacency between primitives, and recovering the building primitives mistakenly deleted by the non-building plane greedy elimination module, so as to obtain the final building primitive set that takes integrity into account;
an output module: used for inputting the model save path and outputting the building live-action three-dimensional model and the non-building live-action three-dimensional model.
An electronic device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, wherein the processor implements the steps of the building extraction method for live-action three-dimensional models described above when executing the computer program.
A computer-readable storage medium storing computer software instructions for implementing the steps of the building extraction method for live-action three-dimensional models described above.
Compared with the prior art, the invention has the following advantages and beneficial effects:
(1) Aiming at the problem that no effective method currently exists for building extraction from live-action three-dimensional models, the invention provides a building extraction method for live-action three-dimensional models that achieves rapid, high-precision extraction of buildings.
(2) The method comprehensively utilizes the spatial proximity, surface and color features of the live-action three-dimensional model to extract the building model, adopting a bidirectional greedy strategy: after ground primitives are filtered out, non-building primitives such as green vegetation, urban furniture, trees and vehicles are greedily eliminated from the remaining non-ground primitives to obtain a set representing the building body structure. Building primitives deleted by mistake are then greedily recovered on the basis of the incomplete building body structure to obtain a final building model that takes integrity into account, realizing complete extraction of buildings with a recall rate above 97 percent, avoiding the poor visualization and analysis effects caused by incomplete buildings, and producing buildings with regular boundaries.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description serve to explain the principles of the invention and not to limit the invention. In the drawings:
fig. 1 is a flowchart of a building extraction method for a live-action three-dimensional model according to an embodiment of the present invention.
Fig. 2 is a live-action three-dimensional model used in an embodiment of the invention.
FIG. 3 is a set of non-ground primitives after filtering the ground for a live-action three-dimensional model in an embodiment of the present invention.
Fig. 4 is a series of primitive clusters with uniform internal properties generated after local clustering in the embodiment of the present invention.
FIG. 5 is a plane obtained from cluster-based plane feature detection in an embodiment of the present invention.
Fig. 6 is the incomplete building body structure obtained after green-vegetation primitive elimination and short ground-feature primitive removal are performed on the planes in the embodiment of the invention.
Fig. 7 is a structural three-dimensional model of a building extracted by using an automatic extraction method of a complete building for a live-action three-dimensional model according to an embodiment of the present invention.
Fig. 8 is a flow chart of a primitive-oriented cloth simulation method according to another embodiment of the present invention.
FIG. 9 is a flow chart of a primitive-based over-segmentation method provided in yet another embodiment of the present invention.
Fig. 10 is a flowchart of a cluster-based planar feature detection method according to another embodiment of the present invention.
Fig. 11 is a block diagram of a building extraction apparatus for a live-action three-dimensional model according to still another embodiment of the present invention.
Fig. 12 is a block diagram showing the construction of a building extracting apparatus for a live-action three-dimensional model according to still another embodiment of the present invention.
Fig. 13 is a schematic data processing diagram of a building extraction apparatus for a live-action three-dimensional model according to still another embodiment of the present invention.
Detailed Description
The present invention will be described in further detail with reference to examples, to facilitate understanding and practice of the invention by those of ordinary skill in the art; it is to be understood that the embodiments described here are illustrative and the invention is not to be construed as limited thereto.
It will be understood that various modifications may be made to the embodiments disclosed herein. The following description is, therefore, not to be taken in a limiting sense, but is made merely as an exemplification of embodiments. Other modifications will occur to those skilled in the art within the scope and spirit of the disclosure.
The accompanying drawings, which are incorporated in and constitute a part of the specification, illustrate embodiments of the disclosure and, together with a general description of the disclosure given above, and the detailed description of the embodiments given below, serve to explain the principles of the disclosure.
These and other characteristics of the invention will become apparent from the following description of a preferred form of embodiment, given as a non-limiting example, with reference to the accompanying drawings.
It should also be understood that, although the invention has been described with reference to some specific examples, a person of skill in the art shall certainly be able to achieve many other equivalent forms of the invention, having the characteristics as set forth in the claims and hence all coming within the field of protection defined thereby.
However, it is to be understood that the disclosed embodiments are merely examples of the disclosure, which may be embodied in various forms. Well-known and/or repeated functions and constructions are not described in detail to avoid obscuring the disclosure in unnecessary detail. Therefore, specific structural and functional details disclosed herein are not to be interpreted as limiting, but merely as a basis for the claims and as a representative basis for teaching one skilled in the art to variously employ the present disclosure in virtually any appropriately detailed structure.
The specification may use the phrases "in one embodiment," "in another embodiment," "in yet another embodiment," or "in other embodiments," which may each refer to one or more of the same or different embodiments in accordance with the disclosure.
The embodiments of the present invention will be described in detail below with reference to the accompanying drawings.
As shown in fig. 1, an embodiment of the present invention provides a building extraction method for a live-action three-dimensional model, including:
S1, model preprocessing and parsing: read the live-action three-dimensional model in osgb or 3D Tiles format as shown in FIG. 2, splice all tiles of the model at their original resolution into a complete three-dimensional model, and parse it into a geometric primitive set P = {p_1, ..., p_n}, where n is the number of primitives read; each primitive stores its index, vertex coordinates and texture mapping coordinates;
S2, ground filtering of the live-action three-dimensional model: separate the ground and non-ground primitives of the model using the primitive-oriented cloth simulation method, splitting the primitive set P into a ground primitive set G and a non-ground primitive set NG; the resulting non-ground primitive set is shown in FIG. 3. Finally, the shape of the static cloth approximates the terrain; the spatial distance from each primitive to the cloth is computed, and the primitive is added to NG if the distance exceeds a set threshold, or to G if it is within the threshold.
S3, over-segmentation: over-segment the non-ground primitive set NG using the primitive-based over-segmentation method, clustering its primitives into a set C of k clusters with uniform properties and regular boundaries; the over-segmentation result is shown in FIG. 4. For convenience of setting, the cluster number k is expressed through a cluster resolution R = 1 meter, i.e., k = (X_max − X_min)(Y_max − Y_min)/R², where X_max, X_min, Y_max and Y_min are respectively the maximum and minimum X coordinates and the maximum and minimum Y coordinates of the primitive set NG;
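The cluster-count rule in S3 can be sketched as follows; the exact rounding is not specified in the text, so a ceiling is assumed, and the bounding-box values are illustrative:

```python
import math

# Sketch of the cluster-count rule: the expected number of clusters k is
# derived from a cluster resolution R (1 m in the embodiment) and the
# horizontal bounding box of the non-ground primitive set.

def expected_clusters(xs, ys, R=1.0):
    extent_x = max(xs) - min(xs)
    extent_y = max(ys) - min(ys)
    return math.ceil(extent_x * extent_y / (R * R))

# A hypothetical 12 m x 8 m non-ground footprint at R = 1 m
k = expected_clusters(xs=[0.0, 12.0], ys=[0.0, 8.0], R=1.0)
```

Raising R coarsens the segmentation: the same footprint at R = 2 m yields a quarter as many clusters.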
S4, cluster-based plane feature detection: perform plane feature detection on the cluster set C to quickly generate a set S of l candidate planes with regular boundaries, then remove the candidate planes without planar features, i.e., candidate planes containing fewer than k_2 = 7 clusters, which are composed of clusters lacking planar features. Next, remove candidate planes whose area is smaller than an expected value (20 square meters); the remaining candidate planes constitute the final plane detection result, shown in FIG. 5;
S5, green vegetation removal based on color features: besides buildings, the planes obtained through the above steps still contain interfering ground objects such as short vegetation, automobiles and trees. First, the main interfering objects, green vegetation, are removed based on color features: compute the texture values of all primitives in each plane and take the average as the texture value of the plane, then compute the excess-green-minus-excess-red index of each plane: ExG − ExR = 3g − 2.4r − b, where r, g and b are the color components of the plane. Then automatically compute the optimal elimination threshold t of ExG − ExR using the maximum between-class variance method (Otsu's method); if a plane's ExG − ExR is greater than the threshold t, the plane is regarded as a green vegetation plane and removed;
S6, removal of short objects based on relative ground elevation: after the above steps, the main interference, green vegetation, has been removed, but a small number of low ground objects such as automobiles and urban furniture remain; these are filtered using relative ground elevation. Compute the centroid of each ground primitive in the ground primitive set separated in step S2 and build a KDTree spatial index structure. Compute the relative ground elevation of each primitive in each plane, i.e., the difference between the elevation of the primitive's centroid and the average elevation of the centroids of its k_3 = 10 nearest ground primitives, which are retrieved through the KDTree spatial index. Take the maximum relative ground elevation of all primitives in a plane as the plane's relative ground elevation; if this value is less than the threshold k_4, set to 2 meters, the plane is regarded as a low ground-object plane and removed. After removal, the building body-structure plane set S_B and the non-building plane set S_NB are obtained; the resulting incomplete building body structure is shown in FIG. 6;
s7, greedy recovery of mistakenly deleted building primitives based on topology: after the above steps the remaining planes are almost exclusively buildings and represent the building main body structure, but part of the building primitives are inevitably deleted by mistake in the preceding processing, so the deleted building primitives are recovered using the topological adjacency relation. The concrete procedure is as follows: starting from all primitives contained in the building main body structure plane set obtained by greedy elimination, and denoting separately the set of primitives contained in the mistakenly eliminated building planes, the final building primitive set that takes integrity into account is obtained. For each primitive in the building main body structure set, the set of primitives topologically adjacent to it is searched and those primitives are marked as building primitives; in the same way, the topologically adjacent primitives of each newly marked primitive are searched and marked as buildings, and the search continues recursively until every primitive has searched out all topologically reachable primitives. A primitive topologically adjacent to p_i may be taken to be a primitive sharing a vertex with p_i, or a primitive sharing an edge with p_i. In addition, to prevent a very small number of residual non-building primitives from causing non-building primitives to be recovered by mistake over a large range, the topological relation is uniformly partitioned in space, and each primitive only traverses the topologically reachable primitives within the partition in which it lies, preventing excessive erroneous recovery of non-building primitives;
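The recursive topological search of this step amounts to a graph traversal seeded at the building main body primitives; a sketch under assumptions (the spatial-partitioning safeguard is omitted, and the adjacency map is an assumed input built beforehand from shared vertices or edges):

```python
from collections import deque

def recover_building_primitives(body_set, removed_set, adjacency):
    """Breadth-first traversal from the building main body primitives over
    the topological adjacency graph; every reachable primitive in the
    mistakenly removed set is marked back as a building primitive.
    `adjacency` maps a primitive id to the ids topologically adjacent to it
    (sharing a vertex or an edge)."""
    building = set(body_set)
    queue = deque(body_set)
    while queue:
        p = queue.popleft()
        for q in adjacency.get(p, ()):
            if q in removed_set and q not in building:
                building.add(q)       # recover the mistakenly deleted primitive
                queue.append(q)
    return building
```

With a chain 0-1-2 attached to the body primitive 0 and an isolated pair 3-4, only the reachable primitives 1 and 2 are recovered.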
s8, outputting the model: the building live-action three-dimensional model formed by the final building primitive set, and the non-building live-action three-dimensional model formed by the remaining primitives, are output or stored. The final output building model is shown in fig. 7.
As shown in fig. 8, another embodiment of the present invention provides a primitive-oriented cloth simulation method, including:
s201: vertically flipping the live-action three-dimensional model along the Z coordinate direction, i.e. taking the opposite values of the Z coordinates of the vertices of all primitives in the set;
s202: simulating the falling of a cloth composed of particles above the flipped live-action three-dimensional model, where the initial height of all particles composing the cloth is the highest point of the flipped primitives, and the initial horizontal positions are determined by the cloth resolution (set to 0.5 m) and the outer bounding box of the live-action three-dimensional model. The cloth falls slowly under gravity;
s203: when particles of the cloth contact the live-action three-dimensional model, the cloth gradually stops moving; contact between the cloth and the live-action three-dimensional model is determined by collision detection based on ray intersection, and the efficiency of collision detection can be improved with a bounding volume hierarchy (BVH) tree structure. If the current vertical height of a cloth particle is lower than its collision point with the live-action three-dimensional model, the cloth particle is set to be immovable;
s204: the shape of the finally stationary cloth approximates the terrain; the Euclidean distance from each primitive to the cloth is then calculated, and the primitive is added to the non-ground primitive set if this distance exceeds a set threshold (set to 0.5 m), or to the ground primitive set if it is within the threshold.
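Steps S201-S204 can be condensed into the following sketch. A real cloth simulation also models inter-particle forces and ray-based collision detection, both omitted here as a simplifying assumption, so each cloth particle simply comes to rest on the highest flipped point of its grid cell:

```python
def cloth_filter(primitives, resolution=0.5, threshold=0.5):
    """Greatly simplified cloth filter over primitive centroids (x, y, z):
    flip the model in Z, rest one cloth particle per grid cell on the highest
    flipped point it meets, then classify each primitive by its distance to
    the resting cloth (internal cloth stiffness is omitted)."""
    # S201: flip the model along Z so the ground becomes the upper surface.
    flipped_z = [-z for _, _, z in primitives]
    # S202/S203: one particle per cell of size `resolution`, falling until
    # it collides with the highest flipped point in that cell.
    cloth = {}
    for (x, y, _), fz in zip(primitives, flipped_z):
        cell = (int(x // resolution), int(y // resolution))
        cloth[cell] = max(cloth.get(cell, float("-inf")), fz)
    # S204: primitives farther than `threshold` from the cloth are non-ground.
    ground, non_ground = [], []
    for (x, y, z), fz in zip(primitives, flipped_z):
        cell = (int(x // resolution), int(y // resolution))
        if cloth[cell] - fz <= threshold:
            ground.append((x, y, z))
        else:
            non_ground.append((x, y, z))
    return ground, non_ground
```

Two centroids at terrain level land in the ground set, while the rooftop centroid 5 m above them is classified as non-ground.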
As shown in fig. 9, another embodiment of the present invention also provides a primitive-based over-segmentation method, including:
s301, comprehensively considering the spatial proximity feature, surface feature and color feature of the primitives, constructing the primitive heterogeneity distance formula D(p_i, p_j) = μ1·Ds(p_i, p_j) + μ2·De(p_i, p_j) + μ3·Dc(p_i, p_j), where Ds, De and Dc are respectively the normalized surface feature difference distance, spatial proximity distance and color difference distance between two primitives, and μ1, μ2 and μ3 are the corresponding weight factors with value range [0, 1]; the user may specify these three parameters as required, and in this embodiment they take the values 0.5, 0.2 and 1 respectively. The number of vertices q of p_i is at most 3 if the primitive type is a triangle primitive; the surface feature difference distance is computed from the normal vectors of p_i and p_j, and the color difference distance Dc(p_i, p_j) between two primitives is calculated in the CIE Lab linear color space from the average texture color value of each primitive in CIE Lab space;
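A sketch of the weighted heterogeneity distance; since the exact normalisations of Ds, De and Dc are not fully recoverable from the text, the per-term forms below (normal-angle difference, centroid distance, Lab distance) are illustrative assumptions:

```python
import math

def heterogeneity_distance(pi, pj, mu=(0.5, 0.2, 1.0)):
    """Weighted heterogeneity distance D = mu1*Ds + mu2*De + mu3*Dc.
    Each primitive is a dict with 'normal', 'centroid' and 'lab' entries."""
    def norm(v):
        return math.sqrt(sum(c * c for c in v))
    # Ds: surface feature difference, here 1 - |cos angle| between normals.
    dot = sum(a * b for a, b in zip(pi["normal"], pj["normal"]))
    ds = 1.0 - abs(dot / (norm(pi["normal"]) * norm(pj["normal"])))
    # De: spatial proximity, Euclidean distance between centroids.
    de = norm([a - b for a, b in zip(pi["centroid"], pj["centroid"])])
    # Dc: color difference in CIE Lab space.
    dc = norm([a - b for a, b in zip(pi["lab"], pj["lab"])])
    m1, m2, m3 = mu
    return m1 * ds + m2 * de + m3 * dc
```

The weights (0.5, 0.2, 1.0) match the embodiment's choice of μ1, μ2 and μ3; a primitive has zero distance to itself.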
The normal vector of each primitive is calculated as follows: taking a triangle primitive as an example, the normal vector of the primitive is computed from the spatial coordinates of its three vertices v1, v2 and v3 as the cross product n = (v2 − v1) × (v3 − v1).
The average color texture value of a primitive p_i may be calculated with a scan-line algorithm, specifically (taking texture value calculation in the Adobe RGB color space as an example): the spatial extent of primitive p_i in the y direction and the number of scan lines are computed. From top to bottom, for each scan line, all edges of the primitive are intersected with that scan line, and the abscissas obtained are sorted from left to right; an edge crossed an odd number of times by the scan line is an incoming edge and an edge crossed an even number of times is an outgoing edge. The spatial coordinates of the pixels on the scan line between an incoming edge and an outgoing edge are then interpolated. The UV coordinates of all pixel points in the primitive are calculated by the barycentric coordinate method, the U and V coordinates being respectively U = (Sc·U1 + Sb·U2 + Sa·U3)/St and V = (Sc·V1 + Sb·V2 + Sa·V3)/St, where Sa is the area of the triangle formed by the pixel point and the primitive vertices v1 and v2, Sb is the area of the triangle formed by the pixel point and vertices v1 and v3, Sc is the area of the triangle formed by the pixel point and vertices v2 and v3, U1, U2, U3, V1, V2 and V3 are the UV coordinates of v1, v2 and v3 respectively, and St = Sa + Sb + Sc. After all UV coordinates are solved, texture values are fetched from the texture image corresponding to the primitive using the UV coordinates, the average of all texture values is taken as the texture value of the primitive, and the Adobe RGB color space is then converted to the CIE Lab color space;
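The barycentric UV interpolation described above can be sketched directly from the area weights Sa, Sb and Sc, each vertex's UV being weighted by the area of the sub-triangle opposite it:

```python
def tri_area(a, b, c):
    """Area of triangle abc in 2D (absolute value of half the cross product)."""
    return abs((b[0] - a[0]) * (c[1] - a[1]) - (c[0] - a[0]) * (b[1] - a[1])) / 2.0

def barycentric_uv(p, v1, v2, v3, uv1, uv2, uv3):
    """UV of pixel p inside triangle (v1, v2, v3) by the area method:
    Sa = area(p, v1, v2), Sb = area(p, v1, v3), Sc = area(p, v2, v3),
    St = Sa + Sb + Sc; each vertex's UV is weighted by the sub-triangle
    area opposite it."""
    sa = tri_area(p, v1, v2)   # opposite v3
    sb = tri_area(p, v1, v3)   # opposite v2
    sc = tri_area(p, v2, v3)   # opposite v1
    st = sa + sb + sc
    u = (sc * uv1[0] + sb * uv2[0] + sa * uv3[0]) / st
    v = (sc * uv1[1] + sb * uv2[1] + sa * uv3[1]) / st
    return u, v

# The centroid of a triangle interpolates the vertex UVs with equal weights.
u, v = barycentric_uv((1/3, 1/3), (0, 0), (1, 0), (0, 1),
                      (0, 0), (1, 0), (0, 1))
```

At the triangle's centroid the three sub-triangle areas are equal, so the interpolated UV is (1/3, 1/3).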
s302, constructing, based on the primitive heterogeneity distance formula D(·), the heterogeneity cost function J(r_ij) of the clusters, used to measure the sum of the heterogeneity costs of all clusters, where r_ij ∈ {0, 1}; r_ij = 1 represents that primitive p_i is the center primitive of a cluster, and this cluster contains all non-center primitives satisfying r_ij = 0. J(r_ij) is subject to a constraint expressed with the indicator function I(·): taking x as an example, if x = 1 then I(x) = 1, and conversely I(x) = 0; k represents the expected number of clusters;
s303, constructing and solving an energy optimization function E(r_ij) based on the heterogeneity cost function J(·) and its constraints, thereby over-segmenting the live-action three-dimensional model into a set of k clusters with uniform properties and regular boundaries. A bottom-up merging-based energy minimization method may be selected for solving the energy equation; each primitive outside the center primitive set is then assigned to a cluster according to a mapping function, where D(p_j, cp_i) is the heterogeneity distance between primitive p_j and center primitive cp_i, so that each primitive is classified with the cluster center primitive to which its heterogeneity distance is minimal;
In the bottom-up merging-based energy minimization method, a regularization term is first added to the energy optimization function E(r_ij), with λ a regularization parameter; a larger value of λ makes the final cluster number deviate less from k, but reduces the weight of the heterogeneity distance metric. The initial value of the regularization parameter λ is set to the median of the lowest heterogeneity distance values between each primitive and its neighbors, after which it doubles at each iteration. Initially, every primitive is set as the center primitive of its own cluster, and the center primitives are merged bottom-up continuously until the number of clusters is reduced to k.
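A minimal sketch of bottom-up merging until k clusters remain; the λ regularisation schedule described above is omitted, and the inter-cluster distance is taken as the minimum heterogeneity distance over member pairs, both simplifying assumptions:

```python
def merge_to_k_clusters(dist_matrix, k):
    """Bottom-up merging sketch: every primitive starts as its own cluster
    centre, and the pair of clusters with the lowest heterogeneity distance
    is merged repeatedly until only k clusters remain.
    `dist_matrix[i][j]` is the heterogeneity distance between primitives i
    and j; inter-cluster distance is the minimum over member pairs."""
    clusters = [{i} for i in range(len(dist_matrix))]
    while len(clusters) > k:
        best = None
        for a in range(len(clusters)):
            for b in range(a + 1, len(clusters)):
                d = min(dist_matrix[i][j]
                        for i in clusters[a] for j in clusters[b])
                if best is None or d < best[0]:
                    best = (d, a, b)
        _, a, b = best
        clusters[a] |= clusters[b]   # merge the closest pair of clusters
        del clusters[b]
    return clusters
```

With two tightly-knit pairs of primitives and large cross-pair distances, merging down to k = 2 recovers exactly those pairs.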
As shown in fig. 10, another embodiment of the present invention also provides a cluster-based planar feature detection method, including:
s401: for the cluster set, selecting a seed cluster as the initial element of a plane S_m and removing it from the cluster set, the plane S_m being formed from a subset of the cluster set. In this embodiment, the centroid of each cluster in the set is calculated, the curvature is then computed at all cluster centroids, the clusters are sorted from small to large by centroid curvature, and the seed clusters are then selected from the sorted clusters in order;
s402: calculating the set of 8 neighboring clusters around the seed cluster; for each neighboring cluster, judging according to the cosine similarity measure criterion whether it has the same properties as the seed cluster. If the cosine similarity measure criterion is satisfied, the neighboring cluster is merged into the plane S_m in which the seed cluster lies and is removed from the cluster set at the same time; if the criterion is not satisfied, no operation is performed on that cluster, where θ is an angle threshold;
The cosine similarity measure criterion is that the angle between the normal vectors of the seed cluster and the neighboring cluster is within the threshold θ, i.e. the absolute cosine of the angle between the two cluster normal vectors is not less than cos θ; the normal vector of a cluster is calculated as the average of the normal vectors of all primitives in the cluster;
s403: taking the clusters newly merged into the plane S_m in step S402 one by one as new seed clusters of the plane S_m and iteratively executing step S402 until no cluster satisfies the cosine similarity measure criterion;
s404: iteratively executing the process of S401-S403 until the cluster set is empty, saving the plane S_m detected in each iteration to form a candidate plane feature set S. The candidate plane feature set S is post-processed: candidate planes in S containing fewer than 3 clusters are removed, and the remaining candidate planes are taken as the final plane detection result.
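Steps S401-S404 amount to region growing over a cluster adjacency graph with a normal-angle criterion; a sketch under assumptions (seed ordering by centroid curvature is omitted, and cluster normals are assumed unit length):

```python
import math

def grow_planes(normals, neighbors, theta_deg=10.0, min_clusters=3):
    """Region-growing sketch of S401-S404: clusters are visited as seeds,
    neighboring clusters whose normals differ by less than `theta_deg`
    degrees are merged into the seed's plane, and planes with fewer than
    `min_clusters` clusters are discarded. `normals` maps a cluster id to
    its unit normal; `neighbors` maps it to adjacent cluster ids."""
    cos_theta = math.cos(math.radians(theta_deg))
    unassigned = set(normals)
    planes = []
    while unassigned:
        seed = unassigned.pop()
        plane, frontier = [seed], [seed]
        while frontier:
            c = frontier.pop()
            for n in neighbors.get(c, ()):
                if n in unassigned:
                    dot = abs(sum(a * b for a, b in zip(normals[c], normals[n])))
                    if dot >= cos_theta:   # cosine similarity criterion
                        unassigned.remove(n)
                        plane.append(n)
                        frontier.append(n)
        planes.append(plane)
    # Post-processing: drop candidate planes with fewer than 3 clusters.
    return [p for p in planes if len(p) >= min_clusters]
```

A chain of four coplanar clusters grows into one plane, while a single cluster with a perpendicular normal is discarded in the post-processing step.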
As shown in fig. 11, another embodiment of the present invention further provides a building extraction apparatus for a live-action three-dimensional model, including:
the live-action three-dimensional model analysis module: used for inputting a live-action three-dimensional model, splicing all tiles of the obtained live-action three-dimensional model into a complete three-dimensional model, and parsing the live-action three-dimensional model into a geometric primitive set;
a ground filtering module: the parameters and primitive set for ground filtering are input, and the input primitive set is separated into a ground primitive set and a non-ground primitive set;
an over-segmentation module: the parameters and the primitive set for inputting over-segmentation are used for clustering the input primitive set into a cluster set with uniform properties and regular boundaries;
the plane feature detection module: inputting parameters and cluster or element sets for plane feature detection by a user, and clustering the input cluster sets or element sets into plane sets with regular boundaries;
non-building plane greedy culling module: used for inputting greedy elimination parameters and the non-ground primitive set, and greedily eliminating non-building primitives such as green vegetation and urban furniture to obtain a set representing the building main body structure and a non-building plane set;
building primitive greedy restoration module: used for inputting greedy restoration parameters, the set representing the building main body structure and the non-building plane set, performing greedy restoration on the building main body structure set based on the topological adjacency relation between primitives, and recovering the building primitives mistakenly deleted in the non-building plane greedy culling module to obtain the final building primitive set that takes integrity into account;
an output module: used for inputting the model storage path, and outputting the building live-action three-dimensional model and the non-building live-action three-dimensional model.
Wherein, the real-scene three-dimensional model analysis module includes:
an input unit: according to a file path of a live-action three-dimensional model input by a user, if the model is stored in a paging LOD (level of detail) tile mode, recursively traversing to obtain all leaf node tiles of the model, namely three-dimensional model tiles with the highest resolution, and splicing the tiles into a complete live-action three-dimensional model;
a model analysis unit: analyzing the spliced live-action three-dimensional model into a geometric primitive set, and storing information such as indexes, vertex coordinates and texture mapping coordinates of each primitive;
an output unit: and outputting the analyzed primitive set.
A floor filtration module comprising:
an input unit: used for inputting the ground filtering parameters and the geometric primitive set of the live-action three-dimensional model;
a calculation unit: according to the input parameters and primitive set, executing ground filtering calculation;
an output unit: and outputting the calculated ground primitive set and the non-ground primitive set.
An over-segmentation module comprising:
an input unit: used for inputting the over-segmentation parameters and the non-ground geometric primitive set;
a calculation unit: performing over-segmentation calculation according to the input parameters and the primitive set;
an output unit: and outputting a cluster set with uniform calculated properties and regular boundaries.
A planar feature detection module comprising:
an input unit: a set of parameters and clusters for inputting the plane feature detection module;
a calculation unit: according to the input parameters and the primitive set, performing plane feature detection calculation;
an output unit: and outputting the calculated plane feature set with regular boundaries.
A non-building planar greedy culling module, comprising:
an input unit: used for inputting the greedy elimination parameters, the plane set and the ground primitive set;
a calculation unit: according to the input parameters and the primitive set, executing green vegetation elimination based on color features and low land feature elimination calculation based on relative ground elevation to obtain a primitive set representing the main structure of the building;
an output unit: and outputting the calculated primitive set representing the building main body structure.
A building cell greedy restoration module, comprising:
an input unit: the method comprises the steps of inputting parameters for greedy recovery, a primitive set representing a building body structure and a non-building primitive set;
a calculation unit: executing building element greedy restoration calculation based on topological adjacency relation according to the input parameters and element set to obtain a final building geometric element with a complete structure;
an output unit: and outputting the building geometric primitive with complete structure.
As shown in fig. 12 and 13, another embodiment of the present invention further provides a building extraction device for a live-action three-dimensional model, the device comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor, when executing the computer program, realizes the steps of the building extraction method for a live-action three-dimensional model, including:
step 1, preprocessing and analyzing the live-action three-dimensional model to splice all tiles of the live-action three-dimensional model into a complete three-dimensional model, and analyzing the live-action three-dimensional model into a geometric primitive set
Step 2, separating the ground elements and non-ground elements of the real three-dimensional model, and collecting the elementsSeparation into sets of ground primitivesAnd non-ground primitive set
Step 3, non-ground primitive setPerforming over-segmentation to cluster a large number of primitives into individual propertiesSet of symmetric, edge-normalized k clusters
Step 4, clustering the clustersPerforming plane feature detection to rapidly generate a set of l planes with regular boundaries
Step 5, the plane set is alignedAnd performing greedy elimination on non-building planes, namely eliminating the non-building planes such as vegetation, urban furniture, trees, vehicles and the like to the maximum extent. In the removing process, on the basis of ensuring the integrity of the main structure of the building, the plane of a part of the building is allowed to be removed by mistake. Obtaining a building main body structure plane set after eliminationAnd non-building plane sets
Step 6, obtaining a building body structure plane set after greedy eliminationAll primitives containedBased on the above, greedy recovery is performed by using topological adjacency relation between primitives, so as to recover the primitive set contained in the building plane that is mistakenly eliminated in the step 5Thereby obtainingFinal integrity-considered building element set
Step 7, output or saveThe three-dimensional model of the building scene formed byAnd forming a non-building real-scene three-dimensional model.
It should be noted that: the building extraction apparatus for a realistic three-dimensional model provided in the above embodiments is only exemplified by the division of the above program modules when performing building extraction for a realistic three-dimensional model, and in practical applications, the above processing allocation may be completed by different program modules according to needs, that is, the internal structure of the device is divided into different program modules to complete all or part of the above-described processing. In addition, the building extraction device for the live-action three-dimensional model and the building extraction method for the live-action three-dimensional model provided in the above embodiments belong to the same concept, and the specific implementation process thereof is detailed in the method embodiment and will not be described herein again.
The memory in embodiments of the present invention is used to store various types of data to support the operation of the building extraction apparatus for live-action three-dimensional models. Examples of such data include: any computer program for operating on a building extraction electronic device for a live-action three-dimensional model.
The building extraction method for the live-action three-dimensional model disclosed by the embodiment of the invention can be applied to a processor or realized by the processor. The processor may be an integrated circuit chip having signal processing capabilities. In implementation, the steps of the building extraction method for the live-action three-dimensional model may be implemented by instructions in the form of integrated logic circuits of hardware or software in the processor. The processor may be a general purpose processor, a Digital Signal Processor (DSP), or other programmable logic device, discrete gate or transistor logic device, discrete hardware components, or the like. The processor may implement or perform the methods, steps, and logic blocks disclosed in the embodiments of the invention. A general purpose processor may be a microprocessor or any conventional processor or the like. The steps of the method disclosed by the embodiment of the invention can be directly implemented by a hardware decoding processor, or can be implemented by combining hardware and software modules in the decoding processor. The software module may be located in a storage medium located in the memory, and the processor reads the information in the memory, and completes the steps of the building extraction method for the live-action three-dimensional model provided in the embodiment of the present invention in combination with hardware thereof.
The building extraction device for the live-action three-dimensional model in the exemplary embodiment may be implemented by one or more Application Specific Integrated Circuits (ASICs), DSPs, Programmable Logic Devices (PLDs), Complex Programmable Logic Devices (CPLDs), FPGAs, general purpose processors, controllers, Micro Controller Units (MCUs), microprocessors, or other electronic components for performing the aforementioned methods.
It will be appreciated that the memory can be either volatile memory or nonvolatile memory, and can include both volatile and nonvolatile memory. The nonvolatile memory may be a Read Only Memory (ROM), a Programmable Read-Only Memory (PROM), an Erasable Programmable Read-Only Memory (EPROM), an Electrically Erasable Programmable Read-Only Memory (EEPROM), a Ferroelectric Random Access Memory (FRAM), a Flash Memory, a magnetic surface memory, an optical disc, or a Compact Disc Read-Only Memory (CD-ROM); the magnetic surface memory may be disk storage or tape storage. The volatile memory may be a Random Access Memory (RAM), which acts as an external cache. By way of illustration and not limitation, many forms of RAM are available, such as Static Random Access Memory (SRAM), Synchronous Static Random Access Memory (SSRAM), Dynamic Random Access Memory (DRAM), Synchronous Dynamic Random Access Memory (SDRAM), Double Data Rate Synchronous Dynamic Random Access Memory (DDR SDRAM), Enhanced Synchronous Dynamic Random Access Memory (ESDRAM), SyncLink Dynamic Random Access Memory (SLDRAM), and Direct Rambus Random Access Memory (DRRAM). The memory described in the embodiments of the present invention is intended to comprise, without being limited to, these and any other suitable types of memory.
In an exemplary embodiment, the embodiment of the present invention further provides a storage medium, specifically a computer storage medium, which may be a computer readable storage medium, for example a memory storing a computer program; the computer program is executable by a processor of the building extraction device for a live-action three-dimensional model to complete the steps of the building extraction method for a live-action three-dimensional model. The computer readable storage medium may be a ROM, PROM, EPROM, EEPROM, Flash Memory, magnetic surface memory, optical disc, or CD-ROM, among others.
In conclusion, the building extraction method of the invention realizes automatic extraction of buildings from the live-action three-dimensional model; the extracted buildings have complete structures and no primitive defects, the method has an extremely high recall rate and a high degree of automation, and the manual workload and cost of objectification and singulation of the live-action three-dimensional model are greatly reduced.
It should be understood that parts of the specification not set forth in detail are of the prior art.
It should be understood that the above description of embodiments is illustrative and is not to be construed as limiting the scope of the invention, which is defined by the appended claims. Any modification, equivalent replacement, improvement and the like made without departing from the scope of the invention as defined in the claims falls within the protection scope of the invention.
Claims (12)
1. A building extraction method for a live-action three-dimensional model is characterized by comprising the following steps:
step 1, preprocessing and analyzing the live-action three-dimensional model to splice all tiles of the live-action three-dimensional model into a three-dimensional model, and analyzing the live-action three-dimensional model into a geometric primitive set
Step 2, separating ground elements and non-ground elements of the live-action three-dimensional model, and collecting the elementsSeparation into sets of ground primitivesAnd non-ground primitive set
Step 3, for non-ground primitive setPerforming over-segmentation to cluster a plurality of primitives into a set of k clusters with uniform properties and regular boundaries
Step 4, clustering the clustersPerforming plane feature detection to rapidly generate a set of l planes with regular boundaries
Step 5, the plane set is alignedCarrying out greedy elimination on non-building planes to obtain a building body structure plane setAnd non-building plane sets
Step 6, obtaining a building body structure plane set after greedy eliminationAll primitives containedBased on the above, greedy recovery is performed by using topological adjacency relation between primitives, so as to recover the primitive set contained in the building plane that is mistakenly eliminated in the step 5Thereby obtaining a final set of building elements with integrity taken into account
2. The building extraction method for the live-action three-dimensional model according to claim 1, characterized in that: step 2, separating ground elements and non-ground elements of the live-action three-dimensional model by adopting an element-oriented cloth simulation method, wherein the specific implementation mode is as follows;
s201: vertically flipping the live-action three-dimensional model along the Z coordinate direction, i.e. taking the opposite values of the Z coordinates of the vertices of all primitives in the set;
s202: simulating the falling of a cloth composed of particles above the flipped live-action three-dimensional model, where the initial height of all particles composing the cloth is the highest point of the flipped primitives, the initial horizontal positions are determined by the cloth resolution and the outer bounding box of the live-action three-dimensional model, and the cloth falls slowly under gravity;
s203: the cloth gradually stops moving after its particles contact the live-action three-dimensional model; contact between the cloth and the live-action three-dimensional model is realized through collision detection judgment based on ray intersection, and if the current vertical height of a cloth particle is lower than its collision point with the live-action three-dimensional model, the cloth particle is set to be immovable;
s204: the shape of the finally stationary cloth approximates the terrain; the Euclidean spatial distance from each primitive to the cloth is then calculated, and the primitive is added to the non-ground primitive set if this distance exceeds a set threshold, or to the ground primitive set if it is within the threshold.
3. The building extraction method for the live-action three-dimensional model according to claim 1, characterized in that: the specific implementation manner of the step 3 is as follows;
s301, comprehensively considering the spatial proximity characteristic, the surface characteristic and the color characteristic of the element, and constructing an element heterogeneity distance formula D (·);
s302, constructing, based on the primitive heterogeneity distance formula D(·), the heterogeneity cost function J(r_ij) of the clusters, used to measure the sum of the heterogeneity costs of all clusters, where r_ij ∈ {0, 1}; r_ij = 1 represents that primitive p_i is the center primitive of a cluster, and this cluster contains all non-center primitives satisfying r_ij = 0; J(r_ij) is subject to a constraint expressed with the indicator function I(·), k represents the expected number of clusters, D(p_i, p_j) represents the heterogeneity distance between the i-th primitive p_i and the j-th primitive p_j, and n is the number of primitives;
4. The building extraction method for the live-action three-dimensional model according to claim 3, characterized in that: the primitive heterogeneity distance formula D (-) is calculated as follows;
D(p_i, p_j) = μ1·Ds(p_i, p_j) + μ2·De(p_i, p_j) + μ3·Dc(p_i, p_j)
In the formula, Ds, De and Dc are respectively the normalized surface feature difference distance, spatial proximity distance and color difference distance between two primitives, and μ1, μ2 and μ3 are the corresponding weight factors with value interval [0, 1]; the number of vertices q of p_i is at most 3 if the primitive type is a triangle primitive; the surface feature difference distance is computed from the normal vectors of p_i and p_j, and the color difference distance Dc(p_i, p_j) between two primitives is calculated in the CIE Lab linear color space from the average texture color value of each primitive in CIE Lab space.
5. The building extraction method for the live-action three-dimensional model according to claim 4, characterized in that: the normal vector of a triangle primitive is calculated from the spatial coordinates of its three vertices v1, v2 and v3 as the cross product n = (v2 − v1) × (v3 − v1);
the average color texture value is calculated as follows: the spatial extent of primitive p_i in the y direction and the number of scan lines are computed in the Adobe RGB color space; from top to bottom, for each scan line, all edges of the primitive are intersected with that scan line and the abscissas obtained are sorted from left to right, an edge crossed an odd number of times by the scan line being an incoming edge and an edge crossed an even number of times being an outgoing edge; then the spatial coordinates of the pixels on the scan line between the incoming edge and the outgoing edge are calculated by interpolation, and the UV coordinates of all pixel points in the primitive are calculated by the barycentric coordinate method, the U and V coordinates being respectively U = (Sc·U1 + Sb·U2 + Sa·U3)/St and V = (Sc·V1 + Sb·V2 + Sa·V3)/St, where Sa is the area of the triangle formed by the pixel point and the primitive vertices v1 and v2, Sb is the area of the triangle formed by the pixel point and vertices v1 and v3, Sc is the area of the triangle formed by the pixel point and vertices v2 and v3, U1, U2, U3, V1, V2 and V3 are the UV coordinates of v1, v2 and v3 respectively, and St = Sa + Sb + Sc; after all UV coordinates are solved, texture values are acquired from the texture image corresponding to the primitive using the UV coordinates, the average of all texture values is taken as the texture value of the primitive, and the Adobe RGB color space is then converted to the CIE Lab color space.
6. The building extraction method for the live-action three-dimensional model according to claim 3, characterized in that: the energy optimization function is solved by a bottom-up merging-based energy minimization method to obtain the cluster-center primitive set {cp_1, cp_2, …, cp_k}; each primitive p_j outside the center set is then assigned to a cluster according to the mapping function p_j → cp_i with i = argmin_i D(p_j, cp_i), where D(p_j, cp_i) is the heterogeneity distance between primitive p_j and center primitive cp_i, so that each primitive has the minimum heterogeneity distance to the center primitive of the cluster into which it is classified;
the bottom-up merging-based energy minimization method is implemented as follows: first, a regularization term is added to the energy optimization function E(r_ij), i.e. E′(r_ij) = E(r_ij) + λ, where λ is the regularization parameter; the initial value of λ is set to the median of the lowest heterogeneity distance between each primitive and its neighboring primitives, and λ is then doubled at each iteration; initially, all primitives are set as cluster-center primitives, and the center primitives are merged bottom-up continuously until the number of clusters is reduced to k.
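The merging loop can be sketched in a simplified one-dimensional form (an assumption for illustration only: primitives are scalar values and the heterogeneity distance is their absolute difference; all names are illustrative):

```python
import statistics

def merge_to_k_clusters(values, k):
    """Bottom-up merging sketch: every value starts as its own cluster
    center; the closest pair of centers is merged when their distance is
    within the regularization parameter lam, otherwise lam is doubled,
    until only k clusters remain."""
    clusters = [[v] for v in values]
    center = lambda c: sum(c) / len(c)
    # Initial lam: median of each value's lowest distance to the others
    nn = [min(abs(a - b) for j, b in enumerate(values) if j != i)
          for i, a in enumerate(values)]
    lam = statistics.median(nn)
    while len(clusters) > k:
        d, i, j = min((abs(center(clusters[a]) - center(clusters[b])), a, b)
                      for a in range(len(clusters))
                      for b in range(a + 1, len(clusters)))
        if d <= lam:
            clusters[i].extend(clusters.pop(j))  # merge the closest pair
        else:
            lam *= 2.0  # double the regularization parameter and retry
    return clusters
```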
7. The building extraction method for the live-action three-dimensional model according to claim 1, characterized in that: the specific implementation of step 4 is as follows:
S401: a cluster is selected from the cluster set as the seed cluster of plane S_m, and the seed cluster is removed from the cluster set, wherein plane S_m is a subset of the cluster set;
in S401, the centroid of each cluster in the cluster set is first calculated, the curvature is then computed at all cluster centroids, the clusters are sorted from small to large according to centroid curvature, and seed clusters are selected from the cluster set in that order;
S402: the set of n1 neighboring clusters around the seed cluster is calculated; for each neighboring cluster, whether it has the same properties as the seed cluster is judged according to the cosine similarity measure criterion; if the criterion is satisfied, the neighboring cluster is merged into the plane S_m in which the seed cluster lies and is simultaneously removed from the cluster set; if the criterion is not satisfied, no operation is performed on that cluster; here θ is the angle threshold and D_s denotes the cosine similarity measure;
S403: each cluster newly merged into the plane S_m during S402 is taken one by one as a new seed cluster of plane S_m, and step S402 is executed iteratively until no cluster satisfies the cosine similarity measure criterion;
S404: the process of S401 to S403 is executed iteratively until the cluster set is empty; the plane S_m detected in each iteration is saved to form the candidate plane feature set S; the set S is then post-processed by removing candidate planes containing fewer than n2 clusters, and the remaining candidate planes are taken as the final plane detection result.
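The seed-growing loop of S401–S404 can be sketched as region growing over cluster normals (a simplified sketch under stated assumptions: each cluster carries a precomputed unit normal and neighbor list, seeds are taken in index order as if pre-sorted by centroid curvature, and all names are illustrative):

```python
import math

def detect_planes(normals, neighbors, theta_deg, n2):
    """Region growing over clusters: grow each plane while neighboring
    clusters pass the cosine similarity test on unit normals, then drop
    candidate planes with fewer than n2 clusters."""
    cos_t = math.cos(math.radians(theta_deg))
    unassigned = set(range(len(normals)))
    planes = []
    while unassigned:
        seed = min(unassigned)          # S401: next seed cluster
        unassigned.discard(seed)
        plane, frontier = [seed], [seed]
        while frontier:                 # S402/S403: grow from new seeds
            c = frontier.pop()
            for nb in neighbors[c]:
                if nb in unassigned:
                    dot = sum(a * b for a, b in zip(normals[c], normals[nb]))
                    if dot >= cos_t:    # cosine similarity criterion
                        unassigned.discard(nb)
                        plane.append(nb)
                        frontier.append(nb)
        planes.append(plane)
    # S404: post-processing, drop candidate planes smaller than n2
    return [p for p in planes if len(p) >= n2]
```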
8. The building extraction method for the live-action three-dimensional model according to claim 1, characterized in that: the specific implementation of step 5 is as follows:
first, green vegetation is removed based on color features, specifically: the texture values of all primitives in each plane are calculated, and their average is taken as the texture value of the plane; the excess-green minus excess-red index of each plane is then computed as ExG − ExR = 3g − 2.4r − b, where r, g and b are the color components of the plane; the optimal elimination threshold t for ExG − ExR is then calculated automatically by the maximum between-class variance (Otsu) method, and if the ExG − ExR value of a plane is greater than the threshold t, the plane is regarded as a green vegetation plane and is eliminated;
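The two pieces of this step can be sketched as follows (illustrative only; the 64-bin histogram used for the Otsu-style threshold is an assumption, not from the patent):

```python
def exg_exr(r, g, b):
    """Excess-green minus excess-red index of a plane's mean color."""
    return 3.0 * g - 2.4 * r - b

def otsu_threshold(values, bins=64):
    """Maximum between-class variance (Otsu) threshold over scalar
    index values, via a simple histogram sweep."""
    lo, hi = min(values), max(values)
    width = (hi - lo) / bins or 1.0
    hist = [0] * bins
    for v in values:
        hist[min(int((v - lo) / width), bins - 1)] += 1
    total = len(values)
    s_all = sum((lo + (i + 0.5) * width) * h for i, h in enumerate(hist))
    best_t, best_var = lo, -1.0
    w0 = s0 = 0.0
    for i in range(bins - 1):
        w0 += hist[i]
        s0 += (lo + (i + 0.5) * width) * hist[i]
        w1 = total - w0
        if w0 == 0 or w1 == 0:
            continue
        m0, m1 = s0 / w0, (s_all - s0) / w1
        var = w0 * w1 * (m0 - m1) ** 2   # between-class variance
        if var > best_var:
            best_var, best_t = var, lo + (i + 1) * width
    return best_t
```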
then, short ground objects are filtered out using the height relative to the ground, specifically: the centroid of each ground primitive in the ground primitive set separated in step 3 is calculated, and a KDTree spatial index structure is established; the relative ground elevation of each primitive in each plane is calculated as the difference between the elevation of the primitive centroid and the average elevation of the centroids of its k3 nearest ground primitives, the neighboring ground-primitive centroids being retrieved through the KDTree spatial index structure; the maximum relative ground elevation over all primitives in a plane is taken as the relative ground elevation of the plane, and if this value is less than the threshold k4, the plane is regarded as a short ground-object plane and is eliminated; after the short ground-object planes are eliminated, the building main-structure plane set and the non-building plane set are obtained.
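The relative-ground-elevation computation can be sketched as follows (illustrative; brute-force nearest-neighbor search stands in for the patent's KDTree index, which a real implementation would use for efficiency):

```python
def relative_ground_elevation(plane_centroids, ground_centroids, k3):
    """Relative ground elevation of a plane: for each primitive centroid,
    the difference between its z value and the mean z of its k3 nearest
    ground centroids (by x, y distance); the plane's value is the maximum
    over its primitives."""
    def rel_height(c):
        # k3 nearest ground centroids by squared 2-D distance
        nearest = sorted(ground_centroids,
                         key=lambda g: (g[0] - c[0]) ** 2 + (g[1] - c[1]) ** 2)[:k3]
        mean_ground_z = sum(g[2] for g in nearest) / len(nearest)
        return c[2] - mean_ground_z
    return max(rel_height(c) for c in plane_centroids)
```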
9. The building extraction method for the live-action three-dimensional model according to claim 1, characterized in that: the specific implementation of step 6 is as follows:
for each primitive in the building main-structure plane set, the set of primitives topologically adjacent to it is searched, and the primitives in that set are marked as building primitives; in the same way, the topologically adjacent primitives of each newly marked primitive are searched and marked as building primitives, and the search continues recursively until all topologically reachable primitives have been searched out from each primitive; a primitive topologically adjacent to p_i is a primitive sharing a vertex with p_i or a primitive sharing an edge with p_i; to prevent non-building primitives from being recovered by mistake in large numbers because a very small number of non-building primitives remain, the topological relations are partitioned uniformly in space, and the traversal of the topologically reachable primitives of each primitive is restricted to the partition in which that primitive is located.
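The recursive topology-reachable marking can be sketched as a breadth-first traversal over a vertex-to-primitive index (illustrative names; the spatial-partitioning safeguard described above is omitted for brevity, and shared-edge adjacency is subsumed by shared-vertex adjacency):

```python
from collections import deque

def recover_building_primitives(triangles, seed_ids):
    """Greedy recovery sketch: starting from primitives already marked as
    building (seed_ids), mark every primitive reachable through shared
    vertices. triangles maps primitive id -> tuple of 3 vertex ids."""
    # Index: vertex id -> set of primitive ids using that vertex
    by_vertex = {}
    for pid, verts in triangles.items():
        for v in verts:
            by_vertex.setdefault(v, set()).add(pid)
    building = set(seed_ids)
    queue = deque(seed_ids)
    while queue:
        pid = queue.popleft()
        for v in triangles[pid]:
            for nb in by_vertex[v]:       # all primitives sharing vertex v
                if nb not in building:
                    building.add(nb)
                    queue.append(nb)
    return building
```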
10. A building extraction device for a live-action three-dimensional model is characterized by comprising the following modules:
the live-action three-dimensional model parsing module: used for inputting the live-action three-dimensional model, splicing all tiles of the obtained live-action three-dimensional model into one three-dimensional model, and parsing the live-action three-dimensional model into a geometric primitive set;
a ground filtering module: used for inputting the ground filtering parameters and the primitive set, and separating the input primitive set into a ground primitive set and a non-ground primitive set;
an over-segmentation module: used for inputting the over-segmentation parameters and the primitive set, and clustering the input primitive set into a cluster set with uniform properties and regular boundaries;
a plane feature detection module: used for inputting the plane feature detection parameters and a cluster set or primitive set, and clustering the input cluster set or primitive set into a plane set with regular boundaries;
a non-building plane greedy culling module: used for inputting the greedy culling parameters and the non-ground primitive set, and greedily culling non-building primitives to obtain a set representing the building main structure and a non-building plane set;
a building primitive greedy recovery module: used for inputting the greedy recovery parameters, the set representing the building main structure and the non-building plane set, performing greedy recovery on the building main-structure set based on the topological adjacency relations among primitives, and recovering the building primitives mistakenly deleted by the non-building plane greedy culling module to obtain the final building primitive set;
an output module: used for inputting the model storage path, and outputting the building live-action three-dimensional model and the non-building live-action three-dimensional model.
11. An electronic device comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, characterized in that: the processor, when executing the computer program, implements the steps of the building extraction method for the live-action three-dimensional model according to any one of claims 1 to 9.
12. A computer readable storage medium storing computer software instructions, characterized in that: the computer software instructions, when executed, implement the steps of the building extraction method for the live-action three-dimensional model according to any one of claims 1 to 9.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211223721.XA CN115661398A (en) | 2022-10-08 | 2022-10-08 | Building extraction method, device and equipment for live-action three-dimensional model |
Publications (1)
Publication Number | Publication Date |
---|---|
CN115661398A true CN115661398A (en) | 2023-01-31 |
Family
ID=84985545
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202211223721.XA Pending CN115661398A (en) | 2022-10-08 | 2022-10-08 | Building extraction method, device and equipment for live-action three-dimensional model |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115661398A (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116109522A (en) * | 2023-04-10 | 2023-05-12 | 北京飞渡科技股份有限公司 | Contour correction method, device, medium and equipment based on graph neural network |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||