CN115330662A - Model fusion method and device, electronic equipment and readable storage medium


Info

Publication number: CN115330662A
Application number: CN202110512003.3A
Authority: CN (China)
Prior art keywords: model, target, live-action, BIM
Priority date / Filing date: 2021-05-11
Legal status: Pending (an assumption, not a legal conclusion)
Other languages: Chinese (zh)
Inventors: 余伟巍, 平聪聪, 孙翔
Current Assignee: Glodon Co Ltd
Original Assignee: Glodon Co Ltd
Application filed by Glodon Co Ltd; priority to CN202110512003.3A

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/0002 - Inspection of images, e.g. flaw detection
    • G06T5/00 - Image enhancement or restoration
    • G06T5/50 - Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/20 - Special algorithmic details
    • G06T2207/20212 - Image combination
    • G06T2207/20221 - Image fusion; Image merging


Abstract

The invention relates to the technical field of BIM model adjustment, and discloses a model fusion method and device, an electronic device, and a readable storage medium. The method comprises the following steps: acquiring a target BIM (building information modeling) model and a target live-action model corresponding to a three-dimensional live-action image, wherein the target live-action model has a target cutting area corresponding to the target BIM model; constructing a stretched body model based on the target BIM model; determining a target terrain surface model corresponding to the target live-action model based on the features of the target live-action model; cutting the stretched body model along the boundary of the target terrain surface model to obtain a target stretched body model; and superimposing the target BIM model and the target stretched body model in the target live-action model containing the target cutting area to obtain a fusion model of the target live-action model and the target BIM model. By implementing the method, automatic, seamless fusion of the live-action model and the BIM model is achieved without depending on operators, which saves labor cost, shortens synthesis time, and guarantees the model-fitting (mold-closing) effect.

Description

Model fusion method and device, electronic equipment and readable storage medium
Technical Field
The invention relates to the technical field of BIM model adjustment, in particular to a model fusion method and device, an electronic device and a readable storage medium.
Background
At each stage of a construction engineering project, the fusion of multi-source heterogeneous models can provide a vivid and intuitive visual experience and a decision basis in multiple dimensions such as planning and design, demolition quantification, earthwork calculation, spatial decision analysis, progress reporting and observation. The fusion of heterogeneous models can provide information of different dimensions; in particular, the fitting (mold closing) of the live-action model and the BIM model gives a strong three-dimensional visual perception and, combined with different professional backgrounds, supports more accurate judgment.
At the present stage, two mainstream practices exist. First, on the Web side, a flattening or excavation effect is created at the display layer by temporarily adjusting the mesh vertices of a local area onto a plane, or the vertices of the area are deleted directly in the cache to form an opening, into which the BIM model is then placed. Second, on the desktop side, model fusion is completed in a mesh-repair workflow using the model editing capabilities of tools such as 3ds Max and Geomagic, through operations such as model import, manual boundary-line addition, mesh vertex adjustment, manual bridging and hole repair. However, the Web-side operation only adjusts the positions of existing vertices at the display layer; the model data itself is unchanged, and the edited model cannot be exported for later design and processing. The desktop-side operation requires manual format conversion across several modeling tools, the fitting effect depends on the operator's experience, and a loose fit between the models easily results. Therefore, the existing model fusion methods suffer from long operation times, an unsatisfactory model-fitting effect, and similar problems.
Disclosure of Invention
In view of this, an embodiment of the present invention provides a model fusion method to solve the problems of long operation time and unsatisfactory results in the model-fitting (mold-closing) operation.
According to a first aspect, an embodiment of the present invention provides a model fusion method, including: acquiring a target BIM (building information modeling) model and a target live-action model corresponding to a three-dimensional live-action image, wherein the target live-action model has a target cutting area corresponding to the target BIM model; constructing a stretched body model based on the target BIM model; determining a target terrain surface model corresponding to the target live-action model based on the features of the target live-action model; cutting the stretched body model based on the boundary of the target terrain surface model to obtain a target stretched body model; and superimposing the target BIM model and the target stretched body model in the target live-action model containing the target cutting area to obtain a fusion model of the target live-action model and the target BIM model.
According to the model fusion method provided by the embodiment of the invention, the target live-action model and the target BIM model are acquired, a target cutting area corresponding to the target BIM model being provided in the target live-action model; a stretched body model is constructed on the basis of the target BIM model; a target terrain surface model is generated from the features of the target live-action model; the stretched body model is cut along the boundary of the target terrain surface model to obtain the target stretched body model; and the target BIM model and the target stretched body model are then superimposed in the target cutting area, yielding the fusion model of the target live-action model and the target BIM model. The method needs no model editing capability from other tools: the separate computation of the target terrain surface model guarantees that the target stretched body model and the target live-action model align in position, and the separately computed results are finally superimposed. Automatic, seamless fusion of the target live-action model and the target BIM model is thus achieved without depending on operators, which saves labor cost, shortens synthesis time and guarantees the model-fitting effect.
With reference to the first aspect, in a first implementation manner of the first aspect, cutting the stretched body model based on the boundary of the target terrain surface model to obtain a target stretched body model includes: acquiring a terrain area corresponding to the target terrain surface model; determining a boundary corresponding to the target terrain surface model based on the terrain area; and performing a Boolean operation on the boundary of the target terrain surface model and the stretched body model to obtain the target stretched body model left after the stretched body model is cut.
With reference to the first implementation manner of the first aspect, in a second implementation manner of the first aspect, determining a boundary corresponding to the target terrain surface model based on the terrain area includes: extracting topographic point cloud data corresponding to the topographic area; and calculating a boundary corresponding to the terrain point cloud data based on the terrain point cloud data, and taking the boundary corresponding to the terrain point cloud data as a boundary corresponding to the target terrain surface model.
According to the model fusion method provided by the embodiment of the invention, the terrain area corresponding to the target terrain surface model is acquired, the terrain point cloud data corresponding to the terrain area are extracted, the boundary corresponding to the terrain point cloud data is calculated and taken as the boundary of the target terrain surface model, and a Boolean operation combining this boundary with the stretched body model yields the target stretched body model left after cutting. By contrast, the existing live-action surface model generated by a graph-cut method contains a large number of self-intersecting and discrete isolated patches, and even non-manifold patches; its mesh quality does not allow direct use in Boolean operations, manual parameter tuning is needed to restore the mesh quality and remove the influence of low-quality patches, and normal operation of an automated processing flow is hard to guarantee.
With reference to the first aspect, in a third implementation manner of the first aspect, determining the target terrain surface model corresponding to the target live-action model based on the features of the target live-action model includes: acquiring feature data of the three-dimensional live-action image, wherein the feature data are used for reconstructing a terrain mesh model; plotting the target cutting area from the three-dimensional live-action image based on the target live-action model; extracting, from the feature data, target point cloud data corresponding to the target cutting area and a point cloud data boundary corresponding to the target point cloud data; filtering the target point cloud data to obtain terrain point cloud data; constructing the terrain surface model based on the terrain point cloud data; and cutting the terrain surface model based on the point cloud data boundary to obtain the target terrain surface model.
According to the model fusion method provided by the embodiment of the invention, the feature data of the three-dimensional live-action image are acquired, the terrain area to be cut is plotted from the three-dimensional live-action image, and the target point cloud data and the point cloud data boundary corresponding to that area are extracted from the feature data, the feature data being used for reconstructing the terrain mesh model. The target point cloud data are filtered to obtain terrain point cloud data, the terrain surface model is constructed from the terrain point cloud data, and the terrain surface model is cut along the point cloud data boundary to obtain the target terrain surface model. Separating out the terrain point cloud data and reconstructing the terrain surface model from them guarantees the mesh surface quality, and cutting the terrain surface model yields a target terrain surface model that can replace the live-action surface model in the Boolean operation.
With reference to the first aspect, in a fourth implementation manner of the first aspect, acquiring the target live-action model corresponding to the three-dimensional live-action image includes: acquiring a live-action model corresponding to the three-dimensional live-action image; extracting a closed boundary of the target BIM model and determining a target cutting area of the live-action model; and cutting the live-action model based on the target cutting area to obtain the target live-action model, wherein the target live-action model comprises a target cutting model corresponding to the target cutting area.
With reference to the fourth implementation manner of the first aspect, in a fifth implementation manner of the first aspect, extracting the closed boundary of the target BIM model and determining the target cutting area of the live-action model includes: obtaining model point cloud data corresponding to the target BIM model; extracting a model point cloud boundary corresponding to the model point cloud data, wherein the model point cloud boundary comprises a plurality of straight line segments; fitting the model point cloud boundary and deleting redundant straight line segments to obtain a closed boundary corresponding to the model point cloud boundary; and comparing the closed boundary with the live-action model to determine the target cutting area of the live-action model corresponding to the closed boundary.
According to the model fusion method provided by the embodiment of the invention, the live-action model corresponding to the three-dimensional live-action image and the model point cloud data corresponding to the target BIM model are obtained; the model point cloud boundary corresponding to the model point cloud data is extracted and fitted, redundant straight line segments are deleted to obtain the closed boundary corresponding to the model point cloud boundary, the target cutting area of the live-action model corresponding to the closed boundary is determined, and the live-action model is cut along the target cutting area to obtain the target live-action model. The live-action model is thus cut open (holed) to produce the target live-action model, and the target BIM model and the target stretched body model are finally superimposed onto it, so that the target live-action model and the target BIM model are fused automatically and seamlessly, meeting display and measurement requirements.
With reference to the first aspect, in a sixth implementation manner of the first aspect, constructing the stretched body model based on the target BIM model includes: acquiring a minimum tight bounding box corresponding to the target BIM model; determining a lower bottom surface position of the stretched body based on the size information of the minimum tight bounding box; and constructing the stretched body model corresponding to the target BIM model based on the lower bottom surface position.
According to the model fusion method provided by the embodiment of the invention, the minimum tight bounding box corresponding to the target BIM model is acquired, the lower bottom surface position of the stretched body is determined from the size information of the minimum tight bounding box, and the stretched body model corresponding to the target BIM model is constructed from the lower bottom surface position, giving the three-dimensional solid corresponding to the target BIM model and thereby enabling the fusion of the target live-action model and the target BIM model.
According to a second aspect, an embodiment of the present invention provides a model fusion apparatus, including: an acquisition module for acquiring a target BIM (building information modeling) model and a target live-action model corresponding to a three-dimensional live-action image, the target live-action model having a target cutting area corresponding to the target BIM model; a construction module for constructing a stretched body model based on the target BIM model; a determining module for determining a target terrain surface model corresponding to the target live-action model based on the features of the target live-action model; a cutting module for cutting the stretched body model based on the boundary of the target terrain surface model to obtain a target stretched body model; and a fusion module for superimposing the target BIM model and the target stretched body model in the target live-action model containing the target cutting area to obtain a fusion model of the target live-action model and the target BIM model.
According to the model fusion apparatus provided by the embodiment of the invention, the target live-action model and the target BIM model are acquired, a target cutting area corresponding to the target BIM model being provided in the target live-action model; the stretched body model is constructed on the basis of the target BIM model; the target terrain surface model is generated from the features of the target live-action model; the stretched body model is cut along the boundary of the target terrain surface model to obtain the target stretched body model; and the target BIM model and the target stretched body model are superimposed in the target cutting area, yielding the fusion model of the target live-action model and the target BIM model. The apparatus needs no model editing capability from other tools: the separate computation of the target terrain surface model guarantees that the target stretched body model and the target live-action model align in position, and the separately computed results are finally superimposed, so that automatic, seamless fusion of the target live-action model and the target BIM model is achieved without depending on operators, saving labor cost, shortening synthesis time and guaranteeing the model-fitting effect.
According to a third aspect, an embodiment of the present invention provides an electronic device, including: a memory and a processor, the memory and the processor being communicatively connected to each other, the memory storing therein computer instructions, and the processor executing the computer instructions to perform the model fusion method according to the first aspect or any embodiment of the first aspect.
According to a fourth aspect, an embodiment of the present invention provides a computer-readable storage medium, which stores computer instructions for causing a computer to execute the model fusion method according to the first aspect or any embodiment of the first aspect.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and other drawings can be obtained by those skilled in the art without creative efforts.
FIG. 1 is a flow chart of a method of fusion of models according to an embodiment of the invention;
FIG. 2 is another flow chart of a method of fusion of models according to an embodiment of the present invention;
FIG. 3 is another flow chart of a method of fusion of models according to an embodiment of the invention;
FIG. 4 is another flow chart of a method of fusion of models according to an embodiment of the present invention;
FIG. 5 is a block diagram of a model fusion apparatus according to an embodiment of the present invention;
fig. 6 is a schematic diagram of a hardware structure of an electronic device according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Existing model-fitting (mold-closing) processing of the live-action model and the BIM model is generally carried out in one of two ways. First, on the Web side, a flattening or excavation effect is created at the display layer by temporarily adjusting the mesh vertices of a local area onto a plane, or the vertices of the area are deleted directly in the cache to form an opening, into which the BIM model is then placed. Second, on the desktop side, model fusion is completed in a mesh-repair workflow using the model editing capabilities of tools such as 3ds Max and Geomagic, through operations such as model import, manual boundary-line addition, mesh vertex adjustment, manual bridging and hole repair. However, the Web-side operation only adjusts the positions of existing vertices at the display layer; the model data itself is unchanged, and the edited model cannot be exported for later design and processing. The desktop-side operation requires manual format conversion across several modeling tools, the fitting effect depends on the operator's experience, and a loose fit between the models easily results.
Based on this, in the technical scheme of the present application, the position at which the target stretched body model corresponding to the BIM model joins the target live-action model is obtained through the separate computation of the target terrain surface model and the target live-action model, and the separately computed results are finally superimposed. Automatic, seamless fusion of the target live-action model and the BIM model is thereby achieved without depending on operators, saving labor cost, shortening synthesis time and guaranteeing the model-fitting effect.
In accordance with an embodiment of the present invention, there is provided a model fusion method embodiment, it being noted that the steps illustrated in the flowchart of the figure may be performed in a computer system such as a set of computer-executable instructions and that, although a logical order is illustrated in the flowchart, in some cases, the steps illustrated or described may be performed in an order different than here.
In this embodiment, a model fusion method is provided, which may be used in electronic devices such as a mobile phone, a computer or a tablet computer. Fig. 1 is a flowchart of the model fusion method according to an embodiment of the present invention; as shown in fig. 1, the flow includes the following steps:
S11, acquiring a target BIM model and a target live-action model corresponding to the three-dimensional live-action image, wherein the target live-action model has a target cutting area corresponding to the target BIM model.
The three-dimensional live-action image comprises photos, videos and the like that reflect a real scene and are collected by different acquisition devices. The target live-action model is a three-dimensional model (for example in obj or osgb format) reflecting the real scene, generated from multi-source data such as multiple photos, videos or laser-scanned point clouds; it contains a target cutting area corresponding to the target BIM model. The target BIM model is BIM model data generated with a CAD modeling tool (modeling software such as 3ds Max or BIMMAKE). The electronic device may compare the generated BIM model data with the target live-action model, automatically correct the relative positions of the target BIM model and the target live-action model, and output the position-corrected target BIM model. The live-action model corresponding to the three-dimensional live-action image is cut along the closed boundary of the target BIM model to obtain the cut live-action model, that is, the target live-action model containing the target cutting area.
And S12, constructing a stretched body model based on the target BIM model.
The stretched body model is a three-dimensional solid corresponding to the target BIM model. After acquiring the target BIM model, the electronic device may extract its closed boundary, determine the bounding box corresponding to that closed boundary, calculate the position of the lower bottom surface of the bounding box, and construct the stretched body model corresponding to the target BIM model with a stretching (extrusion) algorithm.
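By way of illustration only, the following minimal sketch shows one possible form of this step in Python, assuming the trimesh and shapely libraries (tooling not named by the patent). The footprint is approximated here by the XY convex hull of the BIM mesh, the lower bottom surface is taken from the axis-aligned tight bounds, and `depth` (how far the stretched body extends below that surface) is a hypothetical parameter.

```python
# Illustrative sketch of step S12 (assumed trimesh/shapely tooling).
import trimesh
from shapely.geometry import MultiPoint

def build_stretched_body(bim_mesh: trimesh.Trimesh, depth: float = 20.0) -> trimesh.Trimesh:
    # Lower bottom surface position, read off the tight axis-aligned bounds.
    z_min = bim_mesh.bounds[0][2]
    # Closed footprint: convex hull of the vertices projected onto the XY plane.
    footprint = MultiPoint(bim_mesh.vertices[:, :2].tolist()).convex_hull
    # Extrude the footprint into a prism of the requested depth.
    body = trimesh.creation.extrude_polygon(footprint, height=depth)
    # extrude_polygon builds upward from z=0; shift so the prism's top
    # coincides with the bounding box's lower bottom surface.
    body.apply_translation([0.0, 0.0, z_min - depth])
    return body
```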
And S13, determining a target terrain surface model corresponding to the target live-action model based on the features of the target live-action model.
The target terrain surface model is the terrain surface model inside the region of interest, obtained with a mesh-model polygon clipping algorithm. Specifically, the feature data corresponding to the three-dimensional live-action image are extracted, the terrain area to be cut corresponding to the target live-action model is plotted, the point cloud data corresponding to that area are extracted from the feature data, and the terrain surface model is reconstructed with a Poisson algorithm. The terrain surface model is then cut along the boundary corresponding to the point cloud data to obtain the target terrain surface model corresponding to the target live-action model.
And S14, cutting the stretched body model based on the boundary of the target terrain surface model to obtain the target stretched body model.
A Boolean operation is performed on the target terrain surface model and the stretched body model with a mesh-model Boolean algorithm, yielding the stretched body model cut by the boundary of the target terrain surface model, that is, the target stretched body model; it is the underground model corresponding to the target live-action model.
And S15, superimposing the target BIM model and the target stretched body model in the target live-action model containing the target cutting area to obtain a fusion model of the target live-action model and the target BIM model.
The fusion model is the synthesis of model data of different formats, types and characteristics, produced by different acquisition devices (such as a high-definition camera or laser scanning equipment) and CAD modeling tools (modeling software such as 3ds Max or BIMMAKE). The target BIM model, the target stretched body model and the target live-action model are superimposed, which completes the fusion processing of the target BIM model and the target live-action model and yields the fusion model.
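Purely as an illustration of this superposition step, the sketch below composes the three parts into one scene with trimesh (an assumed library, not the patent's implementation); since all parts share one coordinate frame after the preceding steps, no further transformation is applied.

```python
# Illustrative composition of the fusion model (assumed trimesh tooling).
import trimesh

def fuse_models(live_action: trimesh.Trimesh,
                bim: trimesh.Trimesh,
                underground: trimesh.Trimesh) -> trimesh.Scene:
    scene = trimesh.Scene()
    # The cut live-action model already contains the target cutting area (hole).
    scene.add_geometry(live_action, geom_name="target_live_action_model")
    scene.add_geometry(bim, geom_name="target_bim_model")
    scene.add_geometry(underground, geom_name="target_stretched_body_model")
    return scene

# scene.export("fusion_model.glb") would serialize the fused result.
```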
In the model fusion method provided by this embodiment, the target live-action model and the target BIM model are acquired, a target cutting area corresponding to the target BIM model being provided in the target live-action model; a stretched body model is constructed based on the target BIM model; a target terrain surface model is generated from the features of the target live-action model; the stretched body model is cut along the boundary of the target terrain surface model to obtain the target stretched body model; and the target BIM model and the target stretched body model are placed in the target cutting area in an overlapping manner, so that the fusion model of the target live-action model and the target BIM model is obtained. The method needs no model editing capability from other tools: the separate computation of the target terrain surface model guarantees that the target stretched body model and the target live-action model align in position, and the separately computed results are finally superimposed, achieving automatic, seamless fusion of the target live-action model and the target BIM model without depending on operators, saving labor cost, shortening synthesis time and guaranteeing the model-fitting effect.
In this embodiment, a model fusion method is provided, which may be used in electronic devices such as a mobile phone, a computer or a tablet computer. Fig. 2 is a flowchart of the model fusion method according to an embodiment of the present invention; as shown in fig. 2, the flow includes the following steps:
S21, acquiring a target BIM model and a target live-action model corresponding to the three-dimensional live-action image, wherein the target live-action model has a target cutting area corresponding to the target BIM model. For a detailed description, refer to the related description of step S11 in the above embodiment, which is not repeated herein.
And S22, constructing a stretched body model based on the target BIM model. For a detailed description, refer to the related description of step S12 in the above embodiment, which is not repeated herein.
And S23, determining a target terrain surface model corresponding to the target live-action model based on the features of the target live-action model.
Specifically, the step S23 may include the following steps:
s231, acquiring characteristic data of the three-dimensional live-action image, wherein the characteristic data is used for reconstructing a terrain mesh model.
The feature data is data reflecting the features of the three-dimensional live-action image and is used for reconstructing a terrain mesh model, wherein the feature data can be point cloud data. The electronic equipment can extract point cloud data used for reconstructing the live-action model from the three-dimensional live-action image to complete reconstruction of the live-action model.
And S232, drawing a target cutting area from the three-dimensional live-action image based on the target live-action model.
The target cutting area is an area needing to be cut from the real scene model. A reconstruction range region of the target terrain surface model corresponding to the target real world model may be defined by plotting the terrain regions. Because the target live-action model includes a target cutting area corresponding to the target BIM model, and the constructed target terrain surface model needs to be matched with the position of the target cutting area, the terrain area to be cut needs to be marked out from the three-dimensional live-action image when the target terrain surface model is constructed.
And S233, extracting target point cloud data corresponding to the target cutting area and a point cloud data boundary corresponding to the target point cloud data from the feature data.
The target point cloud data are the point cloud data of the region of interest, that is, the point cloud data corresponding to the terrain area to be cut; the point cloud data boundary corresponding to the target point cloud data, that is, the contour boundary of the target point cloud data, is obtained with a convex hull algorithm.
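A hedged sketch of this extraction follows, assuming the point cloud is a NumPy array and the plotted cutting region is a shapely polygon (both assumptions of this example, not details given by the patent); the contour boundary comes from SciPy's convex hull.

```python
# Illustrative extraction of the target point cloud and its hull boundary.
import numpy as np
from scipy.spatial import ConvexHull
from shapely.geometry import Point, Polygon

def extract_target_points(points: np.ndarray, region: Polygon):
    # Keep only points whose XY projection lies inside the plotted region.
    mask = np.array([region.contains(Point(x, y)) for x, y in points[:, :2]])
    target = points[mask]
    # Planar convex hull of the XY projection = point cloud data boundary.
    hull = ConvexHull(target[:, :2])
    boundary_xy = target[hull.vertices, :2]   # ordered boundary vertices
    return target, boundary_xy
```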
And S234, filtering the target point cloud data to obtain topographic point cloud data.
The target point cloud data are filtered with a cloth simulation filter (CSF) ground-point filtering algorithm, separating them into terrain point cloud (ground point) data and non-terrain point cloud (above-ground object point) data.
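As one concrete possibility, the open-source CSF package (pip install cloth-simulation-filter) implements such a cloth-simulation ground filter; the sketch below assumes that package and illustrative parameter values, and is not necessarily the filter configuration used by the patent.

```python
# Illustrative ground/non-ground split with the CSF package (assumed tooling).
import numpy as np
import CSF

def split_terrain_points(points: np.ndarray):
    csf = CSF.CSF()
    csf.params.cloth_resolution = 0.5   # cloth grid size (model units); illustrative
    csf.params.class_threshold = 0.5    # ground-distance threshold; illustrative
    csf.setPointCloud(points)
    ground, non_ground = CSF.VecInt(), CSF.VecInt()
    csf.do_filtering(ground, non_ground)
    return points[np.asarray(ground)], points[np.asarray(non_ground)]
```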
And S235, constructing a terrain surface model based on the terrain point cloud data.
The terrain surface model is reconstructed from the terrain point cloud data with the Poisson algorithm. Separating out the terrain point cloud data with the CSF algorithm and reconstructing the terrain surface with the Poisson algorithm guarantees the mesh surface quality.
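For illustration, Open3D ships a Poisson surface reconstruction that could realize this step; the sketch assumes that library, and the octree depth and density-trimming quantile are illustrative parameters.

```python
# Illustrative Poisson reconstruction of the terrain surface (assumed Open3D).
import numpy as np
import open3d as o3d

def reconstruct_terrain(ground_points: np.ndarray) -> o3d.geometry.TriangleMesh:
    pcd = o3d.geometry.PointCloud()
    pcd.points = o3d.utility.Vector3dVector(ground_points)
    # Poisson reconstruction requires consistently oriented normals.
    pcd.estimate_normals(o3d.geometry.KDTreeSearchParamKNN(knn=30))
    pcd.orient_normals_consistent_tangent_plane(30)
    mesh, densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(pcd, depth=9)
    # Drop the low-density triangles Poisson extrapolates far from the data.
    densities = np.asarray(densities)
    mesh.remove_vertices_by_mask(densities < np.quantile(densities, 0.02))
    return mesh
```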
And S236, cutting the terrain surface model based on the point cloud data boundary to obtain the target terrain surface model.
The reconstructed terrain surface model is cut along the point cloud data boundary with a mesh-model polygon clipping algorithm, yielding the terrain surface model inside the region of interest, that is, the target terrain surface model corresponding to the target cutting area.
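The following coarse stand-in illustrates such polygon clipping, assuming trimesh and shapely: it keeps the triangles whose centroids project inside the boundary polygon. A production clipper would also split the triangles straddling the boundary, which is omitted here for brevity.

```python
# Simplified mesh-polygon clipping (assumed trimesh/shapely tooling).
import numpy as np
import trimesh
from shapely.geometry import Point, Polygon

def clip_mesh_by_boundary(mesh: trimesh.Trimesh, boundary_xy: np.ndarray) -> trimesh.Trimesh:
    region = Polygon(boundary_xy)
    centers = mesh.triangles_center[:, :2]
    keep = np.array([region.contains(Point(c[0], c[1])) for c in centers])
    clipped = mesh.copy()
    clipped.update_faces(keep)                 # discard faces outside the region
    clipped.remove_unreferenced_vertices()
    return clipped
```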
And S24, cutting the stretching body model based on the boundary of the target terrain surface model to obtain the target stretching body model.
Specifically, the step S24 may include the following steps:
and S241, acquiring a terrain area corresponding to the target terrain surface model.
After the target terrain surface model is obtained, a terrain area corresponding to the target terrain surface model may be determined.
And S242, determining a boundary corresponding to the target terrain surface model based on the terrain area.
Specifically, the step S242 includes the following steps:
(1) And extracting the corresponding topographic point cloud data of the topographic area.
The terrain point cloud data is dense point cloud of a terrain area corresponding to the target terrain surface model. The electronic device can extract terrain point cloud data corresponding to the terrain area from the terrain area corresponding to the target terrain surface model.
(2) And calculating a boundary corresponding to the terrain point cloud data based on the terrain point cloud data, and taking the boundary corresponding to the terrain point cloud data as a boundary corresponding to the target terrain surface model.
And calculating the contour boundary of the terrain point cloud data by adopting a convex hull algorithm, wherein the boundary of the terrain point cloud data is the boundary of the target terrain surface model.
And S243, performing Boolean operation on the boundary of the target terrain surface model and the stretching body model to obtain a target stretching body model after the stretching body model is cut.
A Boolean operation is performed on the target terrain surface model and the stretched body model with a mesh-model Boolean algorithm, obtaining the target stretched body model (the underground model corresponding to the target live-action model) cut by the boundary of the target terrain surface model.
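One way to realize this Boolean step is sketched here, under the assumption of trimesh with a Boolean backend (e.g. Blender or manifold3d) installed; the patent does not name its Boolean engine. The open terrain surface is assumed to have been thickened into a watertight solid beforehand, since robust mesh Booleans require closed operands.

```python
# Illustrative Boolean cut of the stretched body (assumed trimesh backend).
import trimesh

def cut_stretched_body(stretched_body: trimesh.Trimesh,
                       terrain_solid: trimesh.Trimesh) -> trimesh.Trimesh:
    # Underground model = stretched body minus the (watertight) terrain solid.
    return trimesh.boolean.difference([stretched_body, terrain_solid])
```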
In existing Web-side processing, or in mesh-repair workflows relying on third-party software, the live-action model and the underground model are treated as a single mesh model: the mesh vertex positions of a selected area are adjusted, or separated mesh vertices are associated through a bridging operation. Holes between uneven levels then appear during later LOD processing of the model, the mesh quality has to be optimized manually, and a large amount of extra work is added. In the present method the live-action model and the underground model are kept separate, which avoids the problems caused by LOD processing and leaves the mesh quality of the live-action model untouched; the target stretched body model (the underground model) can be determined by a Boolean operation, the cut of the stretched body model matches the cut of the live-action model, and the number of mesh patches is reduced as far as possible.
And S25, superposing the target BIM model and the target stretching body model in the target live-action model containing the target cutting area to obtain a fusion model of the target live-action model and the target BIM model. For a detailed description, refer to the related description of step S15 corresponding to the above embodiment, which is not repeated herein.
By contrast, the existing live-action surface model generated by a graph-cut method contains a large number of self-intersecting, discrete isolated patches and even non-manifold patches; its mesh quality cannot be used directly in Boolean operations, manual parameter tuning is needed to restore the mesh quality and remove the influence of low-quality patches, and normal operation of an automated processing flow is hard to guarantee. Here the Boolean operation is completed with the target terrain surface model in place of the live-action surface model, and the result is superimposed onto the target live-action model, achieving the effect of a Boolean operation with the live-action model without manual parameter tuning and keeping the processing flow automatic.
In this embodiment, a model fusion method is provided, which may be used in electronic devices such as a mobile phone, a computer or a tablet computer. Fig. 3 is a flowchart of the model fusion method according to an embodiment of the present invention; as shown in fig. 3, the flow includes the following steps:
S31, acquiring a target BIM model and a target live-action model corresponding to the three-dimensional live-action image, wherein the target live-action model has a target cutting area corresponding to the target BIM model.
Specifically, the step S31 may include the steps of:
and S310, acquiring a target BIM model. For the detailed description of obtaining the target BIM model, refer to the related description corresponding to the above embodiments, which is not repeated herein.
And S311, acquiring a live-action model corresponding to the three-dimensional live-action image.
The live-action model is a three-dimensional model reflecting the real scene, generated from multi-source data such as multiple photos, videos or laser-scanned point clouds. The three-dimensional live-action image may be collected by acquisition equipment such as a high-definition camera or a laser scanning device, and the electronic device constructs the live-action model by extracting the features of the three-dimensional live-action image.
And S312, extracting the closed boundary of the target BIM model and determining a target cutting area of the live-action model.
The closed boundary of the target BIM model is its contour boundary, a closed polygon. The live-action model is cut along this closed polygon to obtain the target live-action model; the area of the live-action model corresponding to the closed polygon is the target cutting area.
Specifically, the step S312 includes the following steps:
(1) And obtaining model point cloud data corresponding to the target BIM model.
After the electronic device obtains the target BIM, the model point cloud data corresponding to the target BIM can be extracted so as to determine the contour boundary of the target BIM.
(2) And extracting a model point cloud boundary corresponding to the model point cloud data, wherein the model point cloud boundary comprises a plurality of straight line segments.
The target BIM model is discretized into a corresponding point cloud model, and the model point cloud boundary corresponding to the model point cloud data is calculated with a convex hull algorithm; this boundary consists of many tiny straight line segments.
(3) And fitting the model point cloud boundary, and deleting redundant straight line segments to obtain a closed boundary corresponding to the model point cloud boundary.
The many tiny straight line segments contained in the model point cloud boundary are fitted with a line-fitting algorithm and redundant segments are deleted, yielding a closed polygon composed of a few straight line segments, that is, the closed boundary corresponding to the model point cloud boundary.
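As an illustrative substitute for this line-fitting step, Douglas-Peucker simplification (shapely's simplify) collapses near-collinear micro-segments into longer edges; the tolerance value is hypothetical.

```python
# Illustrative cleanup of the model point cloud boundary (assumed shapely).
from shapely.geometry import Polygon

def fit_closed_boundary(hull_xy, tolerance: float = 0.2) -> Polygon:
    # hull_xy: ordered (x, y) vertices from the convex-hull step.
    raw = Polygon(hull_xy)
    # Merge tiny, nearly collinear segments into a few straight edges.
    return raw.simplify(tolerance, preserve_topology=True)
```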
(4) And comparing the closed boundary with the live-action model, and determining that the closed boundary corresponds to the target cutting area of the live-action model.
The closed boundary of the target BIM model is compared against the live-action model to determine the target cutting area of the live-action model corresponding to that closed boundary.
And S313, cutting the live-action model based on the target cutting area to obtain the target live-action model, wherein the target live-action model comprises a target cutting model corresponding to the target cutting area.
The live-action model is cut (excavated) along the target cutting area corresponding to the closed boundary of the target BIM model with a mesh-model polygon clipping algorithm, yielding the cut target live-action model.
And S32, constructing a stretched body model based on the target BIM model.
Specifically, the step S32 may include the following steps:
s321, obtaining a minimum tight bounding box corresponding to the target BIM model.
A bounding box algorithm computes the minimum tight bounding box that just encloses the target BIM model.
And S322, determining the position of the lower bottom surface of the stretched body based on the size information of the minimum tight bounding box.
After the minimum tight bounding box is obtained, its size information is calculated and the position of its lower bottom surface is determined; this position is the lower bottom surface position of the stretched body to be constructed.
S323, constructing a stretched body model corresponding to the target BIM model based on the lower bottom surface position.
And stretching according to the minimum tight bounding box by adopting a stretching algorithm to obtain a three-dimensional body model corresponding to the target BIM model, namely a stretched body model corresponding to the target BIM model.
And S33, determining a target terrain surface model corresponding to the target live-action model based on the features of the target live-action model. For a detailed description, refer to the related description of step S23 in the above embodiment, which is not repeated herein.
And S34, cutting the stretched body model based on the boundary of the target terrain surface model to obtain the target stretched body model. For a detailed description, refer to the related description of step S24 in the above embodiment, which is not repeated herein.
And S35, superimposing the target BIM model and the target stretched body model in the target live-action model containing the target cutting area to obtain a fusion model of the target live-action model and the target BIM model. For a detailed description, refer to the related description of step S25 in the above embodiment, which is not repeated herein; a preferred fusion flow of the target live-action model and the target BIM model is shown in fig. 4.
In the model fusion method provided by this embodiment, the minimum tight bounding box corresponding to the target BIM model is acquired, the lower bottom surface position of the stretched body is determined from the size information of the minimum tight bounding box, and the stretched body model corresponding to the target BIM model is constructed from the lower bottom surface position, giving the three-dimensional solid corresponding to the target BIM model and thereby enabling the fusion of the target live-action model and the target BIM model. The live-action model is cut open (holed) to obtain the target live-action model, and the target BIM model and the target stretched body model are finally superimposed onto it, so that the target live-action model and the target BIM model are fused automatically and seamlessly, meeting display and measurement requirements.
This embodiment further provides a model fusion apparatus, which implements the foregoing embodiments and preferred implementations; what has already been described is not repeated. As used below, the term "module" may be a combination of software and/or hardware implementing a predetermined function. Although the apparatus described in the following embodiments is preferably implemented in software, an implementation in hardware, or in a combination of software and hardware, is also possible and contemplated.
The present embodiment provides a model fusion apparatus, as shown in fig. 5, including:
the obtaining module 41 is configured to obtain a target BIM model and a target real-scene model corresponding to the three-dimensional real-scene image, where the target real-scene model has a target clipping region corresponding to the target BIM model. For a detailed description, reference is made to the corresponding related description of the above method embodiments, which is not repeated herein.
And a construction module 42 for constructing the tensile body model based on the target BIM model. For a detailed description, reference is made to the corresponding related description of the above method embodiments, which is not repeated herein.
And the determining module 43 is configured to determine a target terrain surface model corresponding to the target real-scene model based on the characteristics of the target real-scene model. For detailed description, reference is made to the corresponding related description of the above method embodiments, and details are not repeated herein.
And the cutting module 44 is configured to cut the stretched object model based on the boundary of the target terrain surface model, so as to obtain a target stretched object model. For a detailed description, reference is made to the corresponding related description of the above method embodiments, which is not repeated herein.
And the fusion module 45 is configured to superimpose the target BIM model and the target stretched body model in the target real-scene model including the target clipping region, so as to obtain a fusion model of the target real-scene model and the target BIM model. For a detailed description, reference is made to the corresponding related description of the above method embodiments, which is not repeated herein.
The model fusion apparatus provided by this embodiment acquires the target live-action model and the target BIM model, with a target cutting area corresponding to the target BIM model provided in the target live-action model; constructs the stretched body model based on the target BIM model; cuts the stretched body model along the boundary of the target terrain surface model generated from the features of the target live-action model to obtain the target stretched body model; and then superimposes the target BIM model and the target stretched body model in the target cutting area, thereby obtaining the fusion model of the target live-action model and the target BIM model. The apparatus needs no model editing capability from other tools: the separate computation of the target terrain surface model guarantees that the target stretched body model and the target live-action model align in position, and the separately computed results are finally superimposed, achieving automatic, seamless fusion of the target live-action model and the target BIM model without depending on operators, saving labor cost, shortening synthesis time and guaranteeing the model-fitting effect.
The means for fusing the models in this embodiment are in the form of functional units, where a unit refers to an ASIC circuit, a processor and memory that execute one or more software or fixed programs, and/or other devices that can provide the above-described functionality.
Further functional descriptions of the modules are the same as those of the corresponding embodiments, and are not repeated herein.
An embodiment of the present invention further provides an electronic device, which includes the model fusion apparatus shown in fig. 5.
Referring to fig. 6, fig. 6 is a schematic structural diagram of an electronic device according to an alternative embodiment of the present invention, and as shown in fig. 6, the electronic device may include: at least one processor 501, such as a CPU (Central Processing Unit), at least one communication interface 503, memory 504, and at least one communication bus 502. Wherein a communication bus 502 is used to enable connective communication between these components. The communication interface 503 may include a Display (Display) and a Keyboard (Keyboard), and the optional communication interface 503 may also include a standard wired interface and a standard wireless interface. The Memory 504 may be a Random Access Memory (RAM) or a non-volatile Memory (non-volatile Memory), such as at least one disk Memory. The memory 504 may optionally be at least one storage device located remotely from the processor 501. Wherein the processor 501 may be in connection with the apparatus described in fig. 5, an application program is stored in the memory 504, and the processor 501 calls the program code stored in the memory 504 for performing any of the above-mentioned method steps.
The communication bus 502 may be a Peripheral Component Interconnect (PCI) bus or an Extended Industry Standard Architecture (EISA) bus. The communication bus 502 may be divided into an address bus, a data bus, a control bus, and the like. For ease of illustration, only one thick line is shown in FIG. 6, but that does not indicate only one bus or one type of bus.
The memory 504 may include a volatile memory (RAM), such as a random-access memory (RAM); the memory may also include a non-volatile memory (e.g., flash memory), a hard disk (HDD) or a solid-state drive (SSD); the memory 504 may also comprise a combination of the above types of memory.
The processor 501 may be a Central Processing Unit (CPU), a Network Processor (NP), or a combination of a CPU and an NP.
The processor 501 may further include a hardware chip. The hardware chip may be an application-specific integrated circuit (ASIC), a Programmable Logic Device (PLD), or a combination thereof. The PLD may be a Complex Programmable Logic Device (CPLD), a field-programmable gate array (FPGA), general Array Logic (GAL), or any combination thereof.
Optionally, the memory 504 is also used to store program instructions. The processor 501 may call program instructions to implement a model fusion method as shown in the embodiments of fig. 1 to 4 of the present application.
The embodiment of the invention also provides a non-transitory computer storage medium, wherein the computer storage medium stores computer executable instructions, and the computer executable instructions can execute the processing method of the model fusion method in any method embodiment. The storage medium may be a magnetic Disk, an optical Disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a Flash Memory (Flash Memory), a Hard Disk Drive (Hard Disk Drive, abbreviated as HDD), or a Solid State Drive (SSD); the storage medium may also comprise a combination of memories of the kind described above.
Although the embodiments of the present invention have been described in conjunction with the accompanying drawings, those skilled in the art may make various modifications and variations without departing from the spirit and scope of the invention, and such modifications and variations fall within the scope defined by the appended claims.

Claims (10)

1. A model fusion method is characterized by comprising the following steps:
acquiring a target BIM (building information modeling) model and a target live-action model corresponding to a three-dimensional live-action image, wherein the target live-action model has a target cutting area corresponding to the target BIM model;
constructing a stretched body model based on the target BIM model;
determining a target terrain surface model corresponding to the target live-action model based on the features of the target live-action model;
cutting the stretched body model based on the boundary of the target terrain surface model to obtain a target stretched body model; and
superimposing the target BIM model and the target stretched body model in the target live-action model containing the target cutting area to obtain a fusion model of the target live-action model and the target BIM model.
2. The method of claim 1, wherein cutting the stretched body model based on the boundary of the target terrain surface model to obtain a target stretched body model comprises:
acquiring a terrain area corresponding to the target terrain surface model;
determining a boundary corresponding to the target terrain surface model based on the terrain area;
and performing Boolean operation on the boundary of the target terrain surface model and the stretched body model to obtain a target stretched body model after the stretched body model is cut.
3. The method of claim 2, wherein determining the boundaries to which the target terrain surface model corresponds based on the terrain area comprises:
extracting topographic point cloud data corresponding to the topographic area;
and calculating a boundary corresponding to the terrain point cloud data based on the terrain point cloud data, and taking the boundary corresponding to the terrain point cloud data as a boundary corresponding to the target terrain surface model.
4. The method of claim 1, wherein determining the target terrain surface model corresponding to the target live-action model based on the features of the target live-action model comprises:
acquiring characteristic data of the three-dimensional live-action image, wherein the characteristic data is used for reconstructing a terrain mesh model;
drawing the target cutting area from the three-dimensional live-action image based on the target live-action model;
extracting target point cloud data corresponding to the target cutting area and a point cloud data boundary corresponding to the target point cloud data from the feature data;
filtering the target point cloud data to obtain topographic point cloud data;
constructing the terrain surface model based on the terrain point cloud data;
and cutting the terrain surface model based on the point cloud data boundary to obtain a target terrain surface model.
5. The method of claim 1, wherein obtaining a target live-action model corresponding to the three-dimensional live-action image comprises:
acquiring a live-action model corresponding to the three-dimensional live-action image;
extracting a closed boundary of the target BIM model, and determining a target cutting area of the live-action model;
and cutting the live-action model based on the target cutting area to obtain a target live-action model, wherein the target live-action model comprises a target cutting model corresponding to the target cutting area.
6. The method according to claim 5, wherein the extracting the closed boundary of the target BIM model and determining the target cutting area of the live-action model comprises:
obtaining model point cloud data corresponding to the target BIM model;
extracting a model point cloud boundary corresponding to the model point cloud data, wherein the model point cloud boundary comprises a plurality of straight line segments;
fitting the model point cloud boundary, and deleting redundant straight line segments to obtain a closed boundary corresponding to the model point cloud boundary;
and comparing the closed boundary with the live-action model, and determining that the closed boundary corresponds to a target cutting area of the live-action model.
7. The method of claim 1, wherein constructing a stretched body model based on the target BIM model comprises:
acquiring a minimum tight bounding box corresponding to the target BIM model;
determining a lower bottom surface position of the stretched body based on the size information of the minimum tight bounding box;
constructing a stretched body model corresponding to the target BIM model based on the lower bottom surface position.
8. A model fusion apparatus, comprising:
an acquisition module, configured to acquire a target BIM (building information modeling) model and a target live-action model corresponding to a three-dimensional live-action image, wherein the target live-action model has a target cutting area corresponding to the target BIM model;
a construction module, configured to construct a stretched body model based on the target BIM model;
a determining module, configured to determine a target terrain surface model corresponding to the target live-action model based on the features of the target live-action model;
a cutting module, configured to cut the stretched body model based on the boundary of the target terrain surface model to obtain a target stretched body model; and
a fusion module, configured to superimpose the target BIM model and the target stretched body model in the target live-action model containing the target cutting area to obtain a fusion model of the target live-action model and the target BIM model.
9. An electronic device, comprising:
a memory and a processor, the memory and the processor being communicatively connected to each other, the memory having stored therein computer instructions, the processor executing the computer instructions to perform the model fusion method according to any one of claims 1 to 7.
10. A computer-readable storage medium having stored thereon computer instructions for causing a computer to perform the method of fusion of models of any one of claims 1-7.
CN202110512003.3A (filed 2021-05-11) - Model fusion method and device, electronic equipment and readable storage medium - Pending - CN115330662A (en)

Priority Applications (1)

CN202110512003.3A (priority/filing date: 2021-05-11) - Model fusion method and device, electronic equipment and readable storage medium

Publications (1)

CN115330662A - publication date: 2022-11-11

Family

ID=83912566

Family Applications (1)

CN202110512003.3A (pending) - Model fusion method and device, electronic equipment and readable storage medium

Country Status (1)

CN - CN115330662A (en)


Legal Events

PB01 - Publication
SE01 - Entry into force of request for substantive examination