CN116342783B - Live-action three-dimensional model data rendering optimization method and system - Google Patents


Info

Publication number
CN116342783B
CN116342783B (application CN202310595080.9A)
Authority
CN
China
Prior art keywords
model
monomer
monomer element
dimensional
data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202310595080.9A
Other languages
Chinese (zh)
Other versions
CN116342783A (en)
Inventor
程方
杨健
何洋洋
黄金森
关雨
秦自成
黄栊箭
黄梓杰
池晶
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Geospace Information Technology Co ltd
Original Assignee
Geospace Information Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Geospace Information Technology Co ltd filed Critical Geospace Information Technology Co ltd
Priority to CN202310595080.9A priority Critical patent/CN116342783B/en
Publication of CN116342783A publication Critical patent/CN116342783A/en
Application granted granted Critical
Publication of CN116342783B publication Critical patent/CN116342783B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 15/00 — 3D [Three Dimensional] image rendering
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 17/00 — Three dimensional [3D] modelling, e.g. data description of 3D objects
    • Y — GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 — TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T — CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T 10/00 — Road transport of goods or passengers
    • Y02T 10/10 — Internal combustion engine [ICE] based vehicles
    • Y02T 10/40 — Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Graphics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention provides a real-scene three-dimensional model data rendering optimization method and system, wherein the method comprises the following steps: extracting different types of singulated element models from the real-scene three-dimensional model data, the types comprising singulated element models for buildings, water areas, trees, roads and small-scale urban features; and identifying the corresponding singulated elements from each element model, rendering each element in the Unreal Engine, and obtaining a real-scene three-dimensional scene in which every element model is rendered. According to the invention, the different types of singulated elements are extracted from the real-scene three-dimensional data and each type is preprocessed with its own dedicated method, so the processing pipeline is essentially fully automatic; because every element type is processed at both the model and the material level, the finally processed elements and the complete three-dimensional scene can be rendered in the Unreal Engine with a markedly stronger overall sense of texture.

Description

Live-action three-dimensional model data rendering optimization method and system
Technical Field
The invention relates to the field of three-dimensional model rendering, and in particular to a method and system for rapidly preprocessing real-scene three-dimensional data so as to further improve its rendering effect.
Background
The oblique-photography three-dimensional model is the core basic geographic scene data in city-level and component-level real-scene three-dimensional construction. Its generation is generally divided into two steps: first, aerial photographs are taken, mostly acquired by unmanned aerial vehicles carrying several cameras that capture images from five different angles; second, based on the acquired photographs, professional automatic modeling software (such as the fully automatic modeling system DP-Smart) performs aerial triangulation and a series of further operations to construct the oblique-photography three-dimensional model automatically.
An oblique-photography three-dimensional model constructed in this manner has two problems: on the one hand, it is essentially a "skin" of the real world at one instant, a static restoration of the real scene (texture-mapped from real photographs) that cannot come "alive"; on the other hand, owing to various limitations, short and densely packed elements such as urban trees are reconstructed poorly.
In the surveying and mapping industry, the rendering and display of three-dimensional data (such as real-scene oblique-photography models) is mainly based on open-source engines (desktop OSG, Web-side Cesium, and the like) and viewers built on them (such as DasViewer and SXEarth); these merely display the data, so the result as a whole is visually unappealing and cannot satisfy people's aesthetic expectations.
Disclosure of Invention
Aiming at the above technical problems in the prior art, the invention provides a real-scene three-dimensional model data rendering optimization method and system.
According to a first aspect of the present invention, there is provided a real-scene three-dimensional model data rendering optimization method, comprising:
extracting different types of singulated element models from the real-scene three-dimensional model data, the types comprising singulated element models for buildings, water areas, trees, roads and small-scale urban features;
and identifying the corresponding singulated elements from each element model, rendering each element in the Unreal Engine, and obtaining the real-scene three-dimensional scene in which every element model is rendered.
According to a second aspect of the present invention, there is provided a real-scene three-dimensional model data rendering optimization system comprising:
an extraction module for extracting different types of singulated element models from the real-scene three-dimensional model data, the types comprising singulated element models for buildings, water areas, trees, roads and small-scale urban features;
an identification module for identifying the corresponding singulated elements from each element model;
and a rendering module for rendering each singulated element in the Unreal Engine and obtaining the real-scene three-dimensional scene in which every element model is rendered.
According to the real-scene three-dimensional model data rendering optimization method and system, the different types of singulated elements are extracted from the real-scene three-dimensional data and each type is preprocessed with its own dedicated method, so the processing pipeline is essentially fully automatic; and because every element type is processed at both the model and the material level, the finally processed elements can be rendered in the Unreal Engine with a markedly stronger overall sense of texture.
Drawings
FIG. 1 is a flow chart of a real-scene three-dimensional model data rendering optimization method provided by the invention;
FIG. 2 is a schematic diagram of the rendering optimization flow for the building singulated element model;
FIG. 3 is a schematic diagram of the building singulated element model extraction flow;
FIG. 4-1 is a schematic view of the cut-plane area size and distribution of a residential building at different cutting heights;
FIG. 4-2 is a schematic view of the cut-plane area size and distribution of a podium (skirt) building at different cutting heights;
FIGS. 4-3 are schematic illustrations of the cut-plane areas and distributions of non-buildings at different cutting heights;
FIG. 5 is a schematic view of a generated building window mask image;
FIG. 6 is a schematic diagram of the rendering optimization flow for the water-area singulated element model;
FIG. 7 is a schematic diagram of the rendering optimization flow for the tree singulated element model;
FIG. 8 is a schematic flow diagram of generating the DOM and DEM from real-scene three-dimensional model data;
FIG. 9 is a schematic diagram of the rendering optimization flow for the road singulated element model;
FIG. 10 is a schematic flow chart of generating the road model;
FIG. 11 is a schematic diagram of the rendering optimization flow for the small-scale urban-feature singulated element model;
FIG. 12 is a schematic structural diagram of the real-scene three-dimensional model rendering optimization system provided by the invention;
FIG. 13 is a schematic hardware structure of one possible electronic device according to the present invention;
FIG. 14 is a schematic hardware structure of one possible computer-readable storage medium according to the present invention.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present invention more apparent, the technical solutions of the embodiments will be described clearly and completely below with reference to the accompanying drawings. It is apparent that the described embodiments are some, but not all, of the embodiments of the invention; all other embodiments obtained by a person skilled in the art based on these embodiments without inventive effort fall within the scope of the invention. In addition, the technical features of the embodiments provided herein may be combined with one another arbitrarily to form feasible technical solutions, without being limited by the order of steps or by structural composition, provided that the combination can be realized by a person of ordinary skill in the art; where a combination is contradictory or cannot be realized, it is deemed not to exist and is outside the claimed scope of the invention.
With the advent of digital twinning, three-dimensional GIS is being fused across domains with game engines (e.g., UE4, U3D), beginning to build digital-twin parallel worlds. Game engines have natural rendering advantages, including PBR, physical simulation, particle systems, weather effects, virtual lighting effects, light-and-shadow effects, real-time ray tracing, lens flare, and so on. If the two can be effectively combined, an immersive three-dimensional experience can be provided. However, a game engine's rendering depends heavily on the three-dimensional model data itself: no matter how powerful the rendering engine, a bare texture-mapped oblique-photography model cannot achieve the desired rendering effect.
In order to improve the rendering effect of the real-scene three-dimensional model in a game engine, the current practice is mostly to take the original oblique-photography model as a reference and manually build fine models one by one in professional three-dimensional modeling software (such as 3ds Max or Maya), applying PBR materials and the like during the manual fine-modeling process. For small scenes (a single building or a park) this is workable, but for large scenes (an entire city) the overall workload and cost are enormous. Taking Shanghai as an example: its area is 6,340.5 square kilometers, and at the industry's production cost for fine three-dimensional models (an average of 35,000 yuan per square kilometer), the cost alone reaches about 220 million yuan. Clearly, abandoning the oblique-photography data and modeling everything manually can meet the quality requirement, but the cost is very high and not cost-effective. Therefore most projects build manual fine models only for key local areas and keep the oblique-photography model elsewhere.
However, with this approach, on the one hand the overall aesthetics of the whole large scene cannot be guaranteed: the oblique reconstruction of urban vegetation (trees and the like) is especially poor, typically a shapeless lump in which individual trees are barely recognizable. On the other hand, the real-scene three-dimensional scene cannot come "alive"; it is completely static and lifeless. For example, in the daytime, building glass windows and water surfaces cannot be combined with weather and illumination, so effects such as glass reflection, shimmering water, wind-blown trees and swaying grass cannot be truly expressed; at night, everything is essentially black, and the city's sea of lights cannot be displayed.
The invention mainly solves the rendering-optimization problem of the real-scene three-dimensional model at the visual level, and proposes a compromise: through early-stage preprocessing of the data combined with the powerful rendering capability of a three-dimensional game engine, the problem that the static "skin" of the real-scene model cannot come alive is solved while the production cost is greatly reduced. Particularly in urban real-scene three-dimensional construction, the method applies well and presents a vivid twin parallel city scene.
Fig. 1 is a flowchart of the real-scene three-dimensional model data rendering optimization method provided by the invention. As shown in fig. 1, the method includes:
step 1, extracting different types of monomer element models from live-action three-dimensional model data, wherein the different types of monomer element models comprise a building monomer element model, a water area monomer element model, a tree monomer element model, a road monomer element model and a city small-scale monomer element model.
And 2, respectively identifying corresponding monomer elements from each monomer element model, rendering each monomer element in the illusion engine, and obtaining a real-scene three-dimensional model scene rendered by each monomer element model.
It can be understood that, when rendering the real-scene three-dimensional model data, the embodiment of the invention extracts the various types of singulated element models from the real-scene model and processes and renders each type in its own way, finally forming a real-scene three-dimensional scene of very good quality.
As an embodiment, when the singulated element model is a building model, identifying the corresponding singulated elements from each element model and rendering them in the Unreal Engine includes: extracting the singulated building model from the real-scene three-dimensional model data and unfolding its three-dimensional surfaces to form the building's original wall pictures; identifying windows from the original wall pictures and generating a building window mask picture; based on the mask picture, making a building window material in the Unreal Engine and generating the building window map; and attaching the window material to the singulated building model.
It can be appreciated that, since real-scene three-dimensional building data is a snapshot of one moment captured by aerial photography (typically a day with good weather is chosen for the flight), the effect presented by the raw real-scene model is usually not very good.
For building-type elements in the real-scene three-dimensional model data, the rendering optimization flow is shown in fig. 2 and mainly includes the following steps:
and S1, extracting features of live-action three-dimensional model data, mainly tilt model data (commonly used OSGB mode organization), to form a building monomer model. At present, a plurality of methods for building in a single way are available in the market, and most commonly, three-dimensional point clouds are built through oblique model data, three-dimensional point cloud labeling is carried out, then a point cloud identification network is carried out, semantic results are formed, and a single building model is output. This approach, while effective with high precision, is time consuming and requires reliance on a large number of three-dimensional point cloud labels. The invention mainly optimizes the rendering effect of the full scene building, does not need to monomer all the building, and can realize the wanted effect by most of the building, so the invention provides a horizontal cutting method which has higher efficiency.
As an embodiment, extracting the singulated building model from the real-scene three-dimensional model data includes: obtaining the highest and lowest elevations of the data, setting a cutting precision, and cutting the data horizontally from top to bottom; recording, at each cut, the intersection points with the tilted triangulated mesh of the data; constructing vertical geometric facades from all the intersection points and the cutting precision, and judging whether each facade conforms to a building structure; and if so, cutting the three-dimensional model data through spatial solid operations based on the building-structure solid geometric surface to form the singulated building model.
The specific flow of the horizontal cutting method is shown in fig. 3 and is as follows:
(1) obtain the highest elevation Hmax and the lowest elevation Hmin of all model elements in the tilt model (the real-scene three-dimensional model data);
(2) set the cutting precision Precision: the higher it is, the more accurate but also the more time-consuming the cut; a setting of 0.5 m is generally recommended;
(3) cut the tilt model with a horizontal plane, starting at the highest elevation Hmax and lowering step by step, the plane height at the n-th cut being Hmax - (n-1)·Precision; stop when the plane height falls below Hmin. During cutting, record the points P1, P2 … Pn where each plane intersects the tilted triangulated mesh;
(4) build vertical geometric facades from P1, P2 … Pn and Precision (as the layer height), similar to stacking building blocks;
(5) judge whether the constructed geometric facade conforms to a building structure, roughly deciding by the size and distribution of the cut-plane areas. The criteria are: first, each plane's area must be at least 2 square meters (otherwise it is discarded); second, whether the plane areas corresponding to different cutting heights follow the distribution of mainstream buildings, with the X axis being the cutting height (from high to low) and the Y axis the plane area at that height. The building in fig. 4-1 is typically a residential tower, the one in fig. 4-2 typically a podium (skirt) building, and fig. 4-3 is not a building (possibly a tree); figs. 4-1, 4-2 and 4-3 respectively show the cut-plane area distributions of different objects at different cutting heights, with the horizontal axis the cutting height in m (meters) and the vertical axis the area difference between the current cut plane and the one above, in m² (square meters).
(6) based on the above judgment result, cut the three-dimensional model data through spatial solid operations using the building-structure solid geometric surface, finally forming the singulated building model (OBJ format).
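The area-profile judgment of step (5) can be sketched as follows. This is a hypothetical illustration: the 2 m² minimum comes from the text, but the shrink/growth ratios and the function itself are invented for demonstration and are not the patent's actual classifier.

```python
# Hypothetical sketch of the horizontal-cutting judgment: given the
# cross-section area measured at each cutting height (top to bottom),
# decide whether the sliced geometry plausibly belongs to a building.

MIN_SLICE_AREA = 2.0  # square meters; slices below this are discarded


def looks_like_building(areas_top_down, max_growth_ratio=3.0):
    """areas_top_down: cross-section areas (m^2) from Hmax downward.

    A residential tower keeps a roughly constant footprint; a podium
    (skirt) building grows step-wise near the ground; a tree's profile
    bulges in the middle and shrinks again, which we reject.
    """
    areas = [a for a in areas_top_down if a >= MIN_SLICE_AREA]
    if len(areas) < 2:
        return False
    # Building profiles are (approximately) non-decreasing from roof to
    # ground; allow small noise but no mid-height bulge or sudden jump.
    for upper, lower in zip(areas, areas[1:]):
        if lower < upper * 0.8:               # shrinks going down: tree-like
            return False
        if lower > upper * max_growth_ratio:  # implausible area jump
            return False
    return True
```

A constant profile or a step-wise podium profile passes; a bulging tree-like profile fails on either the shrink or the growth check.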
Step S2: based on the generated singulated building model, unfold the three-dimensional wall surfaces to form wall pictures (PNG format).
Step S3: feed the unfolded building wall pictures to automatic AI window recognition based on a trained window-recognition sample library. Classification uses a DeepLabV3+ neural network, which classifies the pixels of the wall photo, collects the pixels identified as windows, and finally forms the window grid. Distinguishing by colour, with white representing windows and black representing non-window areas, a building window mask picture (PNG format) is finally generated, as shown in fig. 5.
Step S4: import the automatically generated mask picture into SD (Substance Designer). The PNG has four RGBA channels: the R channel distinguishes window from non-window areas, the G channel records roughness, and the B channel records metallic. Since windows are generally glass, their roughness can be set very low and their metallic high; non-window (wall) areas get higher roughness and lower metallic. After SD processing, the maps are exported, and a dedicated building-window PBR material ball is made in UE4 (the Unreal Engine), where a night-time effect can also be set.
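The channel convention of step S4 can be sketched as below. The concrete roughness/metallic values are illustrative assumptions (glass smooth and reflective, wall the opposite), and `pack_material_mask` is a hypothetical helper, not part of Substance Designer or UE4.

```python
import numpy as np

# Illustrative channel packing: R distinguishes window from wall,
# G stores roughness, B stores metallic, A is left opaque.
GLASS = dict(rough=0.05, metal=0.9)   # windows: smooth, reflective (assumed)
WALL = dict(rough=0.8, metal=0.05)    # facade: rough, non-metallic (assumed)


def pack_material_mask(window_mask):
    """window_mask: 2-D bool array, True where AI detected a window.

    Returns an (H, W, 4) uint8 RGBA image ready to export as PNG.
    """
    h, w = window_mask.shape
    rgba = np.zeros((h, w, 4), dtype=np.uint8)
    rgba[..., 0] = np.where(window_mask, 255, 0)            # R: window mask
    rgba[..., 1] = np.where(window_mask,
                            int(GLASS["rough"] * 255),
                            int(WALL["rough"] * 255))       # G: roughness
    rgba[..., 2] = np.where(window_mask,
                            int(GLASS["metal"] * 255),
                            int(WALL["metal"] * 255))       # B: metallic
    rgba[..., 3] = 255                                      # A: opaque
    return rgba
```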
Step S5: attach the produced PBR material to the building model.
Here, a PBR material is a physically based rendering material that represents real-world materials accurately using realistic shading/illumination models together with measured surface values.
Steps S1, S2, S3 and S5 can all be performed automatically by software, so the pipeline is essentially automatic, finally achieving strongly textured effects such as the building's windows reflecting light by day and lighting up at night.
When the singulated element model is a water-area model, identifying the corresponding singulated elements from each element model and rendering them in the Unreal Engine includes: obtaining the water-area elevation value from the real-scene three-dimensional model data and the base water-area surface; pulling the surface down into a box according to the base water area and the water elevation to construct the water area's vertical geometric body; and making a water-area PBR material and attaching it to the constructed vertical geometric body through the Unreal Engine.
It can be understood that water areas (rivers, lakes, etc.) captured by aerial photography often look poor, the water appearing rather turbid. In the embodiment of the invention, the rendering optimization flow for the water-area singulated element model is shown in fig. 6 and mainly includes the following steps:
step S1, acquiring elevation values of corresponding inclinations of a water area surface based on live-action three-dimensional model data (an inclination model OSGB organization) and a basic water area surface (SHP format). As the water area (river, lake, etc.) tends to be horizontal, the elevation value corresponding to a point is selected at will. The method comprises the steps of selecting an inner wrapping point X of a water area, then performing ray intersection with an inclination model, and obtaining an elevation value H at an intersection point, namely the elevation value H of the water area.
Step S2: based on the base water-area surface and the water elevation H, pull the surface down into a box (a depth of 0.5 m is recommended) to construct a vertical geometric body.
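The pull-down of step S2 can be sketched as a simple prism extrusion; the vertex/face layout shown is one illustrative convention, not a prescribed format from the patent.

```python
# Hedged sketch of step S2: pull the water polygon down by a fixed
# depth (0.5 m, as the text suggests) to form a closed vertical prism.

def extrude_water_prism(polygon_xy, surface_z, depth=0.5):
    """polygon_xy: counter-clockwise list of (x, y) boundary points.

    Returns (vertices, faces): top ring first, bottom ring second,
    side faces as quads indexed into the vertex list.
    """
    n = len(polygon_xy)
    top = [(x, y, surface_z) for x, y in polygon_xy]
    bottom = [(x, y, surface_z - depth) for x, y in polygon_xy]
    vertices = top + bottom
    faces = []
    for i in range(n):
        j = (i + 1) % n
        # side quad: top edge (i, j) joined to bottom edge (j+n, i+n)
        faces.append((i, j, j + n, i + n))
    return vertices, faces
```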
Step S3: make the water-surface PBR material in the Unreal Engine.
Step S4: attach the material made in step S3 to the vertical geometric body constructed in step S2, then merge it into the scene.
Steps S1, S2 and S4 can be performed automatically by software, so the pipeline is essentially automatic, finally achieving strongly textured effects such as shimmering water.
When the singulated element model is a tree model, identifying the corresponding singulated elements from each element model and rendering them in the Unreal Engine includes: generating the digital orthophoto map (DOM) and digital elevation model (DEM) from the real-scene three-dimensional model data; recognizing the DOM against a tree-recognition sample library and outputting the trees' range vector surface; scattering points randomly but evenly over the range vector surface to generate the trees' vector point data; intersecting rays from the vector points with the real-scene three-dimensional model data to obtain the highest elevation, and looking up the corresponding lowest elevation in the DEM; and computing each tree's height from the two elevations, matching a corresponding tree model asset in the three-dimensional model asset library by height, and planting the matched asset at the corresponding point through the Unreal Engine.
It can be appreciated that, likewise owing to aerial photography, urban trees are generally short and hard to photograph clearly, so they usually appear very blurred. For tree-type elements, the rendering optimization flow is shown in fig. 7 and mainly includes the following steps:
step S1, generating a DOM (orthophoto) and a DEM (digital elevation model) based on live-action three-dimensional model data (oblique model OSGB organization), wherein the specific flow of generating the DOM and the DEM is shown in fig. 8:
(1) oblique-photography data preprocessing: correct and rectify the oblique photographs, remove bad images, keep the high-definition images that meet requirements, and perform colour adjustment, matching, fusion and similar operations to form a complete oblique-photography image set;
(2) feature point extraction and matching: extract large numbers of feature points from the image set using a feature extraction algorithm such as SIFT, SURF or ORB, then match and filter them with algorithms such as KNN, FLANN or RANSAC to determine correct, stable key-point matches;
(3) DEM generation: compute projection matrices and camera pose parameters from the key-point matches, partition the image set into small regions by Delaunay triangulation, build a three-dimensional point-cloud model, and then generate the DEM elevation data by an interpolation algorithm (Kriging, IDW, etc.);
(4) DOM generation: register and fuse the DEM elevation data with the oblique-photography image set, and rectify the registered imagery onto the ground coordinate system by rasterization to form a distortion-free digital orthophoto (DOM);
(5) post-processing (optional): denoise, smooth and edge-enhance the DOM and DEM data to improve quality and precision, finally obtaining reliable, high-quality digital terrain and image products.
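The IDW (inverse-distance-weighting) option named in sub-step (3) can be sketched as below; the power exponent and the exact-hit handling are conventional assumptions, and `idw_elevation` is an illustrative helper rather than the patent's interpolator.

```python
# Illustrative IDW interpolation: estimate a DEM grid cell's elevation
# from scattered point-cloud samples, weighting each sample by
# 1 / distance^power.

def idw_elevation(x, y, samples, power=2.0):
    """samples: list of (x, y, z) ground points near the grid cell."""
    num = den = 0.0
    for sx, sy, sz in samples:
        d2 = (x - sx) ** 2 + (y - sy) ** 2
        if d2 == 0.0:
            return sz                  # exact hit: take the sample as-is
        w = 1.0 / d2 ** (power / 2.0)  # weight = 1 / distance^power
        num += w * sz
        den += w
    return num / den
```

Closer samples dominate: a query point midway between two samples averages them, while a query point near one sample approaches that sample's elevation.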
Step S2: based on the generated DOM, perform AI recognition against the tree-recognition sample library and finally output the trees' range vector surface (SHP format).
Step S3: scatter points randomly but evenly over the generated trees' range vector surface to produce the trees' point data (SHP format). For greater realism, a Poisson sampling algorithm is used to scatter the points uniformly.
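The even scattering of step S3 can be approximated with dart throwing under a minimum-spacing constraint, which mimics a Poisson-style distribution; the `inside()` predicate standing in for the SHP range polygon, and all numeric defaults, are hypothetical.

```python
import random

# Minimal dart-throwing sketch: draw candidates uniformly in the
# bounding box of the tree region and keep only those that respect a
# minimum spacing from points already accepted.

def scatter_trees(inside, xmin, ymin, xmax, ymax,
                  min_dist=3.0, attempts=2000, seed=42):
    rng = random.Random(seed)   # fixed seed for reproducibility
    pts = []
    for _ in range(attempts):
        x = rng.uniform(xmin, xmax)
        y = rng.uniform(ymin, ymax)
        if not inside(x, y):    # outside the tree range polygon
            continue
        if all((x - px) ** 2 + (y - py) ** 2 >= min_dist ** 2
               for px, py in pts):
            pts.append((x, y))
    return pts
```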
Step S4: for each generated vector point's coordinates (x, y), intersect a ray with the tilt model to obtain the highest elevation Hmax; query the DEM for the corresponding ground elevation Hmin; and record these as attributes in fields of the vector-point SHP.
Step S5: compute the tree height H = Hmax - Hmin, match a tree model asset of corresponding height in the three-dimensional model asset library (precondition: a large prepared library of common tree assets), and plant the matched asset at the corresponding point, forming the large-area effect of realistically planted trees.
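The height computation and asset matching of step S5 can be sketched as below; the asset table keyed by nominal height is a made-up example, not the patent's actual asset catalogue, and the returned scale factor is an assumed convention for fitting the asset to the measured height.

```python
# Hedged sketch of step S5: tree height is the difference between the
# ray-intersection elevation on the tilt model (Hmax) and the DEM
# ground elevation (Hmin); the closest-height asset is then selected.

TREE_ASSETS = {            # nominal asset height (m) -> asset name (invented)
    3.0: "shrub_small",
    8.0: "street_tree_medium",
    15.0: "plane_tree_tall",
}


def match_tree_asset(h_max, h_min, assets=TREE_ASSETS):
    height = h_max - h_min
    if height <= 0:
        return None        # intersection below ground: bad sample
    # pick the asset whose nominal height is closest, then scale to fit
    nominal = min(assets, key=lambda h: abs(h - height))
    return assets[nominal], height / nominal
```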
Steps S1 through S5 above can all be performed automatically by software, so the pipeline is essentially automatic, finally making the trees appear very clearly as described.
When the singulated element model is a road model, identifying the corresponding singulated elements from each element model and rendering them in the Unreal Engine includes: generating the digital orthophoto map (DOM) and digital elevation model (DEM) from the real-scene three-dimensional model data; recognizing the DOM against a road-recognition sample library and outputting road centerline vector data that includes road-width information; and automatically generating a ground-conforming road model through the Unreal Engine from the centerline vector data and the DEM.
It can be understood that, with aerial photogrammetry, roads fare even worse than urban trees: open arterial roads with little occlusion come out reasonably well, but secondary roads and the small streets inside a city are heavily occluded and essentially unrecognizable. In the embodiment of the present invention, referring to fig. 9, the rendering optimization flow for road elements mainly includes the following steps:
Step S1: generate the DOM (orthophoto) and DEM (digital elevation model) from the real-scene three-dimensional model data (oblique model in OSGB organization). The relevant details were covered in the tree-processing workflow and are not repeated here.
Step S2: based on the generated DOM (orthophoto), perform AI recognition against the road recognition sample library and finally output road centerline vector data (SHP format), including a road width attribute (derived from the AI-recognized road surface).
Step S3: automatically generate a terrain-conforming road model from the road centerline vector data and the DEM (digital elevation model).
As an embodiment, automatically generating the terrain-conforming road model from the road centerline vector data and the digital elevation model DEM includes: densifying the road centerline by interpolation according to the DEM and the sampling precision, obtaining the elevation of each point and recording it in the centerline vector layer; generating a road model from the road width information and the per-point elevations; and, based on the road width information, matching a specific road material to the road surface in the road model through Unreal Engine to produce the rendered road model.
Referring to fig. 10, a schematic diagram of generating the terrain-conforming road model, the process comprises:
(1) densify the road centerline by interpolation according to the DEM and the sampling precision (which may be set to 2 meters), then obtain the elevation of each point and record it as a Z value in the centerline vector layer;
(2) generate a road model (white model) from the road width and the Z values, pulling the surface down by 0.5 m (in this step the Z values only represent the undulation of the road);
(3) based on the road width information, match a specific road material to the road surface (selected, for example, by the number of lanes), finally generating the road model.
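Steps (1) and (2) above, centerline densification at the sampling precision plus elevation sampling with the 0.5 m pull-down, can be sketched as follows. This is a minimal illustration: the DEM query is stood in by a toy slope function rather than a real raster lookup.

```python
import math

def densify_centerline(vertices, step=2.0):
    """Insert interpolated points along a polyline so that consecutive
    points are at most `step` apart (the sampling precision, e.g. 2 m)."""
    out = [vertices[0]]
    for (x0, y0), (x1, y1) in zip(vertices, vertices[1:]):
        seg = math.hypot(x1 - x0, y1 - y0)
        n = max(1, math.ceil(seg / step))
        for i in range(1, n + 1):
            t = i / n
            out.append((x0 + t * (x1 - x0), y0 + t * (y1 - y0)))
    return out

def attach_elevation(points, dem_lookup, pull_down=0.5):
    """Query the DEM at each point and record Z = elevation - pull_down,
    matching the 0.5 m pull-down used when building the white model."""
    return [(x, y, dem_lookup(x, y) - pull_down) for x, y in points]

# toy DEM: a gentle slope standing in for the real raster query
dem = lambda x, y: 30.0 + 0.01 * x
line = densify_centerline([(0.0, 0.0), (10.0, 0.0)], step=2.0)
print(attach_elevation(line, dem)[:2])
```

The resulting (x, y, Z) sequence is what the engine-side road builder would sweep the road-width profile along to obtain the terrain-conforming white model.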
Steps S1 to S3 above can be executed automatically by software, and the generated road model is merged into the scene. If a more realistic overall scene is desired, dynamic traffic flow can be added in Unreal Engine: load the road model into the engine, create a vehicle blueprint, add a vehicle motion track, duplicate the blueprint several times and place the copies on different roads, thereby simulating traffic running on the roads.
When the individual element model is an urban furniture element model, identifying the corresponding individual elements from each element model and rendering them in Unreal Engine includes: generating an orthophoto DOM and a digital elevation model DEM from the real-scene three-dimensional model data; recognizing the generated DOM against an urban furniture recognition sample library and outputting urban furniture point vector data that includes a furniture type attribute field, the furniture types including street lamps, guideboards, garbage stations and sentry boxes; looking up the corresponding furniture asset in an urban furniture three-dimensional asset library according to the furniture type; and automatically placing the furniture asset into the three-dimensional scene through Unreal Engine according to the point locations.
It can be understood that, likewise owing to the aerial photogrammetry, urban furniture generally sits close to the ground and is essentially unrecognizable in the real-scene three-dimensional model.
For urban furniture, the rendering optimization flow of the urban furniture element model according to the embodiment of the present invention, referring to fig. 11, mainly includes:
Step S1: generate the DOM (orthophoto) and DEM (digital elevation model) from the real-scene three-dimensional model data (oblique model in OSGB organization). The relevant details were covered in the tree-processing workflow and are not repeated here.
Step S2: based on the generated DOM (orthophoto), perform AI recognition against the urban furniture recognition sample library and finally output urban furniture point vector data (SHP format), including a furniture type attribute field. The urban furniture mainly comprises street lamps, guideboards, garbage stations, sentry boxes and the like.
Step S3: look up the corresponding furniture asset in the urban furniture three-dimensional asset library according to the recognized type, then automatically place the asset into the three-dimensional scene according to the point locations.
Steps S1 to S3 above can be executed automatically by software.
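Step S3's type-to-asset lookup and point placement can be sketched as follows. The asset identifiers and field names below are hypothetical stand-ins for whatever the real asset library and SHP schema use.

```python
# hypothetical mapping from the recognized furniture type (the SHP
# attribute field) to an asset identifier in the urban furniture library
ASSET_LIBRARY = {
    "street_lamp": "assets/furniture/street_lamp_v1",
    "guideboard": "assets/furniture/guideboard_v1",
    "garbage_station": "assets/furniture/garbage_station_v1",
    "sentry_box": "assets/furniture/sentry_box_v1",
}

def place_furniture(point_features):
    """Turn recognized point features into placement records that an
    engine-side script could consume to spawn the assets."""
    placements = []
    for feat in point_features:
        asset = ASSET_LIBRARY.get(feat["type"])
        if asset is None:
            continue  # type not in the asset library: skip rather than guess
        placements.append({"asset": asset, "x": feat["x"], "y": feat["y"]})
    return placements

points = [
    {"type": "street_lamp", "x": 12.0, "y": 3.0},
    {"type": "sentry_box", "x": 40.0, "y": 8.5},
]
print(place_furniture(points))
```

Each placement record carries everything the engine needs to spawn the asset at the recognized point; orientation and ground snapping would be added from the DEM in a fuller version.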
Summarizing the processing of the various element types: the corresponding elements can be built automatically from the real-scene three-dimensional model, and, combined with the rendering strengths of the game engine, the result appears real, clear and richly textured.
Referring to fig. 12, a real-scene three-dimensional model data rendering optimization system provided by the invention comprises an extraction module 101, an identification module 102 and a rendering module 103, wherein:
the extraction module 101 is configured to extract individual element models of different types from the real-scene three-dimensional model data, the types including a building element model, a water area element model, a tree element model, a road element model and an urban furniture element model;
the identification module 102 is configured to identify the corresponding individual elements from each individual element model;
and the rendering module 103 is configured to render each individual element in Unreal Engine and obtain the real-scene three-dimensional model scene rendered from the individual element models.
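Assuming the three modules are exposed as callables, the control flow of the system can be sketched as follows. This is a minimal Python illustration only; the handlers here are placeholders, not the real extraction, recognition or engine-side implementations.

```python
class RenderingOptimizationPipeline:
    """Minimal sketch of the three-module system: extraction (101),
    identification (102), rendering (103)."""

    def __init__(self, extract, identify, render):
        self.extract = extract      # module 101: data -> {type: element model}
        self.identify = identify    # module 102: (type, model) -> elements
        self.render = render        # module 103: (type, elements) -> rendered result

    def run(self, scene_data):
        rendered = {}
        for kind, model in self.extract(scene_data).items():
            elements = self.identify(kind, model)
            rendered[kind] = self.render(kind, elements)
        return rendered

# placeholder handlers standing in for the real per-type workflows
pipe = RenderingOptimizationPipeline(
    extract=lambda data: {"tree": data, "road": data},
    identify=lambda kind, model: [f"{kind}_element"],
    render=lambda kind, elems: len(elems),
)
print(pipe.run("osgb"))  # {'tree': 1, 'road': 1}
```

The design keeps the per-type workflows (building, water, tree, road, furniture) behind a common three-stage interface so new element types can be added without changing the pipeline itself.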
It can be understood that the real-scene three-dimensional model data rendering optimization system provided by the invention corresponds to the real-scene three-dimensional model data rendering optimization method of the foregoing embodiments; for the relevant technical features of the system, reference may be made to those of the method, which are not described again here.
Referring to fig. 13, fig. 13 is a schematic diagram of an electronic device according to an embodiment of the invention. As shown in fig. 13, an embodiment of the present invention provides an electronic device 1500 comprising a memory 1510, a processor 1520, and a computer program 1511 stored in the memory 1510 and executable on the processor 1520, wherein the processor 1520, when executing the computer program 1511, implements the steps of the real-scene three-dimensional model data rendering optimization method.
Referring to fig. 14, fig. 14 is a schematic diagram of a computer-readable storage medium according to an embodiment of the invention. As shown in fig. 14, this embodiment provides a computer-readable storage medium 1600 having stored thereon a computer program 1611 which, when executed by a processor, implements the steps of the real-scene three-dimensional model data rendering optimization method.
According to the real-scene three-dimensional model data rendering optimization method and system provided by the embodiments of the invention, individual elements of different types are extracted from the real-scene three-dimensional model data and preprocessed with type-specific methods. The processing is essentially fully automatic, and each element type is processed in terms of both geometry and material, so that the processed elements can be rendered in Unreal Engine to achieve a stronger overall sense of texture.
In the foregoing embodiments, the descriptions of the embodiments are focused on, and for those portions of one embodiment that are not described in detail, reference may be made to the related descriptions of other embodiments.
It will be appreciated by those skilled in the art that embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While preferred embodiments of the present invention have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. It is therefore intended that the following claims be interpreted as including the preferred embodiments and all such alterations and modifications as fall within the scope of the invention.
It will be apparent to those skilled in the art that various modifications and variations can be made to the present invention without departing from the spirit or scope of the invention. Thus, it is intended that the present invention also include such modifications and alterations insofar as they come within the scope of the appended claims or the equivalents thereof.

Claims (5)

1. A real-scene three-dimensional model rendering optimization method, characterized by comprising the following steps:
extracting individual element models of different types from the real-scene three-dimensional model data, wherein the individual element models of different types comprise a building element model, a water area element model, a tree element model, a road element model and an urban furniture element model;
identifying corresponding individual elements from each individual element model, rendering each individual element in Unreal Engine, and obtaining a real-scene three-dimensional model scene rendered from the individual element models;
when the individual element model is a building element model, identifying corresponding individual elements from each element model and rendering each individual element in Unreal Engine comprises:
extracting the individual building model from the real-scene three-dimensional model data and unfolding its three-dimensional surface to form an original wall image of the building;
identifying windows from the original wall image of the building and generating a building window mask image;
producing a building window material in Unreal Engine based on the window mask image, generating a building window image, and attaching the window material to the individual building model;
when the individual element model is a water area element model, identifying corresponding individual elements from each element model and rendering each individual element in Unreal Engine comprises:
obtaining a water surface elevation value from the real-scene three-dimensional model data and the base water area;
extruding the base water area downward according to the water surface elevation value to construct a vertical water area geometry;
producing a water area PBR material and attaching it to the constructed vertical water area geometry through Unreal Engine to form the rendered water area element model;
when the individual element model is a tree element model, identifying corresponding individual elements from each element model and rendering each individual element in Unreal Engine comprises:
generating an orthophoto DOM and a digital elevation model DEM from the real-scene three-dimensional model data;
recognizing the orthophoto DOM against a tree recognition sample library and outputting the extent polygons of the trees;
scattering points randomly but evenly within the generated extent polygons to generate tree vector point data;
performing ray intersection between the tree vector points and the real-scene three-dimensional model data to obtain the highest elevation, and looking up the corresponding lowest elevation in the digital elevation model;
calculating the tree height from the highest and lowest elevations, matching corresponding tree model assets in a three-dimensional model asset library according to the tree height, and planting the matched tree model assets at the corresponding points through Unreal Engine;
when the individual element model is a road element model, identifying corresponding individual elements from each element model and rendering each individual element in Unreal Engine comprises:
generating an orthophoto DOM and a digital elevation model DEM from the real-scene three-dimensional model data;
recognizing the orthophoto DOM against a road recognition sample library and outputting road centerline vector data, wherein the road centerline vector data comprises road width information;
automatically generating a terrain-conforming road model through Unreal Engine from the road centerline vector data and the digital elevation model DEM;
when the individual element model is an urban furniture element model, identifying corresponding individual elements from each element model and rendering each individual element in Unreal Engine comprises:
generating an orthophoto DOM and a digital elevation model DEM from the real-scene three-dimensional model data;
recognizing the generated orthophoto DOM against an urban furniture recognition sample library and outputting urban furniture point vector data, wherein the point vector data comprises a furniture type attribute field, the furniture types including street lamps, guideboards, garbage stations and sentry boxes;
looking up the corresponding furniture assets in an urban furniture three-dimensional asset library according to the furniture type;
and automatically placing the furniture assets into the three-dimensional scene through Unreal Engine according to the point locations.
2. The real-scene three-dimensional model rendering optimization method according to claim 1, wherein extracting the individual building model from the real-scene three-dimensional model data comprises:
obtaining the highest and lowest elevations of the real-scene three-dimensional model data, setting a cutting precision, and slicing the data horizontally from top to bottom;
during the cutting, recording the intersections of each horizontal cutting plane with the oblique triangulated mesh of the real-scene three-dimensional model data;
constructing a vertical geometric facade from all the intersections and the cutting precision, and judging whether the geometric facade conforms to a building structure facade;
and if so, cutting the real-scene three-dimensional model data through spatial three-dimensional operations based on the solid geometric facade of the building structure to form the individual building model.
3. The method of claim 2, wherein judging whether the geometric facade conforms to a building structure facade comprises:
judging whether the geometric facade conforms to a building structure facade according to the size and distribution of the cut plane areas at the different cutting heights.
4. The real-scene three-dimensional model rendering optimization method according to claim 1, wherein automatically generating the terrain-conforming road model from the road centerline vector data and the digital elevation model DEM comprises:
densifying the road centerline points by interpolation according to the digital elevation model and the sampling precision, obtaining the elevation of each point, and recording it in the centerline vector layer;
generating a road model from the road width information and the per-point elevations;
and, based on the road width information, matching a specific road material to the road surface in the road model through Unreal Engine to generate the rendered road model.
5. A real-scene three-dimensional model data rendering optimization system, characterized by comprising:
an extraction module, configured to extract individual element models of different types from the real-scene three-dimensional model data, wherein the individual element models of different types comprise a building element model, a water area element model, a tree element model, a road element model and an urban furniture element model;
an identification module, configured to identify corresponding individual elements from each individual element model;
a rendering module, configured to render each individual element in Unreal Engine and obtain a real-scene three-dimensional model scene rendered from the individual element models;
wherein, when the individual element model is a building element model, identifying corresponding individual elements from each element model and rendering each individual element in Unreal Engine comprises:
extracting the individual building model from the real-scene three-dimensional model data and unfolding its three-dimensional surface to form an original wall image of the building;
identifying windows from the original wall image of the building and generating a building window mask image;
producing a building window material in Unreal Engine based on the window mask image, generating a building window image, and attaching the window material to the individual building model;
when the individual element model is a water area element model, identifying corresponding individual elements from each element model and rendering each individual element in Unreal Engine comprises:
obtaining a water surface elevation value from the real-scene three-dimensional model data and the base water area;
extruding the base water area downward according to the water surface elevation value to construct a vertical water area geometry;
producing a water area PBR material and attaching it to the constructed vertical water area geometry through Unreal Engine to form the rendered water area element model;
when the individual element model is a tree element model, identifying corresponding individual elements from each element model and rendering each individual element in Unreal Engine comprises:
generating an orthophoto DOM and a digital elevation model DEM from the real-scene three-dimensional model data;
recognizing the orthophoto DOM against a tree recognition sample library and outputting the extent polygons of the trees;
scattering points randomly but evenly within the generated extent polygons to generate tree vector point data;
performing ray intersection between the tree vector points and the real-scene three-dimensional model data to obtain the highest elevation, and looking up the corresponding lowest elevation in the digital elevation model;
calculating the tree height from the highest and lowest elevations, matching corresponding tree model assets in a three-dimensional model asset library according to the tree height, and planting the matched tree model assets at the corresponding points through Unreal Engine;
when the individual element model is a road element model, identifying corresponding individual elements from each element model and rendering each individual element in Unreal Engine comprises:
generating an orthophoto DOM and a digital elevation model DEM from the real-scene three-dimensional model data;
recognizing the orthophoto DOM against a road recognition sample library and outputting road centerline vector data, wherein the road centerline vector data comprises road width information;
automatically generating a terrain-conforming road model through Unreal Engine from the road centerline vector data and the digital elevation model DEM;
when the individual element model is an urban furniture element model, identifying corresponding individual elements from each element model and rendering each individual element in Unreal Engine comprises:
generating an orthophoto DOM and a digital elevation model DEM from the real-scene three-dimensional model data;
recognizing the generated orthophoto DOM against an urban furniture recognition sample library and outputting urban furniture point vector data, wherein the point vector data comprises a furniture type attribute field, the furniture types including street lamps, guideboards, garbage stations and sentry boxes;
looking up the corresponding furniture assets in an urban furniture three-dimensional asset library according to the furniture type;
and automatically placing the furniture assets into the three-dimensional scene through Unreal Engine according to the point locations.
CN202310595080.9A 2023-05-25 2023-05-25 Live-action three-dimensional model data rendering optimization method and system Active CN116342783B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310595080.9A CN116342783B (en) 2023-05-25 2023-05-25 Live-action three-dimensional model data rendering optimization method and system


Publications (2)

Publication Number Publication Date
CN116342783A CN116342783A (en) 2023-06-27
CN116342783B true CN116342783B (en) 2023-08-08






Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant