CN112927352A - Three-dimensional scene local area dynamic flattening method and device based on flattening polygon

Three-dimensional scene local area dynamic flattening method and device based on flattening polygon

Info

Publication number
CN112927352A
Authority
CN
China
Prior art keywords
flattening
polygon
flattened
camera
matrix
Prior art date
Legal status
Pending
Application number
CN202110200278.3A
Other languages
Chinese (zh)
Inventor
孙瑞
张宇航
祝炜
胡斌
Current Assignee
Nanjing Normal University
Original Assignee
Nanjing Normal University
Priority date
Filing date
Publication date
Application filed by Nanjing Normal University filed Critical Nanjing Normal University
Priority to CN202110200278.3A priority Critical patent/CN112927352A/en
Publication of CN112927352A publication Critical patent/CN112927352A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00Three dimensional [3D] modelling, e.g. data description of 3D objects

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Generation (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention discloses a method and device for dynamically flattening a local area of a three-dimensional scene based on a flattening polygon. A flattening polygon is defined in the user coordinate system to determine the flattening area; a flattening camera is created from the flattening polygon, and its observation matrix, projection matrix and viewport matrix are set; a depth map of the flattening polygon is generated with the flattening camera; the observation matrix, the projection matrix and the flattening-polygon depth map are passed into the GPU programmable pipeline; and in a GPU vertex shader, the containment relation between the flattening polygon and each vertex is tested, and every vertex that lies inside the flattening polygon and above it is displaced downward (flattened). The invention places few restrictions on the flattening area: the area is not limited to a horizontal plane, may be an arbitrary inclined plane, and may be defined, as the user requires, by a space polygon whose vertices are not exactly coplanar.

Description

Three-dimensional scene local area dynamic flattening method and device based on flattening polygon
Technical Field
The invention belongs to the field of spatial information, and particularly relates to a three-dimensional scene local area dynamic flattening method and device based on a flattening polygon.
Background
Three-dimensional models are an important data basis of three-dimensional visualization, and their accuracy is closely related to the visual quality of a three-dimensional scene: a fine model improves the visualization, while a low-quality model degrades the appearance and may even compromise the application. Fine models are generally produced by manual modeling or LiDAR-based modeling, but because production costs are high, fine modeling is usually applied only to key targets. Oblique photography is a full-element three-dimensional modeling technique with low production cost and high speed, capable of batch three-dimensional reconstruction of large scenes including both geometric structure and texture; however, limited by the current state of the art, it models fine and scattered targets such as trees and utility poles poorly and easily produces large numbers of low-quality models.
Specialized model-retouching software can be employed to remove low-quality models and improve visualization. The removal operation is essentially a secondary modeling of the original model and requires re-acquiring data (e.g., texture images) to achieve a three-dimensional reconstruction of the removed region. At present, some retouching tools such as Meshmixer and Wish3D provide a local model-flattening function that flattens a low-quality model (such as a tree) directly to the ground without re-acquiring data, indirectly satisfying the need to remove the low-quality model. However, this approach has the following problems: (1) flattening requires skilled operators and professional retouching software, which adds extra workload and is essentially infeasible for ordinary users; (2) retouching-software flattening physically destroys the three-dimensional model, turning a question of model quality into a binary "0-1" question of whether the model exists at all, so it introduces new inconsistencies between the virtual scene and the real scene; and because the operation is irreversible, it is unacceptable in some applications.
Disclosure of Invention
The purpose of the invention is as follows: the invention provides a method and device for dynamically flattening a local area of a three-dimensional scene based on a flattening polygon. The method places few restrictions on the flattening area, which is not limited to a horizontal plane and may be an arbitrary inclined plane; it is highly interactive, allowing the actual flattening effect to be inspected conveniently and adjusted dynamically.
The technical scheme is as follows: the invention provides a three-dimensional scene local area dynamic flattening method based on a flattening polygon, which specifically comprises the following steps:
(1) defining a flattening polygon in a user coordinate system to determine a flattening area: defining a flattening polygon according to the target to be flattened, so that the flattening polygon is tightly attached to a flattening area of the target to be flattened;
(2) creating a flattening camera according to the flattening polygon, and setting an observation matrix, a projection matrix and a viewport matrix;
(3) generating a flattened polygon depth map using a flattening camera;
(4) transmitting the observation matrix, the projection matrix and the flattened polygon depth map of the flattening camera into a GPU programmable pipeline;
(5) in a vertex shader on the GPU, testing the containment relation between the flattening polygon and each vertex, and displacement-flattening every vertex that lies inside the flattening polygon and above it.
Further, the step (2) comprises the steps of:
(21) calculating the flattening-polygon bounding box: the bounding box of the flattening polygon is the smallest axis-aligned box containing all of its vertices, where the top-face Z value bTop equals the maximum of all vertex Z values and the bottom-face Z value bBottom equals the minimum of all vertex Z values; bTop is then corrected to bTop = max(bTop, bBottom + f), where f is any value greater than 0;
(22) setting a flattening camera observation matrix: determining a straight line L by flattening the centers of the upper bottom surface and the lower bottom surface of the polygonal bounding box, selecting any space point higher than the bounding box on the L as an observation coordinate system origin O, defining X, Y and Z axis of an observation coordinate system to be consistent with the directions of X, Y and Z axis of a user coordinate system respectively, establishing an observation coordinate system, and setting a flattening camera observation matrix according to the observation coordinate system;
(23) setting a flattening camera projection matrix: setting the flattening polygon bounding box as an observation space of a flattening camera, and then setting a projection matrix of the flattening camera according to the orthogonal projection type and the observation space of the flattening camera;
(24) setting a flattening camera viewport matrix: setting a viewport width W and a height H of the flattening camera, wherein W and H are both greater than 0; a viewport matrix of the flattened camera is set according to the viewport width and height.
Further, the step (3) includes the steps of:
(31) decomposing the flattened polygon into a triangular mesh;
(32) outputting the flattening-polygon depth map: disable writes to the color buffer, enable the depth buffer, submit the decomposed triangle mesh to the GPU, and generate the flattening-polygon depth map.
Further, the step (5) includes the steps of:
(51) in a vertex shader, transforming vertex coordinates V0(x0, y0, z0) in the user coordinate system into the flattening camera's texture space according to the observation matrix and projection matrix of the flattening camera, denoting the transformed coordinates V1(x1, y1, z1);
(52) if x1 and y1 both lie in the range [0, 1], sampling the flattening-polygon depth map at coordinates (x1, y1) to obtain depth value z2; if z2 < 1.0, inverse-transforming coordinates (x1, y1, z2) to the user coordinate system according to the observation matrix and projection matrix of the flattening camera to obtain coordinates (x3, y3, z3): if z3 < z0, the vertex needs to be flattened, and the z value of V0 is modified to z3; otherwise V0 is kept unchanged;
(53) V0 then participates in the normal rendering process.
Based on the same inventive concept, the invention further provides a device for dynamically flattening a local area of a three-dimensional scene based on a flattening polygon, comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the computer program, when loaded into the processor, implements the above three-dimensional scene local area dynamic flattening method based on a flattening polygon.
Advantageous effects: compared with the prior art, the invention has the following benefits. Unlike traditional flattening, which must modify the model file, the method does not modify the three-dimensional model; it is highly interactive, and the actual flattening effect can be inspected conveniently and adjusted dynamically. At the same time, the invention places few restrictions on the flattening area: the area is not limited to a horizontal plane, may be an arbitrary inclined plane, and may be defined, as the user requires, by a space polygon whose vertices are not exactly coplanar.
Drawings
FIG. 1 is a flow chart of the present invention;
FIG. 2 is a visual effect diagram before the electric pole is flattened;
FIG. 3 is a diagram of an interactive definition of flattened polygons;
FIG. 4 is a schematic view of a flattening camera setup;
FIG. 5 is a visual effect diagram of the electric pole after flattening.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings.
The invention provides a three-dimensional scene local area dynamic flattening method based on a flattening polygon, which specifically comprises the following steps:
step 1: defining a flattening polygon in a user coordinate system to determine a flattening area: and defining a flattening polygon according to the target to be flattened, so that the flattening polygon is tightly attached to a flattening area of the target to be flattened.
FIG. 2 shows the visual effect before the electric pole is flattened. As shown in FIG. 3, the screen coordinates of the flattening-polygon vertices are obtained by picking points on the screen; the flattening polygon should fit tightly against the flattening area of the target to be flattened. The flattening area, i.e. the reference plane onto which the model is to be flattened, generally coincides with the ground, and the flattening polygon is a user-defined space polygon approximating that area. The screen coordinates are then transformed into the user coordinate system by means of the viewport matrix, projection matrix and view matrix of the rendering camera at the current viewpoint. In OpenGL, this can be implemented with the gluUnProject function.
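By way of illustration, a minimal C++ sketch of this unprojection follows; only the gl*/glu* calls are real API, and the function and variable names are ours, not the patent's:

    #include <GL/glu.h>

    // Map a picked screen point (sx, sy) back to the user coordinate system.
    // Reads the current matrices and the depth under the cursor, then calls
    // gluUnProject. Window y is flipped because OpenGL counts rows upward.
    bool screenToUser(double sx, double sy,
                      double& wx, double& wy, double& wz) {
        GLdouble view[16], proj[16];
        GLint viewport[4];
        glGetDoublev(GL_MODELVIEW_MATRIX, view);
        glGetDoublev(GL_PROJECTION_MATRIX, proj);
        glGetIntegerv(GL_VIEWPORT, viewport);

        GLdouble winY = viewport[3] - sy;
        GLfloat depth = 0.0f;
        glReadPixels((GLint)sx, (GLint)winY, 1, 1,
                     GL_DEPTH_COMPONENT, GL_FLOAT, &depth);

        return gluUnProject(sx, winY, depth, view, proj, viewport,
                            &wx, &wy, &wz) == GL_TRUE;
    }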
Step 2: a flattening camera is created from the flattened polygon, and an observation matrix, a projection matrix, and a viewport matrix are set.
(2.1) Calculating the flattening-polygon bounding box: as shown in FIG. 4, the bounding box of the flattening polygon is the smallest axis-aligned box containing all of its vertices, where the top-face Z value bTop equals the maximum vertex Z value and the bottom-face Z value bBottom equals the minimum vertex Z value. To prevent the box from degenerating when the top and bottom faces are coplanar, bTop is corrected to bTop = max(bTop, bBottom + f), where f is an adjustment factor taking any value greater than 0. Since the flattening polygon is close to horizontal, with an adjustment factor f = 1 the final bTop is bBottom + 1.
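A minimal sketch of this computation under the definitions above; Vec3 and Box are illustrative types of ours:

    #include <algorithm>
    #include <vector>

    struct Vec3 { double x, y, z; };

    struct Box {
        Vec3 min, max;  // min.z plays the role of bBottom, max.z of bTop
    };

    Box polygonBoundingBox(const std::vector<Vec3>& poly, double f = 1.0) {
        Box b{poly[0], poly[0]};
        for (const Vec3& v : poly) {
            b.min = {std::min(b.min.x, v.x), std::min(b.min.y, v.y),
                     std::min(b.min.z, v.z)};
            b.max = {std::max(b.max.x, v.x), std::max(b.max.y, v.y),
                     std::max(b.max.z, v.z)};
        }
        // bTop = max(bTop, bBottom + f), f > 0: top and bottom never coincide.
        b.max.z = std::max(b.max.z, b.min.z + f);
        return b;
    }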
(2.2) Setting the flattening camera observation matrix: as shown in FIG. 4, the centers of the top and bottom faces of the flattening-polygon bounding box determine a straight line L; any point on L above the bounding box is selected as the origin O of the observation coordinate system, and the X, Y and Z axes of the observation coordinate system are defined to coincide in direction with the X, Y and Z axes of the user coordinate system. The observation coordinate system is thus established, and the flattening camera's observation matrix is set from it. In OpenGL, the observation matrix can be set from this information with the gluLookAt function.
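A minimal sketch under these conventions, reusing Box from the sketch in step (2.1); the +1 offset above the box is an arbitrary choice, since any point on L above the box works:

    // Eye on the vertical line L through the box center, above the box,
    // looking straight down; up = +Y keeps the view axes aligned with the
    // user coordinate axes, as the patent requires.
    void setFlatteningViewMatrix(const Box& b) {
        double cx = 0.5 * (b.min.x + b.max.x);
        double cy = 0.5 * (b.min.y + b.max.y);
        double eyeZ = b.max.z + 1.0;       // origin O of the observation frame

        glMatrixMode(GL_MODELVIEW);
        glLoadIdentity();
        gluLookAt(cx, cy, eyeZ,            // eye at O
                  cx, cy, b.min.z,         // look down the -Z direction
                  0.0, 1.0, 0.0);          // keep +Y as "up"
    }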
(2.3) Setting the flattening camera projection matrix: the flattening-polygon bounding box is set as the viewing volume of the flattening camera, and the projection matrix is then set according to the orthographic projection type and this viewing volume. In OpenGL, the orthographic projection matrix can be set from the viewing volume with the glOrtho function (see the sketch after step (2.4)).
(2.4) Setting the flattening camera viewport matrix: set the viewport width W and height H of the flattening camera, both of which must be greater than 0; the actual width and height of the window may be used directly. The viewport matrix of the flattening camera is set from the viewport width and height. In OpenGL, the viewport can be set with the glViewport function.
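A minimal sketch covering steps (2.3) and (2.4) together, again reusing Box and the eye height from the sketches above:

    // Orthographic viewing volume = the flattening-polygon bounding box,
    // expressed relative to the eye chosen in step (2.2); near and far are
    // distances below the eye along the viewing direction.
    void setFlatteningProjectionAndViewport(const Box& b, double eyeZ,
                                            int W, int H) {
        double hw = 0.5 * (b.max.x - b.min.x);  // half width of the box
        double hh = 0.5 * (b.max.y - b.min.y);  // half height of the box
        glMatrixMode(GL_PROJECTION);
        glLoadIdentity();
        glOrtho(-hw, hw, -hh, hh,
                eyeZ - b.max.z,                 // near plane at the box top
                eyeZ - b.min.z);                // far plane at the box bottom
        glViewport(0, 0, W, H);                 // W, H > 0, e.g. window size
    }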
Step 3: generate the flattening-polygon depth map using the flattening camera.
Decompose the flattening polygon into a triangle mesh, then output the flattening-polygon depth map: disable writes to the color buffer, enable the depth buffer, submit the decomposed triangle mesh to the GPU, and generate the depth map of the flattening polygon by rendering into a framebuffer object (FBO).
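A minimal sketch of such a depth-only pass, assuming an OpenGL 3.0+ context with an extension loader already initialized; the actual polygon draw call is elided:

    // Attach a depth texture to an FBO; "close the color buffer" by having
    // no color attachment and disabling draw/read buffers.
    GLuint createDepthMapFBO(GLsizei W, GLsizei H, GLuint& depthTex) {
        glGenTextures(1, &depthTex);
        glBindTexture(GL_TEXTURE_2D, depthTex);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT24, W, H, 0,
                     GL_DEPTH_COMPONENT, GL_FLOAT, nullptr);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

        GLuint fbo;
        glGenFramebuffers(1, &fbo);
        glBindFramebuffer(GL_FRAMEBUFFER, fbo);
        glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                               GL_TEXTURE_2D, depthTex, 0);
        glDrawBuffer(GL_NONE);
        glReadBuffer(GL_NONE);
        glBindFramebuffer(GL_FRAMEBUFFER, 0);
        return fbo;
    }

    void renderDepthMap(GLuint fbo, GLsizei W, GLsizei H) {
        glBindFramebuffer(GL_FRAMEBUFFER, fbo);
        glViewport(0, 0, W, H);
        glEnable(GL_DEPTH_TEST);             // "open the depth buffer"
        glClear(GL_DEPTH_BUFFER_BIT);
        // ... draw the triangle mesh of the flattening polygon here,
        //     with the flattening camera's matrices bound ...
        glBindFramebuffer(GL_FRAMEBUFFER, 0);
    }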
Step 4: pass the observation matrix, projection matrix and flattening-polygon depth map of the flattening camera into the GPU programmable pipeline.
Step 5: in a vertex shader on the GPU, test the containment relation between the flattening polygon and each vertex, and displacement-flatten every vertex that lies inside the flattening polygon and above it.
(5.1) Render the three-dimensional scene normally. In the vertex shader, transform the vertex coordinates V0(x0, y0, z0) into the texture space of the flattening camera according to the flattening camera's view matrix Mview, projection matrix Mproject and viewport matrix Mviewport: the overall transformation matrix may be set to M = Mview · Mproject · Mviewport, and V0 is transformed and normalized by M to obtain the transformed coordinates V1(x1, y1, z1).
(5.2) If x1 and y1 both lie in the range [0, 1], sample the flattening-polygon depth map at coordinates (x1, y1) to obtain depth value z2. If z2 < 1.0, the projection of the point onto the XY plane is covered by the flattening polygon; transform the coordinates (x1, y1, z2) with the inverse of the M matrix to obtain coordinates (x3, y3, z3). If z3 < z0, the vertex needs to be flattened, and the z value of V0 is modified to z3; otherwise the vertex does not need to be flattened, and V0 is kept unchanged.
(5.3) V0 then participates in the normal rendering process. The final effect is shown in FIG. 5.
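A minimal sketch of such a step-5 vertex shader follows, written as GLSL source embedded in a C++ string. All uniform and attribute names are ours, not the patent's; the viewport matrix of step (5.1) is replaced here by the equivalent NDC-to-[0, 1] remap that texture sampling expects, and the inverse of M is assumed to be computed on the CPU and passed in as a uniform:

    const char* kFlattenVertexShader = R"glsl(
    #version 330 core
    layout(location = 0) in vec3 aPos;   // V0 in the user coordinate system

    uniform mat4 uViewProj;              // normal rendering camera
    uniform mat4 uFlattenVP;             // flattening camera: projection * view
    uniform mat4 uFlattenVPInv;          // inverse of uFlattenVP (CPU-computed)
    uniform sampler2D uFlattenDepth;     // flattening-polygon depth map

    void main() {
        vec3 v0 = aPos;

        // (5.1) V1: V0 in the flattening camera's texture space, [0, 1]^3.
        vec4 clip = uFlattenVP * vec4(v0, 1.0);
        vec3 v1 = clip.xyz / clip.w * 0.5 + 0.5;

        // (5.2) Only vertices whose footprint lies inside the depth map.
        if (all(greaterThanEqual(v1.xy, vec2(0.0))) &&
            all(lessThanEqual(v1.xy, vec2(1.0)))) {
            float z2 = textureLod(uFlattenDepth, v1.xy, 0.0).r;
            if (z2 < 1.0) {              // covered by the flattening polygon
                // Inverse-transform (x1, y1, z2) back to user coordinates.
                vec4 back = uFlattenVPInv *
                            vec4(vec3(v1.xy, z2) * 2.0 - 1.0, 1.0);
                float z3 = back.z / back.w;
                if (z3 < v0.z) v0.z = z3; // flatten: displace the vertex down
            }
        }
        // (5.3) V0 participates in the normal rendering process.
        gl_Position = uViewProj * vec4(v0, 1.0);
    }
    )glsl";

Because the flattening happens per vertex at render time, the source model file is never modified, which is what makes the flattening dynamic and reversible.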
The invention also provides a device for dynamically flattening a local area of a three-dimensional scene based on a flattening polygon, comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the computer program, when loaded into the processor, implements the above three-dimensional scene local area dynamic flattening method based on a flattening polygon.

Claims (5)

1. A three-dimensional scene local area dynamic flattening method based on a flattening polygon is characterized by comprising the following steps:
(1) defining a flattening polygon in a user coordinate system to determine a flattening area: defining a flattening polygon according to the target to be flattened, so that the flattening polygon is tightly attached to a flattening area of the target to be flattened;
(2) creating a flattening camera according to the flattening polygon, and setting an observation matrix, a projection matrix and a viewport matrix;
(3) generating a flattened polygon depth map using a flattening camera;
(4) transmitting the observation matrix, the projection matrix and the flattened polygon depth map of the flattening camera into a GPU programmable pipeline;
(5) in a vertex shader on the GPU, testing the containment relation between the flattening polygon and each vertex, and displacement-flattening every vertex that lies inside the flattening polygon and above it.
2. The method for dynamically flattening a local area of a three-dimensional scene based on a flattening polygon according to claim 1, wherein said step (2) comprises the steps of:
(21) calculating the flattening-polygon bounding box: the bounding box of the flattening polygon is the smallest axis-aligned box containing all of its vertices, where the top-face Z value bTop equals the maximum of all vertex Z values and the bottom-face Z value bBottom equals the minimum of all vertex Z values; bTop is then corrected to bTop = max(bTop, bBottom + f), where f is any value greater than 0;
(22) setting a flattening camera observation matrix: determining a straight line L by flattening the centers of the upper bottom surface and the lower bottom surface of the polygonal bounding box, selecting any space point higher than the bounding box on the L as an observation coordinate system origin O, defining X, Y and Z axis of an observation coordinate system to be consistent with the directions of X, Y and Z axis of a user coordinate system respectively, establishing an observation coordinate system, and setting a flattening camera observation matrix according to the observation coordinate system;
(23) setting a flattening camera projection matrix: setting the flattening polygon bounding box as an observation space of a flattening camera, and then setting a projection matrix of the flattening camera according to the orthogonal projection type and the observation space of the flattening camera;
(24) setting a flattening camera viewport matrix: setting a viewport width W and a height H of the flattening camera, wherein W and H are both greater than 0; a viewport matrix of the flattened camera is set according to the viewport width and height.
3. The method for dynamically flattening a local area of a three-dimensional scene based on a flattening polygon according to claim 1, wherein said step (3) comprises the steps of:
(31) decomposing the flattened polygon into a triangular mesh;
(32) outputting the flattening-polygon depth map: disable writes to the color buffer, enable the depth buffer, submit the decomposed triangle mesh to the GPU, and generate the flattening-polygon depth map.
4. The method for dynamically flattening a local area of a three-dimensional scene based on a flattening polygon according to claim 1, wherein the step (5) comprises the steps of:
(51) in a vertex shader, transforming vertex coordinates V0(x0, y0, z0) in the user coordinate system into the flattening camera's texture space according to the observation matrix and projection matrix of the flattening camera, denoting the transformed coordinates V1(x1, y1, z1);
(52) if x1 and y1 both lie in the range [0, 1], sampling the flattening-polygon depth map at coordinates (x1, y1) to obtain depth value z2; if z2 < 1.0, inverse-transforming coordinates (x1, y1, z2) to the user coordinate system according to the observation matrix and projection matrix of the flattening camera to obtain coordinates (x3, y3, z3): if z3 < z0, the vertex needs to be flattened, and the z value of V0 is modified to z3; otherwise V0 is kept unchanged;
(53) V0 then participates in the normal rendering process.
5. A device for dynamically flattening a local area of a three-dimensional scene based on a flattening polygon, comprising a memory, a processor and a computer program stored in the memory and executable on the processor, wherein the computer program, when loaded into the processor, implements the method for dynamically flattening a local area of a three-dimensional scene based on a flattening polygon according to any of claims 1 to 4.
CN202110200278.3A 2021-02-23 2021-02-23 Three-dimensional scene local area dynamic flattening method and device based on flattening polygon Pending CN112927352A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110200278.3A CN112927352A (en) 2021-02-23 2021-02-23 Three-dimensional scene local area dynamic flattening method and device based on flattening polygon

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110200278.3A CN112927352A (en) 2021-02-23 2021-02-23 Three-dimensional scene local area dynamic flattening method and device based on flattening polygon

Publications (1)

Publication Number Publication Date
CN112927352A true CN112927352A (en) 2021-06-08

Family

ID=76170313

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110200278.3A Pending CN112927352A (en) 2021-02-23 2021-02-23 Three-dimensional scene local area dynamic flattening method and device based on flattening polygon

Country Status (1)

Country Link
CN (1) CN112927352A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117372246A (en) * 2023-10-08 2024-01-09 北京市测绘设计研究院 Partial flattening method for oblique photography three-dimensional model based on filtering algorithm

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104463948A (en) * 2014-09-22 2015-03-25 北京大学 Seamless visualization method for three-dimensional virtual reality system and geographic information system
CN107845136A (en) * 2017-09-19 2018-03-27 浙江科澜信息技术有限公司 A kind of landform flattening method of three-dimensional scenic

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104463948A (en) * 2014-09-22 2015-03-25 北京大学 Seamless visualization method for three-dimensional virtual reality system and geographic information system
CN107845136A (en) * 2017-09-19 2018-03-27 浙江科澜信息技术有限公司 A kind of landform flattening method of three-dimensional scenic

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Xie Meiting: "Design and Implementation of a Plug-in Oblique Photography Real-scene Data Processing *** Based on AutoCAD", CNKI Outstanding Master's Theses Full-text Database, no. 01, 15 January 2021 (2021-01-15) *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117372246A (en) * 2023-10-08 2024-01-09 北京市测绘设计研究院 Partial flattening method for oblique photography three-dimensional model based on filtering algorithm
CN117372246B (en) * 2023-10-08 2024-03-22 北京市测绘设计研究院 Partial flattening method for oblique photography three-dimensional model based on filtering algorithm

Similar Documents

Publication Publication Date Title
CN108648269B (en) Method and system for singulating three-dimensional building models
US11024077B2 (en) Global illumination calculation method and apparatus
Wang et al. View-dependent displacement mapping
US7212207B2 (en) Method and apparatus for real-time global illumination incorporating stream processor based hybrid ray tracing
CN102332179B (en) Three-dimensional model data simplification and progressive transmission methods and devices
CN111340928B (en) Ray tracing-combined real-time hybrid rendering method and device for Web end and computer equipment
CN108257204B (en) Vertex color drawing baking method and system applied to Unity engine
US7843463B1 (en) System and method for bump mapping setup
US20080012853A1 (en) Generating mesh from implicit surface
CN113034656B (en) Rendering method, device and equipment for illumination information in game scene
CN107610225B (en) Method for unitizing three-dimensional oblique photography live-action model
US6791544B1 (en) Shadow rendering system and method
EP3211601B1 (en) Rendering the global illumination of a 3d scene
Xu et al. Stylized rendering of 3D scanned real world environments
Merlo et al. 3D model visualization enhancements in real-time game engines
KR20080018404A (en) Computer readable recording medium having background making program for making game
CN113034657B (en) Rendering method, device and equipment for illumination information in game scene
CN111563948A (en) Virtual terrain rendering method for dynamically processing and caching resources based on GPU
CN112927352A (en) Three-dimensional scene local area dynamic flattening method and device based on flattening polygon
CN116664752B (en) Method, system and storage medium for realizing panoramic display based on patterned illumination
CN116385619B (en) Object model rendering method, device, computer equipment and storage medium
KR101118597B1 (en) Method and System for Rendering Mobile Computer Graphic
CN112927351A (en) Three-dimensional scene local area dynamic flattening method and device based on flattening bounding ball
CN113269819B (en) Method and device for dynamically hiding shielding object facing video projection scene
CN112837430A (en) Dynamic flattening method and device for local area of three-dimensional scene based on head-up cone pressing

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination