CN114241101A - Three-dimensional scene rendering method, system, device and storage medium - Google Patents

Info

Publication number: CN114241101A
Application number: CN202111304819.3A
Authority: CN (China)
Prior art keywords: scene, rendering, camera, texture picture, texture
Legal status: Pending (the status listed is an assumption by Google, not a legal conclusion)
Other languages: Chinese (zh)
Inventors: 朱林生, 李多, 曾江佑, 王海民
Current and original assignee: Jiangxi Booway New Technology Co., Ltd. (the listed assignee may be inaccurate; Google has not performed a legal analysis)
Application filed by Jiangxi Booway New Technology Co., Ltd.

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00: 3D [Three Dimensional] image rendering

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Graphics (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Generation (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention belongs to the technical field of computer vision, and particularly relates to a three-dimensional scene rendering method, system, device and storage medium. The method comprises the following steps: dividing the three-dimensional model scene data; creating the three-dimensional scene cameras; and rendering the scene. The current rendering scene is defined as S and divided into a fixed partial scene Sf and a changing partial scene, wherein the changing partial scene is split into a transparent partial scene St and an opaque partial scene Sc. A camera C is created as the master camera of the current rendering scene S, and three slave cameras Cf, Cc and Ct are created under the master camera C to load the scene data of Sf, Sc and St respectively. By caching the rendering data of the current scene body and multiplexing and compositing the dynamically modified part in each subsequent frame of scene rendering, the invention greatly reduces the rendering content of the current frame scene and improves interaction fluency.

Description

Three-dimensional scene rendering method, system, device and storage medium
Technical Field
The invention belongs to the technical field of computer vision, and particularly relates to a three-dimensional scene rendering method, a three-dimensional scene rendering system, a three-dimensional scene rendering device and a storage medium.
Background
With the continuous development and deepening application of three-dimensional design technology, more and more engineering designs are created and stored in the form of three-dimensional models. In industrial CAD/CAE design, a design project often contains multiple three-dimensional models of business interest. Taking CAD design software as an example, in common interactions such as primitive drawing and scene layout, if the scene scale is large, rendering performance is poor, because the vertices of all models need to be submitted to OpenGL (Open Graphics Library) in every frame of scene rendering.
All three-dimensional models are composed of three-dimensional points: one point yields a point, two points a line, and three or more points a triangular patch or polygonal surface. OpenGL drawing likewise determines the drawing content through a vertex array: the vertices are the point set representing the current three-dimensional model, and drawing of points, lines and surfaces is realized by combining the vertex array with a primitive connection mode. Analysis shows, however, that during primitive drawing and scene layout the scene body does not change; only the currently drawn dynamic part changes.
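As a plain-Python illustration of the vertex-array idea above (not actual OpenGL API calls): the same vertex list yields points, lines or triangular patches depending on the connection mode, mirroring OpenGL's GL_POINTS / GL_LINES / GL_TRIANGLES modes.

```python
def primitive_count(num_vertices, mode):
    """Number of primitives a vertex array produces under a connection mode,
    mirroring OpenGL's GL_POINTS / GL_LINES / GL_TRIANGLES."""
    if mode == "points":
        return num_vertices          # one point per vertex
    if mode == "lines":
        return num_vertices // 2     # two vertices per line segment
    if mode == "triangles":
        return num_vertices // 3     # three vertices per triangular patch
    raise ValueError(mode)

# Six vertices: six points, three lines, or two triangles.
assert primitive_count(6, "points") == 6
assert primitive_count(6, "lines") == 3
assert primitive_count(6, "triangles") == 2
```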
Therefore, an efficient three-dimensional scene rendering method is desirable: by caching the rendering data of the current scene body and multiplexing and compositing the dynamically modified part in each subsequent frame of scene rendering, the rendering content of the current frame scene can be greatly reduced.
Disclosure of Invention
In order to solve the problem in the prior art that rendering performance is poor because the vertices of all models must be added to OpenGL in every frame of scene rendering, the invention provides a three-dimensional scene rendering method, system, device and storage medium.
The invention is realized by adopting the following technical scheme:
a method of three-dimensional scene rendering, comprising:
firstly, dividing three-dimensional model scene data;
defining the current rendering scene as S, and dividing the current rendering scene S into a fixed partial scene Sf and a changing partial scene, wherein the changing partial scene is split into a transparent partial scene St and an opaque partial scene Sc;
Secondly, creating a three-dimensional scene camera;
creating a camera C as a master camera for currently rendering a scene S, creating three slave cameras C under the master camera Cf、Cc、CtLoad scene S separatelyf、Sc、StCreating two texture pictures for rendering and using in the current window pixel size for each camera;
thirdly, rendering a scene;
rendering the scene of the data-changed part into the texture pictures of the changed part for caching; rendering the fixed partial scene Sf to a frame scene cache object and multiplexing the texture pictures of the previous frame scene; and mixing the cached texture picture data of the changed part with the cached texture picture data of the frame scene cache object rendered from the fixed part to obtain a rendering result, submitting the rendering result to the interface for display, and repeating the scene rendering operation to start rendering of the next frame of scene.
As a further scheme of the invention, in the divided three-dimensional model scene data, the defined current rendering scene S and the divided fixed partial scene Sf, transparent partial scene St and opaque partial scene Sc satisfy the following relationship: S = Sf ∪ Sc ∪ St, where ∪ denotes the union operation.
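The partition relation above can be sketched as a simple classification pass over the scene elements. The element flags (`transparent`, `dynamic`) below are hypothetical illustrations, not terminology from the patent:

```python
def partition_scene(elements):
    """Split a scene S into fixed (Sf), opaque-changing (Sc) and
    transparent (St) partial scenes, so that S = Sf ∪ Sc ∪ St."""
    s_f, s_c, s_t = [], [], []
    for e in elements:
        if e.get("transparent"):          # auxiliary/transparent primitives -> St
            s_t.append(e)
        elif e.get("dynamic"):            # JIG dynamically drawn data -> Sc
            s_c.append(e)
        else:                             # stable model data -> Sf
            s_f.append(e)
    return s_f, s_c, s_t

scene = [
    {"id": 1},                            # fixed model element
    {"id": 2, "dynamic": True},           # element following the mouse
    {"id": 3, "transparent": True},       # transparent auxiliary element
]
s_f, s_c, s_t = partition_scene(scene)
# The three partial scenes together recover the full scene S.
assert {e["id"] for e in s_f + s_c + s_t} == {1, 2, 3}
```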
Further, in the divided three-dimensional model scene data, each member model in the scene is called an element. The fixed partial scene Sf (Fix Scene) represents the generally fixed element set in the scene, usually the model data of the scene; modification is triggered only when model data is added or deleted.
The opaque partial scene Sc (Changing Scene) represents the set of frequently changing elements in the scene, usually JIG dynamically drawn data; the model changes in real time depending on the current mouse position.
The transparent partial scene St (Transparent Scene) represents the set of elements with transparency in the scene, typically drawing auxiliary data and primitives with transparency.
As a further aspect of the present invention, when the three-dimensional scene cameras are created, the camera matrix of the master camera C and the camera matrices of the slave cameras Cf, Cc and Ct are kept consistent during the rendering of each frame of scene.
Further, when the three-dimensional scene cameras are created, the two texture pictures created for each camera at the current window pixel size for rendering use comprise a color texture picture CT (Color Texture) and a depth texture picture DT (Depth Texture).
Further, when the three-dimensional scene cameras are created, the method further comprises: under the master camera C, additionally creating a HUD camera Ch whose rectangle always fills the screen, used for filling textures onto the screen-sized rectangle to realize the display of the scene data. The HUD camera Ch is created as follows: according to a fixed viewport matrix Mv and a projection matrix Mp, a corresponding rectangle is added under the HUD camera Ch, giving a rectangle vertex list.
the fixed viewport matrix MvIs composed of
Figure BDA0003339808370000031
The projection matrix MpIs composed of
Figure BDA0003339808370000041
The vertex list of the rectangle is (-1, -1, 0), (-1, 1, 0), (1, -1, 0), (1, 1, 0).
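The original matrices Mv and Mp appear only as images in the publication; a common choice for a screen-filling HUD rectangle is identity matrices, since the rectangle's vertices are already given in normalized device coordinates. A minimal sketch under that assumption:

```python
def apply(matrix, point):
    """Multiply a 4x4 row-major matrix (nested lists) by a homogeneous point."""
    x, y, z, w = point
    return tuple(sum(m * v for m, v in zip(row, (x, y, z, w))) for row in matrix)

IDENTITY = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]

# Rectangle corners in normalized device coordinates: they already span
# the full screen, so identity Mv and Mp leave them untouched.
corners = [(-1, -1, 0, 1), (-1, 1, 0, 1), (1, -1, 0, 1), (1, 1, 0, 1)]
projected = [apply(IDENTITY, apply(IDENTITY, c)) for c in corners]
assert projected == corners   # the rectangle always fills the screen
```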
As a further scheme of the present invention, during scene rendering, OpenGL (Open Graphics Library) is invoked to perform the scene rendering, and case-by-case processing is performed when drawing reaches a specified camera node. Rendering the scene of the data-changed part into the texture pictures of the changed part for caching comprises the following steps:
updating the camera pose of the master camera C into the camera poses of the slave cameras Cc and Ct, wherein a camera pose consists of a view transformation matrix, a projection matrix and a viewport matrix and determines the observation direction and imaging size of the current camera;
slave camera C is connected through the glBindFrameBufferEXT method of OpenGLcAnd a slave camera CtThe depth cache and the color cache are respectively rendered to the texture picture CTcTexture picture DTcAnd texture picture CTtTexture picture DTtWherein the texture picture CTcTexture picture DTcFor slave camera CcCorresponding opaque part scene ScCorresponding texture picture CT and texture picture DT, said texture picture CTtTexture picture DTtFor slave camera CtCorresponding transparent partial scene StCorresponding texture picture CT and texture picture DT.
Further, rendering the fixed partial scene Sf to a frame scene cache object and multiplexing the texture pictures of the previous frame scene comprises the following steps:
recording the camera matrix of the current slave camera Cf, comparing it with that of the master camera C, and recording whether a change has occurred as a variable Bc;
comparing the current scene elements with those of the previous frame to determine whether the scene elements have changed, and recording the result as a variable Bs;
deciding, according to the variables Bc and Bs, whether the slave camera Cf needs to be re-rendered; if so, rendering its depth cache and color cache to the texture pictures DTf and CTf respectively, and otherwise multiplexing the textures of the previous frame.
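The two flags Bc (camera changed) and Bs (scene elements changed) gate the re-render of the fixed part. A sketch of that decision, with hypothetical names standing in for the real camera matrices and cached textures:

```python
def render_fixed_part(camera_matrix, scene_version, cache):
    """Re-render the fixed partial scene Sf only when the camera or the
    scene elements changed; otherwise multiplex the previous frame's textures."""
    b_c = cache.get("camera_matrix") != camera_matrix   # camera pose changed?
    b_s = cache.get("scene_version") != scene_version   # elements added/removed?
    if b_c or b_s:
        cache["camera_matrix"] = camera_matrix
        cache["scene_version"] = scene_version
        cache["texture"] = f"render(Sf @ {scene_version})"  # stand-in for CTf/DTf
        return cache["texture"], True     # re-rendered
    return cache["texture"], False        # reused previous frame's texture

cache = {}
_, rendered = render_fixed_part("M0", 1, cache)
assert rendered                            # first frame always renders
_, rendered = render_fixed_part("M0", 1, cache)
assert not rendered                        # nothing changed: texture multiplexed
_, rendered = render_fixed_part("M1", 1, cache)
assert rendered                            # camera moved: Bc forces re-render
```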
Further, mixing the cached texture picture data of the changed part with the cached texture picture data of the frame scene cache object rendered from the fixed part to obtain a rendering result comprises:
based on the obtained texture pictures CTc, DTc, CTt, DTt, CTf and DTf, binding all the maps into the HUD camera Ch by calling the glBindTexture function of OpenGL; the rectangle under the HUD camera Ch is rendered fragment by fragment using a custom OpenGL rendering pipeline (shader), wherein the rendering content is split into individual pixels (rasterized) in the OpenGL rendering pipeline and each fragment represents a corresponding pixel.
Further, the fragment-by-fragment rendering adopts a fragment shader; the fragment shader determines the currently rendered texture coordinate (x, y) through gl_TexCoord, computes the fragment color value at the current texture coordinate, and fills it into the FrameBuffer for subsequent use.
Further, the algorithmic expression of the fragment shader is given as an image in the original publication. In it, CT(x, y) denotes obtaining the RGBA color value at coordinates (x, y) of the current color texture, DT(x, y) denotes obtaining the depth value at coordinates (x, y) of the current depth texture, and mix denotes blending two RGBA colors; the subscripts f, c and t in CT(x, y) and DT(x, y) denote the texture pictures corresponding to the slave cameras Cf, Cc and Ct respectively.
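Since the fragment-shader expression itself survives only as an image, the sketch below illustrates one plausible per-pixel compositing consistent with the description: depth-test the fixed and opaque-changing layers, then blend the transparent layer on top with mix. The exact blend order is an assumption, not taken from the patent:

```python
def mix(src, dst):
    """Standard alpha blend of two RGBA colors (components in 0..1)."""
    a = src[3]
    return tuple(s * a + d * (1 - a) for s, d in zip(src[:3], dst[:3])) + (1.0,)

def composite(ct_f, dt_f, ct_c, dt_c, ct_t):
    """Per-fragment combination of the three cached layers: the nearer of the
    fixed (f) and opaque-changing (c) fragments wins the depth test, then the
    transparent (t) fragment is blended over the result."""
    base = ct_c if dt_c < dt_f else ct_f     # smaller depth value = nearer
    return mix(ct_t, base)

red = (1.0, 0.0, 0.0, 1.0)
blue = (0.0, 0.0, 1.0, 1.0)
half_green = (0.0, 1.0, 0.0, 0.5)

# Opaque changing fragment (depth 0.3) hides the fixed one (depth 0.7),
# then the half-transparent fragment tints the result.
out = composite(red, 0.7, blue, 0.3, half_green)
assert out == (0.0, 0.5, 0.5, 1.0)
```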
As a further scheme of the present invention, after the rendering result is obtained, the scene is presented through the glSwapBuffer provided by OpenGL, the rendering result is submitted to the interface for display, and the scene rendering operation is repeated to start rendering of the next frame.
The invention also comprises a three-dimensional scene rendering system, which adopts the above three-dimensional scene rendering method to render the current frame scene. The three-dimensional scene rendering system comprises:
a scene division module for defining the current rendering scene as S and dividing the current rendering scene S into a fixed part scene SfAnd a change part scene, whereinSplitting the changed part scene into transparent part scene StWith opaque parts of the scene Sc
a camera creation module, used for creating a camera C as the master camera of the current rendering scene S, creating three slave cameras Cf, Cc and Ct under the master camera C to load the scene data of Sf, Sc and St respectively, and creating for each camera two texture pictures at the current window pixel size for rendering use; and
the scene rendering module is used for rendering the scene of the data change part into the texture picture of the change part for caching; will fix part of scene SfRendering to a frame scene cache object, and multiplexing texture pictures of the previous frame scene; and mixing the cached texture picture data of the changed part and the cached texture picture data of the frame scene caching object rendered by the fixed part to obtain a rendering result, submitting the rendering result to an interface for rendering, and repeating the scene rendering operation to start the rendering of the next frame of scene.
The invention also comprises a computer device, comprising a memory and a processor, wherein the memory stores a computer program and the processor implements the steps of the three-dimensional scene rendering method when executing the computer program.
The invention also comprises a storage medium storing a computer program which, when executed by a processor, implements the steps of the three-dimensional scene rendering method.
The technical scheme provided by the invention has the following beneficial effects:
the method is analyzed from the perspective of three-dimensional design, a scene only needs to be completely rendered when a viewport changes, and in most cases, only a changed part needs to be processed, taking drawing in the three-dimensional design as an example, most of the scene is unchanged, only the drawn part needs to be redrawn, and the currently drawn part is only a small part of the scene; by caching the rendering data of the current main body and multiplexing and synthesizing the dynamically modified part in the rendering of each frame of scene later, the rendering content of the current frame of scene can be greatly reduced, and the interaction fluency can be improved.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description serve to explain the principles of the invention and not to limit the invention. In the drawings:
fig. 1 is a flowchart of a three-dimensional scene rendering method according to embodiment 1 of the present invention.
Fig. 2 is a rendering flowchart of a three-dimensional scene rendering method in embodiment 1 of the present invention.
Fig. 3 is a flowchart of scene rendering in a three-dimensional scene rendering method according to embodiment 1 of the present invention.
Fig. 4 is a flowchart of a rendering result obtained in the three-dimensional scene rendering method in embodiment 1 of the present invention.
Fig. 5 is a system block diagram of a three-dimensional scene rendering system in embodiment 2 of the present invention.
Fig. 6 is a schematic diagram of the unchanged part of the scene when a guard room instance is placed in a whole substation scene according to the present invention.
Fig. 7 is a schematic diagram of the changing part of the scene when a guard room instance is placed in a whole substation scene according to the present invention.
Fig. 8 is a schematic diagram of the final result when the present invention is applied to the example of placing a guard room in a whole substation scene.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
Aiming at the problem that, in the common interactions of primitive drawing and scene layout with a large scene scale, rendering performance is poor because the vertices of all models must be added to OpenGL in every frame of scene rendering, the invention provides a three-dimensional scene rendering method, system, device and storage medium.
Example 1
As shown in fig. 1 and fig. 2, the present embodiment provides a three-dimensional scene rendering method, which includes the following steps:
and S1, dividing the three-dimensional model scene data.
In this embodiment, the current rendering scene is defined as S and divided into a fixed partial scene Sf and a changing partial scene, wherein the changing partial scene is split into a transparent partial scene St and an opaque partial scene Sc. After the division, the defined current rendering scene S and the divided fixed partial scene Sf, transparent partial scene St and opaque partial scene Sc satisfy the following relationship: S = Sf ∪ Sc ∪ St, where ∪ denotes the union operation.
In the divided three-dimensional model scene data, each member model in the scene is called an element. The fixed partial scene Sf (Fix Scene) represents the generally fixed element set in the scene, usually the model data of the scene; modification is triggered only when model data is added or deleted.
The opaque partial scene Sc (Changing Scene) represents the set of frequently changing elements in the scene, usually JIG dynamically drawn data; the model changes in real time depending on the current mouse position.
The transparent partial scene St (Transparent Scene) represents the set of elements with transparency in the scene, typically drawing auxiliary data and primitives with transparency.
And S2, creating a three-dimensional scene camera.
In the present embodiment, a camera C is created as the master camera of the current rendering scene S, three slave cameras Cf, Cc and Ct are created under it to load the scene data of Sf, Sc and St respectively, and two texture pictures at the current window pixel size are created for each camera for rendering use. The two texture pictures created for each camera comprise a color texture picture CT (Color Texture) and a depth texture picture DT (Depth Texture).
When the three-dimensional scene cameras are created, the camera matrix of the master camera C and the camera matrices of the slave cameras Cf, Cc and Ct are kept consistent during the rendering of each frame of scene.
When the three-dimensional scene cameras are created, the method further comprises: under the master camera C, additionally creating a HUD camera Ch whose rectangle always fills the screen, used for filling textures onto the screen-sized rectangle to realize the display of the scene data. The HUD camera Ch is created as follows: according to a fixed viewport matrix Mv and a projection matrix Mp, a corresponding rectangle is added under the HUD camera Ch, giving a rectangle vertex list. The rectangle is a model of the current scene and is always as large as the screen, so the texture filled onto the rectangle is the data seen by the user; projection refers to mapping a point in three-dimensional space onto a coordinate position on the screen, and the projection matrix performs this operation, i.e. three-dimensional coordinate point × projection matrix = screen coordinate point.
The fixed viewport matrix Mv and the projection matrix Mp are given as matrix images in the original publication (not reproduced here).
The vertex list of the rectangle is (-1, -1, 0), (-1, 1, 0), (1, -1, 0), (1, 1, 0).
In this embodiment, texture in computer graphics includes both texture in the ordinary sense, i.e. the uneven grooves exhibited by an object surface, and color patterns on a smooth object surface, which are more commonly referred to as motifs. The pictures that need to be filled onto a model are generally referred to as textures. The purpose of texture filling here is to save the frame images of the other scenes as pictures for compositing.
In other rendering operations, not all rendering is implemented through textures; the present implementation uses texture pictures for the subsequent compositing.
And S3, rendering the scene.
In this embodiment, OpenGL is invoked to perform scene rendering based on the divided camera information. Referring to fig. 3, case-by-case processing is performed when drawing reaches a designated camera node, with the following steps:
S301, rendering the scene of the data-changed part into the texture pictures of the changed part for caching.
In the present embodiment, the camera pose of the master camera C is updated into the camera poses of the slave cameras Cc and Ct; a camera pose consists of a view transformation matrix, a projection matrix and a viewport matrix and determines the observation direction and imaging size of the current camera.
Through the glBindFramebufferEXT method of OpenGL, the depth cache and color cache of the slave camera Cc are rendered to the texture pictures DTc and CTc, and those of the slave camera Ct to the texture pictures DTt and CTt, wherein CTc and DTc are the texture pictures CT and DT corresponding to the opaque partial scene Sc of the slave camera Cc, and CTt and DTt are the texture pictures CT and DT corresponding to the transparent partial scene St of the slave camera Ct.
S302, rendering the fixed partial scene Sf to a frame scene cache object and multiplexing the texture pictures of the previous frame scene.
In the present embodiment, referring to fig. 4, this step comprises:
S3021, recording the camera matrix of the current slave camera Cf, comparing it with that of the master camera C, and recording whether a change has occurred as a variable Bc;
S3022, comparing the current scene elements with those of the previous frame to determine whether the scene elements have changed, and recording the result as a variable Bs;
S3023, deciding, according to the variables Bc and Bs, whether the slave camera Cf needs to be re-rendered; if so, rendering its depth cache and color cache to the texture pictures DTf and CTf respectively, and otherwise multiplexing the textures of the previous frame.
S303, mixing the cached texture picture data of the changed part with the cached texture picture data of the frame scene cache object rendered from the fixed part to obtain a rendering result.
In the present embodiment, based on the obtained texture pictures CTc, DTc, CTt, DTt, CTf and DTf, all the maps are bound into the HUD camera Ch by calling the glBindTexture function of OpenGL; the rectangle under the HUD camera Ch is rendered fragment by fragment using a custom OpenGL rendering pipeline (shader), wherein the rendering content is split into individual pixels (rasterized) in the OpenGL rendering pipeline and each fragment represents a corresponding pixel.
The fragment-by-fragment rendering adopts a fragment shader; the fragment shader determines the currently rendered texture coordinate (x, y) through gl_TexCoord, computes the fragment color value at the current texture coordinate, and fills it into the FrameBuffer for subsequent use.
The algorithmic expression of the fragment shader is given as an image in the original publication. In it, CT(x, y) denotes obtaining the RGBA color value at coordinates (x, y) of the current color texture, DT(x, y) denotes obtaining the depth value at coordinates (x, y) of the current depth texture, and mix denotes blending two RGBA colors; the subscripts f, c and t in CT(x, y) and DT(x, y) denote the texture pictures corresponding to the slave cameras Cf, Cc and Ct respectively.
S304, submitting the rendering result to the interface for display, and repeating the scene rendering operation to start rendering of the next frame of scene.
In this embodiment, after the rendering result is obtained, the scene is presented through the glSwapBuffer provided by OpenGL, the rendering result is submitted to the interface for display, and the scene rendering operation is repeated to start rendering of the next frame.
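Putting the steps of this embodiment together, one frame can be sketched as the loop below. All names are hypothetical stand-ins; real code would issue the OpenGL calls named above (glBindFramebufferEXT, glBindTexture, glSwapBuffer):

```python
def render_frame(scene, camera, cache):
    """One frame: render changed parts to textures, reuse the fixed part's
    cached textures when possible, then composite and present."""
    # Step S301: always re-render the changing parts Sc and St to textures.
    tex_c = f"render(Sc, {camera})"
    tex_t = f"render(St, {camera})"
    # Step S302: re-render Sf only if the camera or the scene elements changed.
    if cache.get("key") != (camera, scene["version"]):
        cache["key"] = (camera, scene["version"])
        cache["tex_f"] = f"render(Sf, {camera})"
        cache["reused"] = False
    else:
        cache["reused"] = True
    # Steps S303/S304: blend the three textures and present the result.
    return ("composite", cache["tex_f"], tex_c, tex_t)

cache = {}
scene = {"version": 7}
frame1 = render_frame(scene, "pose0", cache)
frame2 = render_frame(scene, "pose0", cache)   # static viewport, static model
assert frame1 == frame2 and cache["reused"]    # fixed part was multiplexed
```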
In this embodiment, by caching the rendering data of the current scene body and multiplexing and compositing the dynamically modified part in each subsequent frame of scene rendering, the rendering content of the current frame scene is greatly reduced and interaction fluency is improved.
Example 2
As shown in fig. 5, a three-dimensional scene rendering system provided in an embodiment of the present invention includes a scene division module 100, a camera creation module 200, and a scene rendering module 300.
a scene division module 100, configured to define the current rendering scene as S and divide it into a fixed partial scene Sf and a changing partial scene, wherein the changing partial scene is split into a transparent partial scene St and an opaque partial scene Sc;
a camera creation module 200, configured to create a camera C as the master camera of the current rendering scene S, create three slave cameras Cf, Cc and Ct under the master camera C to load the scene data of Sf, Sc and St respectively, and create for each camera two texture pictures at the current window pixel size for rendering use; and
a scene rendering module 300, configured to render a scene of a data change portion into a texture picture of the change portion for caching; will fix part of scene SfRendering to a frame scene cache object, and multiplexing texture pictures of the previous frame scene; and mixing the cached texture picture data of the changed part and the cached texture picture data of the frame scene caching object rendered by the fixed part to obtain a rendering result, submitting the rendering result to an interface for rendering, and repeating the scene rendering operation to start the rendering of the next frame of scene.
In the scene division module 100, in the divided three-dimensional model scene data, each member model in the scene is called an element. The fixed partial scene Sf (Fix Scene) represents the generally fixed element set in the scene, usually the model data of the scene; modification is triggered only when model data is added or deleted. The opaque partial scene Sc (Changing Scene) represents the set of frequently changing elements in the scene, usually JIG dynamically drawn data; the model changes in real time depending on the current mouse position. The transparent partial scene St (Transparent Scene) represents the set of elements with transparency in the scene, typically drawing auxiliary data and primitives with transparency.
In the camera creation module 200, the camera matrix of the master camera C and the camera matrices of the slave cameras Cf, Cc and Ct are kept consistent during the rendering of each frame of scene. The module further creates, under the master camera C, a HUD camera Ch whose rectangle always fills the screen, used for filling textures onto the screen-sized rectangle to realize the display of the scene data; the HUD camera Ch is created by adding a corresponding rectangle under it according to a fixed viewport matrix Mv and a projection matrix Mp, giving a rectangle vertex list.
The scene rendering module 300 calls OpenGL (Open Graphics Library) to perform scene rendering and performs case-by-case processing when drawing reaches a designated camera node. The camera pose of the master camera C is updated into the camera poses of the slave cameras Cc and Ct; a camera pose consists of a view transformation matrix, a projection matrix and a viewport matrix and determines the observation direction and imaging size of the current camera. Through the glBindFramebufferEXT method of OpenGL, the depth cache and color cache of the slave camera Cc are rendered to the texture pictures DTc and CTc, and those of the slave camera Ct to the texture pictures DTt and CTt, wherein CTc and DTc are the texture pictures CT and DT corresponding to the opaque partial scene Sc of the slave camera Cc, and CTt and DTt are the texture pictures CT and DT corresponding to the transparent partial scene St of the slave camera Ct.
Based on the obtained texture pictures CTc, DTc, CTt, DTt, CTf and DTf, all the maps are bound into the HUD camera Ch by calling the glBindTexture function of OpenGL; the rectangle under the HUD camera Ch is rendered fragment by fragment using a custom OpenGL rendering pipeline (shader), wherein the rendering content is split into individual pixels (rasterized) in the OpenGL rendering pipeline and each fragment represents a corresponding pixel.
Since the three-dimensional scene rendering system adopts the steps of the above three-dimensional scene rendering method when executed, its operation process is not described in detail again in this embodiment.
Example 3
In an embodiment of the present invention, a computer device is provided, including a memory and a processor. The memory stores a computer program, and the processor, when executing the computer program, implements the steps of method embodiment 1 above:
firstly, dividing three-dimensional model scene data;
defining the current rendering scene as S, and dividing the current rendering scene S into a fixed partial scene Sf and a varying partial scene, wherein the varying partial scene is further split into a transparent partial scene St and an opaque partial scene Sc;
Secondly, creating a three-dimensional scene camera;
creating a camera C as the master camera of the currently rendered scene S; under the master camera C, creating three slave cameras Cf, Cc and Ct that load the scenes Sf, Sc and St respectively; and, for each camera, creating two texture pictures of the current window pixel size for rendering use;
thirdly, rendering a scene;
rendering the scene of the data-changed part into the texture pictures of the changed part for caching; rendering the fixed partial scene Sf to a frame scene cache object and multiplexing the texture pictures of the previous frame; mixing the cached texture picture data of the changed part with the cached texture picture data of the frame scene cache object rendered from the fixed part to obtain the rendering result; submitting the rendering result to the interface for display; and repeating the scene rendering operation to start rendering the next frame of the scene.
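The multiplexing decision for the fixed partial scene Sf can be sketched as a dirty-flag check, mirroring the variables Bc and Bs named in claim 6. The render call here is a stand-in string; real code would redirect the OpenGL depth and color buffers to CTf and DTf.

```python
# Sketch of the fixed-scene caching rule: re-render Sf only when the
# camera pose changed (Bc) or the scene elements changed (Bs); otherwise
# the previous frame's textures CTf/DTf are multiplexed unchanged.

def render_fixed_scene(prev_pose, cur_pose, prev_elems, cur_elems, cache):
    """Return (CTf, DTf), re-rendering only when something changed."""
    Bc = prev_pose != cur_pose      # did the camera pose change?
    Bs = prev_elems != cur_elems    # did the scene elements change?
    if Bc or Bs or cache is None:
        # Stand-in for the real render-to-texture pass.
        cache = ("CTf@" + str(cur_pose), "DTf@" + str(cur_pose))
    return cache  # unchanged frame: reuse previous textures
```

The payoff is that a static viewport costs nothing for the fixed scene, which is the common case the patent targets.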
Example 4
In an embodiment of the present invention, a storage medium is provided, on which a computer program is stored. When executed by a processor, the computer program implements the steps of the above method embodiment:
firstly, dividing three-dimensional model scene data;
defining the current rendering scene as S, and dividing the current rendering scene S into a fixed partial scene Sf and a varying partial scene, wherein the varying partial scene is further split into a transparent partial scene St and an opaque partial scene Sc;
Secondly, creating a three-dimensional scene camera;
creating a camera C as the master camera of the currently rendered scene S; under the master camera C, creating three slave cameras Cf, Cc and Ct that load the scenes Sf, Sc and St respectively; and, for each camera, creating two texture pictures of the current window pixel size for rendering use;
thirdly, rendering a scene;
rendering the scene of the data-changed part into the texture pictures of the changed part for caching; rendering the fixed partial scene Sf to a frame scene cache object and multiplexing the texture pictures of the previous frame; mixing the cached texture picture data of the changed part with the cached texture picture data of the frame scene cache object rendered from the fixed part to obtain the rendering result; submitting the rendering result to the interface for display; and repeating the scene rendering operation to start rendering the next frame of the scene.
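Putting the three steps together, a per-frame loop along these lines is implied. Everything here is an illustrative stand-in: textures are represented as strings and a simple camera-moved flag replaces the full pose comparison.

```python
# Sketch of one frame of the method: changed parts are always re-rendered
# to their textures, the fixed part is refreshed only when the camera
# moved, and the cached layers are mixed into the submitted frame.

def render_frame(scene, fixed_cache, camera_moved):
    # Step 1: division (S split into Sf, Sc, St) is assumed done up front.
    Sf, Sc, St = scene["fixed"], scene["opaque"], scene["transparent"]
    # Step 3a: changed parts are re-rendered to their textures every frame.
    changed_tex = ("CT:" + ",".join(Sc), "CT:" + ",".join(St))
    # Step 3b: fixed part is re-rendered only when the camera moved.
    if camera_moved or fixed_cache is None:
        fixed_cache = "CT:" + ",".join(Sf)
    # Step 3c: mix cached layers into the final frame for the interface.
    frame = fixed_cache + " | " + " | ".join(changed_tex)
    return frame, fixed_cache
```

Calling this repeatedly with `camera_moved=False` reuses the fixed-part texture across frames, which is exactly the multiplexing the text describes.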
It will be understood by those skilled in the art that all or part of the processes of the above method embodiments can be implemented by a computer program instructing the relevant hardware; the program can be stored in a non-volatile computer-readable storage medium and, when executed, can include the processes of the above method embodiments. Any reference to memory, storage, a database or another medium used in the embodiments provided herein can include at least one of non-volatile and volatile memory.
Taking the placement of guard rooms in a whole transformer substation scene as an example: the substation model is the unchanging model, while the placed guard room is the changing model; only the guard room needs to be re-rendered during placement, and the scenes are then mixed to obtain the final result. Fig. 6 shows the substation scene without the change, and Fig. 7 shows the substation scene with the change. The substation is divided into a constant scene (the substation) and a variable scene (the guard room), and the corresponding texture pictures are rendered respectively. Fig. 8 shows the final result.
Taking a 750kV transformer substation with more than a million components as an example, the frame rate of common interactive rendering such as primitive drawing and scene arrangement can be raised above 60 frames per second, ensuring interactive fluency.
In summary, the present invention starts from an analysis of three-dimensional design: a scene needs to be rendered completely only when the viewport changes, and in most cases only the changed portion needs to be processed. Taking drawing in three-dimensional design as an example, most of the scene is unchanged; only the drawn portion needs to be redrawn, and the currently drawn portion is generally only a small part of the scene. Based on this analysis, the method achieves efficient rendering for most three-dimensional scenes.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents and improvements made within the spirit and principle of the present invention are intended to be included within the scope of the present invention.

Claims (10)

1. A three-dimensional scene rendering method, characterized by comprising the following steps:
firstly, dividing three-dimensional model scene data;
defining the current rendering scene as S, and dividing the current rendering scene S into a fixed partial scene Sf and a varying partial scene, wherein the varying partial scene is further split into a transparent partial scene St and an opaque partial scene Sc;
Secondly, creating a three-dimensional scene camera;
creating a camera C as the master camera of the currently rendered scene S; under the master camera C, creating three slave cameras Cf, Cc and Ct that load the scenes Sf, Sc and St respectively; and, for each camera, creating two texture pictures of the current window pixel size for rendering use;
thirdly, rendering a scene;
rendering the scene of the data-changed part into the texture pictures of the changed part for caching; rendering the fixed partial scene Sf to a frame scene cache object and multiplexing the texture pictures of the previous frame; mixing the cached texture picture data of the changed part with the cached texture picture data of the frame scene cache object rendered from the fixed part to obtain the rendering result; submitting the rendering result to the interface for display; and repeating the scene rendering operation to start rendering the next frame of the scene.
2. The three-dimensional scene rendering method according to claim 1, characterized in that: in the divided three-dimensional model scene data, the defined current rendering scene S and the divided fixed partial scene Sf, transparent partial scene St and opaque partial scene Sc satisfy the relationship S = Sf ∪ Sc ∪ St, where ∪ represents the union operation.
3. The three-dimensional scene rendering method according to claim 2, characterized in that: when the three-dimensional scene camera is created, the two texture pictures of the current window pixel size created for each camera for rendering use comprise a texture picture CT and a texture picture DT.
4. The three-dimensional scene rendering method according to claim 3, characterized in that the creation of the three-dimensional scene camera further comprises: under the master camera C, additionally creating a rectangular HUD camera Ch that always fills the screen, used for filling textures onto the screen rectangle to realize the rendering of the scene data; wherein the HUD camera Ch is created as follows: according to a fixed viewport matrix Mv and a projection matrix Mp, a corresponding rectangle is added under the HUD camera Ch to obtain a rectangle vertex list;
the fixed viewport matrix Mv is:
Figure FDA0003339808360000021
the projection matrix Mp is:
Figure FDA0003339808360000022
the list of vertices of the rectangle is (-1, -1, 0), (-1, 1, 0), (1, -1, 0).
5. The three-dimensional scene rendering method according to claim 4, characterized in that: during scene rendering, OpenGL is called to perform scene rendering, and each case is handled when drawing reaches a designated camera node; the rendering of the scene of the data-changed part into the texture pictures of the changed part for caching comprises the following steps:
updating the camera pose of the master camera C into the camera poses of the slave camera Cc and the slave camera Ct, wherein the camera pose consists of a view transformation matrix, a projection matrix and a viewport matrix;
rendering, through the glBindFramebufferEXT method of OpenGL, the depth buffers and color buffers of the slave cameras Cc and Ct to the texture pictures CTc/DTc and CTt/DTt respectively, wherein CTc and DTc are the texture pictures CT and DT corresponding to the opaque partial scene Sc of the slave camera Cc, and CTt and DTt are the texture pictures CT and DT corresponding to the transparent partial scene St of the slave camera Ct.
6. The three-dimensional scene rendering method according to claim 5, characterized in that rendering the fixed partial scene Sf to a frame scene cache object and multiplexing the texture pictures of the previous frame comprises the following steps:
comparing the pose of the current slave camera Cf with that of the master camera C, and recording whether a change has occurred in a variable Bc;
comparing the current scene elements with the previous scene elements to determine whether they have changed, and recording the result in a variable Bs;
deciding, according to the variables Bc and Bs, whether the slave camera Cf needs to be re-rendered; if so, rendering its depth buffer and color buffer to the texture pictures CTf and DTf respectively; otherwise, multiplexing the texture of the previous frame.
7. The three-dimensional scene rendering method according to claim 6, characterized in that mixing the cached texture picture data of the changed part with the cached texture picture data of the frame scene cache object rendered from the fixed part to obtain a rendering result comprises the following steps:
based on the obtained texture pictures CTc, DTc, CTt, DTt, CTf and DTf, binding all the maps into the HUD camera Ch by calling the glBindTexture function of OpenGL, and rendering the rectangle under the HUD camera Ch with a custom OpenGL rendering pipeline, wherein the rendered content is rasterized into individual fragments in the OpenGL rendering pipeline, each fragment corresponding to one pixel.
8. A three-dimensional scene rendering system, characterized in that the three-dimensional scene rendering system adopts the three-dimensional scene rendering method of any one of claims 1 to 7 to render the content of the current frame scene, and comprises:
a scene division module for defining the current rendering scene as S and dividing the current rendering scene S into a fixed partial scene Sf and a varying partial scene, wherein the varying partial scene is split into a transparent partial scene St and an opaque partial scene Sc;
a camera creation module for creating a camera C as the master camera of the currently rendered scene S, creating under the master camera C three slave cameras Cf, Cc and Ct that load the scenes Sf, Sc and St respectively, and creating, for each camera, two texture pictures of the current window pixel size for rendering use; and
a scene rendering module for rendering the scene of the data-changed part into the texture pictures of the changed part for caching; rendering the fixed partial scene Sf to a frame scene cache object and multiplexing the texture pictures of the previous frame; mixing the cached texture picture data of the changed part with the cached texture picture data of the frame scene cache object rendered from the fixed part to obtain the rendering result; submitting the rendering result to the interface for display; and repeating the scene rendering operation to start rendering the next frame of the scene.
9. A computer device comprising a memory and a processor, the memory storing a computer program, characterized in that the processor, when executing the computer program, implements the steps of the method of any of claims 1 to 7.
10. A storage medium storing a computer program, characterized in that the computer program, when being executed by a processor, realizes the steps of the method of any one of claims 1 to 7.
CN202111304819.3A 2021-11-05 2021-11-05 Three-dimensional scene rendering method, system, device and storage medium Pending CN114241101A (en)

Publications (1)

Publication Number Publication Date
CN114241101A true CN114241101A (en) 2022-03-25

Family

ID=80748495




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination