CN112233220B - OpenSceneGraph-based volumetric light generation method, device, equipment and storage medium - Google Patents


Info

Publication number
CN112233220B
CN112233220B CN202011105083.2A
Authority
CN
China
Prior art keywords
light
scene
map
light source
volume
Prior art date
Legal status
Active
Application number
CN202011105083.2A
Other languages
Chinese (zh)
Other versions
CN112233220A (en)
Inventor
丁伟 (Ding Wei)
Current Assignee
Zhongzhi Software Co ltd
Original Assignee
Luoyang Zhongzhi Software Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Luoyang Zhongzhi Software Technology Co ltd filed Critical Luoyang Zhongzhi Software Technology Co ltd
Priority to CN202011105083.2A priority Critical patent/CN112233220B/en
Publication of CN112233220A publication Critical patent/CN112233220A/en
Application granted granted Critical
Publication of CN112233220B publication Critical patent/CN112233220B/en


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00: 3D [Three Dimensional] image rendering
    • G06T15/50: Lighting effects
    • G06T15/506: Illumination models

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Graphics (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Generation (AREA)

Abstract

The application relates to the technical field of post-processing light-and-shadow effects in three-dimensional graphics rendering, and in particular to an OpenSceneGraph-based volumetric light generation method, device, equipment and storage medium. The method comprises: constructing a deferred rendering framework and generating a scene map and a depth map; generating a light source mask map based on the scene map and the depth map; culling the case where the light source is not on the screen; calculating a light superposition effect based on the mask map to obtain the volumetric light, wherein the volumetric light calculation and the mask map generation are performed in one pass; smoothing the volumetric light through a preset algorithm; and fusing the volumetric light with the scene map and outputting the final scene.

Description

OpenSceneGraph-based volumetric light generation method, device, equipment and storage medium
Technical Field
The application relates to the technical field of post-processing light-and-shadow effects in three-dimensional graphics rendering, and in particular to an OpenSceneGraph-based volumetric light generation method, device, equipment and storage medium.
Background
For a long time, OpenSceneGraph-based projects have mostly been applied to the development of three-dimensional simulation. Owing to OpenSceneGraph's scene management and data structures, the rendered scenes tend to look plain and do not produce visually rich lighting. How to improve the scene effect has therefore become an important topic.
Disclosure of Invention
In view of the above, a method, an apparatus, a device and a storage medium for generating volumetric light based on OpenSceneGraph are provided to solve the related problems in the background art.
The application adopts the following technical scheme:
in a first aspect, an embodiment of the present application provides a volumetric light generating method based on OpenSceneGraph, including:
constructing a deferred rendering framework and generating a scene map and a depth map;
generating a light source mask map based on the scene map and the depth map;
culling the case where the light source is not on the screen;
calculating a light superposition effect based on the mask map to obtain the volumetric light;
smoothing the volumetric light through a preset algorithm;
and fusing the volumetric light with the scene map and outputting the final scene.
Optionally, the generating a light source mask map based on the scene map and the depth map includes:
drawing the scene light source and the sky with the color buffer's depth-write function disabled;
drawing the scene back to front, background first;
after the scene is drawn, reading the red component of the mask color from the depth map, the red component being the depth information;
judging whether the depth information is greater than 1: if so, the color is sampled from the corresponding pixel of the original scene; otherwise, black is returned.
Optionally, the culling of the case where the light source is not on the screen includes:
transforming the depth information of the sun in view space to screen coordinates;
judging whether the depth value of the sun is less than 35;
if it is, the light source is considered not to be on the screen, and the volumetric light need not be processed.
Optionally, the calculating a light superposition effect based on the mask map to obtain the volumetric light includes:
subtracting the texture coordinates of the current pixel from the normalized screen coordinates of the projected light source to obtain a screen texture offset;
dividing the texture offset by the number of samples to obtain the texture offset step of each sample;
taking the current pixel as the starting point, stepping along the light source direction and accumulating the color value of each sample;
after sampling, dividing the accumulated color value by the number of samples to obtain the final color of the current pixel;
after the whole screen has been sampled, a light beam radiating outward along the light source direction is generated.
Optionally, the method further comprises:
controlling the intensity of the volumetric light.
Optionally, the controlling of the intensity of the volumetric light includes:
multiplying the color value of the pixel by a factor to obtain a target color value, so as to control the intensity of the volumetric light;
the factor ranges from 0 to 1: 0 indicates no volumetric light, and 1 indicates the strongest volumetric light.
Optionally, the smoothing of the volumetric light through a preset algorithm includes:
calculating the length len of the texture offset coordinates of each pixel;
multiplying the brightness color of the current pixel by (1 - sin(len)) to obtain the target brightness color, so that the smoothing is performed and the light intensity gradually fades the farther the pixel is from the light source.
In a second aspect, an embodiment of the present application further provides a volumetric light generating device based on OpenSceneGraph, including:
a framework construction module, configured to construct a deferred rendering framework and generate a scene map and a depth map;
a mask map generation module, configured to generate a light source mask map based on the scene map and the depth map;
a culling module, configured to cull the case when the light source is not on the screen;
a calculation module, configured to calculate the volumetric light based on the mask map;
a smoothing module, configured to smooth the volumetric light through a preset algorithm;
and a fusion module, configured to fuse the volumetric light with the scene map and output the final scene.
In a third aspect, an embodiment of the present application further provides a volumetric light generating device based on OpenSceneGraph, including:
a processor, and a memory coupled to the processor;
the memory is used for storing a computer program, and the computer program is at least used for executing the volumetric light generation method based on OpenSceneGraph;
the processor is configured to invoke and execute the computer program in the memory.
In a fourth aspect, an embodiment of the present application further provides a storage medium, where a computer program is stored, where the computer program is executed by a processor to implement each step in the volumetric light generation method based on OpenSceneGraph according to the first aspect of the present application.
With the above technical solution, a deferred rendering framework is first constructed and the scene map and depth map are generated. The mask map is then generated quickly based on the deferred framework, the scene map and the depth map; no separate pass is needed for it, since it is computed in the same pass as the volumetric light, whereas conventional algorithms must draw the scene once more, without textures but with the light source, to serve as the mask map. The scene-rendering load is therefore barely increased while the scene effect improves. Before the computation, the case where the light source is not on the screen is culled, which reduces the amount of computation and further limits the extra load the volumetric effect might add. The method serves as a post-process of three-dimensional scene rendering and enhances the lighting effect of the light source on the scene model.
Furthermore, as can be seen from the dependent claims, the scheme provided by the application does not apply a Gaussian blur afterwards (Gaussian blur requires two passes, one horizontal and one vertical) but uses a low-cost preset algorithm (a cheap sine function) for a simple smooth transition, saving two render passes and roughly doubling rendering efficiency compared with the conventional method.
Drawings
In order to more clearly illustrate the embodiments of the application or the technical solutions in the prior art, the drawings that are required in the embodiments or the description of the prior art will be briefly described, it being obvious that the drawings in the following description are only some embodiments of the application, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a flowchart of a volumetric light generation method based on OpenSceneGraph according to an embodiment of the present application;
FIG. 2 is a schematic view of a volume light provided by an embodiment of the present application;
fig. 3 is a schematic structural diagram of a volumetric light generating device based on OpenSceneGraph according to an embodiment of the present application;
fig. 4 is a schematic structural diagram of a volumetric light generating device based on OpenSceneGraph according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the technical solutions of the present application are described in detail below. It will be apparent that the described embodiments are only some, not all, embodiments of the application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments herein without inventive effort fall within the protection scope of the application.
First, the application scenario of the embodiments is described. OpenSceneGraph is a mature three-dimensional simulation engine that calls the OpenGL interface at the bottom layer to perform three-dimensional rendering; it satisfies the usual rendering needs and supports a programmable rendering pipeline. It can manage large amounts of data, with clear advantages in dynamically loading and unloading massive data, and is widely applied in fields such as three-dimensional GIS and flight simulation. In terms of rendering effects, however, only some basic effects are provided in the osgFX library, such as model highlighting, bump mapping, anisotropic mapping and cube-map specular highlights. These do little to improve the scene picture; better special effects and rendering results require one's own design and development.
Scattering is a beautiful natural phenomenon: when light passes through moist or impurity-laden media it is scattered, and the scattered light entering the human eye makes these media look as if they gather the light, the so-called volumetric light. Volumetric light effects are common in three-dimensional games. Common implementations include the billboard patch algorithm, the radial blur algorithm and ray-tracing algorithms; among these, ray tracing is mainly realized with the Ray-Marching algorithm.
Examples
Fig. 1 is a flowchart of the OpenSceneGraph-based volumetric light generation method provided by an embodiment of the present application; the method may be performed by the OpenSceneGraph-based volumetric light generating device provided by the embodiment. Referring to fig. 1, the method may specifically include the following steps:
s101, constructing a delay rendering frame, and generating a scene graph and a depth graph;
specifically, delayed illumination is realized under OpenSceneGraph, an off-screen rendering object needs to be created by using FBO (Frame Buffer Object) in a Camera object, a three-dimensional scene is firstly drawn into an off-screen FBO object, and basic data of off-screen drawing is obtained by associating a depth buffer (DepthBuffer) and a color buffer (color) of the FBO through a Texture (Texture) object.
Scene nodes in the three-dimensional scene, which are child nodes of the Camera, eventually draw the scene into an associated one of the Texture objects for use in post-processing input data.
Further, it is also desirable to create post-processing cameras
In OpenSceneGraph, a quadrilateral with the same size as a screen can be drawn as a rendering geometry for post-processing by an independent camera and setting the attribute to an off-screen mode. When rendering the geometry, a loader is applied on the GPU to perform a processing algorithm.
S102, generating a light source mask map based on the scene map and the depth map;
Calculating the volumetric light requires the image and the depth map rendered from the original scene. A mask map is computed first, filtering out the light source information; the mask map serves as the basic parameter for computing the illumination.
The main processing steps of the light source mask map include:
drawing the scene light source and the sky with the color buffer's depth-write function disabled;
drawing the scene back to front, background first: the rear part of the scene is drawn first and the front part afterwards, so that the front part covers the rear part and the original front-to-back relationship of the scene is preserved;
after the scene is drawn, reading the red component of the mask color from the depth map, the red component being the depth information;
judging whether the depth information is greater than 1: if so, the color is sampled from the corresponding pixel of the original scene; otherwise, black is returned.
A mask map is thus computed, with the light source information filtered out, to be used as the basic parameter for calculating the illumination.
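The per-pixel mask logic above can be sketched as a small C++ function. This is an illustrative sketch, not the patent's actual shader; the names (`maskPixel`, `Color`) are assumptions, and with depth conventionally in [0, 1] the far-plane value 1.0 plays the role of the patent's "greater than 1" test:

```cpp
#include <cassert>

// Illustrative sketch of the mask-map step (all names are assumptions).
struct Color { float r, g, b; };

// depthRed: depth information stored in the red component of the mask color.
// sceneColor: the corresponding pixel of the original scene map.
Color maskPixel(float depthRed, Color sceneColor) {
    // Per the patent's test: depth at (or beyond) the far plane means nothing
    // occludes this pixel, so it shows the light source or sky and the scene
    // color passes through; occluded pixels become black in the mask.
    if (depthRed >= 1.0f)
        return sceneColor;
    return Color{0.0f, 0.0f, 0.0f};
}
```

Black mask pixels are what let occluders carve the streaks out of the light beam in the later accumulation step.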
S103, culling the case when the light source is not on the screen;
That is: when the sun's projection is not on the screen, i.e. the sun is not visible in the scene, the volumetric light should not be processed. In practice, culling the case where the light source is not on the screen reduces the data to be computed while still guaranteeing the volumetric light rendering, lowering the load of the volumetric light calculation.
Specifically, the depth information of the sun in view space is carried into the screen coordinates; testing shows that when the depth value is less than 35, the case where the light source is not on the screen can be reliably culled and the volumetric light left unprocessed.
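The culling test can be sketched as follows. The function name and the exact coordinate convention are assumptions; the threshold of 35 is the empirical value stated above:

```cpp
#include <cassert>

// Illustrative sketch of the S103 culling test (names are assumptions).
// sunScreenDepth: the sun's view-space depth carried into screen coordinates.
bool shouldProcessVolumetricLight(float sunScreenDepth) {
    const float kOffScreenThreshold = 35.0f;  // empirical value from the patent
    // Below the threshold the light source is treated as off screen and the
    // whole volumetric-light pass is skipped.
    return sunScreenDepth >= kOffScreenThreshold;
}
```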
S104, calculating volume light based on the mask map;
the volumetric light effect is generally common in three-dimensional games. Common implementation methods include a billbard patch algorithm, a radial blur algorithm, a Ray tracing algorithm, and the like, and in contrast, ray tracing is mainly implemented by a Ray tracing algorithm of Ray-Marching.
The basic idea of the algorithm is as follows:
(1) Casting a number of rays from the eye into the scene;
(2) Intercepting the segment of each ray that falls inside the volumetric light and advancing a fixed step each time to sample positions within the segment;
(3) Computing the scattered light intensity at each sample and summing all results to obtain the light intensity at the current position.
As shown in fig. 2, because the medium being rendered is a volume rather than a surface, every point on the path the ray passes through contributes to the color of the pixel. It is therefore necessary to sample the brightness point by point along the ray, starting from the start point; the sum of the brightness over the sampled points is the final color of the pixel. Through a series of such samples, a long column of light is eventually scattered outward, centered on the light source position.
Specifically, one volumetric light is composed of a plurality of pixels.
The texture coordinates of the current pixel are subtracted from the normalized screen coordinates of the projected light source to obtain a screen texture offset; the screen texture offset is the distance between the position where the processed light source projects onto the screen and the position of the pixel.
The texture offset is divided by the number of samples to obtain the texture offset step of each sample.
Taking the current pixel as the starting point, samples are taken step by step along the light source direction and the color of each sample is accumulated.
After sampling, the accumulated value is averaged over the number of samples to obtain the final color of the current point.
Because the medium being rendered is a volume rather than a surface, each point on the ray's path contributes to the pixel's color, so the brightness must be sampled point by point from the start point along the ray, the sum over the sampled points giving the pixel's final color.
In practice, however, the present application provides a load-reducing scheme: a preset number of points are selected uniformly between the light source and the pixel, and only the contribution of these selected points to the pixel's color is computed, instead of the contribution of every point on the ray.
Further, after the whole screen has been sampled, a light beam radiating outward along the light source direction is generated.
The volumetric light calculation and the light source mask map generation are performed in one pass.
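The sampling loop described above can be sketched on the CPU as follows. In the patent it runs as a GPU shader over the mask map; all names here (`volumetricLightAt`, `sampleMask`) are illustrative assumptions:

```cpp
#include <cassert>
#include <cmath>
#include <functional>

struct Color { float r, g, b; };

// Illustrative CPU sketch of the per-pixel radial accumulation (S104).
// (px, py): texture coordinates of the current pixel.
// (lx, ly): normalized screen coordinates of the projected light source.
// sampleMask stands in for a texture fetch from the light source mask map.
Color volumetricLightAt(float px, float py, float lx, float ly,
                        int numSamples,
                        const std::function<Color(float, float)>& sampleMask) {
    // Screen texture offset divided by the sample count gives the step.
    const float stepX = (lx - px) / numSamples;
    const float stepY = (ly - py) / numSamples;
    Color sum{0.0f, 0.0f, 0.0f};
    float x = px, y = py;
    // Step from the pixel toward the light source, accumulating mask colors.
    for (int i = 0; i < numSamples; ++i) {
        const Color c = sampleMask(x, y);
        sum.r += c.r; sum.g += c.g; sum.b += c.b;
        x += stepX; y += stepY;
    }
    // Average over the sample count to obtain the pixel's final color.
    sum.r /= numSamples; sum.g /= numSamples; sum.b /= numSamples;
    return sum;
}
```

A uniformly bright mask yields the same brightness back, while dark (occluded) samples along the path dim the pixel; this difference is what produces the radial streaks.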
Of course, the above flow simply generates the light beam effect; improvements are still needed. For example, the intensity of the volumetric light needs to be controlled.
Specifically, the intensity can be controlled simply by multiplying the computed color value of each pixel by a factor in the range 0 to 1: 0 indicates no volumetric light, and 1 indicates the strongest volumetric light.
S105, smoothing the volumetric light through a preset algorithm;
The common practice is to perform Gaussian blur with an additional processing camera, but this still carries some resource overhead for scene rendering. For a simple approximate smoothing, a sine function is used here for interpolation, which yields a clearly visible improvement.
The length len of the texture offset coordinates is calculated;
the current brightness color is multiplied by (1 - sin(len)) for the smoothing, so that the light intensity gradually fades the farther the pixel is from the light source.
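The smoothing step can be sketched as a one-line attenuation. `smoothedBrightness` is an illustrative name, and the offset is assumed to be the pixel's texture offset from the light source, as described above:

```cpp
#include <cassert>
#include <cmath>

// Illustrative sketch of the S105 sine-based smoothing (names are assumptions).
// brightness: the accumulated volumetric-light brightness of the pixel.
// (offsetX, offsetY): the pixel's texture offset from the light source.
float smoothedBrightness(float brightness, float offsetX, float offsetY) {
    const float len = std::sqrt(offsetX * offsetX + offsetY * offsetY);
    // Multiply by (1 - sin(len)): at the light source (len = 0) the brightness
    // is unchanged, and it fades smoothly as len grows toward pi/2.
    return brightness * (1.0f - std::sin(len));
}
```

Compared with a separable Gaussian blur, this costs a single multiply per pixel in the existing pass instead of two extra render passes.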
S106, fusing the volumetric light with the scene map and outputting the final scene.
The color value of the original scene map is sampled first and then added to the color value computed for the volumetric light, giving the target color value and the final effect.
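The fusion step, combined with the intensity factor from S104, can be sketched as an additive blend; this is an illustrative sketch with assumed names:

```cpp
#include <cassert>
#include <cmath>

// Illustrative sketch of the S106 fusion plus the intensity factor (names are
// assumptions): the volumetric-light color, scaled by a factor in [0, 1]
// (0 = no volumetric light, 1 = strongest), is added to the scene color.
struct Color { float r, g, b; };

Color composeFinal(Color scene, Color volume, float intensity) {
    return Color{scene.r + volume.r * intensity,
                 scene.g + volume.g * intensity,
                 scene.b + volume.b * intensity};
}
```

In a real pipeline the result would typically be clamped or tone-mapped to the displayable range; that step is omitted here.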
Therefore, the application increases the lighting effect of the light source on the scene model through post-processing of the three-dimensional scene rendering. In the rendering process, the mask is generated quickly without a separate pass, being executed in one pass together with the volumetric light calculation. Meanwhile, instead of a Gaussian blur afterwards (which requires a horizontal and a vertical pass), a cheap sine function provides a simple smooth transition, saving two render passes and roughly doubling rendering efficiency compared with the conventional method. The scene-rendering load is barely increased while the scene effect improves.
According to a second aspect of the present application, there is provided an OpenSceneGraph-based volumetric light generating device, referring to fig. 3, comprising:
a framework construction module 31, configured to construct a deferred rendering framework and generate a scene map and a depth map;
a mask map generation module 32, configured to generate a light source mask map based on the scene map and the depth map;
a culling module 33, configured to cull the case when the light source is not on the screen;
a calculation module 34, configured to calculate the volumetric light based on the mask map;
a smoothing module 35, configured to smooth the volumetric light through a preset algorithm;
a fusion module 36, configured to fuse the volumetric light with the scene map and output the final scene.
According to a third aspect of the present application, there is provided an OpenSceneGraph-based volumetric light generating device, comprising, with reference to fig. 4:
a processor 41 and a memory 42 connected to the processor;
the memory 42 is configured to store a computer program at least for executing the OpenSceneGraph-based volumetric light generation method according to any embodiment of the present application;
the processor is configured to invoke and execute the computer program in the memory.
According to a fourth aspect of the present application, there is provided a storage medium storing a computer program which, when executed by a processor, implements the steps of the OpenSceneGraph-based volumetric light generation method according to any embodiment of the present application.
It is to be understood that the same or similar parts in the above embodiments may be referred to each other, and that in some embodiments, the same or similar parts in other embodiments may be referred to.
It should be noted that in the description of the present application, the terms "first," "second," and the like are used for descriptive purposes only and are not to be construed as indicating or implying relative importance. Furthermore, in the description of the present application, unless otherwise indicated, the meaning of "plurality" means at least two.
Any process or method description in a flowchart or otherwise described herein may be understood as representing a module, segment or portion of code including one or more executable instructions for implementing specific logical functions or steps of the process. The scope of the preferred embodiments of the present application includes further implementations in which functions may be executed out of the order shown or discussed, including substantially concurrently or in reverse order depending on the functionality involved, as would be understood by those skilled in the art.
It is to be understood that portions of the present application may be implemented in hardware, software, firmware, or a combination thereof. In the above-described embodiments, the various steps or methods may be implemented in software or firmware stored in a memory and executed by a suitable instruction execution system. For example, if implemented in hardware, as in another embodiment, may be implemented using any one or combination of the following techniques, as is well known in the art: discrete logic circuits having logic gates for implementing logic functions on data signals, application specific integrated circuits having suitable combinational logic gates, programmable Gate Arrays (PGAs), field Programmable Gate Arrays (FPGAs), and the like.
Those of ordinary skill in the art will appreciate that all or a portion of the steps carried out in the method of the above-described embodiments may be implemented by a program to instruct related hardware, where the program may be stored in a computer readable storage medium, and where the program, when executed, includes one or a combination of the steps of the method embodiments.
In addition, each functional unit in the embodiments of the present application may be integrated in one processing module, or each unit may exist alone physically, or two or more units may be integrated in one module. The integrated modules may be implemented in hardware or in software functional modules. The integrated modules may also be stored in a computer readable storage medium if implemented in the form of software functional modules and sold or used as a stand-alone product.
The above-mentioned storage medium may be a read-only memory, a magnetic disk or an optical disk, or the like.
In the description of the present specification, a description referring to terms "one embodiment," "some embodiments," "examples," "specific examples," or "some examples," etc., means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present application. In this specification, schematic representations of the above terms do not necessarily refer to the same embodiments or examples. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
While embodiments of the present application have been shown and described above, it will be understood that the above embodiments are illustrative and not to be construed as limiting the application, and that variations, modifications, alternatives and variations may be made to the above embodiments by one of ordinary skill in the art within the scope of the application.

Claims (9)

1. An OpenSceneGraph-based volumetric light generation method, characterized by comprising the following steps:
constructing a deferred rendering framework and generating a scene map and a depth map;
generating a light source mask map based on the scene map and the depth map;
culling the case where the light source is not on the screen;
calculating a light superposition effect based on the mask map to obtain the volumetric light, wherein the volumetric light calculation and the light source mask map generation are performed in one pass;
smoothing the volumetric light through a preset algorithm;
fusing the volumetric light with the scene map and outputting the final scene;
wherein the generating a light source mask map based on the scene map and the depth map comprises:
drawing the scene light source and the sky with the color buffer's depth-write function disabled;
drawing the scene back to front, background first;
after the scene is drawn, reading the red component of the mask color from the depth map, the red component being the depth information;
judging whether the depth information is greater than 1: if so, the color is sampled from the corresponding pixel of the original scene; otherwise, black is returned.
2. The OpenSceneGraph-based volumetric light generation method of claim 1, wherein the culling of the case where the light source is not on the screen comprises:
transforming the depth information of the sun in view space to screen coordinates;
judging whether the depth value of the sun is less than 35;
if it is, the light source is considered not to be on the screen, and the volumetric light need not be processed.
3. The OpenSceneGraph-based volumetric light generation method of claim 1, wherein the calculating a light superposition effect based on the mask map to obtain the volumetric light comprises:
subtracting the texture coordinates of the current pixel from the normalized screen coordinates of the projected light source to obtain a screen texture offset;
dividing the texture offset by the number of samples to obtain the texture offset step of each sample;
taking the current pixel as the starting point, stepping along the light source direction and accumulating the color value of each sample;
after sampling, dividing the accumulated color value by the number of samples to obtain the final color of the current pixel;
after the whole screen has been sampled, a light beam radiating outward along the light source direction is generated.
4. The OpenSceneGraph-based volumetric light generation method of claim 3, further comprising:
controlling the intensity of the volumetric light.
5. The OpenSceneGraph-based volumetric light generation method of claim 4, wherein the controlling of the intensity of the volumetric light comprises:
multiplying the color value of the pixel by a factor to obtain a target color value, so as to control the intensity of the volumetric light;
the factor ranges from 0 to 1: 0 indicates no volumetric light, and 1 indicates the strongest volumetric light.
6. The OpenSceneGraph-based volumetric light generation method of claim 3, wherein the smoothing of the volumetric light through a preset algorithm comprises:
calculating the length len of the texture offset coordinates of each pixel;
multiplying the brightness color of the current pixel by (1 - sin(len)) to obtain the target brightness color, so that the smoothing is performed and the light intensity gradually fades the farther the pixel is from the light source.
7. A volumetric light generating device based on OpenSceneGraph, comprising:
a frame construction module for constructing a deferred rendering framework and generating a scene map and a depth map;
a mask map generating module for generating a light source mask map based on the scene map and the depth map;
a culling module for culling the case in which the light source is not on the screen;
a calculation module for calculating volumetric light based on the mask map;
a smoothing module for performing volumetric light smoothing through a preset algorithm;
a fusion module for fusing the volumetric light with the scene map and outputting the final scene;
wherein the generating a light source mask map based on the scene map and the depth map comprises:
drawing the scene light source and the sky with depth writing to the color buffer disabled;
drawing the scene in background-first order;
after the scene is drawn, obtaining the red component of the mask color from the depth map, the red component being the depth information;
judging whether the depth information is greater than 1: if so, the color is taken from the corresponding pixel of the original scene; otherwise, black is returned.
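The per-pixel mask decision described at the end of claim 7 reduces to a threshold test on the depth value stored in the red channel; a minimal sketch (the threshold of 1 follows the claim text, and the names are hypothetical — a production depth format may encode and compare depth differently):

```python
def mask_color(depth_red, scene_color):
    """Return the mask color for one pixel: keep the original scene
    color where the red-channel depth exceeds 1 (nothing occludes the
    background there), and return black everywhere else."""
    black = (0.0, 0.0, 0.0)
    return scene_color if depth_red > 1.0 else black
```

Running this test over every pixel yields a mask that is bright only where the light source and sky are visible, which is the input the radial sampling pass consumes.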
8. A volumetric light generating device based on OpenSceneGraph, comprising:
a processor, and a memory connected to the processor;
the memory is configured to store a computer program at least for executing the OpenSceneGraph-based volumetric light generation method according to any one of claims 1-7;
the processor is configured to invoke and execute the computer program in the memory.
9. A storage medium storing a computer program which, when executed by a processor, implements the steps of the OpenSceneGraph-based volumetric light generation method according to any one of claims 1-6.
CN202011105083.2A 2020-10-15 2020-10-15 OpenSceneGraph-based volumetric light generation method, device, equipment and storage medium Active CN112233220B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011105083.2A CN112233220B (en) 2020-10-15 2020-10-15 OpenSceneGraph-based volumetric light generation method, device, equipment and storage medium


Publications (2)

Publication Number Publication Date
CN112233220A CN112233220A (en) 2021-01-15
CN112233220B true CN112233220B (en) 2023-12-15

Family

ID=74118372

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011105083.2A Active CN112233220B (en) 2020-10-15 2020-10-15 OpenSceneGraph-based volumetric light generation method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN112233220B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113256780B (en) * 2021-07-06 2021-11-19 广州中望龙腾软件股份有限公司 Dynamic sectioning method of tool body, intelligent terminal and storage device
CN114170367B (en) * 2021-12-10 2022-08-16 北京优锘科技有限公司 Method, apparatus, storage medium, and device for infinite-line-of-sight pyramidal heatmap rendering
CN117745928A (en) * 2022-09-15 2024-03-22 北京字跳网络技术有限公司 Image processing method, device, equipment and medium

Citations (2)

Publication number Priority date Publication date Assignee Title
WO2002084451A2 (en) * 2001-02-06 2002-10-24 Victor Demjanenko Vector processor architecture and methods performed therein
DE102005061590A1 (en) * 2005-05-27 2006-11-30 Spin E.V. Lighting simulating method for technical lighting system, involves computing color for pixels to represent lighting of scenery and using grey tones for true-color representation or color values for reproduction of lighting

Family Cites Families (1)

Publication number Priority date Publication date Assignee Title
US9025159B2 (en) * 2012-12-10 2015-05-05 The Johns Hopkins University Real-time 3D and 4D fourier domain doppler optical coherence tomography system


Non-Patent Citations (6)

Title
Han, Beibei. OSG-based real-time volume rendering algorithm for electromagnetic environment. 2018 International Conference on Computer Science and Software Engineering, pp. 41-52. *
Zhang, Liqiang; Zheng, Changwen; Hu, Xiaohui; Lv, Pin; Wu, Jiaze. Design and implementation of an HLA-based satellite simulation system. Journal of System Simulation, Vol. 21, No. 20, pp. 6487-6491, 6497. *
Yu, Ping. Research and application of a GPU-accelerated radiosity lighting algorithm. Foreign Electronic Measurement Technology, Vol. 35, No. 11, pp. 46-52, 57. *
Xiang, Yu; Xu, Sen. Real-time rendering of dynamic water surfaces based on Perlin noise. Computer Engineering and Design, Vol. 34, No. 11, pp. 3966-3970. *
He, Qiuhai; Peng, Yuecheng; Huang, Xinyuan. Research on high-fidelity bamboo forest simulation in landscape representation. Computer Engineering and Applications, Vol. 51, No. 3, pp. 175-180. *
Shen, Helong. Modeling and realistic rendering of visual scene effects in a maritime search and rescue simulator. China Doctoral Dissertations Full-text Database, Information Science and Technology, No. 5, pp. 1-163. *

Also Published As

Publication number Publication date
CN112233220A (en) 2021-01-15

Similar Documents

Publication Publication Date Title
CN112233220B (en) OpenSceneGraph-based volumetric light generation method, device, equipment and storage medium
CN110443893B (en) Large-scale building scene rendering acceleration method, system, device and storage medium
US9053582B2 (en) Streaming light propagation
WO2022116659A1 (en) Volumetric cloud rendering method and apparatus, and program and readable medium
CN111968215B (en) Volume light rendering method and device, electronic equipment and storage medium
McGuire Ambient occlusion volumes
TWI475513B (en) Method and apparatus for real-time luminosity dependent subdivision
US20140002458A1 (en) Efficient rendering of volumetric elements
US20130063472A1 (en) Customized image filters
JP2005025766A (en) Method for generating blur and blur generation device
JP6543508B2 (en) Image processing method and apparatus
CN111968214B (en) Volume cloud rendering method and device, electronic equipment and storage medium
US11494966B2 (en) Interactive editing of virtual three-dimensional scenes
CN114820906A (en) Image rendering method and device, electronic equipment and storage medium
US10776996B2 (en) Method and apparatus for processing image
Liu et al. Cinematic rendering in UE4 with real-time ray tracing and denoising
US20240087219A1 (en) Method and apparatus for generating lighting image, device, and medium
KR20120062542A (en) Image processing apparatus and method
EP2981944A1 (en) Look-based selection for rendering a computer-generated animation
CN116152408A (en) Screen door transparency-based rendering method, device and system for passing through model
Li et al. Stage Lighting Simulation Based on Epipolar Sampling
Raudsepp Volumetric Fog Rendering
Liu et al. Fast Illumination Shading Method for Immediate Radiance Rendering
Chochlík Scalable multi-GPU cloud raytracing with OpenGL
Denisov Elaboration of New Viewing Modes in CATIA CAD for Lighting Simulation Purpose

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CP03 Change of name, title or address

Address after: Floor 13, 14 and 15, building 3, lianfei building, No.1, Fenghua Road, high tech Development Zone, Luoyang City, Henan Province, 471000
Patentee after: Zhongzhi Software Co.,Ltd.
Country or region after: China

Address before: Floor 13, 14 and 15, building 3, lianfei building, No.1, Fenghua Road, high tech Development Zone, Luoyang City, Henan Province, 471000
Patentee before: Luoyang Zhongzhi Software Technology Co.,Ltd.
Country or region before: China