CN112233220A - OpenSceneGraph-based volume light generation method, device, equipment and storage medium


Info

Publication number
CN112233220A
CN112233220A (application CN202011105083.2A)
Authority
CN
China
Prior art keywords
light
scene
light source
volume
map
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202011105083.2A
Other languages
Chinese (zh)
Other versions
CN112233220B
Inventor
丁伟 (Ding Wei)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhongzhi Software Co ltd
Original Assignee
Luoyang Zhongzhi Software Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Luoyang Zhongzhi Software Technology Co ltd filed Critical Luoyang Zhongzhi Software Technology Co ltd
Priority to CN202011105083.2A priority Critical patent/CN112233220B/en
Publication of CN112233220A publication Critical patent/CN112233220A/en
Application granted granted Critical
Publication of CN112233220B publication Critical patent/CN112233220B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 15/00 3D [Three Dimensional] image rendering
    • G06T 15/50 Lighting effects
    • G06T 15/506 Illumination models

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Graphics (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Generation (AREA)

Abstract

The invention relates to post-processing of light-and-shadow effects in three-dimensional graphics rendering, and in particular to an OpenSceneGraph-based volume light generation method, device, equipment and storage medium. The method comprises: building a deferred rendering framework and generating a scene map and a depth map; generating a light source mask map based on the scene map and the depth map; culling the scene when the light source is not on the screen; calculating a light superposition effect based on the mask map to obtain volume light, wherein the calculation of the volume light and the generation of the light source mask map are performed in one pass; smoothing the volume light through a preset algorithm; and fusing the volume light with the scene map to output the final scene.

Description

OpenSceneGraph-based volume light generation method, device, equipment and storage medium
Technical Field
The invention relates to post-processing of light-and-shadow effects in three-dimensional graphics rendering, and in particular to an OpenSceneGraph-based volume light generation method, device, equipment and storage medium.
Background
For a long time, OpenSceneGraph-based projects have mostly been three-dimensional simulation developments; because OpenSceneGraph concentrates on scene management and data structures, the resulting scene effects are plain and lack a bright, light-sensitive picture. How to improve the scene effect has therefore become an important topic.
Disclosure of Invention
In view of the above, a method, an apparatus, a device and a storage medium for generating volume light based on OpenSceneGraph are provided to solve the problems noted in the background.
The invention adopts the following technical scheme:
In a first aspect, an embodiment of the present invention provides a method for generating volume light based on OpenSceneGraph, including:
building a deferred rendering framework and generating a scene map and a depth map;
generating a light source mask map based on the scene map and the depth map;
culling the scene when the light source is not on the screen;
calculating a light superposition effect based on the mask map to obtain volume light;
smoothing the volume light through a preset algorithm;
and fusing the volume light with the scene map to output the final scene.
Optionally, the generating a light source mask map based on the scene map and the depth map comprises:
drawing the scene light source and the sky with depth writing disabled;
drawing the scene in background-first order;
after the scene is drawn, reading the red component of the mask color from the depth map, the red component being the depth information;
and judging whether the depth information is greater than 1: if so, the color is taken from the corresponding pixel of the original scene; if not, black is returned.
Optionally, the culling the scene when the light source is not on the screen includes:
saving the view-space depth of the sun together with its screen coordinate;
judging whether the sun's depth value is less than 35;
and if so, deeming the light source off-screen, so that no volume light needs to be processed.
Optionally, the calculating a light superposition effect based on the mask map to obtain volume light includes:
subtracting the light source's normalized screen-projected coordinate from the texture coordinate of the current pixel to obtain a screen texture offset;
dividing the texture offset by the number of samples to obtain the texture offset step of each sample;
taking the current pixel as a starting point, sampling step by step along the light source direction and accumulating the color value of each sample;
after sampling, dividing the accumulated color value by the number of samples to obtain the final color of the current pixel;
after the whole screen is sampled, a beam radiating outward from the light source is generated.
Optionally, the method further comprises:
controlling the intensity of the volume light.
Optionally, the controlling the intensity of the volume light comprises:
multiplying the color value of the pixel by a control factor to obtain the target color value, thereby controlling the intensity of the volume light;
the control factor ranges over [0, 1]: a value of 0 removes the volume light entirely, and a value of 1 gives the strongest volume light.
Optionally, the smoothing the volume light through a preset algorithm includes:
calculating the length len of each pixel's texture offset coordinate;
multiplying the brightness color of the current pixel by (1 - sin(len)) to obtain the smoothed target brightness color, so that the light intensity gradually fades the farther the pixel is from the light source.
In a second aspect, an embodiment of the present invention further provides an OpenSceneGraph-based volume light generation apparatus, including:
a framework building module, configured to build a deferred rendering framework and generate a scene map and a depth map;
a mask map generation module, configured to generate a light source mask map based on the scene map and the depth map;
a culling module, configured to cull the scene when the light source is not on the screen;
a calculation module, configured to calculate the volume light based on the mask map;
a smoothing module, configured to smooth the volume light through a preset algorithm;
and a fusion module, configured to fuse the volume light with the scene map and output the final scene.
In a third aspect, an embodiment of the present invention further provides an OpenSceneGraph-based volume light generation device, including:
a processor, and a memory connected to the processor;
the memory is configured to store a computer program at least for performing the OpenSceneGraph-based volume light generation method according to the first aspect of the present application;
and the processor is configured to call and execute the computer program in the memory.
In a fourth aspect, an embodiment of the present invention further provides a storage medium storing a computer program which, when executed by a processor, implements the steps of the OpenSceneGraph-based volume light generation method according to the first aspect of the present application.
According to the technical scheme, a deferred rendering framework is first built, and a scene map and a depth map are generated. The mask map is then generated quickly from the deferred rendering framework, the scene map and the depth map: no separate pass is needed, since it can be rendered in the same pass as the volume light calculation, whereas the conventional algorithm must draw the scene once more, without textures, together with the light source to serve as the mask map. The scene effect is therefore improved without adding much rendering load. In addition, before the calculation, the scene is culled when the light source is not on the screen; reducing the amount of computation in this way offsets the extra load that the volumetric effect would otherwise introduce. The method is used as a post-process of three-dimensional scene rendering and enhances the light-perception effect of the light source on the scene model.
Further, as can be seen in the dependent claims, the scheme provided by the application does not adopt late-stage Gaussian blur (which requires two passes, horizontal and vertical), but instead uses a cheap preset algorithm (a simple sine function) for the smooth transition, saving two rendering passes and roughly doubling rendering efficiency compared with the conventional method.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, it is obvious that the drawings in the following description are only some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to the drawings without creative efforts.
Fig. 1 is a flowchart of a method for generating volume light based on OpenSceneGraph according to an embodiment of the present invention;
Fig. 2 is a schematic diagram of volume light provided by an embodiment of the present invention;
Fig. 3 is a schematic structural diagram of an OpenSceneGraph-based volume light generation apparatus according to an embodiment of the present invention;
Fig. 4 is a schematic structural diagram of an OpenSceneGraph-based volume light generation device according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the technical solutions of the present invention will be described in detail below. It is to be understood that the described embodiments are merely exemplary of the invention, and not restrictive of the full scope of the invention. All other embodiments, which can be derived by a person skilled in the art from the examples given herein without any inventive step, are within the scope of the present invention.
First, an application scenario of the embodiments is explained. OpenSceneGraph is a mature three-dimensional simulation engine whose bottom layer calls the OpenGL interface for three-dimensional rendering; it satisfies conventional rendering needs and supports the programmable rendering pipeline. It manages large amounts of data well, with clear advantages in dynamic loading and unloading of massive data and in multi-window, multi-channel display, and it is widely applied in fields such as three-dimensional GIS and flight simulation. In terms of rendering effects, however, only basic effects such as model highlights, bump mapping, anisotropic mapping and cubic mirror highlights are provided by the osgFX library. These do little to improve the scene picture, so better special effects and rendering quality still need to be designed and developed.
Scattering is a beautiful natural phenomenon: when light passes through moist or impurity-laden media, the scattered light entering the human eye makes the medium appear to gather the light, the so-called volume light. Volume light effects are common in three-dimensional games. Common implementations include the BillBoard patch algorithm, the radial blur algorithm and ray tracing; among these, ray tracing is mainly implemented with the Ray-Marching algorithm.
Examples
Fig. 1 is a flowchart of OpenSceneGraph-based volume light generation according to an embodiment of the present invention; the method may be performed by an OpenSceneGraph-based volume light generation device according to an embodiment of the present invention. Referring to fig. 1, the method may specifically include the following steps:
s101, building a delayed rendering framework, and generating a scene graph and a depth graph;
specifically, delayed illumination is realized under openscene graph, an off-screen rendering object needs to be created by using an FBO (frame Buffer object) in a Camera object, a three-dimensional scene is firstly rendered into the off-screen FBO object, and basic data of off-screen rendering is acquired by associating a depth Buffer (DepthBuffer) and a color Buffer (ColorBuffer) of the FBO through a Texture (Texture) object.
Scene nodes in the three-dimensional scene that are children of this Camera are thus drawn into the associated Texture objects, which serve as input data for post-processing.
Further, a post-processing camera also needs to be created.
In OpenSceneGraph, an independent camera set to off-screen mode can draw a quadrangle the size of the screen as the rendering geometry for post-processing. When this geometry is drawn, a Shader applied on the GPU performs the processing algorithm.
S102, generating a light source mask map based on the scene map and the depth map;
The volume light is calculated from the image and depth map rendered from the original scene. First the mask map is computed, filtering out the light source information; the mask map serves as the basic parameter for the illumination calculation.
The main processing steps of the light source mask map comprise:
drawing the scene light source and the sky with depth writing disabled;
drawing the scene in background-first order, which ensures that nearer scenery covers farther scenery and the original front-to-back relations of the scene are preserved;
after the scene is drawn, reading the red component of the mask color from the depth map, the red component being the depth information;
and judging whether the depth information is greater than 1: if so, the color is taken from the corresponding pixel of the original scene; if not, black is returned.
Thus, a mask image is calculated and the light source information is filtered out, to be used as the basic parameter for calculating the illumination.
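The per-pixel mask test above can be sketched on the CPU as follows. This is an illustrative Python model only; the patent performs the test in a shader, and the function names, the nested-list image representation and the `>= 1.0` comparison (the text says "greater than 1") are assumptions:

```python
def mask_color(scene_rgb, depth):
    # Depth-map red component at the far plane means nothing was drawn over
    # this pixel, so the light source / sky is visible there and the scene
    # colour passes into the mask; otherwise geometry occludes the light
    # and the mask pixel is black.
    if depth >= 1.0:
        return scene_rgb
    return (0.0, 0.0, 0.0)

def build_mask(scene, depth_map):
    # Apply mask_color over a whole image given as rows of RGB tuples.
    return [[mask_color(c, d) for c, d in zip(crow, drow)]
            for crow, drow in zip(scene, depth_map)]
```

For example, a bright sky pixel with far-plane depth survives into the mask, while a pixel covered by geometry becomes black.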
S103, culling the scene when the light source is not on the screen;
That is: when the sun's projection is not on the screen, i.e. the sun is not visible in the scene, the volume light should not be processed. In practice, culling the case where the light source is off-screen reduces the data to be calculated while still guaranteeing the volume light rendering, lowering the cost of computing the volume light.
Specifically, the view-space depth of the sun is saved together with its screen coordinate; testing showed that when this depth value is smaller than 35, skipping the volume light processing reliably culls the off-screen light source case.
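The culling decision reduces to a single threshold comparison; a minimal sketch, where the function name and the default threshold argument are illustrative assumptions (the patent reports 35 as an empirically found value):

```python
def should_render_volume_light(sun_view_depth, threshold=35.0):
    # S103 culling test: the sun's view-space depth is saved alongside its
    # screen coordinate; a depth value below the threshold indicates the
    # light source is off-screen, so volume-light processing is skipped.
    return sun_view_depth >= threshold
```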
S104, calculating the volume light based on the mask map;
It should be noted that volume light effects are common in three-dimensional games. Common implementations include the BillBoard patch algorithm, the radial blur algorithm and ray tracing; among these, ray tracing is mainly implemented with the Ray-Marching algorithm.
The basic idea of the algorithm is as follows:
(1) casting a number of rays from the eye toward the scene;
(2) intercepting the segment of each ray that falls within the volume light and advancing by a fixed step to sample positions along that segment;
(3) calculating the scattered light intensity at each sample and adding all the results together to obtain the light intensity at the current position.
As shown in fig. 2, the participating medium being rendered is not a surface: every point on the path the ray passes through contributes to the pixel's color. Starting from the starting point, the brightness must be sampled each time a point is advanced along the ray, and the sum of the brightness at the sampled points gives the final color of the pixel. Through this series of samples, a long light column is scattered outward, centered on the light source position.
Specifically, the volume light is composed of many pixels;
subtracting the light source's normalized screen-projected coordinate from the texture coordinate of the current pixel gives the screen texture offset, i.e. the distance between the light source's screen projection and the pixel's position;
dividing the texture offset by the number of samples gives the texture offset step of each sample;
taking the current pixel as the starting point, sampling proceeds step by step along the light source direction, accumulating the sampled color each time;
after sampling, the accumulation is averaged over the number of samples to obtain the final color of the current point.
Since the participating medium being rendered is not a surface, every point on the ray's path contributes to the pixel's color; in principle the brightness should be sampled at every point advanced along the ray, with the sum giving the pixel's final color.
In practice, however, to reduce the load, the present application uniformly selects a preset number of points between the light source and the pixel and calculates only their contributions to the pixel's color, instead of the contribution of every point on the ray.
Furthermore, after the whole screen is sampled, a beam radiating outward along the light source direction is generated.
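The stepping-and-averaging loop above can be sketched per pixel as follows. This is a CPU-side Python model of the shader arithmetic; `sample(u, v)` stands for reading the light source mask map at a texture coordinate, and all names and the sample count are illustrative assumptions:

```python
def beam_color(sample, uv, light_uv, num_samples=64):
    # Screen texture offset: pixel coordinate minus the light source's
    # normalized screen projection.
    off_u = uv[0] - light_uv[0]
    off_v = uv[1] - light_uv[1]
    # Texture offset step of each sample.
    du = off_u / num_samples
    dv = off_v / num_samples
    u, v = uv
    acc = [0.0, 0.0, 0.0]
    for _ in range(num_samples):
        # Step from the pixel toward the light source, accumulating the
        # mask-map colour at each sample position.
        u -= du
        v -= dv
        c = sample(u, v)
        acc[0] += c[0]
        acc[1] += c[1]
        acc[2] += c[2]
    # Average the accumulated colour over the number of samples to get
    # the pixel's final beam colour.
    return tuple(a / num_samples for a in acc)
```

Running this for every screen pixel yields the outward-radiating beam: pixels whose march path crosses bright mask texels accumulate light, others stay dark.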
The calculation of the volume light and the generation of the light source mask map are performed in one pass.
Of course, the above flow only generates a simple light-pillar effect and still needs improvement; for example, the intensity of the volume light needs to be controlled.
Specifically, the intensity of the volume light can be controlled simply by multiplying the color value calculated for each pixel by a control factor in the range [0, 1]: a factor of 0 removes the volume light entirely, and a factor of 1 gives the strongest volume light.
S105, smoothing the volume light through a preset algorithm;
Conventionally, Gaussian blur is performed by an additional processing camera, but that still carries some resource overhead for the scene rendering. For a simple approximate smoothing, a sine function is used for the interpolation instead, yielding a clearly visible improvement.
The length len of the texture offset coordinate is calculated;
the current brightness color is multiplied by (1 - sin(len)) for smoothing, so that the light intensity gradually fades the farther the pixel is from the light source.
And S106, fusing the volume light with the scene map and outputting the final scene.
The color value of the original scene map is read first, and the color value calculated for the volume light is added to it to obtain the target color value, giving the final effect.
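The fusion step is a per-pixel additive blend; a minimal sketch, where the clamp to [0, 1] is an assumption for displayable output (the patent only specifies the addition):

```python
def compose(scene_rgb, volume_rgb):
    # S106: add the smoothed volume-light colour onto the original scene
    # colour, clamping each channel to the displayable [0, 1] range.
    return tuple(min(s + v, 1.0) for s, v in zip(scene_rgb, volume_rgb))
```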
Thus, the method uses post-processing of three-dimensional scene rendering to enhance the light-perception effect of the light source on the scene model. In the specific rendering flow, the application generates the mask quickly, needing no separate rendering pass: it is executed in the same pass as the volume light calculation. Meanwhile, late-stage Gaussian blur (which requires two passes, horizontal and vertical) is not adopted; instead a cheap sine function provides a simple smooth transition, saving two rendering passes and roughly doubling rendering efficiency compared with the conventional method. The scene effect is thus improved without adding much rendering load.
Based on the second aspect of the present application, an OpenSceneGraph-based volume light generation apparatus is provided; referring to fig. 3, it includes:
a framework building module 31, configured to build a deferred rendering framework and generate a scene map and a depth map;
a mask map generation module 32, configured to generate a light source mask map based on the scene map and the depth map;
a culling module 33, configured to cull the scene when the light source is not on the screen;
a calculation module 34, configured to calculate the volume light based on the mask map;
a smoothing module 35, configured to smooth the volume light through a preset algorithm;
and a fusion module 36, configured to fuse the volume light with the scene map and output the final scene.
Based on the third aspect of the present application, an OpenSceneGraph-based volume light generation device is provided; referring to fig. 4, it includes:
a processor 41, and a memory 42 connected to the processor;
the memory 42 is configured to store a computer program at least for executing the OpenSceneGraph-based volume light generation method according to any embodiment of the present application;
and the processor 41 is configured to call and execute the computer program in the memory.
Based on the fourth aspect of the present application, a storage medium is provided; the storage medium stores a computer program which, when executed by a processor, implements the steps of the OpenSceneGraph-based volume light generation method according to any embodiment of the present application.
It is understood that the same or similar parts in the above embodiments may be mutually referred to, and the same or similar parts in other embodiments may be referred to for the content which is not described in detail in some embodiments.
It should be noted that the terms "first," "second," and the like in the description of the present invention are used for descriptive purposes only and are not to be construed as indicating or implying relative importance. Further, in the description of the present invention, the meaning of "a plurality" means at least two unless otherwise specified.
Any process or method descriptions in flow charts or otherwise described herein may be understood as representing modules, segments, or portions of code which include one or more executable instructions for implementing specific logical functions or steps of the process, and alternate implementations are included within the scope of the preferred embodiment of the present invention in which functions may be executed out of order from that shown or discussed, including substantially concurrently or in reverse order, depending on the functionality involved, as would be understood by those reasonably skilled in the art of the present invention.
It should be understood that portions of the present invention may be implemented in hardware, software, firmware, or a combination thereof. In the above embodiments, the various steps or methods may be implemented in software or firmware stored in memory and executed by a suitable instruction execution system. For example, if implemented in hardware, as in another embodiment, any one or combination of the following techniques, which are known in the art, may be used: a discrete logic circuit having a logic gate circuit for implementing a logic function on a data signal, an application specific integrated circuit having an appropriate combinational logic gate circuit, a Programmable Gate Array (PGA), a Field Programmable Gate Array (FPGA), or the like.
It will be understood by those skilled in the art that all or part of the steps carried by the method for implementing the above embodiments may be implemented by hardware related to instructions of a program, which may be stored in a computer readable storage medium, and when the program is executed, the program includes one or a combination of the steps of the method embodiments.
In addition, functional units in the embodiments of the present invention may be integrated into one processing module, or each unit may exist alone physically, or two or more units are integrated into one module. The integrated module can be realized in a hardware mode, and can also be realized in a software functional module mode. The integrated module, if implemented in the form of a software functional module and sold or used as a stand-alone product, may also be stored in a computer readable storage medium.
The storage medium mentioned above may be a read-only memory, a magnetic or optical disk, etc.
In the description herein, references to the description of the term "one embodiment," "some embodiments," "an example," "a specific example," or "some examples," etc., mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the invention. In this specification, the schematic representations of the terms used above do not necessarily refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
Although embodiments of the present invention have been shown and described above, it is understood that the above embodiments are exemplary and should not be construed as limiting the present invention, and that variations, modifications, substitutions and alterations can be made to the above embodiments by those of ordinary skill in the art within the scope of the present invention.

Claims (10)

1. A method for generating volume light based on OpenSceneGraph, characterized by comprising the following steps:
building a deferred rendering framework, and generating a scene map and a depth map;
generating a light source mask map based on the scene map and the depth map;
culling the scene when the light source is not on the screen;
calculating a light superposition effect based on the mask map to obtain volume light, wherein the calculation of the volume light and the generation of the light source mask map are performed in one pass;
smoothing the volume light through a preset algorithm;
and fusing the volume light with the scene map to output the final scene.
2. The OpenSceneGraph-based volume light generation method of claim 1, wherein generating a light source mask map based on the scene map and the depth map comprises:
drawing the scene light source and the sky with depth writing disabled;
drawing the scene in background-first order;
after the scene is drawn, reading the red component of the mask color from the depth map, the red component being the depth information;
and judging whether the depth information is greater than 1: if so, the color is taken from the corresponding pixel of the original scene; if not, black is returned.
3. The OpenSceneGraph-based volume light generation method according to claim 2, wherein the culling the scene when the light source is not on the screen comprises:
saving the view-space depth of the sun together with its screen coordinate;
judging whether the sun's depth value is less than 35;
and if so, deeming the light source off-screen and processing no volume light.
4. The OpenSceneGraph-based volume light generation method according to claim 2, wherein the calculating a light superposition effect based on the mask map to obtain volume light comprises:
subtracting the light source's normalized screen-projected coordinate from the texture coordinate of the current pixel to obtain a screen texture offset;
dividing the texture offset by the number of samples to obtain the texture offset step of each sample;
taking the current pixel as a starting point, sampling step by step along the light source direction and accumulating the color value of each sample;
after sampling, dividing the accumulated color value by the number of samples to obtain the final color of the current pixel;
and after the whole screen is sampled, generating a beam radiating outward from the light source.
5. The OpenSceneGraph-based volume light generation method according to claim 4, further comprising:
controlling the intensity of the volume light.
6. The OpenSceneGraph-based volume light generation method according to claim 5, wherein the controlling the intensity of the volume light comprises:
multiplying the color value of the pixel by a control factor to obtain the target color value, thereby controlling the intensity of the volume light;
wherein the control factor ranges over [0, 1]: a value of 0 removes the volume light entirely, and a value of 1 gives the strongest volume light.
7. The OpenSceneGraph-based volumetric light generation method according to claim 4, wherein performing the volumetric light smoothing through a preset algorithm comprises:
calculating the length len of the texture offset coordinate of each pixel;
multiplying the brightness color of the current pixel by (1 - sin(len)) to obtain the smoothed target brightness color, so that the light intensity fades gradually the farther a pixel is from the light source.
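The (1 - sin(len)) falloff of claim 7 can be sketched as follows; this is an illustrative Python rendering of the formula, with a hypothetical function name. Note that for offsets in normalized [0, 1] texture space, len never exceeds sqrt(2) < pi/2, so the weight stays in (0, 1] and decreases monotonically with distance:

```python
import math

def smooth_falloff(color, offset_len):
    """Attenuate a pixel's brightness color by (1 - sin(len)), as in claim 7,
    so that pixels farther from the light source fade out smoothly."""
    weight = 1.0 - math.sin(offset_len)
    return tuple(c * weight for c in color)
```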
8. An OpenSceneGraph-based volumetric light generation device, comprising:
a framework building module, configured to build a deferred rendering framework and generate a scene map/depth map;
a mask map generation module, configured to generate a light source mask map based on the scene map and the depth map;
a culling module, configured to cull the scene when the light source is not on the screen;
a calculation module, configured to calculate the volumetric light based on the mask map;
a smoothing module, configured to perform volumetric light smoothing through a preset algorithm;
a fusion module, configured to fuse the volumetric light with the scene map and output the final scene.
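The modules of claim 8 compose into a single post-processing pass. The sketch below shows one possible way to wire them together; it is purely illustrative (the patented device is an OSG rendering pipeline, not a Python class), and every stage name is a hypothetical placeholder:

```python
class VolumetricLightPipeline:
    """Illustrative composition of the claim-8 modules; each stage is a
    callable supplied by the caller."""

    def __init__(self, build_frame, make_mask, is_off_screen,
                 compute_light, smooth, fuse):
        self.build_frame = build_frame      # deferred rendering: scene + depth maps
        self.make_mask = make_mask          # light-source mask from scene/depth
        self.is_off_screen = is_off_screen  # culling test (claim 3)
        self.compute_light = compute_light  # radial-sampling accumulation (claim 4)
        self.smooth = smooth                # (1 - sin(len)) falloff (claim 7)
        self.fuse = fuse                    # blend volumetric light into the scene

    def render(self, scene_input):
        scene_map, depth_map = self.build_frame(scene_input)
        if self.is_off_screen(scene_input):
            return scene_map  # light off-screen: skip the volumetric pass
        mask = self.make_mask(scene_map, depth_map)
        light = self.smooth(self.compute_light(mask))
        return self.fuse(scene_map, light)
```

The early return mirrors the culling of claim 3: when the light source is off-screen, the scene map is emitted unchanged and the whole volumetric-light branch is skipped.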
9. An OpenSceneGraph-based volumetric light generation device, comprising:
a processor, and a memory coupled to the processor;
the memory is configured to store a computer program for performing at least the OpenSceneGraph-based volumetric light generation method of any one of claims 1-7;
the processor is configured to call and execute the computer program in the memory.
10. A storage medium storing a computer program which, when executed by a processor, performs the steps of the OpenSceneGraph-based volumetric light generation method according to any one of claims 1-7.
CN202011105083.2A 2020-10-15 2020-10-15 OpenSceneGraph-based volumetric light generation method, device, equipment and storage medium Active CN112233220B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011105083.2A CN112233220B (en) 2020-10-15 2020-10-15 OpenSceneGraph-based volumetric light generation method, device, equipment and storage medium

Publications (2)

Publication Number Publication Date
CN112233220A true CN112233220A (en) 2021-01-15
CN112233220B CN112233220B (en) 2023-12-15

Family

ID=74118372

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011105083.2A Active CN112233220B (en) 2020-10-15 2020-10-15 OpenSceneGraph-based volumetric light generation method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN112233220B (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112967201A (en) * 2021-03-05 2021-06-15 厦门美图之家科技有限公司 Image illumination adjusting method and device, electronic equipment and storage medium
CN113256780A (en) * 2021-07-06 2021-08-13 广州中望龙腾软件股份有限公司 Dynamic sectioning method of tool body, intelligent terminal and storage device
CN114170367A (en) * 2021-12-10 2022-03-11 北京优锘科技有限公司 Method, apparatus, storage medium, and device for infinite-line-of-sight pyramidal heatmap rendering
WO2024055837A1 (en) * 2022-09-15 2024-03-21 北京字跳网络技术有限公司 Image processing method and apparatus, and device and medium

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2002084451A2 (en) * 2001-02-06 2002-10-24 Victor Demjanenko Vector processor architecture and methods performed therein
DE102005061590A1 (en) * 2005-05-27 2006-11-30 Spin E.V. Lighting simulating method for technical lighting system, involves computing color for pixels to represent lighting of scenery and using grey tones for true-color representation or color values for reproduction of lighting
US20140160487A1 (en) * 2012-12-10 2014-06-12 The Johns Hopkins University Real-time 3d and 4d fourier domain doppler optical coherence tomography system

Non-Patent Citations (6)

* Cited by examiner, † Cited by third party
Title
BEIBEI HAN: "OSG based real-time volume rendering algorithm for electromagnetic environment", 2018 International Conference on Computer Science and Software Engineering, pages 41-52 *
YU PING: "Research and Application of a GPU-Accelerated Radiosity Lighting Algorithm", Foreign Electronic Measurement Technology, vol. 35, no. 11, pages 46-52 *
HE QIUHAI; PENG YUECHENG; HUANG XINYUAN: "Research on High-Realism Bamboo Forest Simulation in Landscape Representation", Computer Engineering and Applications, vol. 51, no. 03, pages 175-180 *
ZHANG LIQIANG; ZHENG CHANGWEN; HU XIAOHUI; LYU PIN; WU JIAZE: "Design and Implementation of an HLA-Based Satellite Simulation System", Journal of System Simulation, vol. 21, no. 20, pages 6487-6491 *
SHEN HELONG: "Modeling and Realistic Rendering of Visual Special Effects in a Maritime Search and Rescue Simulator", China Doctoral Dissertations Full-text Database, Information Science and Technology, no. 5, pages 1-163 *
XIANG YU; XU SEN: "Real-Time Rendering of Dynamic Water Surfaces Based on Perlin Noise", Computer Engineering and Design, vol. 34, no. 11, pages 3966-3970 *

Also Published As

Publication number Publication date
CN112233220B (en) 2023-12-15

Similar Documents

Publication Publication Date Title
CN112233220B (en) OpenSceneGraph-based volumetric light generation method, device, equipment and storage medium
CN110443893B (en) Large-scale building scene rendering acceleration method, system, device and storage medium
TWI475513B (en) Method and apparatus for real-time luminosity dependent subdivision
Schütz et al. Real-time continuous level of detail rendering of point clouds
CN105912234B (en) The exchange method and device of virtual scene
US20140002458A1 (en) Efficient rendering of volumetric elements
US20050220358A1 (en) Method of generating blur
JP6543508B2 (en) Image processing method and apparatus
CN111476877B (en) Shadow rendering method and device, electronic equipment and storage medium
CN113900797B (en) Three-dimensional oblique photography data processing method, device and equipment based on illusion engine
CN108986232B (en) Method for presenting AR environment picture in VR display device
US11494966B2 (en) Interactive editing of virtual three-dimensional scenes
US20230230311A1 (en) Rendering Method and Apparatus, and Device
KR102250254B1 (en) Method and apparatus for processing image
CN111968214A (en) Volume cloud rendering method and device, electronic equipment and storage medium
Liu et al. Cinematic rendering in UE4 with real-time ray tracing and denoising
CN117501312A (en) Method and device for graphic rendering
JP2002183228A (en) System and method for simplifying surface description and wire-frame description of geometric model
US20240087219A1 (en) Method and apparatus for generating lighting image, device, and medium
Cabeleira Combining rasterization and ray tracing techniques to approximate global illumination in real-time
CN112184922A (en) Fusion method, device and equipment of two-dimensional video and three-dimensional scene and storage medium
KR102306774B1 (en) Method and apparatus for processing image
CN116152408A (en) Screen door transparency-based rendering method, device and system for passing through model
WO2024037116A9 (en) Three-dimensional model rendering method and apparatus, electronic device and storage medium
Li et al. Stage Lighting Simulation Based on Epipolar Sampling

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CP03 Change of name, title or address

Address after: Floor 13, 14 and 15, building 3, lianfei building, No.1, Fenghua Road, high tech Development Zone, Luoyang City, Henan Province, 471000

Patentee after: Zhongzhi Software Co.,Ltd.

Country or region after: China

Address before: Floor 13, 14 and 15, building 3, lianfei building, No.1, Fenghua Road, high tech Development Zone, Luoyang City, Henan Province, 471000

Patentee before: Luoyang Zhongzhi Software Technology Co.,Ltd.

Country or region before: China
