CN112967366A - Volume light rendering method and device, electronic equipment and storage medium

Info

Publication number
CN112967366A
Authority
CN
China
Prior art keywords
map
value
depth
volume light
dimensional model
Prior art date
Legal status
Granted
Application number
CN202110273388.2A
Other languages
Chinese (zh)
Other versions
CN112967366B (en)
Inventor
易律
Current Assignee
Beijing Shell Wood Software Co., Ltd.
Original Assignee
Beijing Shell Wood Software Co., Ltd.
Priority date
Filing date
Publication date
Application filed by Beijing Shell Wood Software Co., Ltd.
Priority to CN202110273388.2A
Publication of CN112967366A
Application granted
Publication of CN112967366B
Current legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T15/005 General purpose rendering architectures
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/50 Controlling the output signals based on the game progress
    • A63F13/52 Controlling the output signals based on the game progress involving aspects of the displayed game scene
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/60 Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T15/04 Texture mapping
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T15/50 Lighting effects
    • G06T15/60 Shadow generation

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Computer Graphics (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Generation (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The application provides a volume light rendering method and device, electronic equipment and a storage medium, which are used for solving the problem of ghosting in rendered volume light maps. The method comprises the following steps: obtaining a depth map and a shadow map of a three-dimensional model, wherein the depth map stores depth values obtained by depth-sampling the three-dimensional model in the direction starting from a camera viewpoint, and the shadow map stores depth values obtained by shadow-sampling the three-dimensional model in the direction starting from a light source viewpoint; dividing each depth value in the depth map along the direction starting from the camera viewpoint using a plurality of sampling points to obtain step values; randomly offsetting the step values, and comparing the randomly offset step values with the depth values in the shadow map to obtain comparison results; selecting and superposing color values according to the comparison results to obtain a superposed result map; and rendering the three-dimensional model in the direction starting from the camera viewpoint according to the superposed result map to obtain a volume light map of the three-dimensional model.

Description

Volume light rendering method and device, electronic equipment and storage medium
Technical Field
The present application relates to the field of graphics rendering and image processing, and in particular, to a volume light rendering method and apparatus, an electronic device, and a storage medium.
Background
Volume light is a lighting effect used in games to represent what happens when light strikes an occluding object: the light appears to leak radially around the object, and the leaked light shafts can be understood as volume light. A concrete example: when sunlight shines on a tree through the clouds, it passes through the gaps between the leaves to form light shafts, and the shafts are scattered by fog. The effect is called volume light because such shafts give the player a stronger sense of spatial volume than light in conventional games, making the scene feel more realistic.
At present, volume light in a game is generally rendered on a three-dimensional model by accumulating the results of multiple frames and then averaging them. In practice, however, it has been found that when an object model or character model moves, that is, when the picture content of several adjacent frames of a video changes greatly, the result obtained by accumulating and averaging multiple frames has a large error, which causes ghosting in the rendered volume light map.
Disclosure of Invention
An object of the embodiments of the present application is to provide a volume light rendering method, a volume light rendering device, an electronic device, and a storage medium, which are used to solve the problem of ghosting in rendered volume light maps.
The embodiment of the application provides a volume light rendering method, which comprises the following steps: obtaining a depth map and a shadow map of a three-dimensional model, wherein the depth map stores depth values obtained by depth-sampling the three-dimensional model in the direction starting from a camera viewpoint, and the shadow map stores depth values obtained by shadow-sampling the three-dimensional model in the direction starting from a light source viewpoint; dividing each depth value in the depth map along the direction starting from the camera viewpoint using a plurality of sampling points to obtain step values, wherein a step value is the interval distance between adjacent sampling points among the plurality of sampling points; randomly offsetting the step values, and comparing the randomly offset step values with the depth values in the shadow map to obtain comparison results; selecting and superposing color values according to the comparison results to obtain a superposed result map; and rendering the three-dimensional model in the direction starting from the camera viewpoint according to the superposed result map to obtain a volume light map of the three-dimensional model. In this implementation, each depth value in the depth map of the three-dimensional model is divided along the direction starting from the camera viewpoint using sampling points to obtain step values; the step values corresponding to each depth value are randomly offset and compared with the depth values in the shadow map to obtain comparison results; selection, superposition and rendering are then performed according to the comparison results to obtain a ghosting-free volume light map. That is, the depth value corresponding to each pixel point is used as the criterion for screening that pixel point's volume light samples, which effectively reduces the error in the volume light result and avoids ghosting in the rendered volume light map.
Optionally, in this embodiment of the present application, obtaining the depth map and the shadow map of the three-dimensional model includes: rendering the three-dimensional model in the direction starting from the camera viewpoint to obtain the depth map, and rendering the three-dimensional model in the direction starting from the light source viewpoint to obtain the shadow map. In this implementation, the depth map and the shadow map of the three-dimensional model are rendered in advance, which avoids having to render the three-dimensional model at the moment the depth map and the shadow map are needed, effectively reduces the amount of computation during real-time rendering, and improves the real-time rendering speed.
Optionally, in this embodiment of the present application, randomly offsetting the step values and comparing the randomly offset step values with each depth value in the shadow map includes: extracting, from a preset sampling noise map, a random offset value in the direction starting from the camera viewpoint; superposing the step value of each depth value of the depth map with the random offset value to obtain a superposed step value corresponding to each depth value; and comparing the superposed step value corresponding to each depth value with each depth value in the shadow map. In this implementation, ray marching normally requires a high sampling count; under a limited sampling count, the step value of each depth value of the depth map is superposed with the random offset value before sampling, and the sampling results of multiple frames are superposed, which improves sampling precision while reducing the rendering computation.
Optionally, in this embodiment of the present application, selecting and superposing color values according to the comparison result includes: for each pixel point that the current camera needs to render, accumulating the volume light color value of each sampling point in the depth map according to the comparison result to obtain a volume light accumulation result map; and obtaining a historical volume light rendering map from the cache, and selecting and superposing color values between the volume light accumulation result map and the historical volume light rendering map. In this implementation, color values are selected and superposed between the volume light accumulation result map and the historical volume light rendering map, with the depth value stored in the map used as the criterion for whether to superpose, which effectively reduces the sampling error of the volume light and avoids ghosting in the rendered volume light map.
Optionally, in this embodiment of the present application, accumulating the volume light color value of each sampling point in the depth map according to the comparison result includes: judging whether the comparison result is that the step value of the sampling point after random offset is larger than the depth value of the sampling point in the shadow map; if so, accumulating the volume light color value of the sampling point; and if not, not accumulating the volume light color value of the sampling point. In the implementation process, the depth value in the map is used as a judgment standard for accumulating the volume light color value, so that the volume light color value of each pixel point is effectively obtained.
Optionally, in this embodiment of the present application, selecting and superposing color values between the volume light accumulation result map and the historical volume light rendering map includes: for each pixel point in the volume light accumulation result map, judging whether the difference between the volume light color value of the pixel point in the volume light accumulation result map and the alpha value of the pixel point in the historical volume light rendering map is smaller than a preset threshold, where the alpha value is the depth value of a sampling point in the direction starting from the camera viewpoint; and if so, superposing the volume light color value of the pixel point in the volume light accumulation result map with the volume light color value of the pixel point in the historical volume light rendering map.
Optionally, in this embodiment of the present application, after the volume light map of the three-dimensional model is obtained, the method further includes: synthesizing an animation video of the three-dimensional model according to the volume light map of the three-dimensional model. In this implementation, the ghosting-free volume light map is used to produce the animation video of the three-dimensional model, which improves the quality of the animation video and avoids ghosting in the animation video.
An embodiment of the present application further provides a volume light rendering device, including: a model map obtaining module, used for obtaining a depth map and a shadow map of the three-dimensional model, where the depth map stores depth values obtained by depth-sampling the three-dimensional model in the direction starting from a camera viewpoint, and the shadow map stores depth values obtained by shadow-sampling the three-dimensional model in the direction starting from a light source viewpoint; a depth map dividing module, used for dividing each depth value in the depth map along the direction starting from the camera viewpoint using a plurality of sampling points to obtain step values, where a step value is the interval distance between adjacent sampling points among the plurality of sampling points; a comparison result obtaining module, used for randomly offsetting the step values and comparing the randomly offset step values with each depth value in the shadow map to obtain a comparison result; a result map obtaining module, used for selecting and superposing color values according to the comparison result to obtain a superposed result map; and a volume light map obtaining module, used for rendering the three-dimensional model in the direction starting from the camera viewpoint according to the superposed result map to obtain a volume light map of the three-dimensional model.
Optionally, in an embodiment of the present application, the model map obtaining module includes: and the three-dimensional model rendering module is used for rendering the three-dimensional model in the direction taking the camera viewpoint as the starting point to obtain the depth map, and rendering the three-dimensional model in the direction taking the light source viewpoint as the starting point to obtain the shadow map.
Optionally, in an embodiment of the present application, the comparison result obtaining module includes: the noise map extraction module is used for extracting a random offset value taking a camera viewpoint as a starting point direction from a preset sampling noise map; the sampling random offset module is used for superposing the stepping value of each depth value of the depth map with the random offset value to obtain a superposed stepping value corresponding to each depth value; and the step depth comparison module is used for comparing the superposition step value corresponding to each depth value with each depth value in the shadow map.
Optionally, in an embodiment of the present application, the result map obtaining module includes: the accumulation result obtaining module is used for accumulating the volume light color value of each sampling point in the depth map according to the comparison result aiming at each pixel point needing to be rendered by the current camera to obtain a volume light accumulation result map; and the map selection and superposition module is used for acquiring the historical volume light rendering map from the cache, and selecting and superposing the color value of the volume light accumulation result map and the historical volume light rendering map.
Optionally, in an embodiment of the present application, the accumulated result obtaining module includes: the comparison result judging module is used for judging whether the comparison result is that the step value of the sampling point after random offset is larger than the depth value of the sampling point in the shadow map; the comparison result affirming module is used for accumulating the volume light color value of the sampling point if the comparison result is that the stepping value of the sampling point after random offset is larger than the depth value of the sampling point in the shadow map; and the comparison result negation module is used for not accumulating the volume light color value of the sampling point if the comparison result is that the step value of the sampling point after the random offset is not greater than the depth value of the sampling point in the shadow map.
Optionally, in an embodiment of the present application, the map selecting and overlaying module includes: a map difference judging module, used for judging, for each pixel point in the volume light accumulation result map, whether the difference between the volume light color value of the pixel point in the volume light accumulation result map and the alpha value of the pixel point in the historical volume light rendering map is smaller than a preset threshold, where the alpha value is the depth value of a sampling point in the direction starting from the camera viewpoint; and a light color value superposition module, used for superposing the volume light color value of the pixel point in the volume light accumulation result map with the volume light color value of the pixel point in the historical volume light rendering map if the difference is smaller than the preset threshold.
Optionally, in an embodiment of the present application, the volume light rendering apparatus further includes: and the animation video synthesis module is used for synthesizing the animation video of the three-dimensional model according to the volume light map of the three-dimensional model.
An embodiment of the present application further provides an electronic device, including: a processor and a memory, the memory storing processor-executable machine-readable instructions, the machine-readable instructions when executed by the processor performing the method as described above.
Embodiments of the present application also provide a storage medium having a computer program stored thereon, where the computer program is executed by a processor to perform the method as described above.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings required by the embodiments are briefly described below. It should be understood that the following drawings illustrate only some embodiments of the present application and therefore should not be considered as limiting its scope; those skilled in the art can obtain other related drawings from these drawings without inventive effort.
Fig. 1 is a schematic flow chart of a volume light rendering method provided in an embodiment of the present application;
FIG. 2 is a schematic diagram illustrating rendering of a three-dimensional model according to an embodiment of the present application;
FIG. 3 is a diagram illustrating the division of step values provided by an embodiment of the present application;
FIG. 4 is a schematic diagram illustrating obtaining a map and selecting an overlay according to an embodiment of the present application;
fig. 5 is a schematic structural diagram of a volume light rendering apparatus according to an embodiment of the present application.
Detailed Description
The technical solution in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application.
Before introducing the volume light rendering method provided in the embodiment of the present application, some concepts related in the embodiment of the present application are introduced:
Three-dimensional model, which refers to a three-dimensional polygonal representation of an object, typically displayed by a computer or other display device; the displayed object may be a real-world entity or a fictitious object. Of course, objects that exist in the physical world can be represented by three-dimensional models, and such objects may be opaque.
Ray marching, which is a fast rendering method for real-time scenes. Taking ray-marching sampling in the direction starting from the camera viewpoint as an example: a ray, also called a marching ray, is emitted from the camera position toward each pixel point of the screen; the ray advances by a certain step length, and at each step it is detected whether the current position lies on an object surface, with the advance amplitude adjusted accordingly, until the ray reaches the surface, after which the color value is computed as in general ray tracing. If no object surface can be reached, it can be determined that no object corresponds to that pixel point.
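To make the marching loop concrete, the following minimal Python sketch walks one ray until it reaches a surface or escapes. It is illustrative only and not part of this application's disclosure; the names `scene_sdf` and `hit_eps`, and the unit-sphere stand-in scene, are assumptions.

```python
import numpy as np

def scene_sdf(p):
    # Stand-in scene: signed distance to a unit sphere at the origin.
    return np.linalg.norm(p) - 1.0

def ray_march(origin, direction, max_steps=128, hit_eps=1e-3, max_dist=100.0):
    """Advance along the ray until an object surface is reached or the ray escapes."""
    t = 0.0
    for _ in range(max_steps):
        p = origin + t * direction
        d = scene_sdf(p)           # distance from the current position to the scene
        if d < hit_eps:
            return t               # surface reached: shade as in ordinary ray tracing
        t += d                     # the advance amplitude adapts to that distance
        if t > max_dist:
            break
    return None                    # no surface reached: no object for this pixel
```

For example, `ray_march(np.array([0.0, 0.0, -3.0]), np.array([0.0, 0.0, 1.0]))` returns roughly 2.0, the distance to the sphere's front face.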
Depth map (depth texture), which is a map generated during rendering. During real-time rendering, each frame is first written into a depth buffer; the depth buffer serves as an input to subsequent rendering steps, and the per-frame result stored in the depth buffer is used as a depth map, which can be used to render certain special effects (e.g., volume light).
Shadow Texture, also known as Shadow depth map (Shadow depth), refers to a depth map used to render shadows under different light sources.
A server refers to a device that provides computing services over a network, for example an x86 server or a non-x86 server; non-x86 servers include mainframes, minicomputers, and UNIX servers.
It should be noted that the volume light rendering method provided in the embodiment of the present application may be executed by an electronic device, where the electronic device refers to a device terminal having a function of executing a computer program or the server described above, and the device terminal includes, for example: a smart phone, a Personal Computer (PC), a tablet computer, a Personal Digital Assistant (PDA), a Mobile Internet Device (MID), a network switch or a network router, and the like.
Before introducing the volume light rendering method provided in the embodiment of the present application, the application scenarios to which it applies are introduced; these include but are not limited to: game scenes, in which the volume light in a game picture can be rendered using this volume light rendering method; animation scenes, in which ghosting-free animation images or animation videos can be produced using this volume light rendering method; and so on.
Please refer to the flow chart of the volume light rendering method provided by the embodiment of the present application shown in fig. 1. The main idea of the volume light rendering method is as follows: divide each depth value in the depth map of the three-dimensional model along the direction starting from the camera viewpoint using sampling points to obtain step values; randomly offset the step values corresponding to each depth value in the depth map, and compare the randomly offset step values with the depth values in the shadow map to obtain comparison results; then perform selection, superposition and rendering according to the comparison results to obtain a ghosting-free volume light map. That is, the depth value corresponding to each pixel point is used as the criterion for screening that pixel point's volume light samples, which effectively reduces the error in the volume light result and avoids ghosting in the rendered volume light map. The volume light rendering method may include:
step S110: and obtaining a depth map and a shadow map of the three-dimensional model.
Please refer to fig. 2, which is a schematic diagram illustrating rendering of the three-dimensional model according to an embodiment of the present application. The above step S110 may be implemented, for example, as follows: render the three-dimensional model using ray marching in the direction starting from the camera viewpoint to obtain the depth map; this can be understood as emitting a plurality of marching rays from the camera viewpoint to sample the three-dimensional model, each marching ray corresponding to the depth value of one pixel point in the depth map. The three-dimensional model can likewise be rendered in the direction starting from the light source viewpoint to obtain the shadow map; similarly, a plurality of marching rays are emitted from the light source viewpoint to depth-sample the three-dimensional model, each marching ray corresponding to the depth value of one pixel point in the shadow map. The depth value in the shadow map is the value of the sampling point closest to the light source viewpoint along the Z axis, so it is also called Z-depth, which denotes the depth value of the sampling point in the Z-axis direction.
It is understood that the above maps may each include a plurality of elements, where an element is the depth value of a pixel point (in the volume light map to be rendered) in a certain direction. For example, the depth map includes a depth value along each marching ray direction (where a marching ray starts at the camera viewpoint and ends at the contact point between the three-dimensional model and the ray), and the shadow map includes depth values along the Z-axis direction; thus, one pixel point corresponds to one depth value. The sampling points are virtual pixel points along a marching ray direction; they are used only during computation, and it is not their specific positions that are stored but the results computed from them. There may be multiple sampling points along one marching ray direction, so one depth value corresponds to multiple sampling points. A step value can simply be understood as the interval distance between a sampling point and its preceding adjacent sampling point; that is, every two adjacent sampling points among the plurality of sampling points define one step value, so the numbers of sampling points and step values are in an N to (N-1) relationship, where N denotes the number of sampling points. The depth map stores depth values obtained by depth-sampling the three-dimensional model in the direction starting from the camera viewpoint, and the shadow map stores depth values obtained by shadow-sampling the three-dimensional model in the direction starting from the light source viewpoint.
After step S110, step S120 is performed: dividing each depth value in the depth map along the direction starting from the camera viewpoint using a plurality of sampling points to obtain step values, where a step value is the interval distance between adjacent sampling points among the plurality of sampling points.
Please refer to fig. 3, which is a schematic diagram illustrating the division of step values provided in the embodiment of the present application. The above step S120 may be implemented, for example, as follows: obtain the depth value of each pixel point from the depth map and emit a marching ray (starting from the camera viewpoint and targeting the three-dimensional model, that is, ending at the pixel point on the three-dimensional model first contacted by the marching ray); divide the depth value into a plurality of segments along the marching ray using a plurality of sampling points, where each segment can be understood as a step value. That is, the step value is obtained by segmenting the depth value along one marching ray using a plurality of sampling points and taking the interval distance between a sampling point and its preceding adjacent sampling point. The number of sampling points can be set according to the specific situation (for example, 6 or 8); FIG. 3 shows the step values of 6 sampling points, numbered 1 to 6, along a marching ray. The world coordinate of the current pixel point on the near clipping plane of the camera is used as the starting point of the step values (in FIG. 3, the sampling point numbered 1 can be taken as the starting point), where the near clipping plane can be set according to the specific computational complexity; the depth value of the current pixel point is then used as the end point of the step values (in FIG. 3, the sampling point numbered 6 can be taken as the end point).
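As a sketch of this division, the helper below places N sampling points between the near-clipping-plane start and the depth-value end of one marching ray and returns the resulting step value. The names, the uniform spacing, and the unit-length ray direction are assumptions made for illustration, not the application's code.

```python
import numpy as np

def make_sample_points(ray_start_ws, ray_dir_ws, depth_value, num_samples=6):
    """Divide one pixel's depth value into segments along its marching ray.

    Returns the sample positions, their distances along the ray, and the
    step value (spacing between adjacent samples), matching the
    N points / N-1 step values relation. `ray_dir_ws` is assumed unit length.
    """
    step_value = depth_value / (num_samples - 1)
    ts = np.arange(num_samples) * step_value          # 0, step, 2*step, ...
    points = ray_start_ws[None, :] + ts[:, None] * ray_dir_ws[None, :]
    return points, ts, step_value
```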
After step S120, step S130 is performed: randomly offsetting the step values, and comparing the randomly offset step values with each depth value in the shadow map to obtain a comparison result.
There are many embodiments of the above step S130, including but not limited to the following:
In a first embodiment, a random offset value is obtained from a preset sampling noise map, and the random offset and the comparison are performed according to that random offset value; this may include:
step S131: and extracting a random offset value taking a camera view point as a starting point direction from the preset sampling noise map.
The preset sampling noise map is a matrix map formed by a plurality of random offset values. Because the random offset values are generated by random sampling (for example, timestamp data is collected and the random offset values are computed from the timestamps, or a built-in pseudo-random function is used to generate them), the map can be understood as sampling noise.
The above step S131 may be implemented, for example, as follows: obtain the preset sampling noise map from a local cache, and extract from it the random offset value corresponding to each pixel point in the direction starting from the camera viewpoint. The random offset value may lie within a preset range, for example -2 to 3, where -2 means that all sampling points corresponding to the pixel point are moved back by two units (a unit being the interval distance between two sampling points, so the sampling point numbered 4 in FIG. 3 would move to the position of the sampling point numbered 2, and so on for the other sampling points), and 3 means that all sampling points corresponding to the pixel point are moved by three units (i.e., three step values) along the direction starting from the camera viewpoint. The preset sampling noise map may be generated by sampling in advance and stored in the local cache.
Step S132: superposing the step value of each depth value of the depth map with the random offset value to obtain a superposed step value corresponding to each depth value.
The above step S132 may be implemented, for example, as follows: superpose the step value of each depth value of the depth map with the random offset value to obtain the superposed step value corresponding to each depth value. For example, in FIG. 3, the sampling points numbered 1 to 6 may all be superposed with the random offset value -2; after the superposition, the sampling point numbered 3 would be at the position of the sampling point numbered 1, the sampling point numbered 4 at the position of the sampling point numbered 2, and so on for the other sampling points.
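A sketch of this superposition, assuming the noise map is a 2D array of offsets expressed in units of the step value (e.g. in the -2 to 3 range) and indexed by the pixel's screen coordinates with wrap-around; `ts` is the array of sample distances from the previous sketch:

```python
import numpy as np

def offset_samples(ts, step_value, noise_map, px, py):
    """Shift all of one pixel's sample distances by its random offset value."""
    h, w = noise_map.shape
    offset_units = noise_map[py % h, px % w]   # per-pixel offset, e.g. -2 .. 3
    return ts + offset_units * step_value      # every sampling point moves together
```

With an offset of -2, the sample originally numbered 3 lands where number 1 was, as in the example above.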
Step S133: comparing the superposed step value corresponding to each depth value with each depth value in the shadow map.
The above step S133 may be implemented, for example, as follows: for ease of understanding and explanation, assume that the random offset value is 0, so the superposed sampling points numbered 1 to 6 remain at their original positions. The coordinates of each sampling point (e.g., in the camera viewpoint coordinate system or the world coordinate system) are converted into the light source viewpoint coordinate system, and finally the Z-axis component of the superposed step value corresponding to each depth value is compared with the corresponding depth value (i.e., Z-depth) in the shadow map. It can be understood that if the Z-axis component corresponding to the superposed step value is greater than the cached depth value of the shadow map (i.e., Z-depth), the sampling point is in shadow (e.g., the sampling points numbered 5 and 6); otherwise the sampling point is not in shadow (e.g., the sampling points numbered 1 to 4).
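The comparison itself can be sketched as below. The 4x4 world-to-light-space matrix `world_to_light`, the square shadow map of resolution `shadow_res`, and the greater-than depth convention are all assumptions for illustration:

```python
import numpy as np

def in_shadow(sample_ws, world_to_light, shadow_map, shadow_res):
    """Compare a sample's light-space Z component against the cached Z-depth."""
    p = world_to_light @ np.append(sample_ws, 1.0)
    p = p[:3] / p[3]                               # light-space position
    u = int(np.clip((p[0] * 0.5 + 0.5) * (shadow_res - 1), 0, shadow_res - 1))
    v = int(np.clip((p[1] * 0.5 + 0.5) * (shadow_res - 1), 0, shadow_res - 1))
    z_depth = shadow_map[v, u]                     # closest depth seen by the light
    return p[2] > z_depth                          # deeper than cached: in shadow
```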
In a second embodiment, the step value is added to a random value within a preset range, and the step value is compared after random offset, for example: generating a random value within a preset range, and adding the step value to the random value within the preset range, wherein the preset range is, for example, -2 to 3; then, overlapping the step value of each depth value of the depth map with the random value to obtain an overlapped step value corresponding to each depth value; finally, the overlay stepping value corresponding to each depth value is compared with each depth value in the shadow map. The technical principle of this embodiment is similar to that of the first embodiment, except that the random offset value of the second embodiment is randomly generated in real time instead of being generated and stored in advance, and the storage space can be effectively saved by using the real-time random generation method.
After step S130, step S140 is performed: and selecting and superposing color values according to the comparison result to obtain a superposed result map.
The implementation of step S140 may include:
step S141: and accumulating the volume light color value of each sampling point in the depth map according to the comparison result aiming at each pixel point needing to be rendered by the current camera to obtain a volume light accumulation result map.
The above step S141 may be implemented, for example, as follows: for each pixel point that the current camera needs to render, judge whether the comparison result is that the Z-axis component corresponding to the randomly offset step value of the sampling point is greater than the depth value (i.e., Z-depth) of the sampling point in the shadow map. If so, the sampling point is in shadow (e.g., the sampling points numbered 5 and 6), and its volume light color value should not be accumulated, that is, the sampling point's lighting contribution to the volume light color is 0. If not, the sampling point is not in shadow (e.g., the sampling points numbered 1 to 4), and its volume light color value should be accumulated; here the volume light color value of a sampling point can represent the scattering from the sampling point toward the light source viewpoint, and the accumulated volume light color value can represent the scattering from the camera viewpoint toward the light source viewpoint. After each pixel point that the current camera needs to render has been processed in this way, the accumulated volume light color value of each pixel point to be rendered is obtained, and the accumulated volume light color values of all pixel points form the volume light accumulation result map.
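Putting the comparison and the accumulation together for one pixel, reusing `in_shadow` from the sketch above; the constant `sample_color` scattering term is a stand-in assumption, not the application's lighting model:

```python
import numpy as np

def sample_color(p):
    # Stand-in scattering term: a constant in-scattered contribution per sample.
    return np.array([0.05, 0.045, 0.04])

def accumulate_volume_light(samples_ws, world_to_light, shadow_map, shadow_res):
    """Accumulate the volume light color over one pixel's sampling points."""
    color = np.zeros(3)
    for p in samples_ws:
        if in_shadow(p, world_to_light, shadow_map, shadow_res):
            continue                     # in shadow: lighting contribution is 0
        color += sample_color(p)         # scattering toward the light source viewpoint
    return color                         # one pixel of the accumulation result map
```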
Step S142: and acquiring a historical volume light rendering map from the cache, and selecting and superposing a volume light accumulation result map and the historical volume light rendering map with color values.
Please refer to fig. 4, which illustrates a schematic diagram of obtaining a map and selecting an overlay according to an embodiment of the present application. The embodiment of obtaining the historical volume light rendering map from the cache in step S142 is, for example: in practice, the memory of a Graphics Processing Unit (GPU) may be used as the cache of historical volume light rendering maps, so the historical volume light rendering map can be obtained from the GPU memory. It is to be understood that the historical volume light rendering map here is one or more volume light rendering maps, and the specific number of maps can be set according to the specific situation, for example 4, 6 or 8; when there are multiple historical volume light rendering maps, color values need to be selected and superposed for each of them. For ease of explanation, the process of superposing one volume light rendering map is explained below.
The embodiment of selecting and superposing color values between the volume light accumulation result map and the historical volume light rendering map in step S142 is, for example: for each pixel point in the volume light accumulation result map, judge whether the difference between the volume light color value of the pixel point in the volume light accumulation result map and the alpha value (Alpha) of the pixel point in the historical volume light rendering map is smaller than a preset threshold, where the preset threshold represents an acceptable error and may be set according to the specific situation, for example 4, 10 or 20. Here the alpha value is the depth value of the sampling point in the direction starting from the camera viewpoint, stored in the alpha channel; that is, the alpha value refers to the value previously stored in the alpha (A) channel of the RGBA image format. If the difference is smaller than the preset threshold, the volume light color of the pixel point in the historical volume light rendering map is usable, that is, the volume light color value of the pixel point in the volume light accumulation result map and the volume light color value of the pixel point in the historical volume light rendering map need to be superposed; if the difference is greater than or equal to the preset threshold, the volume light color of the pixel point in the historical volume light rendering map cannot be used, that is, no superposition is needed. It can be understood that using the historical volume light color when the difference is greater than or equal to the preset threshold would cause ghosting in the volume light rendering map.
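A per-pixel sketch of the selection and superposition follows. It mirrors the text literally (the value stored in the accumulation map is compared against the depth kept in the history map's alpha channel); treating both color buffers as single-channel intensity arrays is an assumption made for brevity:

```python
import numpy as np

def select_and_superpose(accum_value, history_color, history_alpha, threshold=10.0):
    """Superpose the history color only where the alpha-depth check passes."""
    diff = np.abs(accum_value - history_alpha)   # alpha channel stores a depth value
    usable = diff < threshold                    # stale history would cause ghosting
    out = accum_value.copy()
    out[usable] += history_color[usable]         # superpose where history is usable
    return out, usable
```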
After step S140, step S150 is performed: and rendering the three-dimensional model in the direction taking the camera viewpoint as a starting point according to the superimposed result map to obtain a volume light map of the three-dimensional model.
The above step S150 may be implemented, for example, as follows: each pixel value in the superposed result map is the result of superposition, so each pixel value must be averaged, that is, divided by the number of superposed volume light colors to obtain an average volume light color; the three-dimensional model is then rendered in the direction starting from the camera viewpoint according to the average volume light color to obtain the volume light map of the three-dimensional model.
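Before the final render pass, each pixel is divided by however many colors were superposed into it (the current frame plus the history maps that passed the check); a one-line sketch with assumed per-pixel counts:

```python
import numpy as np

def average_superposed(superposed, counts):
    """Divide each pixel value by the number of superposed volume light colors."""
    return superposed / np.maximum(counts, 1)    # avoid dividing by zero
```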
In this implementation, each depth value in the depth map of the three-dimensional model is divided along the direction starting from the camera viewpoint using sampling points to obtain step values; the step values corresponding to each depth value in the depth map are randomly offset, the randomly offset step values are compared with the depth values in the shadow map to obtain comparison results, and selection, superposition and rendering are then performed according to the comparison results to obtain a ghosting-free volume light map. That is, by using ray marching and multi-frame random sampling, and by taking the depth value corresponding to each pixel point as the criterion for screening that pixel point's volume light samples, the sampling error of the volume light is effectively reduced, and ghosting in the rendered volume light map is avoided.
Optionally, in this embodiment of the present application, after the volume light map of the three-dimensional model is obtained, the method further includes: synthesizing the volume light map of the three-dimensional model with the normally rendered lighting map to obtain an animation video of the three-dimensional model, and performing subsequent processing on the animation video to obtain a processed animation video. The subsequent processing here includes but is not limited to: dubbing the animation video, adding subtitles, image recognition, face recognition, and the like. In this implementation, the ghosting-free volume light map is used to produce the animation video of the three-dimensional model, which improves the quality of the animation video and avoids ghosting in the animation video.
Please refer to fig. 5, which illustrates a schematic structural diagram of a volume light rendering apparatus according to an embodiment of the present application. The embodiment of the present application provides a volume light rendering device 200, including:
the model map obtaining module 210 is configured to obtain a depth map and a shadow map of the three-dimensional model, where the depth map is a depth value obtained by performing depth sampling on the three-dimensional model in a direction with a camera viewpoint as a starting point, and the shadow map is a depth value obtained by performing shadow sampling on the three-dimensional model in a direction with a light source viewpoint as a starting point.
The depth map dividing module 220 is configured to divide each depth value in the depth map along the camera viewpoint as a starting point using a plurality of sampling points to obtain a step value, where the step value is an interval distance value between adjacent sampling points in the plurality of sampling points.
And a comparison result obtaining module 230, configured to perform random offset on the step value, and compare the step value after the random offset with each depth value in the shadow map to obtain a comparison result.
And a result map obtaining module 240, configured to select and superimpose the color values according to the comparison result, so as to obtain a superimposed result map.
And a volume light map obtaining module 250, configured to render the three-dimensional model in a direction where the camera viewpoint is a starting point according to the superimposed result map, so as to obtain a volume light map of the three-dimensional model.
Optionally, in an embodiment of the present application, the model map obtaining module includes:
and the three-dimensional model rendering module is used for rendering the three-dimensional model by taking the camera viewpoint as a starting point direction to obtain a depth map, and rendering the three-dimensional model by taking the light source viewpoint as a starting point direction to obtain a shadow map.
Optionally, in an embodiment of the present application, the comparison result obtaining module includes:
and the noise map extraction module is used for extracting a random offset value taking a camera viewpoint as a starting point direction from a preset sampling noise map.
And the sampling random offset module is used for superposing the stepping value of each depth value of the depth map with the random offset value to obtain a superposed stepping value corresponding to each depth value.
And the step depth comparison module is used for comparing the superposition step value corresponding to each depth value with each depth value in the shadow map.
Optionally, in an embodiment of the present application, the result map obtaining module includes:
and the accumulation result obtaining module is used for accumulating the volume light color value of each sampling point in the depth map according to the comparison result aiming at each pixel point needing to be rendered by the current camera to obtain a volume light accumulation result map.
And the map selection and superposition module is used for acquiring the historical volume light rendering map from the cache, and selecting and superposing the color value of the volume light accumulation result map and the historical volume light rendering map.
Optionally, in an embodiment of the present application, the accumulated result obtaining module includes:
and the comparison result judging module is used for judging whether the comparison result is that the step value of the sampling point after random offset is larger than the depth value of the sampling point in the shadow map.
And the comparison result affirming module is used for accumulating the volume light color value of the sampling point if the comparison result is that the stepping value of the sampling point after random offset is larger than the depth value of the sampling point in the shadow map.
And the comparison result negation module is used for not accumulating the volume light color value of the sampling point if the comparison result is that the step value of the sampling point after the random offset is not greater than the depth value of the sampling point in the shadow map.
Optionally, in an embodiment of the present application, the map selecting and overlaying module includes:
and the map difference judging module is used for judging whether the difference between the volume light color value of the pixel point in the map of the volume light accumulation result and the alpha value of the pixel point in the historical map of the volume light rendering map is smaller than a preset threshold value or not aiming at each pixel point in the map of the volume light accumulation result, wherein the alpha value is the depth value of the sampling point in the direction taking the viewpoint of the camera as the starting point.
And the light color value superposition module is used for superposing the volume light color value of the pixel point in the volume light accumulation result map and the volume light color value of the pixel point in the historical volume light rendering map if the difference value between the volume light color value of the pixel point in the volume light accumulation result map and the alpha value of the pixel point in the historical volume light rendering map is smaller than a preset threshold value.
Optionally, in an embodiment of the present application, the volume light rendering apparatus further includes:
and the animation video synthesis module is used for synthesizing the animation video of the three-dimensional model according to the volume light map of the three-dimensional model.
It should be understood that the apparatus corresponds to the above volume light rendering method embodiment and can perform the steps involved in that method embodiment; the specific functions of the apparatus can be found in the description above, and a detailed description is omitted here as appropriate to avoid redundancy. The device includes at least one software functional module that can be stored in memory in the form of software or firmware or embedded in the operating system (OS) of the device.
An electronic device provided in an embodiment of the present application includes: a processor and a memory, the memory storing processor-executable machine-readable instructions, the machine-readable instructions when executed by the processor performing the method as above.
The embodiment of the application also provides a storage medium, wherein the storage medium is stored with a computer program, and the computer program is executed by a processor to execute the method.
The storage medium may be implemented by any type of volatile or nonvolatile storage device or combination thereof, such as a Static Random Access Memory (SRAM), an Electrically Erasable Programmable Read-Only Memory (EEPROM), an Erasable Programmable Read-Only Memory (EPROM), a Programmable Read-Only Memory (PROM), a Read-Only Memory (ROM), a magnetic Memory, a flash Memory, a magnetic disk, or an optical disk.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. The apparatus embodiments described above are merely illustrative, and for example, the flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of apparatus, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
In addition, functional modules of the embodiments in the present application may be integrated together to form an independent part, or each module may exist separately, or two or more modules may be integrated to form an independent part.
In this document, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions.
The above description is only an alternative embodiment of the embodiments of the present application, but the scope of the embodiments of the present application is not limited thereto, and any person skilled in the art can easily conceive of changes or substitutions within the technical scope of the embodiments of the present application, and all the changes or substitutions should be covered by the scope of the embodiments of the present application.

Claims (10)

1. A method of volumetric light rendering, comprising:
obtaining a depth map and a shadow map of a three-dimensional model, wherein the depth map is a depth value for performing depth sampling on the three-dimensional model in a direction with a camera viewpoint as a starting point, and the shadow map is a depth value for performing shadow sampling on the three-dimensional model in a direction with a light source viewpoint as a starting point;
dividing each depth value in the depth map along the camera viewpoint as a starting point direction by using a plurality of sampling points to obtain a stepping value, wherein the stepping value is an interval distance value between adjacent sampling points in the plurality of sampling points;
randomly offsetting the stepping value, and comparing the randomly offset stepping value with each depth value in the shadow map to obtain a comparison result;
selecting and superposing color values according to the comparison result to obtain a superposed result map;
and rendering the three-dimensional model in the direction taking the camera viewpoint as a starting point according to the superimposed result map to obtain a volume light map of the three-dimensional model.
2. The method of claim 1, wherein obtaining the depth map and the shadow map of the three-dimensional model comprises:
rendering the three-dimensional model in the direction with the camera viewpoint as a starting point to obtain a depth map, and rendering the three-dimensional model in the direction with the light source viewpoint as the starting point to obtain a shadow map.
3. The method of claim 1, wherein the randomly shifting the step value and comparing the randomly shifted step value with each depth value in the shadow map comprises:
extracting a random offset value taking the camera viewpoint as a starting point direction from a preset sampling noise map;
superposing the stepping value of each depth value of the depth map with the random deviation value to obtain a superposed stepping value corresponding to each depth value;
and comparing the superposition stepping value corresponding to each depth value with each depth value in the shadow map.
4. The method of claim 1, wherein selecting and superimposing color values based on the comparison comprises:
accumulating the volume light color value of each sampling point in the depth map according to the comparison result aiming at each pixel point needing to be rendered by the current camera to obtain a volume light accumulation result map;
and acquiring a historical volume light rendering map from a cache, and selecting and superposing the volume light accumulation result map and the historical volume light rendering map with color values.
5. The method of claim 4, wherein accumulating the volume light color value of each sampling point in the depth map according to the comparison result comprises:
judging whether the comparison result indicates that the randomly offset step value of the sampling point is larger than the depth value of the sampling point in the shadow map;
if so, accumulating the volume light color value of the sampling point; and
if not, not accumulating the volume light color value of the sampling point.
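Claim 5 reduces to a per-sample predicate, sketched below with illustrative names:

    def accumulate_sample(color_sum, sample_color, offset_step_value, shadow_map_depth):
        # Accumulate the sampling point's volume light color only when its
        # randomly offset step value exceeds the shadow map depth value.
        if offset_step_value > shadow_map_depth:
            return color_sum + sample_color
        return color_sum

    print(accumulate_sample(0.0, 0.1, offset_step_value=6.0, shadow_map_depth=5.0))  # 0.1
    print(accumulate_sample(0.0, 0.1, offset_step_value=4.0, shadow_map_depth=5.0))  # 0.0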
6. The method of claim 4, wherein selecting and superimposing the color values of the volume light accumulation result map and the historical volume light rendering map comprises:
for each pixel point in the volume light accumulation result map, judging whether the difference between the volume light color value of the pixel point in the volume light accumulation result map and the alpha value of the pixel point in the historical volume light rendering map is smaller than a preset threshold, wherein the alpha value is the depth value of a sampling point in the direction starting from the camera viewpoint; and
if so, superimposing the volume light color value of the pixel point in the volume light accumulation result map with the volume light color value of the pixel point in the historical volume light rendering map.
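A sketch of the history-rejection test of claim 6, where the history map's alpha channel carries a camera-space depth; collapsing the color value to its mean and the threshold value are assumptions:

    import numpy as np

    def superimpose_valid_history(accum_rgb, history_rgb, history_alpha, threshold=0.05):
        # Per pixel: superimpose the history color only where the difference
        # between the current volume light color value and the history alpha
        # value (a depth from the camera viewpoint) is below the threshold,
        # rejecting stale history on disoccluded pixels.
        diff = np.abs(accum_rgb.mean(axis=-1) - history_alpha)
        valid = diff < threshold
        out = accum_rgb.copy()
        out[valid] += history_rgb[valid]
        return out

    accum = np.full((2, 2, 3), 0.40)
    history = np.full((2, 2, 3), 0.20)
    alpha = np.array([[0.41, 0.90], [0.42, 0.95]])   # two pixels pass, two fail
    print(superimpose_valid_history(accum, history, alpha))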
7. The method according to any one of claims 1-6, further comprising, after obtaining the volume light map of the three-dimensional model:
synthesizing an animation video of the three-dimensional model according to the volume light map of the three-dimensional model.
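A sketch of the synthesis step of claim 7, assuming the imageio package and a simple additive composite; the file name, frame count, and light ramp are illustrative:

    import numpy as np
    import imageio.v2 as imageio

    frames = []
    for i in range(30):
        base_render = np.zeros((64, 64, 3), dtype=np.float32)          # stand-in scene frame
        volume_light_map = np.full((64, 64, 3), i / 30.0, np.float32)  # stand-in volume light map
        composite = np.clip(base_render + volume_light_map, 0.0, 1.0)
        frames.append((composite * 255).astype(np.uint8))
    imageio.mimsave('volume_light_animation.gif', frames)              # encode the animation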
8. A volumetric light rendering apparatus, comprising:
a model map obtaining module, configured to obtain a depth map and a shadow map of a three-dimensional model, wherein the depth map holds depth values obtained by depth-sampling the three-dimensional model in a direction starting from a camera viewpoint, and the shadow map holds depth values obtained by shadow-sampling the three-dimensional model in a direction starting from a light source viewpoint;
a depth map dividing module, configured to divide each depth value in the depth map, along the direction starting from the camera viewpoint, into a plurality of sampling points to obtain a step value, wherein the step value is the interval distance between adjacent sampling points among the plurality of sampling points;
a comparison result obtaining module, configured to randomly offset the step value and compare the randomly offset step value with each depth value in the shadow map to obtain a comparison result;
a result map obtaining module, configured to select and superimpose color values according to the comparison result to obtain a superimposed result map; and
a volume light map obtaining module, configured to render the three-dimensional model in the direction starting from the camera viewpoint according to the superimposed result map to obtain a volume light map of the three-dimensional model.
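The apparatus of claim 8 mirrors the method, one module per step; a skeletal sketch with illustrative names and omitted bodies:

    class VolumetricLightRenderingApparatus:
        def obtain_model_maps(self, model, camera, light):
            """Model map obtaining module: depth pass from the camera
            viewpoint, shadow pass from the light source viewpoint."""
            raise NotImplementedError

        def divide_depth_map(self, depth_map, num_samples):
            """Depth map dividing module: split each depth value into
            sampling points and return the step value."""
            raise NotImplementedError

        def obtain_comparison_result(self, step_values, noise_map, shadow_map):
            """Comparison result obtaining module: randomly offset the step
            values and compare them against the shadow map depths."""
            raise NotImplementedError

        def obtain_result_map(self, comparison_result):
            """Result map obtaining module: select and superimpose color
            values into the superimposed result map."""
            raise NotImplementedError

        def obtain_volume_light_map(self, model, superimposed_map):
            """Volume light map obtaining module: render the model from the
            camera viewpoint to produce the volume light map."""
            raise NotImplementedError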
9. An electronic device, comprising: a processor and a memory, the memory storing machine-readable instructions executable by the processor, wherein the machine-readable instructions, when executed by the processor, perform the method of any one of claims 1 to 7.
10. A storage medium having stored thereon a computer program which, when executed by a processor, performs the method of any one of claims 1 to 7.
CN202110273388.2A 2021-03-12 2021-03-12 Volume light rendering method and device, electronic equipment and storage medium Active CN112967366B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110273388.2A CN112967366B (en) 2021-03-12 2021-03-12 Volume light rendering method and device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN112967366A (en) 2021-06-15
CN112967366B (en) 2023-07-28

Family

ID=76278923

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110273388.2A Active CN112967366B (en) 2021-03-12 2021-03-12 Volume light rendering method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN112967366B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113470161A (en) * 2021-06-30 2021-10-01 完美世界(北京)软件科技发展有限公司 Illumination determination method for volume cloud in virtual environment, related equipment and storage medium
CN113781611A (en) * 2021-08-25 2021-12-10 北京壳木软件有限责任公司 Animation production method and device, electronic equipment and storage medium

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030112237A1 (en) * 2001-12-13 2003-06-19 Marco Corbetta Method, computer program product and system for rendering soft shadows in a frame representing a 3D-scene
US20110069069A1 (en) * 2009-09-21 2011-03-24 Klaus Engel Efficient determination of lighting effects in volume rendering
US20110109631A1 (en) * 2009-11-09 2011-05-12 Kunert Thomas System and method for performing volume rendering using shadow calculation
CN111968216A (en) * 2020-07-29 2020-11-20 完美世界(北京)软件科技发展有限公司 Volume cloud shadow rendering method and device, electronic equipment and storage medium
CN113012274A (en) * 2021-03-24 2021-06-22 北京壳木软件有限责任公司 Shadow rendering method and device and electronic equipment

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
LV MENGYA; LIU DING; TANG YONG; LI YING; ZHOU SHENGTENG: "Real-time rendering of realistic underwater illumination effects", Journal of Chinese Computer Systems (小型微型计算机系统), no. 10, pages 200-203 *

Also Published As

Publication number Publication date
CN112967366B (en) 2023-07-28

Similar Documents

Publication Publication Date Title
US10984558B2 (en) Learning-based sampling for image matting
US11227428B2 (en) Modification of a live-action video recording using volumetric scene reconstruction to replace a designated region
US11748986B2 (en) Method and apparatus for recognizing key identifier in video, device and storage medium
US9025902B2 (en) Post-render motion blur
CN111080776B (en) Human body action three-dimensional data acquisition and reproduction processing method and system
CN112541876B (en) Satellite image processing method, network training method, related device and electronic equipment
CN111199573B (en) Virtual-real interaction reflection method, device, medium and equipment based on augmented reality
CN112200035B (en) Image acquisition method, device and vision processing method for simulating crowded scene
US20180181814A1 (en) Video abstract using signed foreground extraction and fusion
US10825231B2 (en) Methods of and apparatus for rendering frames for display using ray tracing
US20230306563A1 (en) Image filling method and apparatus, decoding method and apparatus, electronic device, and medium
CN111988657A (en) Advertisement insertion method and device
CN112652046A (en) Game picture generation method, device, equipment and storage medium
CN113225606A (en) Video barrage processing method and device
CN112967366B (en) Volume light rendering method and device, electronic equipment and storage medium
CN113178017A (en) AR data display method and device, electronic equipment and storage medium
CN108520259B (en) Foreground target extraction method, device, equipment and storage medium
CN115115767A (en) Scene rendering method and device, electronic equipment and storage medium
CN112419147B (en) Image rendering method and device
CN111243099B (en) Method and device for processing image and method and device for displaying image in AR (augmented reality) equipment
CN112565623A (en) Dynamic image display system
CN116250002A (en) Single image 3D photography with soft layering and depth aware restoration
CN117575976B (en) Image shadow processing method, device, equipment and storage medium
Liu et al. A flare removal network for night vision perception: Resistant to the interference of complex light
Valença et al. Shadow Harmonization for Realistic Compositing

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant