CN115908716A - Virtual scene light rendering method and device, storage medium and electronic equipment


Info

Publication number
CN115908716A
Authority
CN
China
Prior art keywords
map
dimensional model
virtual scene
rendering
lighting
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211599567.6A
Other languages
Chinese (zh)
Inventor
杨家骏
李东明
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Netease Hangzhou Network Co Ltd
Original Assignee
Netease Hangzhou Network Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Netease Hangzhou Network Co Ltd filed Critical Netease Hangzhou Network Co Ltd
Priority to CN202211599567.6A
Publication of CN115908716A

Landscapes

  • Image Generation (AREA)

Abstract

Embodiments of the present disclosure provide a virtual scene lighting rendering method, apparatus, storage medium, and electronic device. The method includes: obtaining three-dimensional model resources in a virtual scene, and creating or updating materials for them; performing light baking through a rendering pipeline and adding pre-made lighting effects to obtain a lighting channel map; and superimposing the scene image corresponding to the virtual scene containing the three-dimensional model resources with the lighting channel map to complete the lighting rendering of the virtual scene. Implementing this technical solution can improve both the display fineness and the efficiency of virtual scene lighting rendering.

Description

Virtual scene light rendering method and device, storage medium and electronic equipment
Technical Field
The present disclosure relates to the field of virtual display, and in particular, to a virtual scene lighting rendering method, a virtual scene lighting rendering apparatus, a computer-readable storage medium, and an electronic device.
Background
Virtual scenes break through the limitations of real life, and night scenes are among the main content a virtual scene can display. Night scenes usually feature a variety of light-and-shadow or lighting effects that make the nighttime virtual scene richer.
The current approach is to hand-draw the required lighting effects over the original virtual scene and then superimpose the lighting-effect layer in image processing software.
However, this approach requires substantial labor to draw and is very inefficient. Moreover, hand-drawn lighting effects are not very realistic: the material details of the virtual scene are lost and the physical texture is poor.
It is to be noted that the information disclosed in the above background section is only for enhancement of understanding of the background of the present disclosure, and thus may include information that does not constitute prior art known to those of ordinary skill in the art.
Disclosure of Invention
An object of the embodiments of the present disclosure is to provide a virtual scene lighting rendering method, a virtual scene lighting rendering apparatus, a computer-readable storage medium, and an electronic device. By obtaining a three-dimensional model resource in a virtual scene, a material is created or updated for the resource; light baking is performed through a rendering pipeline and pre-made lighting effects are added to obtain a lighting channel map; and the scene image corresponding to the virtual scene containing the three-dimensional model resource is superimposed with the lighting channel map to complete the lighting rendering of the virtual scene. This can improve both the display fineness and the realism of virtual scene lighting rendering.
A first aspect of the embodiments of the present disclosure provides a virtual scene lighting rendering method, including: obtaining three-dimensional model resources in a virtual scene, and creating or updating materials for them; performing light baking through a rendering pipeline and adding pre-made lighting effects to obtain a lighting channel map; and superimposing the image of the virtual scene containing the three-dimensional model resources with the lighting channel map to complete the lighting rendering of the virtual scene.
According to a second aspect of the embodiments of the present disclosure, there is provided a virtual scene lighting rendering apparatus, the apparatus including:
a resource material module, configured to obtain three-dimensional model resources in a virtual scene and create or update materials for them;
a lighting rendering module, configured to perform light baking through a rendering pipeline and add pre-made lighting effects to obtain a lighting channel map;
and a scene lighting module, configured to superimpose the scene image corresponding to the virtual scene containing the three-dimensional model resources with the lighting channel map to complete the lighting rendering of the virtual scene.
According to a third aspect of the embodiments of the present disclosure, there is provided a computer-readable storage medium on which a computer program is stored, the program, when executed by a processor, implementing the virtual scene light rendering method according to the first aspect of the embodiments.
According to a fourth aspect of the embodiments of the present disclosure, there is provided an electronic apparatus including: one or more processors; a storage device for storing one or more programs which, when executed by the one or more processors, cause the one or more processors to implement the method of virtual scene light rendering as described in the first aspect of the embodiments above.
According to a fifth aspect of embodiments of the present disclosure, there is provided a computer program product or a computer program comprising computer instructions stored in a computer readable storage medium. The processor of the computer device reads the computer instructions from the computer-readable storage medium, and executes the computer instructions, so that the computer device executes the virtual scene light rendering method provided in the above-mentioned various optional implementations.
The technical scheme provided by the embodiment of the disclosure can have the following beneficial effects:
in the technical solutions provided by some embodiments of the present disclosure, a material can be created or updated for a three-dimensional model resource obtained from a virtual scene; light baking is performed through a rendering pipeline and pre-made lighting effects are added to obtain a lighting channel map; and the image of the virtual scene containing the three-dimensional model resource is superimposed with the lighting channel map to complete the lighting rendering of the virtual scene. Thus, on the one hand, materials can be determined according to the resources of different virtual scenes, improving the material detail of the lit scene and the display fineness of the virtual scene; on the other hand, baking the lights and adding pre-made lighting effects before superimposing them on the virtual scene improves the efficiency of virtual scene lighting rendering. Both the display fineness and the realism of the rendering are improved.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and together with the description, serve to explain the principles of the disclosure. It is to be understood that the drawings in the following description are merely exemplary of the disclosure, and that other drawings may be derived from those drawings by one of ordinary skill in the art without the exercise of inventive faculty.
Fig. 1 is a schematic diagram illustrating an exemplary system structure of a virtual scene lighting rendering method and apparatus according to the present exemplary embodiment;
fig. 2 is a flowchart illustrating a method for rendering light in a virtual scene according to the present exemplary embodiment;
FIG. 3 illustrates a flow diagram for generating a roughness map corresponding to a three-dimensional model resource in the present exemplary embodiment;
FIG. 4 is a flowchart illustrating scene image generation corresponding to a virtual scene in accordance with the illustrative embodiment;
fig. 5 is a block diagram showing a configuration of a virtual scene light rendering apparatus according to the present exemplary embodiment;
fig. 6 schematically illustrates a structural diagram of a computer system suitable for a terminal device used to implement an embodiment of the present disclosure.
Detailed Description
Example embodiments will now be described more fully with reference to the accompanying drawings. Example embodiments may, however, be embodied in many different forms and should not be construed as limited to the examples set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the concept of example embodiments to those skilled in the art. The described features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. In the following description, numerous specific details are provided to give a thorough understanding of embodiments of the disclosure. One skilled in the relevant art will recognize, however, that the subject matter of the present disclosure can be practiced without one or more of the specific details, or with other methods, components, devices, steps, and the like. In other instances, well-known technical solutions have not been shown or described in detail to avoid obscuring aspects of the present disclosure.
Furthermore, the drawings are merely schematic illustrations of the present disclosure and are not necessarily drawn to scale. The same reference numerals in the drawings denote the same or similar parts, and thus their repetitive description will be omitted. Some of the block diagrams shown in the figures are functional entities and do not necessarily correspond to physically or logically separate entities. These functional entities may be implemented in the form of software, or in one or more hardware modules or integrated circuits, or in different networks and/or processor devices and/or microcontroller devices.
Fig. 1 shows a schematic diagram of exemplary terminal devices to which the virtual scene lighting rendering method and apparatus of an embodiment of the present disclosure may be applied. The method can be applied, for example, to "three-render-two" (3D rendered in a 2D style) lighting rendering scenarios.
As shown in fig. 1, the terminal devices may include one or more of the terminal devices 101, 102, 103. The terminal devices 101, 102, 103 may be various electronic devices having a display screen, including but not limited to desktop computers, portable computers, smart phones, tablet computers, and the like.
The computer readable storage media shown in this disclosure may be computer readable signal media or computer readable storage media or any combination of the two. The computer-readable storage medium carries one or more programs which, when executed by the terminal device, cause the terminal device to implement the method as described in the embodiments below. For example, the terminal device may implement the steps shown in fig. 2, and the like.
In an exemplary embodiment of the present disclosure, a virtual scene may be a digital scene depicted through digital communication technology by an intelligent terminal device such as a computer, mobile phone, or tablet; the digital scene may be shown on the terminal's own display screen or projected onto other display devices. The virtual scene may include buildings or structures such as houses, towers, gardens, bridges, and pools; natural landscapes such as mountains, rivers, and lakes; and any virtual objects and props such as weapons, tools, and creatures, which is not limited in this exemplary embodiment.
The present example embodiment provides a virtual scene lighting rendering method, as shown in fig. 2, the method includes:
step S210, obtaining three-dimensional model resources in a virtual scene, and creating or updating materials for the three-dimensional model resources;
step S220, performing lighting baking through a rendering pipeline and adding a prefabricated lighting effect to obtain a lighting channel diagram;
and step S230, overlapping the scene image corresponding to the virtual scene comprising the three-dimensional model resources and the light channel image to finish the light rendering of the virtual scene.
By implementing the virtual scene lighting rendering method shown in fig. 2, a material can be created or updated for a three-dimensional model resource obtained from the virtual scene; light baking is performed through a rendering pipeline and pre-made lighting effects are added to obtain a lighting channel map; and the scene image corresponding to the virtual scene containing the three-dimensional model resource is superimposed with the lighting channel map to complete the lighting rendering. Thus, on the one hand, materials and maps can be determined according to the resources of different virtual scenes, improving the material detail of the lit scene and the display fineness of the virtual scene; on the other hand, baking the lights and adding pre-made lighting effects before superimposing them on the virtual scene improves the efficiency of virtual scene lighting rendering.
It should be noted that, to illustrate the solution more clearly, this embodiment takes applying the virtual scene lighting rendering method to the Unity engine as an example. The above steps of the present exemplary embodiment are described in more detail below.
In step S210, three-dimensional model resources in the virtual scene are obtained, and a material is created or updated for the three-dimensional model resources.
In the disclosed embodiment, the three-dimensional model resources are three-dimensional stereo models, each of which has its own shape and volume in the three-dimensional virtual environment, occupying a part of the space in the three-dimensional virtual environment.
A material is the texture attribute of a three-dimensional model resource (for example, metal, wood, or glass). In three-dimensional animation software, the texture attribute of an object is realized through material nodes, and drawn maps can be input and assigned to the corresponding object through those nodes, giving the object realistic texture and luster along with color and pattern detail. In three-dimensional production, a wide range of materials can be simulated through two aspects: light response and texture. Light response covers surface color, surface roughness, reflection intensity, metallicity, and transparency, while texture refers to the surface pattern.
Optionally, parameters such as the model resource type, the material acquisition approach, and the material retrieval mode may be configured in advance through input configuration information. For example, the resource type may be an old or a new model resource; the material may be acquired by searching for an existing material across the whole project or by creating a new material locally; and the material retrieval mode may be retrieval by material name or by map name. The embodiments of the disclosure are not limited in this regard.
Optionally, after the resource files corresponding to the three-dimensional model resources in the virtual scene are obtained, the files may be traversed to remove those in invalid formats, improving the usability of the resource set.
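A minimal sketch of such a filtering pass follows; the extension whitelist and directory layout are illustrative assumptions, not the patent's actual rules:

```python
import os

# Hypothetical whitelist of formats treated as valid model/texture resources.
VALID_EXTENSIONS = {".fbx", ".png", ".tga", ".jpg"}

def filter_resource_files(root_dir):
    """Traverse a resource directory and keep only files in valid formats."""
    kept = []
    for dirpath, _, filenames in os.walk(root_dir):
        for name in filenames:
            if os.path.splitext(name)[1].lower() in VALID_EXTENSIONS:
                kept.append(os.path.join(dirpath, name))
    return kept
```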
Optionally, the filtered three-dimensional model resources may be imported into the Unity engine through a model importer (ModelImporter) for further configuration. For example, the format, naming rule, and search method of the generated materials may be further configured according to the input configuration information. The embodiments of the disclosure are not limited in this regard.
Optionally, project-wide general materials and shaders may be assigned to the three-dimensional model resources automatically by invoking the Mesh Renderer component, so as to render the resources.
In an optional implementation, the step of creating or updating a material for the three-dimensional model resource may include: when the resource has no material of its own, creating a preset default material; or, when the resource has its own material, classifying and format-converting its original maps, determining the new maps required by the rendering pipeline, and assigning the new maps to the resource's own material so as to update it.
That is, when the three-dimensional model resource carries no material, a preset default material can be created for it; when it carries its own material, that material can be updated to improve the resource's material texture.
Here, "own material" refers to the original material carried by the three-dimensional model resource.
Specifically, when updating the resource's own material, its original maps are classified and format-converted, the new maps required by the rendering pipeline are determined, and those new maps are assigned to the own material, updating it and enhancing the resource's material detail.
It should be noted that the rendering pipeline is a hybrid deferred/forward, tiled/clustered renderer. The pipeline provides several physically based features and supports both perspective and orthographic camera rendering. Real-time ray tracing is used to comprehensively improve the picture quality, while also giving positive feedback during production so that iteration can continue. The new maps required by the rendering pipeline may include one or more of: a mask map and a normal map.
A mask map uses a black-and-white mask image as its pattern to combine different colors or maps; with a mask map, one material on a curved surface can reveal another material beneath it. A normal map records, for each point of the original object's bumpy surface, the direction of the surface normal encoded in the color channels.
In addition, the new maps required for the rendering pipeline may also include a base map, where the base map may be used to characterize the base appearance characteristics of the three-dimensional model.
The original maps of a three-dimensional model resource may include, but are not limited to: a roughness map, a metallic map, an occlusion map, a detail map, a diffuse map, and the like.
After the new maps required by the rendering pipeline are determined for the three-dimensional model resource, the original map data corresponding to the resource can be deleted and the new map data assigned to the material, updating the resource's material and enhancing the model's material texture.
In an optional implementation, when classifying and format-converting the original maps of the three-dimensional model resource and determining the new maps required by the rendering pipeline, the mask map may be generated as follows: convert the roughness map corresponding to the resource into a smoothness map; then mix the metallic map, occlusion map, detail map, and smoothness map corresponding to the resource to generate a new mask map for it. Here, the roughness, metallic, occlusion, and detail maps are original maps of the resource.
When the maps of the three-dimensional model resource are classified, the channels they use may differ from those the engine expects, so the single channels of the resource's "metallic", "detail texture", "roughness", and "ambient occlusion" maps need to be combined into a brand-new mask map. With the Unity engine, since Unity works with "smoothness", the roughness of the resource also needs to be converted.
Taking the Unity engine as an example: because Unity uses a smoothness map, when the resource's original maps do not include one, the roughness map can be converted into a smoothness map. For example, the roughness is squared and the squared value is subtracted from 1 to obtain the smoothness, realizing the conversion from roughness map to smoothness map.
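As an illustration of the conversion and the channel packing described above, the sketch below uses NumPy; the RGBA layout (R = metallic, G = ambient occlusion, B = detail mask, A = smoothness) follows HDRP's documented mask-map convention, and all inputs are assumed to be single-channel float arrays in [0, 1]:

```python
import numpy as np

def roughness_to_smoothness(roughness):
    """Convert a roughness map to the smoothness map Unity expects:
    smoothness = 1 - roughness^2, per the conversion described above."""
    return 1.0 - np.square(roughness)

def pack_mask_map(metallic, occlusion, detail, smoothness):
    """Pack four single-channel maps into one RGBA mask map
    (HDRP layout: R=metallic, G=ambient occlusion, B=detail, A=smoothness)."""
    return np.stack([metallic, occlusion, detail, smoothness], axis=-1)

# Example: a 4x4 dummy roughness map packed with flat metallic/occlusion/detail.
rough = np.full((4, 4), 0.3)
mask = pack_mask_map(np.zeros_like(rough), np.ones_like(rough),
                     np.full_like(rough, 0.5), roughness_to_smoothness(rough))
```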
If the original map of the three-dimensional model resource does not contain the roughness map, the roughness map corresponding to the three-dimensional model resource can be generated by the following steps: generating a roughness map corresponding to the three-dimensional model resource based on the diffuse reflection map corresponding to the three-dimensional model resource; wherein, the diffuse reflection map is the original map of the three-dimensional model resource.
Optionally, the generating a roughness map corresponding to the three-dimensional model resource based on the diffuse reflection map corresponding to the three-dimensional model resource may be specifically implemented by the steps shown in fig. 3:
step S301, determining the category of a material corresponding to a diffuse reflection map based on the diffuse reflection map corresponding to the three-dimensional model resource;
step S302, obtaining a texture map matched with the category of the texture in the general texture library;
and step S303, mixing the material map and the diffuse reflection map to generate a roughness map corresponding to the three-dimensional model resource.
In the embodiment of the present disclosure, general material types (for example, material types such as wood, stone, cloth, tile, wall, etc.) may be predetermined, a general material library may be created based on the general material types, and material maps corresponding to the material types may be placed in the general material library.
Illustratively, by traversing the three-dimensional model resources, the material name information can be extracted from each resource's diffuse map and classified into one of the materials contained in the general material library, giving the material category corresponding to the diffuse map. The material map matching that category is then read from the library and mixed with the diffuse map through a specific calculation rule to obtain a PBR (Physically Based Rendering) map; assigning the generated PBR map to the resource yields its roughness map. At this point the three-dimensional model resource has generated a highly physical roughness map from its own diffuse map.
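A hedged sketch of this classify-and-blend step follows; the library contents, the name-based classifier, and the fifty-fifty blend weight are all illustrative assumptions, since the patent does not disclose the specific calculation rule:

```python
import numpy as np

# Hypothetical general material library: category -> roughness-like material map.
MATERIAL_LIBRARY = {
    "wood":  np.full((256, 256), 0.7),
    "stone": np.full((256, 256), 0.8),
    "metal": np.full((256, 256), 0.2),
}

def classify_material(texture_name):
    """Classify a diffuse map into a library category by its name."""
    for category in MATERIAL_LIBRARY:
        if category in texture_name.lower():
            return category
    return "stone"  # assumed fallback category

def roughness_from_diffuse(diffuse_rgb, texture_name, blend=0.5):
    """Blend the matched library material map with the diffuse map's
    luminance to approximate a physically plausible roughness map."""
    base = MATERIAL_LIBRARY[classify_material(texture_name)]
    luminance = diffuse_rgb.mean(axis=-1)  # crude stand-in for the blend rule
    return blend * base + (1.0 - blend) * luminance
```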
In an optional implementation manner, when classifying and converting the format of the original map of the three-dimensional model resource and determining a new map required by a rendering pipeline for the three-dimensional model resource, the normal map may be generated for the three-dimensional model resource specifically by the following steps: generating a normal map for the three-dimensional model resource based on the diffuse reflection map corresponding to the three-dimensional model resource; wherein, the diffuse reflection map is the original map of the three-dimensional model resource.
Since the diffuse map contains illumination information of the model surface, a normal map can be generated from the diffuse map data. Specifically, the diffuse map is first converted into a grayscale image, i.e., each pixel is reduced from three channels (e.g., red, green, and blue) to one channel; the grayscale image is then converted into a normal map.
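One common way to realize the grayscale-to-normal step is to treat the grayscale image as a height field and derive normals from its gradients; the NumPy sketch below follows that approach, with the strength factor as an assumed tuning parameter:

```python
import numpy as np

def diffuse_to_normal_map(diffuse_rgb, strength=2.0):
    """Convert a diffuse map to a tangent-space normal map:
    grayscale -> height field -> gradients -> unit normals in [0, 1]."""
    gray = diffuse_rgb.mean(axis=-1)              # 3 channels -> 1 channel
    dy, dx = np.gradient(gray)                    # height-field slopes
    nx, ny, nz = -dx * strength, -dy * strength, np.ones_like(gray)
    length = np.sqrt(nx * nx + ny * ny + nz * nz)
    normal = np.stack([nx, ny, nz], axis=-1) / length[..., None]
    return normal * 0.5 + 0.5                     # remap [-1,1] to RGB [0,1]
```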
Furthermore, to facilitate applying the maps, the following format conversion may be performed: the original formats of the resource's maps are converted into formats usable by the Unity engine through a squaring operation in the shader, so that the model effect image is presented correctly and without error. In addition, the output model effect image can be solved in reverse: a square-root operation yields the map format required for the final output.
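The squaring and square-root operations read like the common cheap approximation of sRGB-to-linear conversion and its inverse; the sketch below is written under that assumption and is not confirmed by the patent text:

```python
import numpy as np

def to_engine_linear(srgb):
    """Approximate sRGB -> linear by squaring (cheap shader-style gamma)."""
    return np.square(srgb)

def to_output_format(linear):
    """Inverse step: approximate linear -> sRGB by taking the square root."""
    return np.sqrt(linear)
```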
After generating the corresponding material for the three-dimensional model resource, the following steps can be continuously executed:
in step S220, a lighting channel diagram is obtained by performing lighting baking through a rendering pipeline and adding a prefabricated lighting effect.
Lighting production can be done with the Unity engine's high-definition render pipeline. Illustratively, several special lighting effects may be created as needed and applied in bulk throughout the three-dimensional virtual scene, such as light-transmission effects for windows, lanterns, and paper umbrellas; overall environmental reflection fill light; ray-traced light and shadow; indirect lighting; GI (Global Illumination) post-processing; and the lighting response of PBR materials, so as to further improve the quality of the lighting rendering.
After light baking is performed in the rendering pipeline and the pre-made lighting effects are added, the channel information of each rendering stage can be inspected in the Unity engine through the Rendering Debugger tool and exported through a render-output tool, yielding the lighting channel map. The Rendering Debugger is a debugging window for the scriptable render pipeline that can visualize lighting, rendering, and material attributes, helping developers find rendering problems and optimize scenes and rendering configurations.
Different processing modes can be adopted when exporting the lighting channel map, depending on the usage scenario.
If the scene image corresponding to the virtual scene containing the three-dimensional model resource and the lighting channel map are superimposed in image processing software (such as Photoshop), in an optional embodiment the lighting channel map may be obtained as follows: render the pre-made lighting effects in the rendering engine to obtain a lighting-effect render; output a render without the lighting channel; and superimpose the lighting-effect render onto the render without the lighting channel to obtain the lighting channel map.
Here, the render without the lighting channel may be used as the base onto which physical lighting effects are superimposed.
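The superposition itself can be sketched as a blend in image space; the patent does not specify the blend mode, so the screen blend below is an assumption (an additive blend would be an equally plausible reading):

```python
import numpy as np

def screen_blend(base, light):
    """Screen blend: result = 1 - (1 - base) * (1 - light).
    Brightens the unlit render wherever the light-effect render has energy."""
    return 1.0 - (1.0 - base) * (1.0 - light)

def build_light_channel_map(unlit_render, light_effect_render):
    """Superimpose the light-effect render onto the render without the
    lighting channel to obtain the lighting channel map (assumed blend)."""
    return np.clip(screen_blend(unlit_render, light_effect_render), 0.0, 1.0)
```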
If the scene image corresponding to the virtual scene containing the three-dimensional model resource and the lighting channel map are instead superimposed in a two-dimensional engine, in an optional implementation the lighting channel map may be obtained as follows: turn off the rendering of directional light, and output the diffuse channel data, roughness channel data, and normal channel data for the pre-made lighting effects respectively to obtain the lighting channel map.
It should be noted that when a two-dimensional engine is used to realize the "three-render-two" (3D rendered as 2D) effect, the diffuse, roughness, and normal channels are required; by cancelling the directional-light rendering, only the lighting effects remain, and outputting the three channels separately yields the lighting channel map.
In an optional implementation, before the scene image corresponding to the virtual scene containing the three-dimensional model resource is superimposed with the lighting channel map, the scene image may be generated as follows: control the virtual camera shooting the virtual scene to zoom and offset within the scene so that the virtual scene is rendered in a plurality of blocks; obtain the block images corresponding to the blocks by reading the virtual camera's render texture; and stitch the block images together in rendering order to generate the initial scene image corresponding to the virtual scene.
The virtual camera can directly acquire the picture of the virtual scene, read the rendering texture of the virtual camera for shooting the virtual scene, and generate a scene image containing the shot picture.
By controlling the scaling of the virtual camera, the virtual camera can be made to acquire a smaller picture of the virtual scene, i.e. one block of the virtual scene, at the same resolution. By controlling the offset of the virtual camera, the blocks of other parts of the virtual scene can be acquired. The scene picture corresponding to each block may form a block image.
Because the virtual camera's zoom and movement are controlled to capture partial pictures of the virtual scene, the block images corresponding to its many small blocks must be stitched together to form a larger image, i.e., the initial scene image.
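The stitching itself reduces to placing equal-sized tiles into a larger buffer in rendering order; a sketch, where the grid dimensions and tile size are illustrative and each tile would in practice come from reading the virtual camera's render texture:

```python
import numpy as np

def stitch_tiles(tiles, rows, cols):
    """Stitch row-major block images of equal size into one scene image."""
    tile_h, tile_w = tiles[0].shape[:2]
    scene = np.zeros((rows * tile_h, cols * tile_w, 3), dtype=tiles[0].dtype)
    for index, tile in enumerate(tiles):
        r, c = divmod(index, cols)  # rendering order: left-to-right, top-down
        scene[r * tile_h:(r + 1) * tile_h, c * tile_w:(c + 1) * tile_w] = tile
    return scene

# Example: a 2x2 grid of 512x512 tiles -> one 1024x1024 initial scene image.
tiles = [np.zeros((512, 512, 3), dtype=np.float32) for _ in range(4)]
initial_scene = stitch_tiles(tiles, rows=2, cols=2)
```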
In an optional embodiment, the scene image corresponding to the virtual scene may instead be generated as follows: read the render texture of the virtual camera shooting the virtual scene and generate a first image from it; based on the first image, control the virtual camera to zoom and offset within the virtual scene so that the first image is re-rendered as a plurality of blocks, obtaining the corresponding block images; and stitch the block images over the first image to generate the initial scene image corresponding to the virtual scene.
The first image is a large-size image captured by the virtual camera in the virtual scene.
In the above steps, stitching images captured by zooming and moving the camera improves, to a certain extent, the definition of the scene image corresponding to the virtual scene.
In an optional implementation, after the initial scene image corresponding to the virtual scene is generated, the following steps may further be performed to improve its definition and preserve as much material detail as possible: calculate the connection-gap areas produced when the block images are stitched; shoot the areas containing the connection gaps with the virtual camera to obtain the seam images corresponding to those areas; and, based on the positions of the connection-gap areas, superimpose the seam images onto the initial scene image to generate the final scene image corresponding to the virtual scene.
A connection-gap area is the stitching region between adjacent block images; a seam image is an image of a connection-gap area captured by the virtual camera.
Because the block images are essentially images of different blocks, connection gaps can form during stitching. The positions of the gap areas can be calculated, and the virtual camera controlled to capture seam images at those positions; overlaying the seam images on the initial scene image then provides a soft transition, yielding the final scene image. In practice, this approach can upgrade an image whose upper limit would otherwise be, for example, 4K resolution to 8K or even 16K.
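A sketch of the seam-repair pass: compute seam rectangles from the tile grid, then blend a freshly captured patch over each one. The seam half-width and the flat alpha blend are assumptions; the patent only requires that the seam image be superimposed for a soft transition:

```python
import numpy as np

def seam_regions(rows, cols, tile_h, tile_w, half_width=16):
    """Return (y0, y1, x0, x1) rectangles covering the internal tile seams."""
    regions = []
    for r in range(1, rows):                      # horizontal seams
        y = r * tile_h
        regions.append((y - half_width, y + half_width, 0, cols * tile_w))
    for c in range(1, cols):                      # vertical seams
        x = c * tile_w
        regions.append((0, rows * tile_h, x - half_width, x + half_width))
    return regions

def overlay_patch(scene, patch, region, alpha=0.8):
    """Soft-blend a camera-captured seam image over one seam region.
    The patch is assumed to match the region's size exactly."""
    y0, y1, x0, x1 = region
    scene[y0:y1, x0:x1] = alpha * patch + (1.0 - alpha) * scene[y0:y1, x0:x1]
    return scene
```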
As shown in fig. 4, a flow chart for generating a scene image corresponding to a virtual scene is provided, which specifically includes the following steps:
step S401, controlling a virtual camera for shooting a virtual scene to zoom and shift in the virtual scene, so that the virtual scene is rendered in a plurality of blocks;
step S402, obtaining block images corresponding to a plurality of blocks by reading rendering textures of a virtual camera;
step S403, splicing the block images corresponding to the multiple blocks according to a rendering sequence to generate an initial scene image corresponding to a virtual scene;
step S404, calculating a connection gap area generated when the block images corresponding to the plurality of blocks are spliced;
step S405, shooting the areas containing the connection gaps with the virtual camera to obtain the seam images corresponding to the connection-gap areas;
step S406, based on the positions of the connection-gap areas, superimposing the seam images onto the initial scene image to generate the final scene image corresponding to the virtual scene.
By capturing images of the connection-gap areas, unnatural transition areas produced while raising the image resolution can be avoided, further improving the fineness of the virtual scene image.
In step S230, the scene image corresponding to the virtual scene including the three-dimensional model resource and the lighting channel map are superimposed to complete lighting rendering of the virtual scene.
In the embodiment of the present disclosure, the superimposition may be performed by image processing software or a two-dimensional engine, and the embodiment of the present disclosure is not limited herein.
In an optional implementation, superimposing the scene image corresponding to the virtual scene containing the three-dimensional model resource with the lighting channel map to complete the lighting rendering may be implemented as follows: obtain the camera scale of the virtual scene, import the virtual scene into a target engine based on that scale, and obtain the first camera scale corresponding to the virtual scene in the target engine; create another virtual camera for shooting the virtual scene and determine its second camera scale; determine a zoom-ratio parameter from the first and second camera scales; and, after position matching according to the zoom-ratio parameter, superimpose the image of the virtual scene containing the three-dimensional model resource with the lighting channel map.
It should be noted that when the image of the virtual scene is superimposed with the lighting channel map, the two images must be aligned exactly; otherwise the lighting effects and the scene may be misplaced.
The camera scale of the virtual scene is the display-size parameter of the original scene picture. The first camera scale is the camera scale after the virtual scene is imported into the target engine (e.g., the Unity engine). The second camera scale is the camera scale of the additional virtual camera created in the Unity engine to shoot the virtual scene.
For example, the camera scale of the virtual scene may be obtained in the three-dimensional animation software (e.g., 3ds Max), after which the entire virtual scene is imported into the Unity engine to obtain the parameters of the corresponding virtual camera there. Another virtual camera is then created, and the proportional relation between the two cameras' parameters is calculated to obtain the zoom-ratio parameter. Finally, the resolution of the rendered output picture is obtained, and once the zoom ratio is computed, the result can be imported into the image processing software.
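The alignment arithmetic reduces to a single ratio between the two camera scales; a sketch with illustrative numbers:

```python
def zoom_ratio(first_camera_scale, second_camera_scale):
    """Ratio between the imported scene camera and the newly created camera,
    used to pixel-match the Unity render onto the scene image."""
    return first_camera_scale / second_camera_scale

# Illustrative values: imported-scene camera scale vs. new capture camera.
ratio = zoom_ratio(first_camera_scale=5.0, second_camera_scale=4.0)
render_resolution = (3840, 2160)                 # assumed render output size
target_size = tuple(int(round(d * ratio)) for d in render_resolution)
```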
By obtaining the zoom-ratio parameter, the rendering result produced in the Unity engine can be matched pixel-accurately into the virtual scene image in the image processing software, improving the accuracy of the superimposed position.
Further, in this example embodiment, a virtual scene lighting rendering apparatus is also provided. Referring to fig. 5, the virtual scene light rendering apparatus 500 may include:
a resource material module 510, configured to obtain three-dimensional model resources in a virtual scene and create or update materials for them;
a lighting rendering module 520, configured to perform light baking through a rendering pipeline and add pre-made lighting effects to obtain a lighting channel map;
and a scene lighting module 530, configured to superimpose the scene image corresponding to the virtual scene containing the three-dimensional model resources with the lighting channel map to complete the lighting rendering of the virtual scene.
In an alternative embodiment, the resource material module 510 further includes: a material creating module, configured to create a preset default material when the three-dimensional model resource has no material of its own; or a material updating module, configured, when the resource has its own material, to classify and format-convert its original maps, determine the new maps required by the rendering pipeline, and assign the new maps to the own material so as to update the resource's material.
In an alternative embodiment, the new maps required by the rendering pipeline in the virtual scene lighting rendering apparatus 500 may include one or more of the following: mask map, normal map.
In an optional implementation, the material updating module may further include a mask map generation module, which may be configured to: convert the roughness map corresponding to the three-dimensional model resource into a smoothness map; and mix the metallic, occlusion, detail, and smoothness maps corresponding to the resource to generate a new mask map for it; where the roughness, metallic, occlusion, and detail maps are original maps of the resource.
In an optional implementation manner, if the original map of the three-dimensional model resource does not include the roughness map, the material updating module may further include: the roughness generating module is used for generating a roughness map corresponding to the three-dimensional model resource based on the diffuse reflection map corresponding to the three-dimensional model resource; wherein, the diffuse reflection map is the original map of the three-dimensional model resource.
In an alternative embodiment, the roughness generating module may be configured to: determining the category of the material corresponding to the diffuse reflection map based on the diffuse reflection map corresponding to the three-dimensional model resource; acquiring a material map matched with the category of the material in the general material library; and mixing the material map with the diffuse reflection map to generate a roughness map corresponding to the three-dimensional model resource.
In an optional implementation manner, the material update module may further include: the normal map generating module is used for generating a normal map for the three-dimensional model resource based on the diffuse reflection map corresponding to the three-dimensional model resource; wherein, the diffuse reflection map is the original map of the three-dimensional model resource.
In an optional implementation manner, before superimposing the scene image and the light channel map corresponding to the virtual scene including the three-dimensional model resource, the virtual scene light rendering apparatus 500 further includes: a scene blocking module for controlling a virtual camera for shooting a virtual scene to zoom and shift in the virtual scene so that the virtual scene is rendered in a plurality of blocks; the block image generation module is used for reading rendering textures of the virtual camera to obtain block images corresponding to the blocks; and the first image splicing module is used for splicing the block images corresponding to the multiple blocks according to a rendering sequence to generate an initial scene image corresponding to the virtual scene.
In an optional embodiment, the virtual scene light rendering apparatus 500 further includes: a gap area determining module, configured to calculate the connection-gap areas produced when the block images are stitched; a seam image generation module, configured to shoot the areas containing the connection gaps with the virtual camera to obtain the corresponding seam images; and a second image stitching module, configured to superimpose the seam images onto the initial scene image based on the positions of the connection-gap areas to generate the final scene image corresponding to the virtual scene.
In an optional embodiment, the light rendering module 520 may be configured to: render the pre-made lighting effects in the engine to obtain a lighting-effect render; output a render without the lighting channel; and superimpose the lighting-effect render onto the render without the lighting channel to obtain the lighting channel map.
In an optional implementation, the light rendering module 520 may alternatively be configured to: turn off the rendering of directional light, and output the diffuse, roughness, and normal channel data for the pre-made lighting effects respectively to obtain the lighting channel map.
In an alternative embodiment, the scene lighting module 530 may be configured to: obtain the camera scale of the virtual scene, import the virtual scene into a target engine based on that scale, and obtain the first camera scale corresponding to the virtual scene in the target engine; create another virtual camera for shooting the virtual scene and determine its second camera scale; determine a zoom-ratio parameter from the first and second camera scales; and, after position matching according to the zoom-ratio parameter, superimpose the image of the virtual scene containing the three-dimensional model resource with the lighting channel map.
The specific details of each part in the virtual scene lighting rendering apparatus 500 are described in detail in the embodiment of the method part, and details that are not disclosed may refer to the embodiment of the method part, and thus are not described again.
Exemplary embodiments of the present disclosure also provide a computer-readable storage medium having stored thereon a program product capable of implementing the above-described virtual scene light rendering method of the present specification. In some possible embodiments, various aspects of the disclosure may also be implemented in the form of a program product comprising program code for causing an electronic device to perform the steps according to various exemplary embodiments of the disclosure described in the above-mentioned "exemplary methods" section of this specification, when the program product is run on the electronic device. The program product may employ a portable compact disc read only memory (CD-ROM) and include program code, and may be run on an electronic device, such as a personal computer. However, the program product of the present disclosure is not limited thereto, and in this document, a readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
The program product may employ any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. The readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium include: an electrical connection having one or more wires, a portable disk, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
A computer readable signal medium may include a propagated data signal with readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A readable signal medium may also be any readable medium that is not a readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Program code for carrying out operations for the present disclosure may be written in any combination of one or more programming languages, including an object oriented programming language such as Java or C++ and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device, as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on the remote computing device or server. In the case of a remote computing device, the remote computing device may be connected to the user computing device through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computing device (e.g., through the internet using an internet service provider).
The exemplary embodiment of the present disclosure also provides an electronic device capable of implementing the above virtual scene light rendering method. An electronic device 600 according to this exemplary embodiment of the present disclosure is described below with reference to fig. 6. The electronic device 600 shown in fig. 6 is only an example and should not bring any limitations to the function and scope of use of the embodiments of the present disclosure.
As shown in fig. 6, electronic device 600 may take the form of a general purpose computing device. The components of the electronic device 600 may include, but are not limited to: at least one processing unit 610, at least one memory unit 620, a bus 630 that couples various system components including the memory unit 620 and the processing unit 610, and a display unit 640.
The storage unit 620 stores program code, which may be executed by the processing unit 610, to cause the processing unit 610 to perform the steps according to various exemplary embodiments of the present disclosure described in the above section "exemplary method" of this specification:
in particular, the program product stored on the computer-readable storage medium may cause the electronic device to perform the steps of:
obtaining three-dimensional model resources in a virtual scene, and creating or updating materials for them;
performing light baking through a rendering pipeline and adding pre-made lighting effects to obtain a lighting channel map;
and superimposing the scene image corresponding to the virtual scene containing the three-dimensional model resources with the lighting channel map to complete the lighting rendering of the virtual scene.
In an optional implementation, the step of creating or updating a material for the three-dimensional model resource includes: when the resource has no material of its own, creating a preset default material; or, when the resource has its own material, classifying and format-converting its original maps, determining the new maps required by the rendering pipeline, and assigning the new maps to the own material so as to update the resource's material.
In an alternative embodiment, the new map required by the rendering pipeline includes one or more of the following: mask map, normal map.
In an optional implementation, classifying and format-converting the original maps of the three-dimensional model resource and determining the new maps required by the rendering pipeline may be implemented as follows: convert the roughness map corresponding to the resource into a smoothness map; and mix the metallic, occlusion, detail, and smoothness maps corresponding to the resource to generate a new mask map for it; where the roughness, metallic, occlusion, and detail maps are original maps of the resource.
In an optional implementation manner, if the original map of the three-dimensional model resource does not include the roughness map, the following steps may be further performed: generating a roughness map corresponding to the three-dimensional model resource based on the diffuse reflection map corresponding to the three-dimensional model resource; wherein, the diffuse reflection map is the original map of the three-dimensional model resource.
In an optional implementation manner, the generating the roughness map corresponding to the three-dimensional model resource based on the diffuse reflection map corresponding to the three-dimensional model resource may be implemented by the following steps: determining the category of the material corresponding to the diffuse reflection map based on the diffuse reflection map corresponding to the three-dimensional model resource; acquiring a material map matched with the category of the material in the general material library; and mixing the material map with the diffuse reflection map to generate a roughness map corresponding to the three-dimensional model resource.
In an optional implementation manner, the classifying and format converting the original map of the three-dimensional model resource to determine a new map required for rendering the pipeline for the three-dimensional model resource may be implemented by the following steps: generating a normal map for the three-dimensional model resource based on the diffuse reflection map corresponding to the three-dimensional model resource; wherein, the diffuse reflection map is the original map of the three-dimensional model resource.
In an optional embodiment, before superimposing the scene image corresponding to the virtual scene containing the three-dimensional model resource with the lighting channel map, the following steps may further be performed: control the virtual camera shooting the virtual scene to zoom and offset within the scene so that the virtual scene is rendered in a plurality of blocks; obtain the block images corresponding to the blocks by reading the virtual camera's render texture; and stitch the block images together in rendering order to generate the initial scene image corresponding to the virtual scene.
In an alternative embodiment, the following steps can also be performed: calculate the connection-gap areas produced when the block images are stitched; shoot the areas containing the connection gaps with the virtual camera to obtain the corresponding seam images; and, based on the positions of the connection-gap areas, superimpose the seam images onto the initial scene image to generate the final scene image corresponding to the virtual scene.
In an optional embodiment, obtaining the lighting channel map by performing light baking through the rendering pipeline and adding pre-made lighting effects may be implemented as follows: render the pre-made lighting effects in the engine to obtain a lighting-effect render; output a render without the lighting channel; and superimpose the lighting-effect render onto the render without the lighting channel to obtain the lighting channel map.
In an optional embodiment, it may alternatively be implemented as follows: turn off the rendering of directional light, and output the diffuse, roughness, and normal channel data for the pre-made lighting effects respectively to obtain the lighting channel map.
In an optional implementation, superimposing the light channel map and the scene image corresponding to the virtual scene including the three-dimensional model resource to complete the lighting rendering may be implemented by the following steps: acquiring the camera scale of the virtual scene, importing the virtual scene into a target engine based on that camera scale, and obtaining a first camera scale corresponding to the virtual scene in the target engine; creating another virtual camera for shooting the virtual scene and determining its second camera scale; determining a zoom ratio parameter from the first camera scale and the second camera scale; and, according to the zoom ratio parameter, superimposing the light channel map and the image of the virtual scene including the three-dimensional model resource after position matching.
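A sketch of the scale matching and final superimposition follows; the quotient formula for the zoom ratio, the centered position matching, and the additive blend are assumptions, as the disclosure does not spell them out:

    import numpy as np
    from PIL import Image

    def composite_scene(scene: Image.Image, light_map: Image.Image,
                        first_scale: float, second_scale: float) -> Image.Image:
        """Superimpose the light channel map on the scene image after scaling
        it by the zoom ratio and matching positions (centered here)."""
        ratio = second_scale / first_scale
        scaled = light_map.resize((round(light_map.width * ratio),
                                   round(light_map.height * ratio)))
        canvas = Image.new("RGB", scene.size, (0, 0, 0))
        canvas.paste(scaled, ((scene.width - scaled.width) // 2,
                              (scene.height - scaled.height) // 2))
        s = np.asarray(scene.convert("RGB"), dtype=np.float32)
        l = np.asarray(canvas, dtype=np.float32)
        return Image.fromarray((s + l).clip(0, 255).astype(np.uint8))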
By implementing the technical solutions of the embodiments of the present disclosure, on the one hand, materials can be determined according to the resources of different virtual scenes, preserving material detail during lighting rendering and improving the display fineness of the virtual scene; on the other hand, baking the lighting and superimposing the prefabricated lighting effect on the virtual scene improves the efficiency of virtual scene lighting rendering. Both the display fineness and the realism of virtual scene lighting rendering can thus be improved.
The storage unit 620 may include readable media in the form of volatile memory units, such as a random access memory unit (RAM) 621 and/or a cache memory unit 622, and may further include a read-only memory unit (ROM) 623.
The storage unit 620 may also include a program/utility 624 having a set (at least one) of program modules 625, such program modules 625 including, but not limited to: an operating system, one or more application programs, other program modules, and program data, each of which, or some combination thereof, may comprise an implementation of a network environment.
Bus 630 may represent one or more of several types of bus structures, including a memory unit bus or memory unit controller, a peripheral bus, an accelerated graphics port, and a processor or local bus using any of a variety of bus architectures.
The electronic device 600 may also communicate with one or more external devices 700 (e.g., a keyboard, a pointing device, a Bluetooth device, etc.), with one or more devices that enable a user to interact with the electronic device 600, and/or with any device (e.g., a router, a modem, etc.) that enables the electronic device 600 to communicate with one or more other computing devices. Such communication may occur via an input/output (I/O) interface 650. The electronic device 600 may also communicate with one or more networks (e.g., a local area network (LAN), a wide area network (WAN), and/or a public network such as the Internet) via the network adapter 660. As shown, the network adapter 660 communicates with the other modules of the electronic device 600 over the bus 630. It should be understood that, although not shown in the figures, other hardware and/or software modules may be used in conjunction with the electronic device 600, including but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, and data backup storage systems.
Through the above description of the embodiments, those skilled in the art will readily understand that the exemplary embodiments described herein may be implemented by software, or by software in combination with necessary hardware. Therefore, the technical solution according to the embodiments of the present disclosure may be embodied in the form of a software product, which may be stored in a non-volatile storage medium (which may be a CD-ROM, a USB flash disk, a removable hard disk, etc.) or on a network, and which includes several instructions to enable a computing device (which may be a personal computer, a server, a terminal device, a network device, etc.) to execute the method according to the exemplary embodiments of the present disclosure.
Furthermore, the above-described figures are merely schematic illustrations of processes included in methods according to exemplary embodiments of the present disclosure, and are not intended to be limiting. It will be readily appreciated that the processes illustrated in the above figures are not intended to indicate or limit the temporal order of the processes. In addition, it is also readily understood that these processes may be performed, for example, synchronously or asynchronously in multiple modules.
It should be noted that although several modules or units of the device for action execution are mentioned in the above detailed description, such a division is not mandatory. Indeed, according to exemplary embodiments of the present disclosure, the features and functions of two or more modules or units described above may be embodied in one module or unit. Conversely, the features and functions of one module or unit described above may be further divided and embodied by a plurality of modules or units.
As will be appreciated by one skilled in the art, aspects of the present disclosure may be embodied as a system, a method, or a program product. Accordingly, various aspects of the present disclosure may take the form of an entirely hardware embodiment, an entirely software embodiment (including firmware, microcode, etc.), or an embodiment combining hardware and software aspects, which may all generally be referred to herein as a "circuit," a "module," or a "system."

Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following the general principles thereof and including such departures from the present disclosure as come within known or customary practice in the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with the true scope and spirit of the disclosure being indicated by the following claims.
It will be understood that the present disclosure is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is to be limited only by the following claims.

Claims (15)

1. A method for rendering virtual scene lighting, the method comprising:
acquiring a three-dimensional model resource in a virtual scene, and creating or updating a material for the three-dimensional model resource;
performing lighting baking through a rendering pipeline and adding a prefabricated lighting effect to obtain a light channel map; and
superimposing the light channel map and a scene image corresponding to the virtual scene including the three-dimensional model resource to complete the lighting rendering of the virtual scene.
2. The method of claim 1, wherein the step of creating or updating a material for the three-dimensional model resource comprises:
when the three-dimensional model resource has no material of its own, creating a preset default material; or
when the three-dimensional model resource has its own material, classifying and format-converting the original maps of the three-dimensional model resource to determine new maps required by the rendering pipeline for the three-dimensional model resource;
and assigning the new maps to the resource's own material so as to update the material of the three-dimensional model resource.
3. The method of claim 2, wherein the new maps required by the rendering pipeline comprise one or more of the following: a mask map and a normal map.
4. The method of claim 3, wherein classifying and format-converting the original maps of the three-dimensional model resource to determine the new maps required by the rendering pipeline comprises:
converting a roughness map corresponding to the three-dimensional model resource into a smoothness map; and
blending a metal map, a shielding map, and a detail map corresponding to the three-dimensional model resource with the smoothness map to generate a new mask map for the three-dimensional model resource;
wherein the roughness map, the metal map, the shielding map, and the detail map are original maps of the three-dimensional model resource.
5. The method of claim 4, wherein, if the original maps of the three-dimensional model resource do not contain a roughness map, the method further comprises:
generating a roughness map corresponding to the three-dimensional model resource based on a diffuse reflection map corresponding to the three-dimensional model resource;
wherein the diffuse reflection map is an original map of the three-dimensional model resource.
6. The method of claim 5, wherein generating the roughness map corresponding to the three-dimensional model resource based on the diffuse reflection map comprises:
determining a material category corresponding to the diffuse reflection map based on the diffuse reflection map corresponding to the three-dimensional model resource;
acquiring, from a general material library, a material map matching the material category; and
blending the material map with the diffuse reflection map to generate the roughness map corresponding to the three-dimensional model resource.
7. The method of claim 3, wherein classifying and format-converting the original maps of the three-dimensional model resource to determine the new maps required by the rendering pipeline comprises:
generating a normal map for the three-dimensional model resource based on a diffuse reflection map corresponding to the three-dimensional model resource;
wherein the diffuse reflection map is an original map of the three-dimensional model resource.
8. The method of claim 1, wherein, before superimposing the light channel map and the scene image corresponding to the virtual scene including the three-dimensional model resource, the method further comprises:
controlling a virtual camera used to capture the virtual scene to zoom and shift within the virtual scene, such that the virtual scene is rendered in a plurality of blocks;
obtaining block images corresponding to the plurality of blocks by reading a render texture of the virtual camera; and
stitching the block images corresponding to the plurality of blocks in rendering order to generate an initial scene image corresponding to the virtual scene.
9. The method of claim 8, further comprising:
calculating connection gap areas produced when the block images corresponding to the plurality of blocks are stitched;
capturing the areas containing the connection gaps with the virtual camera to obtain connection images corresponding to the connection gap areas; and
overlaying, based on the positions of the connection gap areas, the connection images on the initial scene image to generate a final scene image corresponding to the virtual scene.
10. The method of claim 1, wherein performing lighting baking through the rendering pipeline and adding the prefabricated lighting effect to obtain the light channel map comprises:
rendering the prefabricated lighting effect in an engine to obtain a lighting-effect render; and
outputting a render without an illumination channel, and superimposing the lighting-effect render on the render without the illumination channel to obtain the light channel map.
11. The method of claim 1, wherein performing lighting baking through the rendering pipeline and adding the prefabricated lighting effect to obtain the light channel map comprises:
turning off rendering of the parallel light, and outputting the prefabricated lighting effect to a diffuse reflection channel, a roughness channel, and a normal channel respectively to obtain the light channel map.
12. The method of claim 1, wherein superimposing the light channel map and the scene image corresponding to the virtual scene including the three-dimensional model resource to complete the lighting rendering of the virtual scene comprises:
acquiring a camera scale of the virtual scene, importing the virtual scene into a target engine based on the camera scale, and obtaining a first camera scale corresponding to the virtual scene in the target engine;
creating another virtual camera for shooting the virtual scene, and determining a second camera scale of the other virtual camera;
determining a zoom ratio parameter from the first camera scale and the second camera scale; and
superimposing, according to the zoom ratio parameter and after position matching, the light channel map and the image of the virtual scene including the three-dimensional model resource.
13. An apparatus for rendering virtual scene lighting, the apparatus comprising:
a resource material module, configured to acquire a three-dimensional model resource in a virtual scene and create or update a material for the three-dimensional model resource;
a lighting rendering module, configured to perform lighting baking through a rendering pipeline and add a prefabricated lighting effect to obtain a light channel map; and
a scene lighting module, configured to superimpose the light channel map and a scene image corresponding to the virtual scene including the three-dimensional model resource to complete the lighting rendering of the virtual scene.
14. A computer-readable storage medium on which a computer program is stored, the program, when executed by a processor, implementing the virtual scene light rendering method of any one of claims 1 to 12.
15. An electronic device, comprising:
one or more processors;
a storage device to store one or more programs that, when executed by the one or more processors, cause the one or more processors to implement the virtual scene light rendering method of any one of claims 1 to 12.
CN202211599567.6A 2022-12-12 2022-12-12 Virtual scene light rendering method and device, storage medium and electronic equipment Pending CN115908716A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211599567.6A CN115908716A (en) 2022-12-12 2022-12-12 Virtual scene light rendering method and device, storage medium and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211599567.6A CN115908716A (en) 2022-12-12 2022-12-12 Virtual scene light rendering method and device, storage medium and electronic equipment

Publications (1)

Publication Number Publication Date
CN115908716A (en) 2023-04-04

Family

ID=86492354

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211599567.6A Pending CN115908716A (en) 2022-12-12 2022-12-12 Virtual scene light rendering method and device, storage medium and electronic equipment

Country Status (1)

Country Link
CN (1) CN115908716A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116188698A (en) * 2023-04-23 2023-05-30 阿里巴巴达摩院(杭州)科技有限公司 Object processing method and electronic equipment
CN116188698B (en) * 2023-04-23 2023-09-12 阿里巴巴达摩院(杭州)科技有限公司 Object processing method and electronic equipment
CN116347003A (en) * 2023-05-30 2023-06-27 湖南快乐阳光互动娱乐传媒有限公司 Virtual lamplight real-time rendering method and device
CN116347003B (en) * 2023-05-30 2023-08-11 湖南快乐阳光互动娱乐传媒有限公司 Virtual lamplight real-time rendering method and device

Similar Documents

Publication Publication Date Title
CN111932664B (en) Image rendering method and device, electronic equipment and storage medium
JP5891425B2 (en) Video providing device, video providing method and video providing program capable of providing follow-up video
CN115908716A (en) Virtual scene light rendering method and device, storage medium and electronic equipment
US4970666A (en) Computerized video imaging system for creating a realistic depiction of a simulated object in an actual environment
CN110969685A (en) Customizable rendering pipeline using rendering maps
CN111968216A (en) Volume cloud shadow rendering method and device, electronic equipment and storage medium
CN113674389B (en) Scene rendering method and device, electronic equipment and storage medium
US9183654B2 (en) Live editing and integrated control of image-based lighting of 3D models
CN114677467B (en) Terrain image rendering method, device, equipment and computer readable storage medium
CN114119853B (en) Image rendering method, device, equipment and medium
CN112891946B (en) Game scene generation method and device, readable storage medium and electronic equipment
CN113763231A (en) Model generation method, image perspective determination device, image perspective determination equipment and medium
CN113110731B (en) Method and device for generating media content
US20210241539A1 (en) Broker For Instancing
CN110378948B (en) 3D model reconstruction method and device and electronic equipment
CN117197296A (en) Traffic road scene simulation method, electronic equipment and storage medium
JP2007272847A (en) Lighting simulation method and image composition method
US20210241540A1 (en) Applying Non-Destructive Edits To Nested Instances For Efficient Rendering
CN112070904A (en) Augmented reality display method applied to museum
CN116402984B (en) Three-dimensional model processing method and device and electronic equipment
Callieri et al. A realtime immersive application with realistic lighting: The Parthenon
WO2024124370A1 (en) Model construction method and apparatus, storage medium, and electronic device
US20230325908A1 (en) Method of providing interior design market platform service using virtual space content data-based realistic scene image and device thereof
WO2023197729A1 (en) Object rendering method and apparatus, electronic device, and storage medium
CN115317909A (en) Virtual scene generation method and device, storage medium and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination