WO2023098358A1 - Model rendering method and apparatus, computer device, and storage medium - Google Patents

Model rendering method and apparatus, computer device, and storage medium Download PDF

Info

Publication number
WO2023098358A1
Authority
WO
WIPO (PCT)
Prior art keywords
target
model
fusion layer
map
color
Prior art date
Application number
PCT/CN2022/128075
Other languages
French (fr)
Chinese (zh)
Inventor
宋田骥
刘欢
陈烨
Original Assignee
北京字跳网络技术有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 北京字跳网络技术有限公司 filed Critical 北京字跳网络技术有限公司
Publication of WO2023098358A1 publication Critical patent/WO2023098358A1/en

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/003D [Three Dimensional] image rendering
    • G06T15/10Geometric effects
    • G06T15/20Perspective computation
    • G06T15/205Image-based rendering
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/003D [Three Dimensional] image rendering
    • G06T15/50Lighting effects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/003D [Three Dimensional] image rendering
    • G06T15/50Lighting effects
    • G06T15/60Shadow generation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00Three dimensional [3D] modelling, e.g. data description of 3D objects

Definitions

  • the present disclosure relates to the technical field of computer graphics, in particular, to a model rendering method, device, computer equipment and storage medium.
  • users have increasingly high requirements for the styles of 3D models in 3D scenes, such as realistic, cartoon, hand-painted, and other diversified styles.
  • in a 3D scene rendered in a cartoon style, some 3D models lack vividness and a realistic feel when rendered.
  • Embodiments of the present disclosure at least provide a model rendering method, device, computer equipment, and storage medium.
  • an embodiment of the present disclosure provides a model rendering method, including:
  • rendering processing is performed on the target three-dimensional model to obtain a rendered target three-dimensional model.
  • the first fusion processing is performed on the color map and the material capture map to obtain a first fusion layer, including:
  • the sub-fusion layers corresponding to the respective patches are integrated to obtain the first fusion layer.
  • performing rendering processing on the target 3D model based on the second fusion layer to obtain the rendered target 3D model includes:
  • Rendering is performed on the target three-dimensional model based on the photosensitive-processed second fusion layer to obtain a rendered target three-dimensional model.
  • the photosensitive parameter information includes at least one of metal light and shadow intensity information, metal reflection intensity information, and environmental reflection color information corresponding to a reflective area on the three-dimensional target model.
  • the method also includes:
  • the step of rendering the target 3D model based on the second fusion layer to obtain the rendered target 3D model includes:
  • the color scale information includes the color scale corresponding to each color scale color value, the proportion information of each color level in the light-receiving area in the target three-dimensional model, and the color fusion information of adjacent color levels in each color level;
  • the target three-dimensional model is rendered to obtain the rendered target three-dimensional model.
  • the method also includes:
  • the method further includes:
  • the second fusion processing of the cube map and the first fusion layer is carried out to obtain the second fusion layer, including:
  • the method further includes:
  • the step of rendering the target 3D model based on the second fusion layer to obtain the rendered target 3D model includes:
  • the target three-dimensional model is rendered to obtain a rendered target three-dimensional model.
  • the method also includes:
  • the step of rendering the target 3D model based on the second fusion layer to obtain the rendered target 3D model includes:
  • Rendering is performed on the target 3D model based on the second fusion layer after the light and shadow shape processing, to obtain the rendered target 3D model.
  • the embodiment of the present disclosure further provides a model rendering device, including:
  • An acquisition module configured to acquire a color map corresponding to the target three-dimensional model
  • a generating module configured to generate, based on a preset lighting direction, a material capture map for representing material photosensitive information of the target 3D model under the preset lighting direction and a cube map of photosensitive information in different three-dimensional position areas;
  • a first processing module configured to perform a first fusion process on the color map and the texture capture map to obtain a first fusion layer
  • a second processing module configured to perform a second fusion process on the cubemap and the first fusion layer to obtain a second fusion layer
  • the third processing module is configured to perform rendering processing on the target 3D model based on the second fusion layer to obtain a rendered target 3D model.
  • an embodiment of the present disclosure further provides a computer device, including: a processor, a memory, and a bus, the memory storing machine-readable instructions executable by the processor; when the computer device is running, the processor communicates with the memory through the bus, and when the machine-readable instructions are executed by the processor, the steps in the above first aspect, or in any possible implementation of the first aspect, are executed.
  • the embodiments of the present disclosure further provide a computer-readable storage medium on which a computer program is stored; when the computer program is run by a processor, the steps in the above first aspect, or in any possible implementation of the first aspect, are executed.
  • an embodiment of the present disclosure further provides a computer program, which executes the steps in the above first aspect or any possible implementation manner of the first aspect when the computer program is run by a processor.
  • the embodiments of the present disclosure further provide a computer program product; the computer program product includes a computer program, and when the computer program is run by a processor, the steps in the above first aspect, or in any possible implementation of the first aspect, are executed.
  • the color map used for rendering the target 3D model and the material capture map representing the photosensitive information of the material are first fused, so that the target 3D model can present the basic material texture (for example, a metal texture); then the second fusion process is performed on the cube map, which represents the photosensitive information of different three-dimensional position areas of the target 3D model, and the first fusion layer, realizing a stereoscopic visual display effect on the target 3D model;
  • when the target 3D model is then rendered, it can achieve both stereoscopic and material-texture visual effects, making the rendering effect of the target 3D model more realistic and vivid.
  • FIG. 1 shows a flowchart of a model rendering method provided by an embodiment of the present disclosure
  • FIG. 2 shows a schematic diagram of the effect of a material capture map of a metal ball provided by an embodiment of the present disclosure
  • FIG. 3 shows a schematic diagram of the effect of a cube map of a metal ball provided by an embodiment of the present disclosure
  • Fig. 4 shows a schematic diagram of the effect of a color scale map provided by an embodiment of the present disclosure
  • Fig. 5 shows a schematic diagram of the rendering effect of a metal ball provided by an embodiment of the present disclosure
  • Fig. 6 shows a schematic diagram of a model rendering device provided by an embodiment of the present disclosure
  • Fig. 7 shows a schematic diagram of a computer device provided by an embodiment of the present disclosure.
  • 3D models can be displayed to users in 3D scenes. Users have higher and higher requirements for the styles of 3D models in 3D scenes, such as realistic, cartoon, hand-painted and other diversified styles. In the 3D scene rendered in cartoon style, some 3D models lack vividness and realistic experience when rendering.
  • the present disclosure provides a model rendering method that enables the target 3D model to present the basic material texture (such as a metal texture); the cube map representing the photosensitive information of different three-dimensional position areas of the target 3D model and the first fusion layer are then subjected to the second fusion process, realizing a stereoscopic visual display effect on the target 3D model;
  • when the target 3D model is then rendered, a stereoscopic and textured visual effect can be achieved, making the rendering effect of the target 3D model more realistic.
  • the execution subject of the model rendering method provided in the embodiments of the present disclosure is generally a computer device with certain computing capabilities.
  • the model rendering method provided by the embodiments of the present disclosure will be described below, taking a server as the execution subject as an example.
  • FIG. 1 is a flowchart of a model rendering method provided by an embodiment of the present disclosure, the method includes S101-S105, wherein:
  • the target three-dimensional model may be a three-dimensional model to be rendered in the virtual space applied in the target game scene, such as a virtual character model, a virtual object model, and the like.
  • the target three-dimensional model can be produced using animation rendering and production software based on a personal computer (PC) system, such as 3D Studio Max (abbreviated as 3DS Max or 3D Max), Maya, or other three-dimensional model production software.
  • the produced target three-dimensional model can be unfolded to obtain a two-dimensional image under the UV coordinate system (in UV coordinates, U represents the horizontal coordinate axis, and V represents the vertical coordinate axis).
  • Each UV coordinate value in the two-dimensional image obtained after unfolding may correspond to each point on the surface of the target three-dimensional model.
  • the color map may contain the color information of the virtual character itself corresponding to the target 3D model. Specifically, the color information contained in the color map corresponds to the UV coordinate value, and the color map may contain the color information under each UV coordinate value in the two-dimensional image obtained after the target three-dimensional model is unfolded.
  • the color map can be drawn by image processing software, such as drawing software Photoshop or other drawing software.
  • S102 Based on a preset lighting direction, generate a material capture map for representing material photosensitive information of the target 3D model under the preset lighting direction and a cube map of photosensitive information in different three-dimensional position areas.
  • the Material Capture (Matcap) texture is a two-dimensional plane texture under the preset lighting direction.
  • the generated material capture map can contain the 2D material photosensitive information obtained by the camera from the front of the target 3D model under the preset lighting direction when the pose of the target 3D model remains unchanged.
  • the material capture map may include information such as illumination and reflection on the surface of the target material (such as metal) in the rendering target 3D model.
  • the lighting information of the material used to render the target 3D model under light can be presented.
  • Figure 2 is a schematic diagram of the effect of a material capture map of a metal ball. From Figure 2, we can see the light-receiving area, the reflection area and the shadow area contained in the material capture map.
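A matcap such as the one in Figure 2 is conventionally sampled by the fragment's view-space normal; the disclosure does not spell out the lookup, so the sketch below shows the standard remapping of the normal's x and y components from [-1, 1] to [0, 1], with illustrative names:

```python
def matcap_uv(normal):
    """Map a view-space unit normal (nx, ny, nz) to matcap texture
    coordinates. The matcap is a 2D image of a lit sphere; a fragment's
    view-space normal indexes into it directly, with x and y remapped
    from [-1, 1] to [0, 1]."""
    nx, ny, _ = normal
    return (nx * 0.5 + 0.5, ny * 0.5 + 0.5)
```

A normal facing the camera samples the centre of the sphere image, which is why the matcap's baked lighting follows the surface regardless of model orientation.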
  • Cube maps can be obtained by baking. Based on the preset lighting direction, the generated cube map can contain the photosensitive information obtained by the camera from six directions under the preset lighting direction and the pose of the target 3D model remains unchanged.
  • the cube map may contain the photosensitive information of different three-dimensional position areas in the target three-dimensional model under the above-mentioned preset lighting direction, specifically, it may include information such as shadow areas and reflective areas to be rendered.
  • Figure 3 is a schematic diagram of the effect of a cube map of a metal ball. From Figure 3, the reflective area and the shadow area contained in the cube map can be seen.
  • the obtained color map, as well as the generated material capture map and cube map, can be added to the game engine, and the information contained in the color map, material capture map, and cube map can be processed by the game engine to obtain rendering information for rendering the target 3D model.
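For context on how a cube map like the one in Figure 3 is addressed (standard cube-mapping behaviour, not something the disclosure specifies), a 3D direction vector selects one of the six faces by its dominant axis:

```python
def cubemap_face(direction):
    """Select which of the six cube-map faces a direction vector samples.
    Returns one of '+x', '-x', '+y', '-y', '+z', '-z', chosen by the
    component with the largest absolute value."""
    x, y, z = direction
    ax, ay, az = abs(x), abs(y), abs(z)
    if ax >= ay and ax >= az:
        return '+x' if x >= 0 else '-x'
    if ay >= az:
        return '+y' if y >= 0 else '-y'
    return '+z' if z >= 0 else '-z'
```

Within the chosen face, the remaining two components are divided by the dominant one to form the 2D texture coordinate.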
  • S103 Perform a first fusion process on the color map and the texture capture map to obtain a first fusion layer.
  • the first fusion process of the color map and the material capture map may be: combining the color information of the virtual character itself corresponding to the target 3D model contained in the color map with the target 3D model contained in the material capture map Sensitivity information is fused.
  • the first fusion processing is a process of sampling the color information contained in the color map and the photosensitive information of the target 3D model contained in the material capture map.
  • the obtained first fusion layer includes the first rendering information after the aforementioned color information and photosensitive information are fused.
  • the 3D model is composed of patches.
  • the first fusion processing can be performed separately on the sub-color map corresponding to each patch of the target 3D model in the color map and on the material capture map, to obtain the sub-fusion layer corresponding to each patch. Then, the sub-fusion layers corresponding to the patches are integrated to obtain the first fusion layer.
  • the sub-color map corresponding to each patch contains the color information of the corresponding position of the virtual character corresponding to the patch.
  • the sub-fusion layer corresponding to each patch may contain the first rendering information after the color information of the patch is fused with the photosensitive information contained in the material capture map.
  • the sub-fusion layers corresponding to each patch are integrated to obtain the first fusion layer.
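A minimal per-texel sketch of the first fusion. The disclosure does not fix the operator at this point, so the multiply-then-clamp form (and the x2 scale, a common matcap blend convention) is an assumption, and the names are illustrative:

```python
def first_fusion(color_texel, matcap_texel):
    """Fuse one color-map texel with the matcap texel sampled for the
    same fragment. Assumed operator: component-wise multiply, scaled by
    2 so mid-grey matcap values leave the base color unchanged, then
    clamped to [0, 1]."""
    return tuple(min(1.0, max(0.0, c * m * 2.0))
                 for c, m in zip(color_texel, matcap_texel))
```

Running this per patch and collecting the results corresponds to the sub-fusion layers described above; stitching them back over the UV layout yields the first fusion layer.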
  • S104 Perform a second fusion process on the cubemap and the first fusion layer to obtain a second fusion layer.
  • the process of performing the second fusion process on the cube map and the first fusion layer may be: fusing the photosensitive information of different three-dimensional position regions of the target three-dimensional model contained in the cube map with the rendering information of the above-mentioned first fusion layer.
  • the second fusion processing process is the process of sampling the photosensitive information of different three-dimensional position regions in the target three-dimensional model contained in the cube map and the rendering information contained in the aforementioned first fusion layer.
  • the obtained second fusion layer includes second rendering information that can be used to render the target three-dimensional model.
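The second fusion can likewise be sketched as modulating the first-fusion result by the cube-map light factor sampled at the fragment's 3D position; the multiplicative operator is an assumption, since the disclosure only requires that the positional photosensitive information be folded in:

```python
def second_fusion(first_layer_rgb, cube_light):
    """Modulate the first-fusion RGB by the cube-map light factor for
    the fragment's position (0.0 = fully shadowed, 1.0 = fully lit),
    producing the second rendering information."""
    return tuple(c * cube_light for c in first_layer_rgb)
```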
  • S105 Based on the second fusion layer, perform rendering processing on the target three-dimensional model to obtain a rendered target three-dimensional model.
  • the second rendering information contained in the second fusion layer can be used for direct rendering. The rendered target 3D model thus fuses the color information corresponding to each point of the target 3D model stored in the color map, the planar light and shadow information stored in the material capture map representing the material photosensitive information of the target 3D model under the preset lighting direction, and the three-dimensional light and shadow information stored in the cube map representing the photosensitive information of different 3D position areas of the target 3D model under the preset lighting direction,
  • so that the rendered target three-dimensional model can present the stereoscopic visual effect of the target material under the preset lighting direction, as shown in Figure 5, a schematic diagram of the effect of a metal ball after rendering.
  • photosensitive processing may be performed on the second fusion layer to obtain the photosensitive-processed second fusion layer. Then, based on the photosensitive-processed second fusion layer, the target three-dimensional model is rendered to obtain the rendered target three-dimensional model.
  • the input photosensitive parameter information can increase photosensitive effects of the target material (such as metal) in the target three-dimensional model, such as light and shadow intensity effects, reflection intensity effects, reflection colors, and the like. Therefore, in yet another embodiment, the input photosensitive parameter information may include at least one of metal light and shadow intensity information, metal reflection intensity information, and environment reflection color information corresponding to the reflection area on the target three-dimensional model.
  • the intensity of the light and shadow in the target three-dimensional model can be adjusted based on the intensity information of the metal light and shadow.
  • the metal light and shadow intensity information is used to perform photosensitive processing on the second fused layer, and the metal light and shadow intensity information is fused in the obtained second fused layer after photosensitive processing.
  • when the target 3D model is rendered with the photosensitive-processed second fusion layer, the light and shadow intensity of the metal on the target 3D model is greater, meaning the target 3D model presents a visual effect with brighter reflection areas and darker shadow areas.
  • the reflection intensity of the reflective area in the target three-dimensional model may also be adjusted based on the metal reflection intensity information.
  • the metal reflection intensity information is used to perform photosensitization processing on the second fusion layer, and the metal reflection intensity information is integrated in the second fusion layer obtained after the photosensitization processing.
  • the reflective intensity of the metal on the target 3D model is greater, which can also make the target 3D model present a visual effect of brighter reflection areas and darker shadow areas.
  • the reflection color of the reflective area in the target three-dimensional model may also be determined based on the environmental reflection color information corresponding to the reflective area on the target three-dimensional model.
  • photosensitive processing is performed on the second fusion layer by using the environmental reflection color information, and the environmental reflection color information is integrated into the second fusion layer obtained after the photosensitive processing.
  • the reflective color of the metal on the target 3D model can be more in line with the environment color in the target game scene.
  • multiple types of the above-mentioned photosensitive parameter information may also be used to perform photosensitization processing on the second fusion layer, so as to further increase the authenticity of the stereoscopic visual effect.
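One hedged way to realise the light-and-shadow intensity parameter is a contrast push around mid-grey, which reproduces the described "brighter reflection areas, darker shadow areas" behaviour; the 0.5 pivot and the function name are assumptions, not the disclosure's formula:

```python
def apply_light_shadow_intensity(value, intensity):
    """Scale a light value's distance from mid-grey by the metal
    light-and-shadow intensity: intensity > 1 brightens lit areas and
    darkens shadows; intensity = 1 leaves the value unchanged. Result
    is clamped to [0, 1]."""
    return min(1.0, max(0.0, 0.5 + (value - 0.5) * intensity))
```

The metal reflection intensity and environment reflection color parameters can be applied in the same per-fragment fashion, scaling or tinting only fragments inside the reflective area.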
  • the color of the light-receiving area in the target 3D model may also be color-graded, so that the light-receiving area in the target 3D model has a visual effect of color change.
  • it may include performing color scale processing on the second fusion layer based on the color scale information in the color scale map to obtain the second fusion layer after the color scale processing; the color scale information includes the colors corresponding to each color scale value, the proportion information of each color level in the light-receiving area of the target 3D model, and the color fusion information of adjacent color levels in each color level. Then, based on the second fusion layer after the color scale processing, the target three-dimensional model is rendered to obtain the rendered target three-dimensional model.
  • the color scale map may be a map created with drawing software to represent the color change characteristics of the light-receiving area in the target three-dimensional model. Multiple color scales can be included in a color scale map, and the color scales can be arranged in the order of color shade change.
  • the schematic diagram of the effect of a color scale map shown in FIG. 4 may include 4 color scales, arranged from left to right in order from dark to light, and the proportion of each color scale in the whole color scale map can differ.
  • the color values at the critical positions of two adjacent color levels may be a color fusion value obtained by fusing the color values of the two color levels.
  • the color scale information in the color scale map is used to perform color scale processing on the second fusion layer, and the color scale information in the color scale map is integrated into the obtained second fusion layer after the color scale processing.
  • the color of the light-receiving area on the target 3D model can show a gradient effect, avoiding color jumps from the shadow area to the light-receiving area and thereby increasing the authenticity of the stereoscopic visual effect.
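The color scale (toon ramp) processing can be sketched as quantising a fragment's light value onto discrete levels. The `(threshold, rgb)` encoding below stands in for the color scale map, with the thresholds carrying the per-level proportion information; the boundary-blending (color fusion) values at adjacent-level boundaries are omitted for brevity:

```python
def ramp_color(light, levels):
    """Quantise a light value in [0, 1] onto a toon color ramp.

    `levels` is a list of (threshold, rgb) pairs sorted by threshold;
    the fragment takes the color of the first level whose threshold
    its light value does not exceed."""
    for threshold, rgb in levels:
        if light <= threshold:
            return rgb
    return levels[-1][1]  # light above all thresholds: brightest level
```

Usage: a four-level ramp like the one in FIG. 4 would supply four pairs, ordered dark to light, with unequal threshold spacing encoding the unequal proportions.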
  • the light and shadow shapes in the target 3D model may also be processed, so that the light and shadow shapes in the target 3D model are more consistent with the light and shadow shapes in the real environment.
  • the light and shadow shape processing may be performed on the second fusion layer to obtain the light and shadow shape processed second fusion layer.
  • the target three-dimensional model is rendered to obtain the rendered target three-dimensional model.
  • the environment map may be a map made by drawing software that represents the light and shadow shape of the target three-dimensional model under a preset lighting direction.
  • the environment map contains the shape information of the reflective area and the shape information of the shadow area.
  • the light and shadow shape information in the environment map is used to perform light and shadow shape processing on the second fusion layer, and the light and shadow shape information in the environment map is integrated into the obtained second fusion layer after the light and shadow shape processing.
  • the shape of the reflective area and the shape of the shadow area on the target 3D model can better match the light and shadow shapes in the real environment, thereby increasing the realism of the stereoscopic visual effect.
  • a channel map indicating the area to be rendered with material in the target 3D model can be obtained here; the area information in the channel map indicates the area on which material rendering is to be performed.
  • based on the channel map, a first local fusion layer corresponding to the area to be rendered with material can be determined in the first fusion layer.
  • the obtained first local fusion layer may include area information to be rendered with material.
  • the cube map and the first local fusion layer including the area information to be rendered by material can be subjected to a second fusion process to obtain the second fusion layer.
  • a second local fusion layer corresponding to the area to be rendered in the second fusion layer may be determined based on the channel map. Then, based on the second partial fusion layer, the target three-dimensional model is rendered to obtain the rendered target three-dimensional model.
  • material rendering can be performed on the area to be rendered by material, so that the area to be rendered by material can present the effect of special material texture.
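Region selection via the channel map can be sketched as a per-fragment mask blend between the base render and the special-material render; the linear blend for fractional mask values is an assumption (the disclosure only requires that the indicated area receive the material rendering):

```python
def select_region(base_rgb, material_rgb, mask):
    """Blend the base render with the special-material render using the
    channel-map mask: 1.0 inside the area to be material-rendered,
    0.0 outside; fractional values soften the boundary."""
    return tuple(b * (1.0 - mask) + m * mask
                 for b, m in zip(base_rgb, material_rgb))
```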
  • various texture maps may be processed by an algorithm for material rendering to obtain rendering information for rendering a target three-dimensional model.
  • the algorithms used here mainly include a lighting and shading algorithm and a light and shadow algorithm.
  • the process of algorithmic processing of various texture maps is described in detail.
  • a color map and a material capture map corresponding to the target 3D model may be obtained.
  • the color information of the virtual character corresponding to the target 3D model contained in the color map is denoted a;
  • the 2D light and shadow information of the target 3D model under the preset lighting direction contained in the material capture map is denoted b;
  • the rendering information contained in the first fusion layer is denoted c;
  • the rendering information contained in the first fusion layer can use the first formula contained in the lighting and shading algorithm:
  • the function of the saturate(i) function can be: clamp i to the range [0, 1], that is, when i is greater than 1, i is set equal to 1, and when i is less than 0, i is set equal to 0, where i is an arbitrary variable.
  • that is, the saturate(i) function makes i take a value between 0 and 1. Through the above first formula, the rendering information contained in the first fusion layer therefore takes a value between 0 and 1.
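The saturate function described above behaves like the HLSL `saturate` intrinsic; a direct sketch:

```python
def saturate(i):
    """Clamp i to [0, 1]: values above 1 become 1, values below 0
    become 0, values in between pass through unchanged."""
    return min(1.0, max(0.0, i))
```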
  • the basic metal texture of the target 3D model can be presented through the above lighting and shading algorithm.
  • the light and shadow information of different 3D position areas of the target 3D model under the preset lighting direction, contained in the cube map, is denoted d.
  • the area information to be rendered in the target 3D model contained in the channel map is y.
  • the area information value in the channel map here can be stored in any channel.
  • the input metal light and shadow intensity information x can also be obtained.
  • first, the second formula in the light and shadow algorithm, d1 = smoothstep(d - 0.1, d + 0.1, 0.17), is used to obtain the light and shadow information d1 after smoothing the light and shadow information in the cube map.
  • the function of the smoothstep function may be: return a smooth interpolation between 0 and 1.
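The smoothstep referenced by the second formula is the standard GLSL/HLSL Hermite step; the sketch below applies it with the source's argument order (d - 0.1 and d + 0.1 as the edges, 0.17 as the sample value), with the wrapper name being illustrative:

```python
def smoothstep(edge0, edge1, x):
    """GLSL-style smoothstep: 0 below edge0, 1 above edge1, and the
    smooth Hermite curve 3t^2 - 2t^3 in between."""
    t = min(1.0, max(0.0, (x - edge0) / (edge1 - edge0)))
    return t * t * (3.0 - 2.0 * t)

def smooth_cube_light(d):
    """Second formula from the light and shadow algorithm, as written
    in the source: d1 = smoothstep(d - 0.1, d + 0.1, 0.17)."""
    return smoothstep(d - 0.1, d + 0.1, 0.17)
```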
  • the metal light and shadow intensity information x is used to perform light and shadow intensity fusion processing on the light and shadow information d1 and the rendering information c contained in the first fusion layer, and on the two-dimensional light and shadow information b of the target three-dimensional model, to obtain the light and shadow information d2 after light and shadow intensity fusion processing.
  • the area information y of the area to be rendered with material is then used to perform region selection processing on the rendering information c contained in the first fusion layer and the light and shadow information d2 after light and shadow intensity fusion, to obtain the region-selected light and shadow rendering information f.
  • the input metal reflection intensity information, the input environment reflection color information corresponding to the reflective area on the target 3D model, and the stored color scale map and environment map can also be obtained.
  • the metal reflection intensity information is fused with the region-selected light and shadow rendering information f, which can make the reflection intensity of the metal on the rendered target 3D model greater; the environment reflection color information corresponding to the reflective area on the target 3D model is fused with f, so that the reflective color of the metal on the rendered target 3D model better matches the ambient color of the target game scene;
  • the color scale information of the light-receiving area of the target 3D model stored in the color scale map is fused with f, so that the color of the light-receiving area of the rendered target 3D model presents a gradient effect; and the light and shadow shape information of the target 3D model under the preset lighting direction contained in the environment map is fused with f, so that the shape of the reflective area and the shape of the shadow area on the rendered target 3D model better match the light and shadow shapes in the real environment.
  • the writing order of the steps does not imply a strict execution order and does not constitute any limitation on the implementation process; the specific execution order of each step should be determined by its function and possible internal logic.
  • the embodiment of the present disclosure also provides a model rendering device corresponding to the model rendering method. Since the problem-solving principle of the device is similar to that of the above model rendering method, the implementation of the device can refer to the implementation of the method, and repeated descriptions are omitted.
  • FIG. 6 it is a schematic diagram of the architecture of a model rendering device provided by an embodiment of the present disclosure.
  • the device includes: an acquisition module 601, a generation module 602, a first processing module 603, a second processing module 604, and a third processing module 605.
  • An acquisition module 601 configured to acquire a color map corresponding to the target three-dimensional model
  • a generation module 602 configured to generate a material capture map for representing material photosensitive information of the target 3D model under the preset lighting direction and a cube map of photosensitive information in different three-dimensional position regions based on a preset lighting direction;
  • the first processing module 603 is configured to perform a first fusion process on the color map and the texture capture map to obtain a first fusion layer;
  • the second processing module 604 is configured to perform a second fusion process on the cube map and the first fusion layer to obtain a second fusion layer;
• the third processing module 605 is configured to perform rendering processing on the target 3D model based on the second fusion layer to obtain a rendered target 3D model.
• the first processing module 603 is specifically configured to: perform the first fusion processing on the sub-color maps in the color map corresponding to the respective patches of the target 3D model and the material capture map, to obtain the sub-fusion layers respectively corresponding to the patches; and integrate the sub-fusion layers corresponding to the respective patches to obtain the first fusion layer.
• the third processing module 605 is specifically configured to: in response to input photosensitive parameter information, perform photosensitive processing on the second fusion layer based on the photosensitive parameter information, to obtain a photosensitively processed second fusion layer; and perform rendering on the target three-dimensional model based on the photosensitively processed second fusion layer, to obtain a rendered target three-dimensional model.
• the photosensitive parameter information includes at least one of metal light and shadow intensity information, metal reflection intensity information, and environmental reflection color information corresponding to a reflective area on the target three-dimensional model.
  • the obtaining module 601 is also used to obtain a color scale map representing the color change characteristics of the light-receiving area in the target three-dimensional model;
• the third processing module 605 is specifically configured to: perform color scale processing on the second fusion layer based on color scale information in the color scale map, to obtain a color-scale-processed second fusion layer, wherein the color scale information includes the color value respectively corresponding to each color scale, proportion information of each color scale in the light-receiving area of the target three-dimensional model, and color fusion information of adjacent color scales; and perform rendering processing on the target three-dimensional model based on the color-scale-processed second fusion layer, to obtain the rendered target three-dimensional model.
• the obtaining module 601 is further configured to obtain a channel map representing an area of the target 3D model to be material-rendered;
  • the device also includes:
  • a first determination module configured to determine, based on the channel map, a first partial fusion layer in the first fusion layer corresponding to the region to be rendered with material
  • the second processing module 604 is specifically configured to: perform a second fusion process on the cubemap and the first local fusion layer to obtain the second fusion layer;
  • the device also includes:
  • a second determining module configured to determine a second partial fusion layer in the second fusion layer corresponding to the area to be rendered in material based on the channel map;
  • the third processing module 605 is specifically configured to: perform rendering processing on the target 3D model based on the second local fusion layer to obtain a rendered target 3D model.
  • the acquisition module 601 is further configured to acquire an environment map representing the light and shadow shape of the target three-dimensional model under the preset illumination direction;
• the third processing module 605 is specifically configured to: perform light and shadow shape processing on the second fusion layer based on the light and shadow shape represented by the environment map, to obtain a light-and-shadow-shape-processed second fusion layer; and perform rendering processing on the target 3D model based on the light-and-shadow-shape-processed second fusion layer, to obtain the rendered target 3D model.
• As shown in FIG. 7, it is a schematic structural diagram of a computer device 700 provided by an embodiment of the present disclosure, including a processor 701, a memory 702, and a bus 703.
  • the memory 702 is used to store execution instructions, including a memory 7021 and an external memory 7022; the memory 7021 here is also called an internal memory, and is used to temporarily store calculation data in the processor 701 and exchange data with an external memory 7022 such as a hard disk.
  • the processor 701 exchanges data with the external memory 7022 through the memory 7021.
  • the processor 701 communicates with the memory 702 through the bus 703, so that the processor 701 executes the following instructions:
• acquiring a color map corresponding to the target three-dimensional model; generating, based on a preset lighting direction, a material capture map representing material photosensitive information of the target three-dimensional model under the preset lighting direction and a cube map of photosensitive information in different three-dimensional position regions; performing a first fusion process on the color map and the material capture map to obtain a first fusion layer; performing a second fusion process on the cube map and the first fusion layer to obtain a second fusion layer; and performing rendering processing on the target three-dimensional model based on the second fusion layer, to obtain a rendered target three-dimensional model.
  • An embodiment of the present disclosure also provides a computer-readable storage medium, on which a computer program is stored, and when the computer program is run by a processor, the steps of the model rendering method described in the above-mentioned method embodiments are executed.
  • the storage medium may be a volatile or non-volatile computer-readable storage medium.
  • Embodiments of the present disclosure further provide a computer program, which executes the steps of the model rendering method described in the foregoing method embodiments when the computer program is run by a processor.
• the embodiment of the present disclosure also provides a computer program product; the computer program product carries program code, and the instructions included in the program code can be used to execute the steps of the model rendering method described in the above method embodiments. For details, reference may be made to the above method embodiments, which will not be repeated here.
  • the above-mentioned computer program product may be specifically implemented by means of hardware, software or a combination thereof.
• the computer program product is embodied as a computer storage medium, and in another optional embodiment, the computer program product is embodied as a software product, such as a software development kit (Software Development Kit, SDK), and so on.
  • the units described as separate components may or may not be physically separated, and the components shown as units may or may not be physical units, that is, they may be located in one place, or may be distributed to multiple network units. Part or all of the units can be selected according to actual needs to achieve the purpose of the solution of this embodiment.
  • each functional unit in each embodiment of the present disclosure may be integrated into one processing unit, each unit may exist separately physically, or two or more units may be integrated into one unit.
• if the functions are implemented in the form of software functional units and sold or used as independent products, they may be stored in a non-volatile computer-readable storage medium executable by a processor.
• the technical solution of the present disclosure, in essence, or the part contributing to the prior art, or a part of the technical solution, may be embodied in the form of a software product; the computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to execute all or part of the steps of the methods described in the various embodiments of the present disclosure.
• the aforementioned storage media include: a USB flash drive, a mobile hard disk, a read-only memory (Read-Only Memory, ROM), a random access memory (Random Access Memory, RAM), a magnetic disk, an optical disc, and other media that can store program codes.


Abstract

The present disclosure provides a model rendering method and apparatus, a computer device, and a storage medium. The method comprises: acquiring a color map corresponding to a target three-dimensional model (S101); on the basis of a preset illumination direction, generating a material capture map of material photosensitive information of the target three-dimensional model in the preset illumination direction and a cube map of photosensitive information of different three-dimensional position areas (S102); performing first fusion processing on the color map and the material capture map to obtain a first fusion layer (S103); performing second fusion processing on the cube map and the first fusion layer to obtain a second fusion layer (S104); and on the basis of the second fusion layer, rendering the target three-dimensional model to obtain a rendered target three-dimensional model (S105). Embodiments of the present disclosure can achieve a three-dimensional, material-textured visual effect, so that the rendering effect of the target three-dimensional model is more realistic and vivid.

Description

A Model Rendering Method and Apparatus, Computer Device and Storage Medium
Cross-Reference to Related Applications
This application claims priority to the Chinese patent application No. 202111471457.7, filed with the China Patent Office on December 5, 2021 and entitled "A Model Rendering Method and Apparatus, Computer Device and Storage Medium", the entire content of which is incorporated herein by reference.
Technical Field
The present disclosure relates to the technical field of computer graphics, and in particular, to a model rendering method and apparatus, a computer device, and a storage medium.
Background
With the development of computer graphics technology, more and more three-dimensional models can be displayed to users in three-dimensional scenes.
Users have increasingly high requirements for the styles of three-dimensional models in three-dimensional scenes, for example, diversified styles such as realistic, cartoon, and hand-painted. In a cartoon-style-rendered three-dimensional scene, the rendering of some three-dimensional models lacks vividness and a sense of realism.
Summary
Embodiments of the present disclosure provide at least a model rendering method and apparatus, a computer device, and a storage medium.
In a first aspect, an embodiment of the present disclosure provides a model rendering method, including:
acquiring a color map corresponding to a target three-dimensional model;
generating, based on a preset lighting direction, a material capture map representing material photosensitive information of the target three-dimensional model under the preset lighting direction and a cube map of photosensitive information in different three-dimensional position regions;
performing a first fusion process on the color map and the material capture map to obtain a first fusion layer;
performing a second fusion process on the cube map and the first fusion layer to obtain a second fusion layer;
performing rendering processing on the target three-dimensional model based on the second fusion layer, to obtain a rendered target three-dimensional model.
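The five steps above can be sketched as a minimal image-space pipeline. This is an illustrative reconstruction only: the disclosure does not specify the blend operators, so a multiplicative blend for the first fusion process and a weighted blend for the second are assumed here, and the maps are modeled as small NumPy arrays.

```python
import numpy as np

def first_fusion(color_map: np.ndarray, matcap: np.ndarray) -> np.ndarray:
    # S103: fuse the model's base colors with the material's
    # photosensitive (light/reflection) information.
    # A multiplicative blend is assumed; the disclosure only states
    # that the two maps are fused.
    return np.clip(color_map * matcap, 0.0, 1.0)

def second_fusion(first_layer: np.ndarray, cube_sample: np.ndarray,
                  weight: float = 0.5) -> np.ndarray:
    # S104: blend in the per-region photosensitive information sampled
    # from the cube map (weighted blend, also an assumption).
    return np.clip((1.0 - weight) * first_layer + weight * cube_sample,
                   0.0, 1.0)

# Toy 2x2 RGB images standing in for the unwrapped UV-space maps.
color_map = np.full((2, 2, 3), 0.8)   # base colors of the model
matcap    = np.full((2, 2, 3), 0.5)   # material light response
cube      = np.full((2, 2, 3), 1.0)   # environment reflection sample

layer1 = first_fusion(color_map, matcap)           # first fusion layer
layer2 = second_fusion(layer1, cube, weight=0.25)  # second fusion layer
```

S105 would then use `layer2` as the per-pixel rendering information for the target three-dimensional model.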
In an optional implementation, the performing the first fusion process on the color map and the material capture map to obtain the first fusion layer includes:
performing the first fusion process on the sub-color maps in the color map respectively corresponding to the patches of the target three-dimensional model and the material capture map, to obtain sub-fusion layers respectively corresponding to the patches;
integrating the sub-fusion layers respectively corresponding to the patches to obtain the first fusion layer.
In an optional implementation, the performing rendering processing on the target three-dimensional model based on the second fusion layer to obtain the rendered target three-dimensional model includes:
in response to input photosensitive parameter information, performing photosensitive processing on the second fusion layer based on the photosensitive parameter information, to obtain a photosensitively processed second fusion layer;
performing rendering on the target three-dimensional model based on the photosensitively processed second fusion layer, to obtain a rendered target three-dimensional model.
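The disclosure leaves the concrete effect of the photosensitive parameters open; the sketch below is one hedged reading in which the metal light and shadow intensity scales the whole layer, while the metal reflection intensity and the environment reflection color affect only the reflective area. All names and values are hypothetical.

```python
import numpy as np

def apply_photosensitive_params(layer, reflective_mask,
                                shadow_intensity=1.0,
                                reflect_intensity=1.0,
                                env_reflect_color=(1.0, 1.0, 1.0)):
    # Hypothetical interpretation: the metal light and shadow intensity
    # scales the whole layer; the metal reflection intensity and the
    # environment reflection color are applied only to reflective pixels.
    out = layer * shadow_intensity
    tint = np.asarray(env_reflect_color, dtype=float)
    out = np.where(reflective_mask[..., None],
                   out * reflect_intensity * tint,  # reflective pixels
                   out)                             # everything else
    return np.clip(out, 0.0, 1.0)

layer = np.full((2, 2, 3), 0.5)                   # second fusion layer
mask = np.array([[True, False], [False, False]])  # reflective area
adjusted = apply_photosensitive_params(layer, mask,
                                       shadow_intensity=1.2,
                                       reflect_intensity=1.5,
                                       env_reflect_color=(1.0, 0.9, 0.8))
```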
In an optional implementation, the photosensitive parameter information includes at least one of metal light and shadow intensity information, metal reflection intensity information, and environmental reflection color information corresponding to a reflective area on the target three-dimensional model.
In an optional implementation, the method further includes:
acquiring a color scale map representing color change characteristics of a light-receiving area in the target three-dimensional model;
the performing rendering processing on the target three-dimensional model based on the second fusion layer to obtain the rendered target three-dimensional model includes:
performing color scale processing on the second fusion layer based on color scale information in the color scale map, to obtain a color-scale-processed second fusion layer, wherein the color scale information includes the color value respectively corresponding to each color scale, proportion information of each color scale in the light-receiving area of the target three-dimensional model, and color fusion information of adjacent color scales;
performing rendering processing on the target three-dimensional model based on the color-scale-processed second fusion layer, to obtain the rendered target three-dimensional model.
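Color scale processing can be illustrated as banded (toon-ramp) quantization of a light-receiving intensity; the color values and proportions below are hypothetical, and the fusion of adjacent color scales (soft band borders) is omitted for brevity.

```python
import numpy as np

def color_scale(intensity, scale_colors, scale_proportions):
    # Map a [0, 1] light-receiving intensity to one of a few discrete
    # color scales; scale_proportions is each scale's share of the
    # light-receiving range (the "proportion information").
    edges = np.cumsum(scale_proportions)  # upper edge of each scale
    idx = int(np.searchsorted(edges, intensity, side="left"))
    return scale_colors[min(idx, len(scale_colors) - 1)]

# Three hypothetical color scales: dark, mid, highlight.
colors = [(0.2, 0.2, 0.3), (0.5, 0.5, 0.6), (0.95, 0.95, 1.0)]
proportions = [0.4, 0.4, 0.2]

dark = color_scale(0.1, colors, proportions)
high = color_scale(0.95, colors, proportions)
```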
In an optional implementation, the method further includes:
acquiring a channel map representing an area of the target three-dimensional model to be material-rendered;
after the first fusion layer is obtained and before the second fusion process is performed on the cube map and the first fusion layer, the method further includes:
determining, based on the channel map, a first partial fusion layer in the first fusion layer corresponding to the area to be material-rendered;
the performing the second fusion process on the cube map and the first fusion layer to obtain the second fusion layer includes:
performing the second fusion process on the cube map and the first partial fusion layer to obtain the second fusion layer;
after the second fusion layer is obtained and before the rendering processing is performed on the target three-dimensional model, the method further includes:
determining, based on the channel map, a second partial fusion layer in the second fusion layer corresponding to the area to be material-rendered;
the performing rendering processing on the target three-dimensional model based on the second fusion layer to obtain the rendered target three-dimensional model includes:
performing rendering processing on the target three-dimensional model based on the second partial fusion layer, to obtain the rendered target three-dimensional model.
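Determining a partial fusion layer from a channel map can be sketched as masking: only the pixels the channel map marks as belonging to the area to be material-rendered are kept. The threshold and the zeroing of the remaining pixels are assumptions for illustration.

```python
import numpy as np

def partial_fusion_layer(layer, channel_map, threshold=0.5):
    # Keep only the pixels that the channel map marks as belonging to
    # the area to be material-rendered; other pixels are zeroed so
    # later passes leave them untouched.
    mask = channel_map > threshold
    return np.where(mask[..., None], layer, 0.0)

layer = np.full((2, 2, 3), 0.7)               # a fusion layer
channel = np.array([[1.0, 0.0], [0.0, 1.0]])  # e.g. a painted channel map
local = partial_fusion_layer(layer, channel)
```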
In an optional implementation, the method further includes:
acquiring an environment map representing the light and shadow shape of the target three-dimensional model under the preset lighting direction;
the performing rendering processing on the target three-dimensional model based on the second fusion layer to obtain the rendered target three-dimensional model includes:
performing light and shadow shape processing on the second fusion layer based on the light and shadow shape represented by the environment map, to obtain a light-and-shadow-shape-processed second fusion layer;
performing rendering processing on the target three-dimensional model based on the light-and-shadow-shape-processed second fusion layer, to obtain the rendered target three-dimensional model.
In a second aspect, an embodiment of the present disclosure further provides a model rendering apparatus, including:
an acquisition module, configured to acquire a color map corresponding to a target three-dimensional model;
a generation module, configured to generate, based on a preset lighting direction, a material capture map representing material photosensitive information of the target three-dimensional model under the preset lighting direction and a cube map of photosensitive information in different three-dimensional position regions;
a first processing module, configured to perform a first fusion process on the color map and the material capture map to obtain a first fusion layer;
a second processing module, configured to perform a second fusion process on the cube map and the first fusion layer to obtain a second fusion layer;
a third processing module, configured to perform rendering processing on the target three-dimensional model based on the second fusion layer, to obtain a rendered target three-dimensional model.
In a third aspect, an embodiment of the present disclosure further provides a computer device, including a processor, a memory, and a bus. The memory stores machine-readable instructions executable by the processor. When the computer device runs, the processor communicates with the memory through the bus, and when the machine-readable instructions are executed by the processor, the steps in the above first aspect, or in any possible implementation of the first aspect, are executed.
In a fourth aspect, an embodiment of the present disclosure further provides a computer-readable storage medium on which a computer program is stored, and when the computer program is run by a processor, the steps in the above first aspect, or in any possible implementation of the first aspect, are executed.
In a fifth aspect, an embodiment of the present disclosure further provides a computer program which, when run by a processor, executes the steps in the above first aspect, or in any possible implementation of the first aspect.
In a sixth aspect, an embodiment of the present disclosure further provides a computer program product including a computer program which, when run by a processor, executes the steps in the above first aspect, or in any possible implementation of the first aspect.
In the model rendering method provided by the embodiments of the present disclosure, the color map used for rendering the target three-dimensional model is first fused with the material capture map representing the material photosensitive information, so that the target three-dimensional model can exhibit a basic material texture (for example, a metallic texture). The cube map representing the photosensitive information of different three-dimensional position regions of the target three-dimensional model is then fused with the first fusion layer in a second fusion process, which realizes a stereoscopic visual display effect for the target three-dimensional model. When the target three-dimensional model is rendered, a stereoscopic, material-textured visual effect can thus be achieved, making the rendering of the target three-dimensional model more realistic and vivid.
In order to make the above objects, features and advantages of the present disclosure more comprehensible, preferred embodiments are described in detail below with reference to the accompanying drawings.
Brief Description of the Drawings
In order to explain the technical solutions of the embodiments of the present disclosure more clearly, the accompanying drawings used in the embodiments are briefly introduced below. The drawings here are incorporated into and constitute a part of the specification; they show embodiments consistent with the present disclosure and, together with the description, serve to explain the technical solutions of the present disclosure. It should be understood that the following drawings only show some embodiments of the present disclosure and therefore should not be regarded as limiting the scope; for those of ordinary skill in the art, other related drawings can be obtained from these drawings without creative effort.
FIG. 1 shows a flowchart of a model rendering method provided by an embodiment of the present disclosure;
FIG. 2 shows a schematic diagram of the effect of a material capture map of a metal ball provided by an embodiment of the present disclosure;
FIG. 3 shows a schematic diagram of the effect of a cube map of a metal ball provided by an embodiment of the present disclosure;
FIG. 4 shows a schematic diagram of the effect of a color scale map provided by an embodiment of the present disclosure;
FIG. 5 shows a schematic diagram of the effect of a rendered metal ball provided by an embodiment of the present disclosure;
FIG. 6 shows a schematic diagram of a model rendering device provided by an embodiment of the present disclosure;
FIG. 7 shows a schematic diagram of a computer device provided by an embodiment of the present disclosure.
Detailed Description of Embodiments
In order to make the objects, technical solutions and advantages of the embodiments of the present disclosure clearer, the technical solutions in the embodiments of the present disclosure are described below clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present disclosure. The components of the embodiments of the present disclosure, as generally described and illustrated in the figures herein, may be arranged and designed in a variety of different configurations. Therefore, the following detailed description of the embodiments of the present disclosure provided in the accompanying drawings is not intended to limit the scope of the claimed disclosure, but merely represents selected embodiments of the present disclosure. Based on the embodiments of the present disclosure, all other embodiments obtained by those skilled in the art without creative effort shall fall within the protection scope of the present disclosure.
With the development of computer graphics technology, more and more three-dimensional models can be displayed to users in three-dimensional scenes. Users have increasingly high requirements for the styles of three-dimensional models in three-dimensional scenes, for example, diversified styles such as realistic, cartoon, and hand-painted. In a cartoon-style-rendered three-dimensional scene, the rendering of some three-dimensional models lacks vividness and a sense of realism.
On this basis, the present disclosure provides a model rendering method. By performing a first fusion process on the color map used for rendering the target three-dimensional model and the material capture map representing the material photosensitive information, the target three-dimensional model can exhibit a basic material texture (for example, a metallic texture); then, a second fusion process is performed on the cube map representing the photosensitive information of different three-dimensional position regions of the target three-dimensional model and the first fusion layer, which can realize a stereoscopic visual display effect of the target three-dimensional model. When the target three-dimensional model is then rendered, a stereoscopic, material-textured visual effect can be achieved, making the rendering of the target three-dimensional model more realistic.
The defects of the above solutions are all results obtained by the inventors after practice and careful study. Therefore, the discovery process of the above problems and the solutions proposed below in the present disclosure for the above problems should all be regarded as contributions made by the inventors to the present disclosure in the course of the present disclosure.
It should be noted that similar reference numerals and letters denote similar items in the following figures; therefore, once an item is defined in one figure, it does not need to be further defined and explained in subsequent figures.
To facilitate understanding of this embodiment, a model rendering method disclosed in the embodiments of the present disclosure is first introduced in detail. The execution subject of the model rendering method provided in the embodiments of the present disclosure is generally a computer device with certain computing capabilities.
The model rendering method provided by the embodiments of the present disclosure is described below by taking a server as the execution subject as an example.
Referring to FIG. 1, which is a flowchart of a model rendering method provided by an embodiment of the present disclosure, the method includes S101 to S105:
S101: Obtain a color map corresponding to a target three-dimensional model.
In the embodiments of the present disclosure, the target three-dimensional model may be a three-dimensional model to be rendered in a virtual space of a target game scene, such as a virtual character model or a virtual object model. The target three-dimensional model may be produced using animation rendering and production software based on a personal computer (PC) system, such as 3D Studio Max (3DS Max or 3D Max for short) or Maya.
After the target three-dimensional model is produced, it can be unwrapped to obtain a two-dimensional image in the UV coordinate system (in UV coordinates, U denotes the horizontal coordinate axis and V denotes the vertical coordinate axis). Each UV coordinate value in the unwrapped two-dimensional image corresponds to a point on the surface of the target three-dimensional model.
The color map may contain the color information of the virtual character itself corresponding to the target three-dimensional model. Specifically, the color information contained in the color map corresponds to UV coordinate values: the color map may contain the color information at each UV coordinate value of the two-dimensional image obtained by unwrapping the target three-dimensional model. The color map can be drawn with image processing software, for example, Photoshop or other drawing software.
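The correspondence between UV coordinate values and the colors in the color map can be illustrated by a nearest-texel lookup; the flip of the v axis is a common image convention assumed here, not something the disclosure specifies.

```python
import numpy as np

def sample_color_map(color_map, u, v):
    # Nearest-texel lookup: (u, v) in [0, 1] addresses the unwrapped
    # two-dimensional image; v is flipped because image rows grow
    # downward (a common convention, assumed here).
    h, w, _ = color_map.shape
    x = min(int(u * w), w - 1)
    y = min(int((1.0 - v) * h), h - 1)
    return color_map[y, x]

tex = np.zeros((2, 2, 3))
tex[0, 1] = (1.0, 0.0, 0.0)          # red texel in the top-right
c = sample_color_map(tex, 0.9, 0.9)  # a UV near the top-right corner
```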
S102: Based on a preset lighting direction, generate a material capture map representing the material photosensitive information of the target three-dimensional model under the preset lighting direction and a cube map of photosensitive information in different three-dimensional position regions.
A material capture (Matcap) map is a two-dimensional planar map under the preset lighting direction. Based on the preset lighting direction, the generated material capture map may contain the two-dimensional material photosensitive information captured by the camera from the front of the target three-dimensional model under the preset lighting direction while the pose of the target three-dimensional model remains unchanged. Specifically, the material capture map may contain information such as the lighting and reflection on the surface of the target material (for example, metal) used to render the target three-dimensional model. When the target three-dimensional model is rendered using the information contained in the material capture map, the lighting information of that material under illumination can be presented. FIG. 2 is a schematic diagram of the effect of a material capture map of a metal ball; the light-receiving area, the reflective area within it, and the shadow area contained in the material capture map can be seen in FIG. 2.
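A matcap is typically sampled by projecting the view-space surface normal onto the texture plane; this specific projection is standard matcap practice rather than something the disclosure states, and is sketched below.

```python
import numpy as np

def matcap_uv(view_space_normal):
    # The x/y components of the normalized view-space normal, remapped
    # from [-1, 1] to [0, 1], index the two-dimensional matcap image.
    n = np.asarray(view_space_normal, dtype=float)
    n = n / np.linalg.norm(n)
    return 0.5 * n[0] + 0.5, 0.5 * n[1] + 0.5

# A surface normal pointing straight at the camera samples the center
# of the matcap; one pointing to the right samples its right edge.
center = matcap_uv((0.0, 0.0, 1.0))
edge = matcap_uv((1.0, 0.0, 0.0))
```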
A cube map can be obtained by baking. Based on the preset lighting direction, the generated cube map may contain the photosensitive information captured by the camera from six directions under the preset lighting direction while the pose of the target three-dimensional model remains unchanged. The cube map may contain the photosensitive information of different three-dimensional position regions of the target three-dimensional model under the above preset lighting direction, which may specifically include information such as the shadow areas and reflective areas to be rendered. FIG. 3 is a schematic diagram of the effect of a cube map of a metal ball; the reflective area and the shadow area contained in the cube map can be seen in FIG. 3.
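The six directions correspond to the six faces of the cube map. Standard cube-map addressing (not specific to this disclosure) picks the face by the dominant axis of the lookup direction, which can be sketched as:

```python
import numpy as np

def cube_face(direction):
    # The face hit by a lookup direction is the axis with the largest
    # absolute component (standard cube-map addressing).
    d = np.asarray(direction, dtype=float)
    axis = int(np.argmax(np.abs(d)))
    sign = "+" if d[axis] >= 0 else "-"
    return sign + "xyz"[axis]

up_face = cube_face((0.2, 0.9, -0.1))    # mostly +y
back_face = cube_face((0.0, 0.0, -1.0))  # straight -z
```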
The obtained color map, together with the generated material capture map and cube map, can be added to a game engine, and the game engine processes the information contained in the color map, material capture map, and cube map to obtain the rendering information used to render the target three-dimensional model.

The steps of processing the above maps to obtain the rendering information used to render the target three-dimensional model are described in detail below.

S103: Perform a first fusion process on the color map and the material capture map to obtain a first fusion layer.

In this step, the first fusion process on the color map and the material capture map may be: fusing the color information of the virtual character itself corresponding to the target three-dimensional model, contained in the color map, with the photosensitive information of the target three-dimensional model contained in the material capture map. Here, the first fusion process is the process of sampling the color information contained in the color map and the photosensitive information of the target three-dimensional model contained in the material capture map. The obtained first fusion layer contains the first rendering information produced by fusing the aforementioned color information and photosensitive information.

A three-dimensional model is usually composed of patches. When performing the first fusion process on the color map and the material capture map, in one approach, the sub-color maps in the color map corresponding to the individual patches of the target three-dimensional model can each undergo the first fusion process with the material capture map, yielding a sub-fusion layer for each patch. The sub-fusion layers corresponding to the patches are then integrated to obtain the first fusion layer.

The sub-color map corresponding to each patch contains the color information of the position of the virtual character to which that patch corresponds. The sub-fusion layer corresponding to each patch can contain the first rendering information produced by fusing the color information of that patch with the photosensitive information contained in the material capture map.

Then, according to the position of each patch in the target three-dimensional model, the sub-fusion layers corresponding to the patches are integrated to obtain the first fusion layer.
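The per-patch first fusion and subsequent integration described above can be sketched as follows; the multiply used as the blend operator is only a placeholder assumption, and the dictionary keyed by patch position stands in for integrating the sub-fusion layers by their positions in the model:

```python
def first_fusion(patches, matcap_color, blend=lambda a, b: a * b):
    """Fuse each patch's sub-color map with the matcap sample, then
    integrate the sub-fusion layers into one layer keyed by position.

    patches:      dict mapping patch position -> per-patch color value
    matcap_color: photosensitive value sampled from the matcap
    blend:        fusion operator; a plain multiply is a placeholder here
    """
    # One sub-fusion layer per patch.
    sub_layers = {pos: blend(color, matcap_color)
                  for pos, color in patches.items()}
    # Integration: the sub-layers are assembled by their positions
    # in the target model, here simply kept in the same mapping.
    return sub_layers
```

Swapping in a different `blend` callable is how a more elaborate fusion rule would slot into the same per-patch structure.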
S104: Perform a second fusion process on the cube map and the first fusion layer to obtain a second fusion layer.

In this step, the second fusion process on the cube map and the first fusion layer may be: fusing the photosensitive information of different three-dimensional position regions of the target three-dimensional model, contained in the cube map, with the rendering information contained in the aforementioned first fusion layer. Here, the second fusion process is the process of sampling the photosensitive information of different three-dimensional position regions of the target three-dimensional model contained in the cube map and the rendering information contained in the aforementioned first fusion layer. The obtained second fusion layer contains second rendering information that can be used to render the target three-dimensional model.
S105: Based on the second fusion layer, perform rendering processing on the target three-dimensional model to obtain a rendered target three-dimensional model.

When rendering the target three-dimensional model, the second rendering information contained in the second fusion layer can be used directly. The rendered model fuses the color information of each point of the target three-dimensional model stored in the color map, the planar light-and-shadow information stored in the material capture map representing the material photosensitive information of the model under the preset lighting direction, and the stereoscopic light-and-shadow information stored in the cube map representing the photosensitive information of the model's different three-dimensional position regions under the preset lighting direction. The rendered target three-dimensional model can therefore present the stereoscopic visual effect of the target material under the preset lighting direction; Fig. 5 is a schematic diagram of a metal sphere after rendering is completed.
To make the stereoscopic visual effect of the target three-dimensional model more realistic, in one implementation, in response to input photosensitive parameter information, photosensitive processing can be performed on the second fusion layer based on that information to obtain a photosensitively processed second fusion layer. The target three-dimensional model is then rendered based on the photosensitively processed second fusion layer to obtain the rendered target three-dimensional model.

Here, the response may be to photosensitive parameter information entered on the operation interface of the game engine. The input photosensitive parameter information can enhance the photosensitive effects of the target material (for example, metal) in the target three-dimensional model, such as the light-and-shadow intensity, the reflection intensity, and the reflection color. Therefore, in a further implementation, the input photosensitive parameter information can include at least one of: metal light-and-shadow intensity information, metal reflection intensity information, and environment reflection color information corresponding to the reflective area on the target three-dimensional model.
Therefore, in a specific implementation, the intensity of the light and shadow in the target three-dimensional model can be adjusted based on the metal light-and-shadow intensity information. Photosensitive processing is performed on the second fusion layer using this information, and the resulting layer incorporates it. After the target three-dimensional model is rendered with the photosensitively processed second fusion layer, the light-and-shadow intensity of the metal on the model is greater; that is, the model presents a visual effect with brighter reflective areas and darker shadow areas.

In a specific implementation, the reflection intensity of the reflective area in the target three-dimensional model can also be adjusted based on the metal reflection intensity information. Photosensitive processing is performed on the second fusion layer using this information, and the resulting layer incorporates it. After the target three-dimensional model is rendered with the photosensitively processed second fusion layer, the reflections of the metal on the model are stronger, which likewise gives the model brighter reflective areas and darker shadow areas.

In a specific implementation, the reflection color of the reflective area in the target three-dimensional model can also be adjusted based on the environment reflection color information corresponding to that area. Photosensitive processing is performed on the second fusion layer using this information, and the resulting layer incorporates it. After the target three-dimensional model is rendered with the photosensitively processed second fusion layer, the reflective color of the metal on the model can better match the environment color of the target game scene.

In a specific implementation, several of the above photosensitive parameters can also be used together to perform photosensitive processing on the second fusion layer, further increasing the realism of the stereoscopic visual effect.
To make the stereoscopic visual effect of the target three-dimensional model more realistic, in one implementation, color-scale processing can also be applied to the color of the lit area of the target three-dimensional model, so that the lit area presents a visual effect of color variation. Specifically, this can include performing color-scale processing on the second fusion layer based on the color-scale information in a color-scale map to obtain a color-scale-processed second fusion layer; the color-scale information includes the color value corresponding to each color level, the proportion of the lit area of the target three-dimensional model occupied by each level, and the color fusion information of adjacent levels. The target three-dimensional model is then rendered based on the color-scale-processed second fusion layer to obtain the rendered target three-dimensional model.

Here, the color-scale map can be a map, produced with drawing software, that represents the color variation characteristics of the lit area of the target three-dimensional model. A color-scale map can contain multiple color levels, arranged in order of color depth. The schematic color-scale map shown in Fig. 4 contains four levels, arranged from dark to light, left to right. The proportion of the whole map occupied by each level can differ, and the color value at the boundary between two adjacent levels can be the fusion of the color values of those two levels.
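As an illustrative sketch of the color-scale behaviour just described, covering the per-level proportions and the fused color at the boundary between adjacent levels; the function name, the blend half-width, and the use of scalar color values are all assumptions rather than details from the disclosure:

```python
def apply_color_scale(intensity, levels, proportions, blend_width=0.05):
    """Map a lit-area intensity in [0, 1] to a banded color value.

    levels:      color values per band, ordered dark to light.
    proportions: fraction of the lit area covered by each band (sums to 1).
    blend_width: half-width of the fused zone between adjacent bands.
    """
    # Cumulative band boundaries over [0, 1].
    bounds = []
    acc = 0.0
    for p in proportions:
        acc += p
        bounds.append(acc)
    for i, b in enumerate(bounds[:-1]):
        if intensity < b - blend_width:
            return levels[i]
        if intensity < b + blend_width:
            # Boundary: fuse the colors of the two adjacent bands so the
            # transition is a blend rather than a hard jump.
            t = (intensity - (b - blend_width)) / (2.0 * blend_width)
            return levels[i] * (1.0 - t) + levels[i + 1] * t
    return levels[-1]
```

With four equal-proportion levels this reproduces the dark-to-light banding of Fig. 4 while keeping each band boundary smooth.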
Using the color-scale information in the color-scale map, color-scale processing is performed on the second fusion layer, and the resulting layer incorporates that information. After the target three-dimensional model is rendered with the color-scale-processed second fusion layer, the color of the lit area can present a gradient effect, avoiding abrupt color jumps between the shadow area and the lit area and thereby increasing the realism of the stereoscopic visual effect.

To make the stereoscopic visual effect of the target three-dimensional model more realistic, in one implementation, the light-and-shadow shapes in the target three-dimensional model can also be processed so that they better match the light-and-shadow shapes of a real environment. Specifically, light-and-shadow shape processing can be performed on the second fusion layer based on the shapes represented by an environment map, yielding a shape-processed second fusion layer. The target three-dimensional model is then rendered based on the shape-processed second fusion layer to obtain the rendered target three-dimensional model.

Here, the environment map can be a map, produced with drawing software, that represents the light-and-shadow shapes of the target three-dimensional model under the preset lighting direction. The environment map contains the shape information of the reflective areas and of the shadow areas.

Using the light-and-shadow shape information in the environment map, shape processing is performed on the second fusion layer, and the resulting layer incorporates that information. After the target three-dimensional model is rendered with the shape-processed second fusion layer, the shapes of the reflective and shadow areas on the model can better match those of a real environment, increasing the realism of the stereoscopic visual effect.
Considering that in some cases material rendering may be needed only for a target region of the target three-dimensional model, for example metal rendering of the metallic sequins on a virtual character's clothes, a channel map indicating the region of the target three-dimensional model to be material-rendered can be obtained, and material rendering is applied to the region indicated by the region information stored in the channel map.

In a specific implementation, after the first fusion layer is obtained and before the second fusion process is performed on the cube map and the first fusion layer, a first partial fusion layer corresponding to the region to be material-rendered can be determined from the first fusion layer based on the channel map. The obtained first partial fusion layer can contain the information of the region to be material-rendered. After the first partial fusion layer is obtained, the second fusion process can be performed on the cube map and the first partial fusion layer, which contains the region information, to obtain the second fusion layer.

After the second fusion layer is obtained and before the target three-dimensional model is rendered, a second partial fusion layer corresponding to the region to be material-rendered can be determined from the second fusion layer based on the channel map. The target three-dimensional model is then rendered based on the second partial fusion layer to obtain the rendered target three-dimensional model.

By using the channel map, material rendering can be confined to the region to be material-rendered, so that this region presents the effect of a special material texture.
In embodiments of the present disclosure, multiple texture maps can be processed by algorithms for material rendering to obtain the rendering information used to render the target three-dimensional model. The algorithms used here mainly include a lighting-and-shading algorithm and a light-and-shadow algorithm. Below, taking metal as the material, the algorithmic processing of the multiple texture maps is described in detail.

In a specific implementation, the color map and the material capture map corresponding to the target three-dimensional model can be obtained.

Here, assume that the color information of the virtual character itself corresponding to the target three-dimensional model, contained in the color map, is a; that the two-dimensional light-and-shadow information of the target three-dimensional model under the preset lighting direction, contained in the material capture map, is b; and that the rendering information contained in the first fusion layer is c. Then the rendering information contained in the first fusion layer can be obtained from the first formula of the lighting-and-shading algorithm:
c = saturate((a > 0.5) ? (1.0 - (1.1 - 2.0*(a - 0.5)) * (1.0 - b)) : (2.0*a*b))
Here, the function saturate(i) clamps i to the range [0, 1]: when i is greater than 1, i is set to 1, and when i is less than 0, i is set to 0, where i is an arbitrary variable. The saturate function therefore makes i take a value between 0 and 1. That is, through the above first formula, the rendering information contained in the first fusion layer takes values between 0 and 1.
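For a single channel, the saturate function and the first formula above can be written directly as follows (treating a and b as scalar per-channel values, which is an assumption about how the formula is applied):

```python
def saturate(i: float) -> float:
    """Clamp i to the range [0, 1]."""
    return max(0.0, min(1.0, i))

def first_formula(a: float, b: float) -> float:
    """First formula of the lighting-and-shading algorithm.

    a: color-map value for the channel
    b: matcap (two-dimensional light-and-shadow) value for the channel
    """
    if a > 0.5:
        c = 1.0 - (1.1 - 2.0 * (a - 0.5)) * (1.0 - b)
    else:
        c = 2.0 * a * b
    return saturate(c)
```

The dark branch (a <= 0.5) multiplies the color by the lighting, while the bright branch inverts, scales, and re-inverts, so dark base colors stay dark and bright base colors are pushed toward the highlight; the final saturate keeps the result in [0, 1].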
The above lighting-and-shading algorithm brings out the basic metallic texture of the target three-dimensional model.

Next, the cube map and the channel map corresponding to the target three-dimensional model are obtained. Here, assume that the light-and-shadow information of different three-dimensional position regions of the target three-dimensional model under the preset lighting direction, contained in the cube map, is d, and that the information of the region of the target three-dimensional model to be material-rendered, contained in the channel map, is y. The region information in the channel map can be stored in any channel. In addition, the input metal light-and-shadow intensity information x can also be obtained.
Here, the second formula of the light-and-shadow algorithm, d1 = smoothstep(d - 0.1, d + 0.1, 0.17), is first used to obtain the smoothed light-and-shadow information d1 from the light-and-shadow information d in the cube map. The smoothstep function returns a smooth interpolation value between 0 and 1.

Then, according to the third formula of the light-and-shadow algorithm, d2 = c*(1 - d1*x) + b*x*0.5, the metal light-and-shadow intensity information x is used to perform intensity fusion of the light-and-shadow information d1 with the rendering information c contained in the first fusion layer, and of x with the two-dimensional light-and-shadow information b of the target three-dimensional model, obtaining the intensity-fused light-and-shadow information d2.

Finally, according to the fourth formula of the light-and-shadow algorithm, f = c*(1.0 - y) + d2*y, the region information y of the region to be material-rendered is used to perform region selection on the rendering information c contained in the first fusion layer and the intensity-fused light-and-shadow information d2, obtaining the region-selected light-and-shadow rendering information f.
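The second to fourth formulas can be combined into a single scalar sketch. The Hermite form of smoothstep used below is the standard graphics definition and is an assumption, since the disclosure only states that smoothstep returns a smooth interpolation between 0 and 1:

```python
def smoothstep(edge0: float, edge1: float, x: float) -> float:
    """Standard Hermite interpolation returning a smooth value in [0, 1]."""
    t = max(0.0, min(1.0, (x - edge0) / (edge1 - edge0)))
    return t * t * (3.0 - 2.0 * t)

def light_shadow(c: float, b: float, d: float, x: float, y: float) -> float:
    """Second to fourth formulas of the light-and-shadow algorithm.

    c: first-fusion-layer rendering information
    b: matcap two-dimensional light-and-shadow information
    d: cube-map light-and-shadow information
    x: metal light-and-shadow intensity
    y: channel-map mask of the region to be material-rendered
    """
    d1 = smoothstep(d - 0.1, d + 0.1, 0.17)   # second formula
    d2 = c * (1.0 - d1 * x) + b * x * 0.5     # third formula
    return c * (1.0 - y) + d2 * y             # fourth formula
```

Note that y = 0 returns c unchanged (the region keeps the first-fusion result), while y = 1 returns the intensity-fused value d2, which matches the region-selection behaviour the fourth formula describes.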
In the process of rendering the target three-dimensional model with the region-selected light-and-shadow rendering information f, the following can also be obtained: the input metal reflection intensity information; the input environment reflection color information corresponding to the reflective area on the target three-dimensional model; the color-scale map storing the color variation characteristics of the lit area of the target three-dimensional model; and the environment map containing the light-and-shadow shape information of the target three-dimensional model under the preset lighting direction.

Fusing the metal reflection intensity information with the region-selected light-and-shadow rendering information f can make the reflections of the metal on the rendered target three-dimensional model stronger. Fusing the environment reflection color information corresponding to the reflective area on the model with f can make the reflective color of the metal better match the environment color of the target game scene. Fusing the color-scale information of the lit area stored in the color-scale map with f can give the lit area of the rendered model a gradient color effect. Fusing the light-and-shadow shape information of the model under the preset lighting direction, contained in the environment map, with f can make the shapes of the reflective and shadow areas on the rendered model better match those of a real environment.
Those skilled in the art can understand that, in the above methods of the specific implementations, the order in which the steps are written does not imply a strict execution order or constitute any limitation on the implementation process; the specific execution order of the steps should be determined by their functions and possible internal logic.

Based on the same inventive concept, embodiments of the present disclosure also provide a model rendering apparatus corresponding to the model rendering method. Since the principle by which the apparatus in the embodiments of the present disclosure solves the problem is similar to that of the above model rendering method, the implementation of the apparatus can refer to the implementation of the method, and repeated details are not described again.

Referring to Fig. 6, a schematic architecture diagram of a model rendering apparatus provided by an embodiment of the present disclosure, the apparatus includes: an acquisition module 601, a generation module 602, a first processing module 603, a second processing module 604, and a third processing module 605; wherein,
the acquisition module 601 is configured to acquire a color map corresponding to a target three-dimensional model;

the generation module 602 is configured to generate, based on a preset lighting direction, a material capture map representing the material photosensitive information of the target three-dimensional model under the preset lighting direction, and a cube map representing the photosensitive information of different three-dimensional position regions;

the first processing module 603 is configured to perform a first fusion process on the color map and the material capture map to obtain a first fusion layer;

the second processing module 604 is configured to perform a second fusion process on the cube map and the first fusion layer to obtain a second fusion layer;

the third processing module 605 is configured to perform rendering processing on the target three-dimensional model based on the second fusion layer to obtain a rendered target three-dimensional model.
In an optional implementation, the first processing module 603 is specifically configured to:

perform the first fusion process on the sub-color maps in the color map corresponding to the individual patches of the target three-dimensional model and the material capture map, respectively, to obtain sub-fusion layers corresponding to the patches; and

integrate the sub-fusion layers corresponding to the patches to obtain the first fusion layer.

In an optional implementation, the third processing module 605 is specifically configured to:

in response to input photosensitive parameter information, perform photosensitive processing on the second fusion layer based on the photosensitive parameter information to obtain a photosensitively processed second fusion layer; and

perform rendering processing on the target three-dimensional model based on the photosensitively processed second fusion layer to obtain a rendered target three-dimensional model.

In an optional implementation, the photosensitive parameter information includes at least one of metal light-and-shadow intensity information, metal reflection intensity information, and environment reflection color information corresponding to a reflective area on the target three-dimensional model.
In an optional implementation,

the acquisition module 601 is further configured to acquire a color-scale map representing the color variation characteristics of the lit area of the target three-dimensional model;

the third processing module 605 is specifically configured to:

perform color-scale processing on the second fusion layer based on the color-scale information in the color-scale map to obtain a color-scale-processed second fusion layer, the color-scale information including the color value corresponding to each color level, the proportion of the lit area of the target three-dimensional model occupied by each level, and the color fusion information of adjacent levels; and

perform rendering processing on the target three-dimensional model based on the color-scale-processed second fusion layer to obtain a rendered target three-dimensional model.

In an optional implementation, the acquisition module 601 is further configured to acquire a channel map representing a region of the target three-dimensional model to be material-rendered;

the apparatus further includes:

a first determination module, configured to determine, based on the channel map, a first partial fusion layer in the first fusion layer corresponding to the region to be material-rendered;

the second processing module 604 is specifically configured to perform the second fusion process on the cube map and the first partial fusion layer to obtain the second fusion layer;

the apparatus further includes:

a second determination module, configured to determine, based on the channel map, a second partial fusion layer in the second fusion layer corresponding to the region to be material-rendered;

the third processing module 605 is specifically configured to perform rendering processing on the target three-dimensional model based on the second partial fusion layer to obtain a rendered target three-dimensional model.
In an optional implementation, the acquisition module 601 is further configured to acquire an environment map representing the light-and-shadow shapes of the target three-dimensional model under the preset lighting direction;

the third processing module 605 is specifically configured to:

perform light-and-shadow shape processing on the second fusion layer based on the shapes represented by the environment map to obtain a shape-processed second fusion layer; and

perform rendering processing on the target three-dimensional model based on the shape-processed second fusion layer to obtain a rendered target three-dimensional model.

For descriptions of the processing flow of each module in the apparatus and of the interaction flow between the modules, reference may be made to the relevant descriptions in the above method embodiments; details are not repeated here.
基于同一技术构思,本公开实施例还提供了一种计算机设备。参照图7所示,为本公开实施例提供的计算机设备700的结构示意图,包括处理器701、存储器702、和总线703。其中,存储器702用于存储执行指令,包括内存7021和外部存储器7022;这里的内存7021也称内存储器,用于暂时存放处理器701中的运算数据,以及与硬盘等外部存储器7022交换的数据,处理器701通过内存7021与外部存储器7022进行数据交换,当计算机设备700运行时,处理器701与存储器702之间通过总线703通信,使得处理器701执行以下指令:Based on the same technical idea, the embodiment of the present disclosure also provides a computer device. Referring to FIG. 7 , it is a schematic structural diagram of a computer device 700 provided by an embodiment of the present disclosure, including a processor 701 , a memory 702 , and a bus 703 . Among them, the memory 702 is used to store execution instructions, including a memory 7021 and an external memory 7022; the memory 7021 here is also called an internal memory, and is used to temporarily store calculation data in the processor 701 and exchange data with an external memory 7022 such as a hard disk. The processor 701 exchanges data with the external memory 7022 through the memory 7021. When the computer device 700 is running, the processor 701 communicates with the memory 702 through the bus 703, so that the processor 701 executes the following instructions:
obtaining a color map corresponding to a target three-dimensional model;
generating, based on a preset illumination direction, a material capture map representing material photosensitive information of the target three-dimensional model under the preset illumination direction and a cube map representing photosensitive information in different three-dimensional position regions;
performing first fusion processing on the color map and the material capture map to obtain a first fusion layer;
performing second fusion processing on the cube map and the first fusion layer to obtain a second fusion layer; and
rendering the target three-dimensional model based on the second fusion layer to obtain a rendered target three-dimensional model.
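The two-stage fusion executed by these instructions can be sketched per pixel as: modulate the base color by the material-capture (MatCap) sample, then blend in the cube-map environment sample. This is an illustrative sketch only — the multiplicative first fusion, the linear-interpolation second fusion, and the `reflectivity` parameter are assumptions, not the specific blending operators of this disclosure:

```python
import numpy as np

def matcap_uv(normals):
    # Project view-space normals to MatCap texture coordinates in [0, 1].
    return normals[..., :2] * 0.5 + 0.5

def first_fusion(color_map, matcap_sample):
    # First fusion: modulate base color by the material's light response.
    return color_map * matcap_sample

def second_fusion(fused, cubemap_sample, reflectivity=0.3):
    # Second fusion: blend in environment reflection from the cube map.
    return fused * (1.0 - reflectivity) + cubemap_sample * reflectivity

# Toy per-pixel data: base color, MatCap sample, cube-map sample (RGB in [0, 1]).
color = np.array([0.8, 0.6, 0.4])
matcap = np.array([0.9, 0.9, 1.0])   # bright, slightly cool material response
cube = np.array([0.2, 0.3, 0.5])     # bluish environment reflection

layer1 = first_fusion(color, matcap)
layer2 = second_fusion(layer1, cube)
print(layer2)
```

In practice the MatCap and cube-map samples would be fetched per pixel from the generated maps using the surface normal and reflection direction; here single RGB triples stand in for those lookups.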
An embodiment of the present disclosure further provides a computer-readable storage medium having stored thereon a computer program which, when run by a processor, performs the steps of the model rendering method described in the foregoing method embodiments. The storage medium may be a volatile or non-volatile computer-readable storage medium.
An embodiment of the present disclosure further provides a computer program which, when run by a processor, performs the steps of the model rendering method described in the foregoing method embodiments.
An embodiment of the present disclosure further provides a computer program product carrying program code, where instructions included in the program code may be used to perform the steps of the model rendering method described in the foregoing method embodiments; for details, reference may be made to the foregoing method embodiments, which are not repeated here.
The computer program product may be implemented by hardware, software, or a combination thereof. In an optional embodiment, the computer program product is embodied as a computer storage medium; in another optional embodiment, the computer program product is embodied as a software product, such as a software development kit (SDK).
Those skilled in the art can clearly understand that, for convenience and brevity of description, for the specific working processes of the system and apparatus described above, reference may be made to the corresponding processes in the foregoing method embodiments; details are not repeated here. In the several embodiments provided in the present disclosure, it should be understood that the disclosed system, apparatus, and method may be implemented in other manners. The apparatus embodiments described above are merely illustrative; for example, the division of the units is merely a logical function division, and there may be other division manners in actual implementation; for another example, multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. Furthermore, the mutual couplings, direct couplings, or communication connections shown or discussed may be indirect couplings or communication connections through some communication interfaces, apparatuses, or units, and may be in electrical, mechanical, or other forms.
The units described as separate components may or may not be physically separated, and the components shown as units may or may not be physical units; that is, they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, the functional units in the embodiments of the present disclosure may be integrated into one processing unit, each unit may exist physically alone, or two or more units may be integrated into one unit.
If the functions are implemented in the form of software functional units and sold or used as an independent product, they may be stored in a processor-executable non-volatile computer-readable storage medium. Based on such an understanding, the technical solution of the present disclosure essentially, or the part contributing to the prior art, or a part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or some of the steps of the methods described in the embodiments of the present disclosure. The aforementioned storage medium includes any medium that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
Finally, it should be noted that the above embodiments are merely specific implementations of the present disclosure, intended to illustrate rather than limit the technical solutions of the present disclosure, and the protection scope of the present disclosure is not limited thereto. Although the present disclosure has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that any person skilled in the art may, within the technical scope disclosed in the present disclosure, still modify the technical solutions described in the foregoing embodiments, readily conceive of variations, or make equivalent replacements of some of the technical features therein; such modifications, variations, or replacements do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present disclosure, and shall all be covered within the protection scope of the present disclosure. Therefore, the protection scope of the present disclosure shall be subject to the protection scope of the claims.

Claims (12)

  1. A model rendering method, comprising:
    obtaining a color map corresponding to a target three-dimensional model;
    generating, based on a preset illumination direction, a material capture map representing material photosensitive information of the target three-dimensional model under the preset illumination direction and a cube map representing photosensitive information in different three-dimensional position regions;
    performing first fusion processing on the color map and the material capture map to obtain a first fusion layer;
    performing second fusion processing on the cube map and the first fusion layer to obtain a second fusion layer; and
    rendering the target three-dimensional model based on the second fusion layer to obtain a rendered target three-dimensional model.
  2. The method according to claim 1, wherein the performing first fusion processing on the color map and the material capture map to obtain a first fusion layer comprises:
    performing the first fusion processing on sub-color maps in the color map corresponding to respective patches of the target three-dimensional model and the material capture map, respectively, to obtain sub-fusion layers respectively corresponding to the respective patches; and
    integrating the sub-fusion layers respectively corresponding to the respective patches to obtain the first fusion layer.
  3. The method according to claim 1 or 2, wherein the rendering the target three-dimensional model based on the second fusion layer to obtain a rendered target three-dimensional model comprises:
    in response to input photosensitive parameter information, performing photosensitive processing on the second fusion layer based on the photosensitive parameter information, to obtain a second fusion layer after the photosensitive processing; and
    rendering the target three-dimensional model based on the second fusion layer after the photosensitive processing to obtain the rendered target three-dimensional model.
  4. The method according to claim 3, wherein the photosensitive parameter information comprises at least one of metal light-and-shadow intensity information, metal reflection intensity information, and environment reflection color information corresponding to a reflective region on the target three-dimensional model.
  5. The method according to claim 1 or 2, wherein the method further comprises:
    obtaining a color scale map representing color change characteristics of a light-receiving region in the target three-dimensional model;
    wherein the rendering the target three-dimensional model based on the second fusion layer to obtain a rendered target three-dimensional model comprises:
    performing color scale processing on the second fusion layer based on color scale information in the color scale map, to obtain a second fusion layer after the color scale processing, wherein the color scale information comprises color values respectively corresponding to respective color scales, proportion information of the respective color scales in the light-receiving region of the target three-dimensional model, and color fusion information of adjacent color scales among the respective color scales; and
    rendering the target three-dimensional model based on the second fusion layer after the color scale processing to obtain the rendered target three-dimensional model.
  6. The method according to claim 1 or 2, wherein the method further comprises:
    obtaining a channel map representing a region of the target three-dimensional model to be subjected to material rendering;
    after the first fusion layer is obtained and before the second fusion processing is performed on the cube map and the first fusion layer, the method further comprises:
    determining, based on the channel map, a first partial fusion layer in the first fusion layer corresponding to the region to be subjected to material rendering;
    the performing second fusion processing on the cube map and the first fusion layer to obtain a second fusion layer comprises:
    performing the second fusion processing on the cube map and the first partial fusion layer to obtain the second fusion layer;
    after the second fusion layer is obtained and before the target three-dimensional model is rendered, the method further comprises:
    determining, based on the channel map, a second partial fusion layer in the second fusion layer corresponding to the region to be subjected to material rendering; and
    the rendering the target three-dimensional model based on the second fusion layer to obtain a rendered target three-dimensional model comprises:
    rendering the target three-dimensional model based on the second partial fusion layer to obtain the rendered target three-dimensional model.
  7. The method according to claim 1 or 2, wherein the method further comprises:
    obtaining an environment map representing a light-and-shadow shape of the target three-dimensional model under the preset illumination direction;
    wherein the rendering the target three-dimensional model based on the second fusion layer to obtain a rendered target three-dimensional model comprises:
    performing light-and-shadow shape processing on the second fusion layer based on the light-and-shadow shape represented by the environment map, to obtain a second fusion layer after the light-and-shadow shape processing; and
    rendering the target three-dimensional model based on the second fusion layer after the light-and-shadow shape processing to obtain the rendered target three-dimensional model.
  8. A model rendering apparatus, comprising:
    an acquisition module, configured to obtain a color map corresponding to a target three-dimensional model;
    a generation module, configured to generate, based on a preset illumination direction, a material capture map representing material photosensitive information of the target three-dimensional model under the preset illumination direction and a cube map representing photosensitive information in different three-dimensional position regions;
    a first processing module, configured to perform first fusion processing on the color map and the material capture map to obtain a first fusion layer;
    a second processing module, configured to perform second fusion processing on the cube map and the first fusion layer to obtain a second fusion layer; and
    a third processing module, configured to render the target three-dimensional model based on the second fusion layer to obtain a rendered target three-dimensional model.
  9. A computer device, comprising a processor, a memory, and a bus, wherein the memory stores machine-readable instructions executable by the processor; when the computer device runs, the processor communicates with the memory through the bus; and the machine-readable instructions, when executed by the processor, perform the steps of the model rendering method according to any one of claims 1 to 7.
  10. A computer-readable storage medium, having stored thereon a computer program which, when run by a processor, performs the steps of the model rendering method according to any one of claims 1 to 7.
  11. A computer program which, when run by a processor, performs the steps of the model rendering method according to any one of claims 1 to 7.
  12. A computer program product, comprising a computer program which, when run by a processor, performs the steps of the model rendering method according to any one of claims 1 to 7.
PCT/CN2022/128075 2021-12-05 2022-10-27 Model rendering method and apparatus, computer device, and storage medium WO2023098358A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202111471457.7A CN114119848B (en) 2021-12-05 2021-12-05 Model rendering method and device, computer equipment and storage medium
CN202111471457.7 2021-12-05

Publications (1)

Publication Number Publication Date
WO2023098358A1 true WO2023098358A1 (en) 2023-06-08

Family

ID=80366486

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/128075 WO2023098358A1 (en) 2021-12-05 2022-10-27 Model rendering method and apparatus, computer device, and storage medium

Country Status (2)

Country Link
CN (1) CN114119848B (en)
WO (1) WO2023098358A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117495995A (en) * 2023-10-26 2024-02-02 神力视界(深圳)文化科技有限公司 Method, device, equipment and medium for generating texture map and model training method

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114119848B (en) * 2021-12-05 2024-05-14 北京字跳网络技术有限公司 Model rendering method and device, computer equipment and storage medium

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170287196A1 (en) * 2016-04-01 2017-10-05 Microsoft Technology Licensing, Llc Generating photorealistic sky in computer generated animation
CN112116692A (en) * 2020-08-28 2020-12-22 北京完美赤金科技有限公司 Model rendering method, device and equipment
CN112316420A (en) * 2020-11-05 2021-02-05 网易(杭州)网络有限公司 Model rendering method, device, equipment and storage medium
CN112489179A (en) * 2020-12-15 2021-03-12 网易(杭州)网络有限公司 Target model processing method and device, storage medium and computer equipment
CN113034661A (en) * 2021-03-24 2021-06-25 网易(杭州)网络有限公司 Method and device for generating MatCap map
CN114119848A (en) * 2021-12-05 2022-03-01 北京字跳网络技术有限公司 Model rendering method and device, computer equipment and storage medium

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107229905B (en) * 2017-05-05 2020-08-11 广州视源电子科技股份有限公司 Method and device for rendering color of lips and electronic equipment
CN108734754B (en) * 2018-05-28 2022-05-06 北京小米移动软件有限公司 Image processing method and device
CN110193193B (en) * 2019-06-10 2022-10-04 网易(杭州)网络有限公司 Rendering method and device of game scene
CN111627119B (en) * 2020-05-22 2023-09-15 Oppo广东移动通信有限公司 Texture mapping method and device, equipment and storage medium


Also Published As

Publication number Publication date
CN114119848B (en) 2024-05-14
CN114119848A (en) 2022-03-01


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22900163

Country of ref document: EP

Kind code of ref document: A1