CN112116692B - Model rendering method, device and equipment - Google Patents

Model rendering method, device and equipment

Info

Publication number
CN112116692B
CN112116692B (application CN202010888002.4A)
Authority
CN
China
Prior art keywords
map
dimensional
information
target
target object
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010888002.4A
Other languages
Chinese (zh)
Other versions
CN112116692A (en)
Inventor
张峰
陈瑽
庄涛
辛奕坤
Current Assignee
Beijing Perfect Chijin Technology Co ltd
Original Assignee
Beijing Perfect Chijin Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Beijing Perfect Chijin Technology Co ltd filed Critical Beijing Perfect Chijin Technology Co ltd
Priority to CN202010888002.4A priority Critical patent/CN112116692B/en
Publication of CN112116692A publication Critical patent/CN112116692A/en
Application granted granted Critical
Publication of CN112116692B publication Critical patent/CN112116692B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 15/00 3D [Three Dimensional] image rendering
    • G06T 15/005 General purpose rendering architectures
    • G06T 15/50 Lighting effects
    • G06T 19/00 Manipulating 3D models or images for computer graphics
    • G06T 19/20 Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts


Abstract

Embodiments of the invention provide a model rendering method, apparatus, and device. The method includes: acquiring a hand-drawn intrinsic color map, a normal map, and a two-dimensional light-shadow image of a target object, where the two-dimensional light-shadow image carries light-shadow information of the target object; combining the two-dimensional light-shadow image with the hand-drawn intrinsic color map to obtain a target intrinsic color map of the target object; and outputting the target intrinsic color map and the normal map to render a three-dimensional model of the target object. By combining the two-dimensional light-shadow image with the hand-drawn intrinsic color map, light-shadow information is added directly to the target intrinsic color map, so the target intrinsic color map can replace the Metallic map or Specular map in the current PBR production pipeline. This improves map-production efficiency, reduces the performance demands that rendering places on the device, improves the rendering efficiency of the three-dimensional model, and broadens the range of devices on which the maps can be used. The target intrinsic color map also keeps the displayed image of the three-dimensional model from being affected when the PBR effect is turned off.

Description

Model rendering method, device and equipment
Technical Field
The present invention relates to the field of image processing technologies, and in particular, to a method, an apparatus, and a device for rendering a model.
Background
Physically based rendering (PBR) is a rendering technique based on microfacet surface theory. Because it excels at simulating light reflection, PBR is widely applied in high-end games and film and video production.
Currently, PBR maps are produced in two main workflows, distinguished by the type of map: a Metallic-based workflow and a Specular-based workflow. Either workflow requires modeling, outputting a normal map and an intrinsic color map, and drawing a Roughness map together with a Metallic map or a Specular map. From the physical information these maps provide, shadows, highlights, reflections, and Fresnel effects can be computed in real time, simulating plausible light reflection for three-dimensional scenes under different illumination conditions. However, current PBR production suffers from a long workflow, high resource consumption, and low scene-production efficiency.
In addition, the current process of producing a three-dimensional scene with PBR technology cannot be adapted to all devices. On low- and mid-range devices, the large number of PBR maps makes the three-dimensional scene display slowly or even fail to load and render. If the PBR effect is disabled instead, the displayed scene looks flat and differs too much visually from the scene rendered with the PBR effect.
In summary, improving the efficiency of map production while making the maps fit a variety of devices has become a technical problem to be solved.
Disclosure of Invention
Embodiments of the invention provide a model rendering method, apparatus, and device, which improve model rendering efficiency and adapt the maps to a variety of devices.
In a first aspect, an embodiment of the present invention provides a model rendering method, including:
Acquiring a hand-drawn intrinsic color map, a normal map, and a two-dimensional light-shadow image of a target object; the two-dimensional light-shadow image carries light-shadow information of the target object;
combining the two-dimensional light-shadow image with the hand-drawn intrinsic color map to obtain a target intrinsic color map of the target object;
the target intrinsic color map and the normal map are output to render a three-dimensional model of the target object.
In one possible embodiment, combining the two-dimensional light-shadow image with the hand-drawn intrinsic color map to obtain the target intrinsic color map of the target object includes:
merging the two-dimensional light-shadow image and the hand-drawn intrinsic color map into the RGBA (red, green, blue, alpha) color channels to obtain the target intrinsic color map; the RGBA channels include an Alpha channel, serving as the light-shadow component channel, that carries the two-dimensional light-shadow image.
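As a non-authoritative sketch of the channel layout this embodiment describes (the function name and the list-of-tuples image representation are illustrative assumptions, not the patent's implementation), the merge can be expressed in a few lines of Python: the hand-drawn intrinsic color map supplies the RGB channels and the grayscale light-shadow image fills the Alpha channel.

```python
def merge_into_rgba(intrinsic_rgb, shadow_gray):
    """Pack a hand-drawn intrinsic color map (rows of (r, g, b) tuples)
    and a 2D light-shadow grayscale image (rows of ints) into one RGBA
    target map: RGB carries the intrinsic color, Alpha the shadow term."""
    if len(intrinsic_rgb) != len(shadow_gray):
        raise ValueError("maps must share the same resolution")
    target = []
    for color_row, shadow_row in zip(intrinsic_rgb, shadow_gray):
        target.append([(r, g, b, a)
                       for (r, g, b), a in zip(color_row, shadow_row)])
    return target

# Toy 1x2 example with 8-bit channels: a dark texel and a fully lit texel
rgb = [[(128, 128, 128), (200, 180, 90)]]
shadow = [[0, 255]]
print(merge_into_rgba(rgb, shadow))
# [[(128, 128, 128, 0), (200, 180, 90, 255)]]
```

The same single RGBA texture then serves both as the color source and as the light-shadow source in later rendering steps.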
In one possible embodiment, after outputting the target intrinsic color map and the normal map, the method further includes: rendering the three-dimensional model using the target intrinsic color map and the normal map.
In one possible embodiment, rendering the three-dimensional model using the target intrinsic color map and the normal map includes: calculating the normal map to obtain normal information of the target object; calculating the target intrinsic color map to obtain ambient reflected-light information and color information of the target object; and rendering the three-dimensional model using the normal information, the ambient reflected-light information, and the color information.
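To make the three inputs named above concrete, the sketch below combines them per texel in a simple Lambert-plus-ambient fashion. The formula and names are illustrative assumptions, not the patent's shader:

```python
def shade_texel(color, normal, light_dir, ambient_reflected):
    """Shade one texel from the three rendering inputs.

    color             -- (r, g, b) from the target intrinsic color map
    normal            -- unit surface normal decoded from the normal map
    light_dir         -- unit vector from the surface toward the light
    ambient_reflected -- ambient reflected-light scalar in [0, 1]
    """
    # Lambert term: clamp the normal/light dot product at zero
    n_dot_l = max(0.0, sum(n * l for n, l in zip(normal, light_dir)))
    intensity = min(1.0, n_dot_l + ambient_reflected)
    return tuple(round(c * intensity) for c in color)

# Surface facing the light, with a little ambient bounce
print(shade_texel((200, 100, 50), (0, 0, 1), (0, 0, 1), 0.2))  # (200, 100, 50)
```

A back-facing texel receives only the ambient term, so shadows stay readable rather than going fully black.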
In one possible embodiment, the RGBA channels of the target intrinsic color map include an Alpha channel for carrying the two-dimensional light-shadow image.
Calculating the normal map to obtain the normal information of the target object includes: sampling the normal map to import the normal vector information of the target object carried by the normal map into the UV-unwrapped layer of the three-dimensional model.
Calculating the target intrinsic color map to obtain the ambient reflected-light information and color information of the target object includes: sampling the target intrinsic color map to import the ambient reflected-light information carried in the Alpha channel and the color information carried in the RGB channels into the UV-unwrapped layer of the three-dimensional model.
Rendering the three-dimensional model using the normal information, the ambient reflected-light information, and the color information includes: generating, based on the UV-unwrapped layer of the three-dimensional model, a three-dimensional model with color and light-shadow effects.
In one possible embodiment, sampling the target intrinsic color map to import the ambient reflected-light information carried in the Alpha channel into the UV-unwrapped layer of the three-dimensional model includes: sampling the Alpha channel to obtain the indirect diffuse illumination and specular reflection illumination of the target object; and processing the indirect diffuse illumination and specular reflection illumination through a material ambient occlusion (AO) map, and importing the processing result, as the ambient reflected-light information, into the UV-unwrapped layer of the three-dimensional model.
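A hedged sketch of this processing step: the weighting below is an assumption chosen for illustration, not the patent's formula. The Alpha channel supplies a light-shadow mask, the indirect diffuse and specular terms come from the environment, and a material AO factor attenuates their sum.

```python
def ambient_reflected_light(alpha, indirect_diffuse, specular, ao):
    """Combine Alpha-derived lighting with an AO factor (all in [0, 1]).

    alpha            -- light-shadow mask sampled from the Alpha channel
    indirect_diffuse -- indirect diffuse illumination of the target object
    specular         -- specular (mirror) reflection illumination
    ao               -- material ambient occlusion factor
    Returns the ambient reflected-light value imported into the UV layer.
    """
    # Illustrative weighting: the mask scales both lighting terms,
    # then AO darkens occluded regions.
    return ao * alpha * (indirect_diffuse + specular)

print(ambient_reflected_light(1.0, 0.4, 0.1, 1.0))  # fully lit, unoccluded
print(ambient_reflected_light(0.5, 0.4, 0.1, 0.8))  # shadowed and occluded
```

Because the mask is baked into the same RGBA texture as the color, this step needs no extra texture fetch beyond the one already made for the intrinsic color.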
In one possible embodiment, calculating the target intrinsic color map to obtain the color information of the target object includes: simulating the color information carried in the RGB channels with a function, and importing the simulated value into the UV-unwrapped layer of the three-dimensional model.
In one possible embodiment, the target intrinsic color map includes a two-dimensional map bearing the ambient reflected-light information. Rendering the three-dimensional model using the target intrinsic color map and the normal map includes: if the physically based rendering effect is disabled in the runtime environment of the target object, sampling the target intrinsic color map to import the color information carried in the RGB channels and the ambient reflected-light information carried in the two-dimensional map into the UV-unwrapped layer of the three-dimensional model; sampling the normal map to import the normal vector information of the target object carried by the normal map into the UV-unwrapped layer; and generating, based on the UV-unwrapped layer of the three-dimensional model, a three-dimensional model with color and light-shadow effects.
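The fallback path can be illustrated with a minimal sketch (the function, the ambient floor, and the remapping are assumptions for illustration, not the patent's shader): with the light-shadow term baked into Alpha, a lit color can still be produced when PBR is off, without any Metallic or Specular map.

```python
def shade_without_pbr(rgba_texel, ambient=0.25):
    """Fallback shading sketch for when the PBR effect is disabled.

    The RGB channels give the intrinsic color; the Alpha channel gives
    the baked light-shadow term. The ambient floor keeps shadowed texels
    from going fully black.
    """
    r, g, b, a = rgba_texel
    light = ambient + (1.0 - ambient) * (a / 255.0)  # remap to [ambient, 1]
    return tuple(round(c * light) for c in (r, g, b))

print(shade_without_pbr((200, 100, 50, 255)))  # fully lit texel
print(shade_without_pbr((200, 100, 50, 0)))    # ambient-only texel
```

Because the same texture drives both the PBR-on and PBR-off paths, the displayed image stays close when the effect is toggled, which is the behavior the embodiment aims for.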
In one possible embodiment, the two-dimensional light-shadow image is a two-dimensional grayscale image, and the grayscale image includes highlight-region information of the target object.
In one possible embodiment, the two-dimensional light-shadow image is stored in a reflection source of a reflection probe.
In a second aspect, an embodiment of the present invention provides a model rendering apparatus including:
The acquisition module is configured to acquire a hand-drawn intrinsic color map, a normal map, and a two-dimensional light-shadow image of the target object; the two-dimensional light-shadow image carries light-shadow information of the target object.
The merging module is configured to combine the two-dimensional light-shadow image with the hand-drawn intrinsic color map to obtain a target intrinsic color map of the target object.
The output module is configured to output the target intrinsic color map and the normal map so as to render a three-dimensional model of the target object.
In one possible embodiment, the merging module is specifically configured to: merge the two-dimensional light-shadow image and the hand-drawn intrinsic color map into the RGBA (red, green, blue, alpha) color channels to obtain the target intrinsic color map; the RGBA channels include an Alpha channel, serving as the light-shadow component channel, that carries the two-dimensional light-shadow image.
In one possible embodiment, the model rendering apparatus further includes a rendering module configured to render the three-dimensional model using the target intrinsic color map and the normal map after the output module outputs the target intrinsic color map and the normal map.
In one possible embodiment, the rendering module is specifically configured to: calculate the normal map to obtain normal information of the target object; calculate the target intrinsic color map to obtain ambient reflected-light information and color information of the target object; and render the three-dimensional model using the normal information, the ambient reflected-light information, and the color information.
In one possible embodiment, the RGBA channels of the target intrinsic color map include an Alpha channel for carrying the two-dimensional light-shadow image.
When calculating the normal map to obtain the normal information of the target object, the rendering module is specifically configured to: sample the normal map to import the normal vector information of the target object carried by the normal map into the UV-unwrapped layer of the three-dimensional model.
When calculating the target intrinsic color map to obtain the ambient reflected-light information and color information of the target object, the rendering module is specifically configured to: sample the target intrinsic color map to import the ambient reflected-light information carried in the Alpha channel and the color information carried in the RGB channels into the UV-unwrapped layer of the three-dimensional model.
When rendering the three-dimensional model using the normal information, the ambient reflected-light information, and the color information, the rendering module is specifically configured to: generate, based on the UV-unwrapped layer of the three-dimensional model, a three-dimensional model with color and light-shadow effects.
In one possible embodiment, when sampling the target intrinsic color map to import the ambient reflected-light information carried in the Alpha channel into the UV-unwrapped layer of the three-dimensional model, the rendering module is specifically configured to: sample the Alpha channel to obtain the indirect diffuse illumination and specular reflection illumination of the target object; process the indirect diffuse illumination and specular reflection illumination through a material ambient occlusion (AO) map; and import the processing result, as the ambient reflected-light information, into the UV-unwrapped layer of the three-dimensional model.
In one possible embodiment, when calculating the target intrinsic color map to obtain the color information of the target object, the rendering module is specifically configured to: simulate the color information carried in the RGB channels with a function, and import the simulated value into the UV-unwrapped layer of the three-dimensional model.
In one possible embodiment, the target intrinsic color map includes a two-dimensional map bearing the ambient reflected-light information. When rendering the three-dimensional model using the target intrinsic color map and the normal map, the rendering module is specifically configured to: if the physically based rendering effect is disabled in the runtime environment of the target object, sample the target intrinsic color map to import the color information carried in the RGB channels and the ambient reflected-light information carried in the two-dimensional map into the UV-unwrapped layer of the three-dimensional model; sample the normal map to import the normal vector information of the target object carried by the normal map into the UV-unwrapped layer; and generate, based on the UV-unwrapped layer of the three-dimensional model, a three-dimensional model with color and light-shadow effects.
In one possible embodiment, the two-dimensional light-shadow image is a two-dimensional grayscale image, and the grayscale image includes highlight-region information of the target object.
In one possible embodiment, the two-dimensional light-shadow image is stored in a reflection source of a reflection probe.
In a third aspect, an embodiment of the present invention provides an electronic device, including a processor and a memory, where the memory stores executable code, and when the executable code is executed by the processor, causes the processor to at least implement the model rendering method described above.
Embodiments of the present invention also provide a system including a processor and a memory having stored therein at least one instruction, at least one program, code set, or instruction set that is loaded and executed by the processor to implement the model rendering method described above.
Embodiments of the present invention provide a computer readable medium having stored thereon at least one instruction, at least one program, code set, or instruction set, loaded and executed by a processor to implement the model rendering method described above.
Embodiments of the present invention provide a non-transitory machine-readable storage medium having executable code stored thereon, which when executed by a processor of an electronic device, causes the processor to at least implement the model rendering method in the first aspect.
In the technical solution provided by the embodiments of the invention, for a target object to be rendered, a hand-drawn intrinsic color map, a normal map, and a two-dimensional light-shadow image of the target object are acquired. Since highlight information, shadow information, and other light-shadow information can be added directly to the hand-drawn intrinsic color map, the two-dimensional light-shadow image carrying the light-shadow information of the target object can be combined directly with the hand-drawn intrinsic color map to obtain the target intrinsic color map of the target object. Because the target intrinsic color map holds both the intrinsic color information and the light-shadow information of the target object, it not only replaces the intrinsic color map in the current PBR production pipeline but can also be used for subsequent light-shadow calculation, replacing the Metallic map or Specular map in that pipeline. This avoids the complex production of Metallic and Specular maps, greatly simplifies map production, shortens map acquisition time, and improves map acquisition efficiency.
Finally, the target intrinsic color map and the normal map are output, which suffices to render the three-dimensional model of the target object without the multiple PBR maps of the current PBR pipeline. This avoids the slow loading and rendering, or outright failure to load and render, caused by an excessive number of PBR maps, greatly reduces the performance demands that rendering places on the device, speeds up the display of the three-dimensional model, improves its rendering efficiency, and broadens the range of devices on which the maps can be used. Because the two-dimensional light-shadow image is combined into the hand-drawn intrinsic color map, the displayed image of the three-dimensional model does not differ excessively before and after the PBR effect is turned off, effectively keeping the PBR effect from influencing the displayed image.
Drawings
To more clearly illustrate the technical solutions of the embodiments of the invention, the drawings needed for describing the embodiments are briefly introduced below. Obviously, the drawings described below show only some embodiments of the invention, and a person skilled in the art may derive other drawings from them without inventive effort.
FIG. 1 is a flow chart of a model rendering method according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of a three-dimensional model rendering process according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of a three-dimensional model according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of another three-dimensional model according to an embodiment of the present invention;
fig. 5 is a schematic structural diagram of a model rendering device according to an embodiment of the present invention;
fig. 6 is a schematic structural diagram of an electronic device corresponding to the model rendering apparatus provided in the embodiment shown in fig. 5.
Detailed Description
The present disclosure will now be discussed with reference to several exemplary embodiments. It should be understood that these embodiments are discussed only to enable those of ordinary skill in the art to better understand and thus practice the teachings of the present invention, and are not meant to imply any limitation on the scope of the invention.
As used herein, the term "comprising" and its variants are to be interpreted as open-ended terms meaning "including but not limited to". The term "based on" is to be interpreted as "based at least in part on". The terms "one embodiment" and "an embodiment" are to be interpreted as "at least one embodiment". The term "another embodiment" is to be interpreted as "at least one other embodiment".
In addition, the sequence of steps in the method embodiments described below is only an example and is not strictly limited.
The model rendering scheme provided by the embodiment of the invention can be executed by an electronic device, and the electronic device can be a server. The server may be a physical server comprising an independent host, or may be a virtual server carried by a host cluster, or may be a cloud server. The electronic device may also be a terminal device such as a tablet computer, a PC, a notebook computer, etc.
The model rendering scheme provided by the embodiments of the invention is suitable for producing maps of various objects, including, for example, texture materials, physical objects, and virtual scenes. Physical objects are, for example, characters and props in a game. The maps of these objects are mainly used in the rendering pipelines of the respective objects.
In the conventional map-production process, three-dimensional modeling (including base-model, high-poly, and low-poly normal steps), baking the AO and normal maps, transferring out the conventional intrinsic color map, and adding various details are all required before the conventional intrinsic color map and normal map are finished. To give the three-dimensional model a light-shadow effect, a highlight map may additionally be produced from the de-colored conventional intrinsic color map. However, the conventional maps have a complex production process, perform poorly during rendering, and give a poor visual experience.
At present, to improve users' visual experience, a PBR map-production process based on PBR technology has also been introduced. It is divided into two main workflows by the type of map finally output: a Metallic-based workflow and a Specular-based workflow. Either workflow requires three-dimensional modeling, a normal map, an intrinsic color map, and the drawing of a Roughness map plus a Metallic map or a Specular map. The Metallic-based workflow typically outputs a conventional intrinsic color map, a normal map, a Roughness map, and a Metallic map, whereas the Specular-based workflow outputs a conventional intrinsic color map, a normal map, a Roughness map, and a Specular map. During rendering, shadows, highlights, reflections, and Fresnel effects can be computed in real time from the physical information these maps provide, simulating plausible light reflection for three-dimensional scenes under different illumination conditions. However, producing a three-dimensional scene with the existing PBR technology still suffers from a long workflow, high resource consumption, and low scene-production efficiency.
In addition, because the number of PBR maps is large, rendering with PBR technology also places performance demands on the device. On low- and mid-range devices, the large number of PBR maps makes the three-dimensional scene display slowly or even fail to load and render. If the PBR effect is disabled instead, the displayed scene looks flat and differs too much visually from the scene rendered with the PBR effect. A three-dimensional scene produced with the existing PBR technology therefore cannot suit all devices.
In summary, improving the efficiency of map production while making the maps fit a variety of devices has become a technical problem to be solved.
To solve at least one of the above technical problems, the core idea of the model rendering scheme provided by the embodiments of the invention is as follows:
For a target object to be rendered, a hand-drawn intrinsic color map, a normal map, and a two-dimensional light-shadow image of the target object are acquired. Because highlight information, shadow information, and other light-shadow information can be added to the hand-drawn intrinsic color map, the two-dimensional light-shadow image carrying the light-shadow information of the target object can be combined directly with the hand-drawn intrinsic color map to obtain the target intrinsic color map of the target object. Because the target intrinsic color map holds both the intrinsic color information and the light-shadow information of the target object, it not only replaces the intrinsic color map in the current PBR production pipeline but can also be used for subsequent light-shadow calculation, replacing the Metallic map or Specular map in that pipeline. This effectively avoids the complex production of Metallic and Specular maps, greatly simplifies map production, shortens map acquisition time, and improves map acquisition efficiency. Finally, the target intrinsic color map and the normal map are output, which suffices to render the three-dimensional model of the target object without the multiple PBR maps of the current PBR pipeline. This avoids the slow loading and rendering, or outright failure to load and render, caused by an excessive number of PBR maps, greatly reduces the performance demands that rendering places on the device, speeds up the display of the three-dimensional model, improves its rendering efficiency, and broadens the range of devices on which the maps can be used.
In addition, by combining the two-dimensional light-shadow image bearing the light-shadow information of the target object with the hand-drawn intrinsic color map, the target intrinsic color map carries both the intrinsic color information and the light-shadow information of the target object, so the displayed image of the three-dimensional model does not differ excessively before and after the PBR effect is turned off, effectively keeping the PBR effect from influencing the displayed image.
Having described the basic ideas of a model rendering scheme, various non-limiting embodiments of the present invention are specifically described below.
The execution of the model rendering method is described below in connection with the following embodiments.
Fig. 1 is a flowchart of a model rendering method according to an embodiment of the present invention. As shown in fig. 1, the model rendering method includes the steps of:
101. Acquire a hand-drawn intrinsic color map, a normal map, and a two-dimensional light-shadow image of a target object; the two-dimensional light-shadow image carries light-shadow information of the target object.
102. Combine the two-dimensional light-shadow image with the hand-drawn intrinsic color map to obtain a target intrinsic color map of the target object.
103. Output the target intrinsic color map and the normal map to render a three-dimensional model of the target object.
In the model rendering method shown in fig. 1, the target intrinsic color map can replace not only the intrinsic color map in the current PBR production pipeline but also the Metallic map or Specular map, avoiding the complex production of Metallic and Specular maps. The target intrinsic color map and normal map that are finally output suffice to render the three-dimensional model of the target object; the multiple PBR maps of the current PBR pipeline are not needed, which reduces the performance demands that rendering places on the device, speeds up the display of the three-dimensional model, and broadens the range of devices on which the maps can be used.
To render a three-dimensional model of a target object, the maps (textures) used to render the model must first be acquired. In practical applications, the target object may be a virtual scene, or a character or object in a virtual scene. A map can be understood as a two-dimensional image that mainly presents model surface information, so that a device can render the appearance of the model through the map. In practice there are various types of maps, and different types present different kinds of surface information. The various maps are described below and are not expanded here.
To acquire the maps for rendering the three-dimensional model, the hand-drawn intrinsic color map, the normal map, and the two-dimensional light-shadow image of the target object are first acquired (step 101).
The intrinsic color is the color of the target object under a normal light source, so the intrinsic color map in the conventional and PBR production pipelines generally contains only color information. Unlike those maps, the hand-drawn intrinsic color map of the target object can additionally carry light-shadow information such as highlight information and shadow information. In practical applications, the hand-drawn intrinsic color map is implemented, for example, as a hand-drawn diffuse map. Optionally, to facilitate adding the light-shadow information, a light-shadow component channel (i.e., an Alpha channel) for carrying the light-shadow information is also created in the hand-drawn intrinsic color map. In this way, a target intrinsic color map containing both intrinsic color information and light-shadow information can be produced from the hand-drawn intrinsic color map, replacing the Metallic map or Specular map in the current PBR pipeline and thereby simplifying map production.
Normal mapping (Normal Mapping) is mainly used to describe the bump information of an object surface. The baking process of a normal map effectively places a normal at every point of the bumpy surface of the target object and marks the direction of that normal through the color channels, so each texel in the normal map represents the direction of a surface normal vector. Rendering the bumps of the object surface with the normal map gives the rendered surface a more accurate illumination direction and reflection effect.
The two-dimensional light-shadow image of the target object is mainly used to carry the light-shadow information of the target object. In practical applications, the two-dimensional light-shadow image may be a two-dimensional gray-scale map produced based on the target object, and the gray-scale map includes the highlight region information of the target object. To simplify the production process, a hand-drawn gray-scale map may be chosen as the two-dimensional light-shadow image.
Alternatively, the two-dimensional light-shadow image may be stored in the reflection source (Cube Map) of a reflection probe (Reflection Probe). The reflection probe is mainly used to control the reflection information of light rays in a scene, and a Cube Map is typically used as the reflection source for objects with reflective properties. A Cube Map may be a collection of multiple independent square textures that can be combined and mapped into a single texture.
Further, after the hand-drawn intrinsic color map, the normal map, and the two-dimensional light-shadow image of the target object are acquired, the two-dimensional light-shadow image and the hand-drawn intrinsic color map are combined in step 102 to obtain the target intrinsic color map of the target object, and finally the target intrinsic color map and the normal map are output in step 103 to render the three-dimensional model of the target object.
From the above description, light-shadow information can be added directly to the hand-drawn intrinsic color map of the target object, and the two-dimensional light-shadow image carries the light-shadow information of the target object. Thus, the two-dimensional light-shadow image may be combined with the hand-drawn intrinsic color map to obtain a target intrinsic color map carrying the light-shadow information of the target object.
Specifically, combining the two-dimensional light-shadow image with the hand-drawn intrinsic color map to obtain the target intrinsic color map of the target object may be implemented as: merging the two-dimensional light-shadow image and the hand-drawn intrinsic color map into a color channel (RGBA channel) to obtain the target intrinsic color map.
Specifically, a light-shadow component channel (i.e., an Alpha channel) for carrying light-shadow information is created on the basis of the three color component channels contained in the hand-drawn intrinsic color map. In the Alpha channel, the light-shadow information of the target object is produced by drawing a two-dimensional light-shadow image (such as a two-dimensional gray-scale map). For example, a two-dimensional gray-scale map is drawn and stored in the Alpha channel of the RGBA channel. In another example, this process may also be implemented through a lookup table (Look Up Table, LUT): the drawn two-dimensional gray-scale map is stored in a color lookup table by way of color-gamut conversion, and the color lookup table is then stored in the Alpha channel of the RGBA channel.
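As a sketch of the merging step, assuming the hand-drawn intrinsic color map is an H x W x 3 array and the two-dimensional gray-scale light-shadow image an H x W array (names, shapes, and values here are illustrative, not the patent's actual implementation), the Alpha-channel merge can be written as:

```python
import numpy as np

def merge_into_target_map(intrinsic_rgb, gray_light_shadow):
    """Merge a hand-drawn intrinsic color map (H x W x 3, uint8) with a
    two-dimensional gray-scale light-shadow image (H x W, uint8) into one
    RGBA target intrinsic color map, storing the gray values in Alpha."""
    assert intrinsic_rgb.shape[:2] == gray_light_shadow.shape
    return np.dstack([intrinsic_rgb, gray_light_shadow])

# Tiny illustrative inputs: a 2x2 mid-gray color map and a gradient shadow image.
rgb = np.full((2, 2, 3), 128, dtype=np.uint8)
shadow = np.array([[0, 64], [128, 255]], dtype=np.uint8)
target = merge_into_target_map(rgb, shadow)
```

The result is a single four-channel image in which the RGB planes still carry the hand-drawn color information and the Alpha plane carries the light-shadow information.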
Optionally, after the two-dimensional light-shadow image is merged into the Alpha channel of the hand-drawn intrinsic color map, a calculation can be performed based on the light-shadow information of the target object carried in the Alpha channel to obtain the environment diffuse reflection information of the target object, and this information is merged into the target intrinsic color map. The environment diffuse reflection information may be implemented as an environment diffuse reflection map. To adapt to devices of various capabilities, the environment diffuse reflection map may be implemented as a two-dimensional map, for example a two-dimensional map carrying environment reflected light information.
As can be seen from the above description, in practical applications the two-dimensional gray-scale map may be a hand-drawn gray-scale map. Because the hand-drawn gray-scale map carries light-shadow information expressed by various gray values, combining it with the hand-drawn intrinsic color map allows devices of different performance levels to render target objects with light-shadow effects, and prevents the PBR effect from affecting the displayed picture of the three-dimensional model.
In practice, the RGBA channel includes three color component channels and an Alpha channel. The three color component channels, namely the Red (R) channel, Green (G) channel, and Blue (B) channel, are mainly used to carry the color information in the hand-drawn intrinsic color map.
Optionally, the color information carried by the RGB channels is converted into a two-dimensional map, so that the color information of the target object can be read out by sampling the two-dimensional map. Alternatively, the color information carried by the RGB channels can be converted into a color lookup table, so that the color information of the target object can be obtained by querying the color lookup table. In practice, converting the color information carried by the RGB channels into a color lookup table means mapping the color information from the color space of the RGB channels to the color space of the color lookup table. Converting and storing the color information in this way adapts to the performance of various devices, improves rendering efficiency, and saves storage space.
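To illustrate the lookup-table variant, the following sketch builds a palette-style color lookup table from the RGB channels and reads the colors back by querying it. This is a simplification: the patent's gamut-conversion LUT is not specified in detail, and all names here are illustrative.

```python
import numpy as np

def build_color_lut(rgb):
    """Convert the color information carried in the RGB channels into a
    color lookup table (the unique colors) plus per-pixel indices into it."""
    h, w, _ = rgb.shape
    flat = rgb.reshape(-1, 3)
    lut, indices = np.unique(flat, axis=0, return_inverse=True)
    return lut, indices.reshape(h, w)

def query_color(lut, indices):
    """Obtain the color information of the target object by querying the LUT."""
    return lut[indices]

# A 2x2 image with three distinct colors.
image = np.array([[[255, 0, 0], [0, 255, 0]],
                  [[255, 0, 0], [0, 0, 255]]], dtype=np.uint8)
lut, idx = build_color_lut(image)
restored = query_color(lut, idx)
```

Storing the small LUT plus indices instead of full RGB data is one way such a conversion can save storage space, as the paragraph above suggests.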
Of course, the color channel may also be referred to as a color space or a composite color channel, which is not limited herein. Whatever the designation, what actually matters is the type of information carried in the RGBA channel.
The RGBA channel includes a light-shadow component channel, i.e., the Alpha channel, for carrying the two-dimensional light-shadow image. Optionally, a two-dimensional gray-scale map describing the light-shadow information of the target object is stored in the Alpha channel.
Specifically, the lighter the color of a pixel in the two-dimensional gray-scale map, the smaller the element value of the Alpha channel corresponding to that pixel, and the stronger the brightness described by that element value; the darker the color of a pixel, the larger the corresponding Alpha element value, and the weaker the brightness described. For example, assume the element value of the Alpha channel ranges from 0 to 255. An Alpha element value of 0 then describes the strongest brightness, and an Alpha element value of 255 describes the weakest.
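The convention described above can be sketched as follows, assuming 8-bit values throughout (the engine-side decoding is not specified in the patent, so this is only an illustration of the stated mapping):

```python
import numpy as np

def gray_to_alpha(gray):
    """Lighter gray pixel -> smaller Alpha element value: a pure-white
    pixel (255) maps to Alpha 0, a pure-black pixel (0) to Alpha 255."""
    return (255 - gray.astype(np.int32)).astype(np.uint8)

def brightness_strength(alpha):
    """Alpha 0 describes the strongest brightness (1.0); Alpha 255 the
    weakest (0.0)."""
    return 1.0 - alpha / 255.0

gray = np.array([0, 128, 255], dtype=np.uint8)  # black, mid-gray, white
alpha = gray_to_alpha(gray)
```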
Finally, after the target intrinsic color map of the target object is obtained by combining the two-dimensional light-shadow image and the hand-drawn intrinsic color map, the target intrinsic color map and the normal map may be output for rendering the three-dimensional model of the target object. Since the target intrinsic color map carries the light-shadow information of the target object, a three-dimensional model with both a color effect and a light-shadow effect can be rendered from the target intrinsic color map and the normal map.
Optionally, after the target intrinsic color map and the normal map are output in step 103, the three-dimensional model is rendered through the target intrinsic color map and the normal map. It is assumed that the RGBA channel of the target intrinsic color map includes an Alpha channel carrying the two-dimensional light-shadow image. How a three-dimensional model is rendered through the target intrinsic color map and the normal map is described below with a specific example.
Fig. 2 is a schematic diagram of a three-dimensional model rendering process according to an embodiment of the present invention. As shown in fig. 2, the process of rendering the three-dimensional model includes the following steps:
201. Calculate the normal map to obtain the normal information of the target object.
Specifically, the normal map is sampled to import the normal vector information of the target object carried by the normal map into the UV-unwrapped layer of the three-dimensional model. The normal vector information is typically encoded in the RGB color channels of the texels of the normal map. Thus, the sampling process may be: acquire coordinate information from the texture of the normal map, compute the encoding of the normal vector information based on the coordinate information, and import the computed encoding into the UV-unwrapped layer of the three-dimensional model according to the correspondence between the normal map and that layer.
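Decoding a normal-map texel can be sketched as below, using the common mapping from 8-bit channels to components in [-1, 1]. This encoding is an assumption: individual engines differ in Z convention and normalization.

```python
import numpy as np

def decode_normal(texel_rgb):
    """Decode a surface normal vector from an RGB texel of a normal map:
    each channel in [0, 255] maps to a component in [-1, 1], then the
    vector is renormalized."""
    n = np.asarray(texel_rgb, dtype=np.float64) / 255.0 * 2.0 - 1.0
    length = np.linalg.norm(n)
    return n / length if length > 0 else n

# The typical "flat surface" texel (128, 128, 255) decodes to roughly (0, 0, 1),
# which is why tangent-space normal maps look predominantly blue.
flat = decode_normal([128, 128, 255])
```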
202. Calculate the target intrinsic color map to acquire the environment reflected light information and color information of the target object.
In practice, the environment reflected light information of the target object, such as image-based lighting (Image Based Lighting, IBL), may be calculated through the environment bidirectional reflectance distribution function (Environment Bidirectional Reflectance Distribution Function, Environment BRDF). The environment bidirectional reflectance distribution function can be expressed as the following formula:
envSpecular = ∫Ω Li(l) f(l, v) ⟨n·l⟩ dl ≈ (∫Ω Li(l) dl) × (∫Ω f(l, v) ⟨n·l⟩ dl)
Where l denotes the incident direction, v the observation direction, Li the incident lighting, f the BRDF, and n the surface normal. In fact, to let a mobile terminal device acquire the environment reflected light information of the target object in real time, the above environment bidirectional reflection distribution function may be simplified as: the environment reflected light information of the target object is the product of the LD term and the DFG term, denoted envSpecular = LD × DFG. The LD term is the result of summing the incident light and may be implemented as the two-dimensional gray-scale map of the light-shadow information of the target object. The DFG term may be implemented as a pre-computed two-dimensional map. Thus the acquisition of the environment reflected light information of the target object is finally reduced to sampling a two-dimensional gray-scale map and a two-dimensional map, which greatly simplifies the process.
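The simplified product form amounts to a pair of two-dimensional lookups, with the LD term read from the gray-scale light-shadow map and the DFG term from a precomputed map. The sketch below assumes nearest-neighbour sampling and illustrative map contents:

```python
import numpy as np

def env_specular(ld_gray_map, dfg_map, u, v):
    """envSpecular = LD * DFG: nearest-neighbour sample of a gray-scale LD
    map (uint8, scaled to [0, 1]) and a precomputed float DFG map at the
    same UV coordinate, then multiply the two terms."""
    h, w = ld_gray_map.shape
    x = min(int(u * w), w - 1)
    y = min(int(v * h), h - 1)
    ld = ld_gray_map[y, x] / 255.0
    dfg = dfg_map[y, x]
    return ld * dfg

ld_map = np.full((4, 4), 255, dtype=np.uint8)   # fully lit incident light
dfg_map = np.full((4, 4), 0.5)                  # precomputed BRDF term
```

This is why the scheme is cheap on mobile devices: two texture fetches and one multiplication replace the full integral.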
Based on the above simplification, calculating the target intrinsic color map to obtain the environment reflected light information and color information of the target object may be implemented as:
sampling the target intrinsic color map to import the environment reflected light information carried in the Alpha channel and the color information carried in the RGB channels into the UV-unwrapped layer of the three-dimensional model.
In an alternative embodiment, sampling the target intrinsic color map to import the environment reflected light information carried in the Alpha channel into the UV-unwrapped layer of the three-dimensional model may be implemented as:
sampling the Alpha channel to obtain the indirect diffuse illumination and specular illumination of the target object; processing the indirect diffuse illumination and specular illumination of the target object through a material ambient occlusion map, and importing the processing result as the environment reflected light information into the UV-unwrapped layer of the three-dimensional model.
Assume a two-dimensional gray-scale map describing the light-shadow information of the target object is stored in the Alpha channel, and that the two-dimensional gray-scale map is stored in a Cube Map. Based on these assumptions, the Cube Map is sampled through a texture sampling function (texCUBElod), outputting the indirect diffuse illumination (Indirect Diffuse) and specular illumination (Specular) of the target object. In practice, the texCUBElod function is a sampling function for the CUBE material; if the device rendering the target object does not support it, another sampling function (texCUBEbias) may be used to implement the above functionality. The calculation of the environment reflected light information of the target object is therefore actually a sampling of the two-dimensional gray-scale map, which greatly simplifies the calculation.
Because the texCUBElod function has a relatively low sampling efficiency, low- and mid-end devices may choose to acquire the environment reflected light information of the target object directly from the two-dimensional gray-scale map, avoiding the computational cost of the texCUBElod function and improving the calculation efficiency of the light-shadow information.
Furthermore, to further optimize the environment reflected light information and improve the rendering of the light-shadow information, an ambient occlusion (Ambient Occlusion, AO) map may be used to optimize the indirect diffuse illumination and specular illumination of the target object, with the processing result imported as the environment reflected light information into the UV-unwrapped layer of the three-dimensional model.
The AO map is used to represent the areas of the target object where light is occluded, such as shadow areas; the AO map can therefore also express light-shadow information. AO maps are typically generated by baking the three-dimensional model. In practice, AO maps include the material AO provided by the material map and the screen-space ambient occlusion effect (Screen Space Ambient Occlusion, SSAO). Optionally, the AO map used to optimize the indirect diffuse and specular illumination may be the material AO. For example, in Unity, the material AO may be used with the Standard Shader to optimize the computation of the indirect diffuse illumination.
In practice, the process may be implemented as: multiplying the calculated value of the indirect diffuse illumination by the AO map to obtain an optimized indirect diffuse illumination value, and multiplying the calculated value of the specular illumination by the AO map to obtain an optimized specular illumination value. Because the material AO places low demands on device performance, this optimization can be widely applied to various devices; for example, it suits low- and mid-end mobile devices.
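A minimal sketch of this optimization, with scalar values standing in for per-texel samples:

```python
def apply_material_ao(indirect_diffuse, specular, ao):
    """Optimize indirect diffuse and specular illumination with a material
    ambient-occlusion value in [0, 1] by simple multiplication, which is
    cheap enough for low- and mid-end devices."""
    return indirect_diffuse * ao, specular * ao

# Half-occluded point: both lighting terms are attenuated by 0.5.
diffuse_opt, specular_opt = apply_material_ao(0.8, 0.6, 0.5)
```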
Therefore, by sampling the two-dimensional light-shadow image in the Alpha channel, the environment reflected light information of the target object can be calculated, greatly simplifying the calculation and improving the rendering efficiency of the light-shadow effect.
In another embodiment, for the RGB channels, sampling the target intrinsic color map to import the color information carried by the RGB channels into the UV-unwrapped layer of the three-dimensional model may be implemented as follows. Assume the color information carried by the RGB channels is stored in a color lookup table. Color-gamut conversion is performed on the element values in the color lookup table one by one, according to the correspondence between each element in the color lookup table and each element in the UV-unwrapped layer of the three-dimensional model, to obtain the corresponding element values in that layer.
In yet another embodiment, importing the color information carried by the RGB channels into the UV-unwrapped layer of the three-dimensional model may also be implemented as: simulating the color information carried in the RGB channels through a function, and importing the simulated value into the UV-unwrapped layer of the three-dimensional model.
For example, for a mobile terminal device, a function simulating the color lookup table may be preset; the result of color-gamut conversion through the color lookup table is then approximated by this function, and the simulated conversion result is imported into the UV-unwrapped layer of the three-dimensional model.
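Such a preset function might, for instance, approximate a gamma-style gamut conversion analytically, so the color values need no texture lookup at all. The actual function is not specified in the patent; the curve below is purely illustrative.

```python
def simulated_lut(value):
    """Approximate a gamut-conversion color lookup table with an analytic
    function (here a simple 1/2.2 gamma curve); input and output are
    normalized color values in [0.0, 1.0]."""
    return value ** (1.0 / 2.2)

# Convert a few sample values without any LUT sampling.
converted = [simulated_lut(v) for v in (0.0, 0.5, 1.0)]
```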
Thus, by simulating the color lookup table with a function, color information can be imported into the UV-unwrapped layer of the three-dimensional model without sampling the RGB channels, reducing the bandwidth requirement of the lookup process.
203. Render the three-dimensional model using the normal information, the environment reflected light information, and the color information.
Specifically, a three-dimensional model with a color effect and a light-shadow effect is generated based on the UV-unwrapped layer of the three-dimensional model. In effect, a UV map is a planar representation of the three-dimensional model's surface texture: U refers to the horizontal axis of the two-dimensional space (i.e., the plane) and V refers to its vertical axis, and the process of creating a UV map is called UV unwrapping. After the normal vector information, indirect diffuse illumination, specular illumination, and color information are imported into the UV-unwrapped layer of the three-dimensional model, the three-dimensional model with a color effect and a light-shadow effect is baked from the UV map in that layer.
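Step 203 then amounts to combining the imported quantities per texel. A minimal, assumed shading combination (not the patent's exact formula) could look like:

```python
def shade_texel(albedo, indirect_diffuse, specular):
    """Combine color information (albedo) with indirect diffuse and
    specular illumination into one shaded value for the UV-unwrapped
    layer: tint the diffuse light by the albedo, then add the specular."""
    return albedo * indirect_diffuse + specular

# Mid-gray albedo, bright diffuse, small specular highlight.
value = shade_texel(0.5, 0.8, 0.1)
```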
Through the above steps 201 to 203, a three-dimensional model of the target object can be rendered based on the target intrinsic color map and the normal map.
In another alternative embodiment, assume the target intrinsic color map includes a two-dimensional map carrying environment reflected light information. Rendering the three-dimensional model through the target intrinsic color map and the normal map can then be implemented as: if the physically based rendering effect is disabled in the running environment of the target object, sample the target intrinsic color map to import the color information carried in the RGB channels and the environment reflected light information carried in the two-dimensional map into the UV-unwrapped layer of the three-dimensional model; sample the normal map to import the normal vector information of the target object into the same layer; and generate the three-dimensional model with a color effect and a light-shadow effect based on that layer. This further avoids an excessive contrast in the display of the three-dimensional model before and after the PBR effect is turned off, improving the user's visual experience. In practical applications, the two-dimensional map carrying the environment reflected light information may be an environment diffuse reflection map, and contains at least the light-shadow information of the target object.
The following describes the execution procedure of the model rendering method shown in fig. 1 and the three-dimensional model rendering method shown in fig. 2, taking the example that the target object is a three-dimensional scene:
Assume that the target object is a three-dimensional scene. Assuming that the first device is used to obtain the map, the second device is used to render a three-dimensional model of the target object.
Based on these assumptions, the first device acquires the hand-drawn intrinsic color map, normal map, and two-dimensional light-shadow image of the three-dimensional scene, merges the two-dimensional light-shadow image and the hand-drawn intrinsic color map into the target intrinsic color map of the three-dimensional scene, and outputs the target intrinsic color map to the second device. The target intrinsic color map of the three-dimensional scene contains the light-shadow information of the three-dimensional scene. The second device receives the target intrinsic color map and normal map of the three-dimensional scene, calculates the normal map to acquire the normal information of the three-dimensional scene, and calculates the target intrinsic color map to acquire the environment reflected light information and color information of the three-dimensional scene. The second device may then render a three-dimensional model using the normal information, environment reflected light information, and color information, as shown in fig. 3.
Assume the second device is a low- or mid-end device. The second device may also turn off the rendering effect based on the environment reflected light information. In this case, after receiving the target intrinsic color map and normal map of the three-dimensional scene, the second device calculates the normal map to obtain the normal information of the three-dimensional scene, and calculates the target intrinsic color map to obtain the color information and light-shadow information of the three-dimensional scene. Optionally, the target intrinsic color map includes a two-dimensional map carrying the light-shadow information. The second device may then render a three-dimensional model using the normal information, color information, and light-shadow information, as shown in fig. 4.
In addition, as can be seen from the three-dimensional models shown in fig. 3 and fig. 4, the hand-drawn intrinsic color map prevents the PBR effect from affecting the displayed picture of the three-dimensional model, so the map adapts to various devices and its application range is further enlarged.
In the model rendering method shown in fig. 1, the hand-drawn intrinsic color map, normal map, and two-dimensional light-shadow image of a target object to be rendered are first obtained. Since light-shadow information such as highlight information and shadow information can be added directly to the hand-drawn intrinsic color map, the two-dimensional light-shadow image carrying the light-shadow information of the target object can be combined directly with the hand-drawn intrinsic color map to obtain the target intrinsic color map of the target object. Because the target intrinsic color map carries both the intrinsic color information and the light-shadow information of the target object, it can not only replace the intrinsic color map in the current PBR-making process but also serve the subsequent light-shadow calculation, replacing the Metallic map or Specular map in the current PBR-making process. This avoids the complex production of Metallic and Specular maps in the current PBR-making process, greatly simplifies the map-making process, reduces the map acquisition time, and improves the map acquisition efficiency.
Finally, by outputting the target intrinsic color map and the normal map, the three-dimensional model of the target object can be rendered without the multiple PBR maps of the current PBR-making process. This avoids the slow loading and rendering of the three-dimensional model, or even the failure to load and render it, caused by an excessive number of PBR maps; it greatly reduces the performance demands of the rendering process on the device, improves the display speed and rendering efficiency of the three-dimensional model, and enlarges the application range of the map. Through the combination of the two-dimensional light-shadow image and the hand-drawn intrinsic color map, the displayed picture of the three-dimensional model does not differ excessively before and after the PBR effect is turned off, effectively preventing the PBR effect from affecting the display.
The model rendering apparatus of one or more embodiments of the present invention is described in detail below. Those skilled in the art will appreciate that each of these model rendering apparatuses can be configured using commercially available hardware components through the steps taught in this solution.
Fig. 5 is a schematic structural diagram of a model rendering apparatus according to an embodiment of the present invention. As shown in fig. 5, the model rendering apparatus includes: an obtaining module 11, a merging module 12, and an output module 13.
An obtaining module 11, configured to obtain the hand-drawn intrinsic color map, normal map, and two-dimensional light-shadow image of a target object, where the two-dimensional light-shadow image carries the light-shadow information of the target object;
A merging module 12, configured to merge the two-dimensional shadow image and the hand-drawn intrinsic color map to obtain a target intrinsic color map of the target object;
An output module 13, configured to output the target intrinsic color map and the normal map to render a three-dimensional model of the target object.
Optionally, the merging module 12 is specifically configured to: merge the two-dimensional light-shadow image and the hand-drawn intrinsic color map into a color channel (RGBA channel) to obtain the target intrinsic color map; wherein the RGBA channel comprises a light-shadow component (Alpha) channel for carrying the two-dimensional light-shadow image.
Optionally, the model rendering apparatus further includes a rendering module, configured to render the three-dimensional model through the target intrinsic color map and the normal map after the output module outputs them.
Optionally, the rendering module is specifically configured to: calculate the normal map to acquire the normal information of the target object; calculate the target intrinsic color map to acquire the environment reflected light information and color information of the target object; and render the three-dimensional model using the normal information, the environment reflected light information, and the color information.
Optionally, the RGBA channel of the target intrinsic color map comprises an Alpha channel for carrying the two-dimensional light-shadow image.
When calculating the normal map, the rendering module is specifically configured to: sample the normal map to import the normal vector information of the target object carried by the normal map into the UV-unwrapped layer of the three-dimensional model.
When calculating the target intrinsic color map, the rendering module is specifically configured to: sample the target intrinsic color map to import the environment reflected light information carried in the Alpha channel and the color information carried in the RGB channels into the UV-unwrapped layer of the three-dimensional model.
When rendering the three-dimensional model using the normal information, the environment reflected light information, and the color information, the rendering module is specifically configured to: generate the three-dimensional model with a color effect and a light-shadow effect based on the UV-unwrapped layer of the three-dimensional model.
Optionally, when importing the environment reflected light information carried in the Alpha channel into the UV-unwrapped layer of the three-dimensional model, the rendering module is specifically configured to: sample the Alpha channel to obtain the indirect diffuse illumination and specular illumination of the target object; process the indirect diffuse illumination and specular illumination of the target object through a material ambient occlusion map, and import the processing result as the environment reflected light information into the UV-unwrapped layer of the three-dimensional model.
Optionally, when calculating the target intrinsic color map, the rendering module is specifically configured to: simulate the color information carried in the RGB channels through a function, and import the simulated value into the UV-unwrapped layer of the three-dimensional model.
Optionally, the target intrinsic color map comprises a two-dimensional map carrying environment reflected light information. When rendering the three-dimensional model through the target intrinsic color map and the normal map, the rendering module is specifically configured to: if the physically based rendering effect is disabled in the running environment of the target object, sample the target intrinsic color map to import the color information carried in the RGB channels and the environment reflected light information carried in the two-dimensional map into the UV-unwrapped layer of the three-dimensional model; sample the normal map to import the normal vector information of the target object into the UV-unwrapped layer of the three-dimensional model; and generate the three-dimensional model with a color effect and a light-shadow effect based on that layer.
Optionally, the two-dimensional light-shadow image is a two-dimensional gray-scale map, and the two-dimensional gray-scale map includes the highlight region information of the target object.
Optionally, the two-dimensional light-shadow image is stored in the reflection source of a reflection probe.
The model rendering device shown in fig. 5 may perform the method provided in the foregoing embodiments, and for the parts of this embodiment that are not described in detail, reference may be made to the related descriptions of the foregoing embodiments, which are not repeated here.
In one possible design, the structure of the model rendering apparatus shown in fig. 5 may be implemented as an electronic device. As shown in fig. 6, the electronic device may include: a processor 21, and a memory 22. Wherein said memory 22 has stored thereon executable code which, when executed by said processor 21, at least enables said processor 21 to implement a model rendering method as provided in the previous embodiments. The electronic device may further include a communication interface 23 for communicating with other devices or a communication network.
In addition, embodiments of the present invention provide a non-transitory machine-readable storage medium having executable code stored thereon, which, when executed by a processor of an electronic device, causes the processor to perform the model rendering method provided in the foregoing embodiments.
The apparatus embodiments described above are merely illustrative, wherein the various modules illustrated as separate components may or may not be physically separate. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment. Those of ordinary skill in the art will understand and implement the present invention without undue burden.
The systems, methods and apparatus of embodiments of the present invention may be implemented as pure software (e.g., a software program written in Java), as pure hardware (e.g., a special purpose ASIC chip or FPGA chip), or as a system that combines software and hardware (e.g., a firmware system with fixed code or a system with general purpose memory and a processor), as desired.
Another aspect of the invention is a computer readable medium having stored thereon computer readable instructions which, when executed, implement the model rendering method of embodiments of the present invention.
The foregoing description of embodiments of the invention has been presented for purposes of illustration and description, and is not intended to be exhaustive or limited to the embodiments disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the various embodiments described. The scope of the claimed subject matter is limited only by the following claims.

Claims (11)

1. A method of model rendering, the method comprising:
Acquiring a hand-drawn inherent color map, a normal map and a two-dimensional shadow image of a target object; the two-dimensional shadow image carries light and shadow information of the target object;
Combining the two-dimensional shadow image with the hand-drawn inherent color map to obtain a target inherent color map of the target object;
Outputting the target inherent color map and the normal map to render a three-dimensional model of the target object;
The outputting the target inherent color map and the normal map further comprises: calculating the normal map to acquire normal information of the target object; calculating the target inherent color map to obtain environment reflected light information and color information of the target object; rendering the three-dimensional model by adopting the normal information, the environment reflected light information and the color information;
the RGBA channel of the target inherent color map comprises an Alpha channel for bearing the two-dimensional shadow image;
The calculating the normal map to obtain the normal information of the target object includes: sampling the normal map to guide normal vector information of the target object carried by the normal map into a UV unfolding layer of the three-dimensional model; the calculating the target inherent color map to obtain the environment reflected light information and the color information of the target object includes: sampling the target inherent color map to guide the environment reflected light information carried in the Alpha channel and the color information carried in the RGB channel into the UV unfolding layer of the three-dimensional model; the rendering the three-dimensional model using the normal information, the environment reflected light information, and the color information includes: generating the three-dimensional model with a color effect and a shadow effect based on the UV unfolding layer of the three-dimensional model.
2. The method of claim 1, wherein the merging the two-dimensional shadow image with the hand-drawn native color map to obtain a target native color map of the target object comprises:
Merging the two-dimensional shadow image and the hand-drawn inherent color map, and inputting the merged result into an RGBA (red, green, blue, Alpha) color channel to obtain the target inherent color map;
wherein the RGBA channel comprises a shadow component Alpha channel for carrying the two-dimensional shadow image.
3. The method of claim 1, wherein the sampling the target inherent color map to guide the environment reflected light information carried in the Alpha channel into the UV unfolding layer of the three-dimensional model comprises:
Sampling the Alpha channel to obtain indirect diffuse reflection illumination and specular reflection illumination of the target object;
And processing the indirect diffuse reflection illumination and the specular reflection illumination of the target object through a material ambient occlusion map, and importing the processing result as the environment reflected light information into the UV unfolding layer of the three-dimensional model.
4. The method of claim 1, wherein the calculating the target inherent color map to obtain color information of the target object comprises:
and simulating the color information carried in the RGB channel through a function, and importing a function simulation value into a UV unfolding layer of the three-dimensional model.
5. The method of claim 1, wherein the target inherent color map comprises a two-dimensional map bearing environment reflected light information;
rendering the three-dimensional model through the target intrinsic color map and the normal map, including:
If the physically-based rendering effect is disabled in the running environment where the target object is located, sampling the target inherent color map to guide the color information carried in an RGB channel and the environment reflected light information carried in the two-dimensional map into a UV unfolding layer of the three-dimensional model;
Sampling the normal map to guide normal vector information of the target object carried by the normal map into a UV unfolding layer of the three-dimensional model;
The three-dimensional model is generated with a color effect and a shadow effect based on the UV unfolding layer of the three-dimensional model.
6. The method of claim 1, wherein the two-dimensional shadow image is a two-dimensional grayscale image comprising highlight region information of the target object.
7. The method of claim 1, wherein the two-dimensional shadow image is stored in a reflection source of a reflection probe.
8. A model rendering apparatus, the apparatus comprising:
The acquisition module is used for acquiring a hand-drawn inherent color map, a normal map and a two-dimensional shadow image of the target object; the two-dimensional shadow image carries light and shadow information of the target object;
The merging module is used for merging the two-dimensional shadow image and the hand-drawn inherent color map to obtain a target inherent color map of the target object;
the output module is used for outputting the target inherent color mapping and the normal mapping so as to render a three-dimensional model of the target object;
The model rendering device further comprises a rendering module, wherein the rendering module is used for calculating the normal map to acquire normal information of the target object; calculating the target inherent color map to obtain environment reflected light information and color information of the target object; and rendering the three-dimensional model by adopting the normal information, the environment reflected light information and the color information;
The RGBA channel of the target inherent color map comprises an Alpha channel for bearing the two-dimensional shadow image; the rendering module calculates the normal map, and is specifically configured to: sample the normal map to guide normal vector information of the target object carried by the normal map into a UV unfolding layer of the three-dimensional model; the rendering module calculates the target inherent color map, and is specifically configured to: sample the target inherent color map to guide the environment reflected light information carried in the Alpha channel and the color information carried in the RGB channel into the UV unfolding layer of the three-dimensional model; the rendering module is specifically configured to, when rendering the three-dimensional model by using the normal information, the environment reflected light information and the color information: generate the three-dimensional model with a color effect and a shadow effect based on the UV unfolding layer of the three-dimensional model.
9. An electronic device, comprising: a memory, a processor; wherein the memory has stored thereon executable code which, when executed by the processor, causes the processor to perform the model rendering method of any one of claims 1 to 7.
10. A system comprising a processor and a memory, wherein the memory has stored therein at least one instruction, at least one program, code set, or instruction set, which is loaded and executed by the processor to implement the model rendering method of any one of claims 1 to 7.
11. A computer readable medium, characterized in that at least one instruction, at least one program, code set or instruction set is stored, said at least one instruction, at least one program, code set or instruction set being loaded and executed by a processor to implement the model rendering method according to any one of claims 1 to 7.
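The processing recited in claim 3 — splitting the Alpha-channel sample into indirect diffuse and specular reflected illumination and passing both through a material ambient-occlusion map (a plausible reading of the translated term "material environment blocking mapping") — can be illustrated with a short sketch. The split weights, the stronger occlusion applied to the specular term, and all names are hypothetical assumptions, not details from the patent:

```python
import numpy as np

def ambient_from_alpha(alpha_sample, ao_map, diffuse_weight=0.7):
    """Split the Alpha-channel sample into indirect diffuse and specular
    reflected light, then attenuate both with an ambient-occlusion map
    before the result is imported into the UV unfolding layer.
    The 70/30 split and the squared specular occlusion are heuristics."""
    indirect_diffuse = alpha_sample * diffuse_weight
    specular = alpha_sample * (1.0 - diffuse_weight)
    # A common heuristic: occlusion attenuates specular more strongly
    # than diffuse, so the specular term uses the squared AO factor.
    return indirect_diffuse * ao_map + specular * ao_map ** 2
```

The returned array plays the role of the "environment reflected light information" that claim 3 imports into the UV unfolding layer of the three-dimensional model.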
CN202010888002.4A 2020-08-28 2020-08-28 Model rendering method, device and equipment Active CN112116692B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010888002.4A CN112116692B (en) 2020-08-28 2020-08-28 Model rendering method, device and equipment


Publications (2)

Publication Number Publication Date
CN112116692A CN112116692A (en) 2020-12-22
CN112116692B (en) 2024-05-10

Family

ID=73804964

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010888002.4A Active CN112116692B (en) 2020-08-28 2020-08-28 Model rendering method, device and equipment

Country Status (1)

Country Link
CN (1) CN112116692B (en)

Families Citing this family (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112581588A (en) * 2020-12-23 2021-03-30 广东三维家信息科技有限公司 Wallboard spray painting method and device and computer storage medium
CN112634425B (en) * 2020-12-30 2022-06-28 久瓴(江苏)数字智能科技有限公司 Model rendering method and device, storage medium and computer equipment
CN113034658B (en) * 2021-03-30 2022-10-04 完美世界(北京)软件科技发展有限公司 Method and device for generating model map
CN113223133A (en) * 2021-04-21 2021-08-06 深圳市腾讯网域计算机网络有限公司 Three-dimensional model color changing method and device
CN113240800A (en) * 2021-05-31 2021-08-10 北京世冠金洋科技发展有限公司 Three-dimensional temperature flow field thermodynamic diagram display method and device
CN113362440B (en) * 2021-06-29 2023-05-26 成都数字天空科技有限公司 Material map acquisition method and device, electronic equipment and storage medium
CN113793402B (en) * 2021-08-10 2023-12-26 北京达佳互联信息技术有限公司 Image rendering method and device, electronic equipment and storage medium
CN113822988A (en) * 2021-09-24 2021-12-21 中关村科学城城市大脑股份有限公司 Three-dimensional model baking method and system based on urban brain space-time construction component
CN114119848B (en) * 2021-12-05 2024-05-14 北京字跳网络技术有限公司 Model rendering method and device, computer equipment and storage medium
CN114119847B (en) * 2021-12-05 2023-11-07 北京字跳网络技术有限公司 Graphic processing method, device, computer equipment and storage medium
CN114327718A (en) * 2021-12-27 2022-04-12 北京百度网讯科技有限公司 Interface display method and device, equipment and medium

Citations (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1632755A (en) * 2003-12-23 2005-06-29 大众电脑股份有限公司 Method for testing overfrequency usage of video card and video card system
CN102346921A (en) * 2011-09-19 2012-02-08 广州市凡拓数码科技有限公司 Renderer-baking light mapping method of three-dimensional software
CN102945558A (en) * 2012-10-17 2013-02-27 沈阳创达技术交易市场有限公司 Optimizing method of high model rendering
CN104778739A (en) * 2015-03-27 2015-07-15 浙江慧谷信息技术有限公司 Computer-based real-time sketch rendering algorithm
CN105321200A (en) * 2015-07-10 2016-02-10 苏州蜗牛数字科技股份有限公司 Offline rendering preprocessing method
CN105574917A (en) * 2015-12-18 2016-05-11 成都君乾信息技术有限公司 Normal map reconstruction processing system and method for 3D models
CN106971418A (en) * 2017-04-28 2017-07-21 碰海科技(北京)有限公司 Hand-held household building materials convex-concave surface texture reconstructing device
CN107749077A (en) * 2017-11-08 2018-03-02 米哈游科技(上海)有限公司 A kind of cartoon style shadows and lights method, apparatus, equipment and medium
CN108304755A (en) * 2017-03-08 2018-07-20 腾讯科技(深圳)有限公司 The training method and device of neural network model for image procossing
CN108564646A (en) * 2018-03-28 2018-09-21 腾讯科技(深圳)有限公司 Rendering intent and device, storage medium, the electronic device of object
CN108765550A (en) * 2018-05-09 2018-11-06 华南理工大学 A kind of three-dimensional facial reconstruction method based on single picture
CN108986200A (en) * 2018-07-13 2018-12-11 北京中清龙图网络技术有限公司 The preprocess method and system of figure rendering
CN109961500A (en) * 2019-03-27 2019-07-02 网易(杭州)网络有限公司 Rendering method, device, equipment and the readable storage medium storing program for executing of Subsurface Scattering effect
CN110570510A (en) * 2019-09-10 2019-12-13 珠海天燕科技有限公司 Method and device for generating material map
CN111563951A (en) * 2020-05-12 2020-08-21 网易(杭州)网络有限公司 Map generation method and device, electronic equipment and storage medium

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP3269814B2 (en) * 1999-12-03 2002-04-02 株式会社ナムコ Image generation system and information storage medium


Non-Patent Citations (6)

* Cited by examiner, † Cited by third party
Title
From pixels to physics: Probabilistic color de-rendering; Ying Xiong; 2012 IEEE Conference on Computer Vision and Pattern Recognition; 2012-07-26; full text *
Real-time ray casting algorithm based on programmable graphics acceleration hardware; Zhang Yi, Zhang Jiawan, Sun Jizhou, Ke Yongzhen; Journal of System Simulation; 2007, No. 18; full text *
Research on character model and material map technology in next-generation game development; Xie Zheng; China Masters' Theses Full-text Database; 2019-02-15; full text *
Analysis of the technical features and production process of next-generation game models; Pu Yi, Lyu Mingming; Journal of Jingdezhen University; 2017-06-15, No. 03; full text *
On the important role of texture maps in three-dimensional animation; Liang Xiao; Art Education Research; 2017-12-25, No. 24; full text *

Also Published As

Publication number Publication date
CN112116692A (en) 2020-12-22

Similar Documents

Publication Publication Date Title
CN112116692B (en) Model rendering method, device and equipment
CN111009026B (en) Object rendering method and device, storage medium and electronic device
US11257286B2 (en) Method for rendering of simulating illumination and terminal
CN112215934B (en) Game model rendering method and device, storage medium and electronic device
CN112316420B (en) Model rendering method, device, equipment and storage medium
US11711563B2 (en) Methods and systems for graphics rendering assistance by a multi-access server
WO2021249091A1 (en) Image processing method and apparatus, computer storage medium, and electronic device
CN114820905B (en) Virtual image generation method and device, electronic equipment and readable storage medium
WO2023066121A1 (en) Rendering of three-dimensional model
US6833836B2 (en) Image rendering process for displaying three-dimensional image information on a two-dimensional screen
CN112262413A (en) Real-time synthesis in mixed reality
CN111476851A (en) Image processing method, image processing device, electronic equipment and storage medium
CN112819941A (en) Method, device, equipment and computer-readable storage medium for rendering water surface
US20240087219A1 (en) Method and apparatus for generating lighting image, device, and medium
CN112446943A (en) Image rendering method and device and computer readable storage medium
CN116363288A (en) Rendering method and device of target object, storage medium and computer equipment
US11804008B2 (en) Systems and methods of texture super sampling for low-rate shading
CN111784814A (en) Virtual character skin adjusting method and device
WO2023051590A1 (en) Render format selection method and device related thereto
CN115845369A (en) Cartoon style rendering method and device, electronic equipment and storage medium
CN114820904A (en) Illumination-supporting pseudo-indoor rendering method, apparatus, medium, and device
CN115035231A (en) Shadow baking method, shadow baking device, electronic apparatus, and storage medium
CN116524102A (en) Cartoon second-order direct illumination rendering method, device and system
CN116934950A (en) Skin rendering method and device, computer storage medium and electronic equipment
US20190114825A1 (en) Light fields as better backgrounds in rendering

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant