CN118115644A - Light source eliminating method and rendering engine - Google Patents

Light source eliminating method and rendering engine

Publication number
CN118115644A
Authority
CN
China
Prior art keywords
light source
rendering
target
cache
point
Prior art date
Legal status
Pending
Application number
CN202211480728.XA
Other languages
Chinese (zh)
Inventor
李洪珊 (Li Hongshan)
Current Assignee
Huawei Cloud Computing Technologies Co Ltd
Original Assignee
Huawei Cloud Computing Technologies Co Ltd
Priority date
Filing date
Publication date
Application filed by Huawei Cloud Computing Technologies Co Ltd filed Critical Huawei Cloud Computing Technologies Co Ltd
Priority to CN202211480728.XA priority Critical patent/CN118115644A/en
Priority to PCT/CN2023/101625 priority patent/WO2024109006A1/en
Publication of CN118115644A publication Critical patent/CN118115644A/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 15/00 3D [Three Dimensional] image rendering
    • G06T 15/10 Geometric effects
    • G06T 17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T 17/10 Constructive solid geometry [CSG] using solid primitives, e.g. cylinders, cubes

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Graphics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Image Generation (AREA)

Abstract

The embodiments of the present application provide a light source culling method and a rendering engine, which are used to reduce the computation load of light source culling. The method can be used in a rendering application, where the rendering application includes a plurality of three-dimensional models and a plurality of light sources, and each of the plurality of three-dimensional models includes a plurality of cache points. The method includes: separately calculating a plurality of illumination intensity values of the plurality of light sources at a target cache point, where the plurality of cache points include the target cache point, and the target cache point is included in a visual space corresponding to a first viewpoint; determining at least one target light source from the plurality of light sources according to the plurality of illumination intensity values and an illumination intensity threshold; and performing shading calculation on the target cache point based on the at least one target light source to obtain a rendering result corresponding to the first viewpoint.

Description

Light source eliminating method and rendering engine
Technical Field
The present application relates to the field of computer technologies, and in particular, to a light source culling method and a rendering engine.
Background
Rendering refers to outputting a realistic picture that simulates a model under the same illumination conditions as the real world, according to three-dimensional model data (including object geometry, surface materials, and the like) and lighting data (including light source positions, colors, intensities, and the like). Deferred rendering is one way to implement rendering. In deferred rendering, a first pass culls the patches of the three-dimensional space that cannot be seen on the user's screen and stores the visible surface elements, vertex information, and the like in a global buffer; shading is then calculated from the information stored in the buffer. Shading is thus performed only on information visible to the user, which reduces the amount of shading computation in the rendering process.
Light source culling is one means of implementing deferred rendering. Current light source culling includes screen-space-based culling and camera-space-based culling. Screen-space-based light source culling divides the screen space seen by the user into a number of equally sized two-dimensional grid cells and calculates the illumination result of each light source at each cell; if the illumination result of a light source at a cell is smaller than a threshold, the influence of that light source on the cell is not calculated in the shading stage. Camera-space-based light source culling divides the camera space into a number of equally sized three-dimensional grid cells and calculates the illumination result of each light source at each cell; if the illumination result of a light source at a cell is smaller than a threshold, the influence of that light source on the cell is not calculated in the shading stage. However, both the screen space and the camera space are generated per viewpoint, so the light source culling result must be calculated at every viewpoint, which makes light source culling computationally expensive.
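For concreteness, the following is a minimal sketch of the tile-based, screen-space variant described above. It is not taken from the patent: the tile layout, the inverse-square falloff used as the illumination model, and the threshold value are all illustrative assumptions.

```cpp
#include <cstddef>
#include <cstdio>
#include <vector>

// A point light with a simple inverse-square falloff (illustrative model).
struct Light { float x, y, z, intensity; };

// Illumination contribution of a light at a tile's representative point.
// Real engines intersect the light's volume with the tile's frustum; a
// distance falloff at one representative point is enough for this sketch.
float illuminationAt(const Light& l, float px, float py, float pz) {
    float dx = l.x - px, dy = l.y - py, dz = l.z - pz;
    return l.intensity / (1.0f + dx * dx + dy * dy + dz * dz);
}

int main() {
    const int tilesX = 4, tilesY = 4;   // screen split into 2D tiles
    const float threshold = 0.05f;      // culling threshold (assumed value)
    std::vector<Light> lights = {{1, 1, 2, 3.0f}, {8, 8, 2, 0.5f}};

    // Per tile, keep only the lights whose contribution meets the threshold;
    // the shading stage then ignores every other light for that tile.
    for (int ty = 0; ty < tilesY; ++ty) {
        for (int tx = 0; tx < tilesX; ++tx) {
            // Representative point of the tile: its center, at depth 0.
            // Screen tiles carry no real depth, which is exactly the
            // accuracy problem discussed below.
            float px = tx + 0.5f, py = ty + 0.5f, pz = 0.0f;
            std::printf("tile(%d,%d):", tx, ty);
            for (std::size_t i = 0; i < lights.size(); ++i)
                if (illuminationAt(lights[i], px, py, pz) >= threshold)
                    std::printf(" light%zu", i);  // survives culling
            std::printf("\n");
        }
    }
    return 0;
}
```

Because the tiles are defined relative to one view, the whole table of surviving lights is tied to a single viewpoint and has to be rebuilt whenever the viewpoint changes, which is the computation load the method below avoids.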
Disclosure of Invention
The embodiments of the present application provide a light source culling method and a rendering engine, which are used to share light source culling results among a plurality of viewing angles and to reduce the computation load of light source culling.
In a first aspect, an embodiment of the present application provides a light source culling method, which may be used in a rendering application, where the rendering application includes a plurality of three-dimensional models and a plurality of light sources, and each of the plurality of three-dimensional models includes a plurality of cache points. The method includes: separately calculating a plurality of illumination intensity values of the plurality of light sources at a target cache point, where the plurality of cache points include the target cache point, and the target cache point is included in a visual space corresponding to a first viewpoint; determining at least one target light source from the plurality of light sources according to the plurality of illumination intensity values and an illumination intensity threshold; and performing shading calculation on the target cache point based on the at least one target light source to obtain a rendering result corresponding to the first viewpoint.
In the embodiments of the present application, light source culling for the three-dimensional models is performed at the granularity of cache points, so the culling result of the same cache point can be shared across different viewpoints; the culling result of the same cache point does not need to be recalculated for each viewpoint, which reduces the computation load of light source culling.
In one possible design, the visual space corresponding to a second viewpoint includes the target cache point, and the method may further include: performing shading calculation on the target cache point based on the at least one target light source to obtain a rendering result corresponding to the second viewpoint. With this design, the second viewpoint can reuse the light source culling result of the target cache point obtained at the first viewpoint; the culling result does not need to be recalculated at the second viewpoint, which reduces the computation load of light source culling.
In one possible design, after the at least one target light source is determined from the plurality of light sources according to the plurality of illumination intensity values and the illumination intensity threshold, the method may further include: storing the correspondence between the target cache point and the at least one target light source. In this way, the correspondence is kept in memory, and the light source culling result of the target cache point can later be obtained by reading the memory, so it does not need to be recomputed.
In one possible design, the cache point is one or more of the following: a patch, a texel, or a point in a point cloud. Setting the cache points to patches, texels, or points in a point cloud means that light source culling during rendering is performed at that granularity, which improves the accuracy of culling and avoids culling too many or too few light sources.
In one possible design, the cache points are located on the surface of the three-dimensional model, and the number of cache points included in different three-dimensional models is different.
In one possible design, the method may further include: providing a configuration interface, where the configuration interface is used to receive the illumination intensity threshold set by a user. In this way, the user can set the illumination intensity threshold through the configuration interface, which improves adaptability.
In one possible design, an illumination intensity value of each of the at least one target light source at the target cache point is greater than or equal to the illumination intensity threshold.
In a second aspect, an embodiment of the present application provides a rendering engine, where the rendering engine is used for a rendering application, the rendering application includes a plurality of three-dimensional models and a plurality of light sources, each three-dimensional model includes a plurality of cache points, and the rendering engine includes a processing unit and a storage unit. The processing unit is configured to: obtain the target cache point from the storage unit; separately calculate a plurality of illumination intensity values of the plurality of light sources at the target cache point, where the plurality of cache points include the target cache point, and the target cache point is included in a visual space corresponding to a first viewpoint; determine at least one target light source from the plurality of light sources according to the plurality of illumination intensity values and an illumination intensity threshold; and perform shading calculation on the target cache point based on the at least one target light source to obtain a rendering result corresponding to the first viewpoint.
In one possible design, the target cache point is included in the visual space corresponding to a second viewpoint, and the processing unit is further configured to perform shading calculation on the target cache point based on the at least one target light source to obtain a rendering result corresponding to the second viewpoint.
In one possible design, after the at least one target light source is determined from the plurality of light sources according to the plurality of illumination intensity values and the illumination intensity threshold, the storage unit is further configured to store the correspondence between the target cache point and the at least one target light source.
In one possible design, the cache point is one or more of the following: patches, texels, points in a point cloud.
In one possible design, the cache points are located on the surface of the three-dimensional model, and the number of cache points included in different three-dimensional models is different.
In one possible design, the rendering engine may further include a configuration interface for receiving the illumination intensity threshold set by a user.
In one possible design, an illumination intensity value of each of the at least one target light source at the target cache point is greater than or equal to the illumination intensity threshold.
In a third aspect, embodiments of the present application provide a computer program product including instructions which, when executed by a computing device cluster, cause the computing device cluster to perform the method provided by the first aspect or any possible design of the first aspect.
In a fourth aspect, embodiments of the present application provide a computer readable storage medium comprising computer program instructions which, when executed by a cluster of computing devices, perform a method as provided by the first aspect or any of the possible designs of the first aspect.
In a fifth aspect, embodiments of the present application provide a cluster of computing devices, comprising at least one computing device, each computing device comprising a processor and a memory; the processor of the at least one computing device is to execute instructions stored in the memory of the at least one computing device to cause the computing device to perform the method as provided by the first aspect or any possible design of the first aspect.
In some possible designs, the cluster of computing devices includes a computing device including a processor and a memory; the processor is configured to execute instructions stored in the memory to run the rendering engine of the second aspect or any of the possible designs of the second aspect to cause the computing device to perform the method as provided by the first aspect or any of the possible designs of the first aspect.
In some possible designs, the cluster of computing devices includes at least two computing devices, each computing device including a processor and a memory. The processors of the at least two computing devices are configured to execute instructions stored in the memory of the at least two computing devices to run the rendering engine of the second aspect or any of the possible designs of the second aspect to cause the cluster of computing devices to perform the method as provided by the first aspect or any of the possible designs of the first aspect.
Drawings
FIG. 1 is a schematic diagram of UV unwrapping according to an embodiment of the present application;
FIG. 2 is a schematic diagram of screen-space-based light source culling according to an embodiment of the present application;
FIG. 3 is a schematic diagram of camera-space-based light source culling according to an embodiment of the present application;
FIG. 4 is a schematic structural diagram of a rendering system according to an embodiment of the present application;
FIG. 5 is a schematic flowchart of a light source culling method according to an embodiment of the present application;
FIG. 6 is another schematic flowchart of a light source culling method according to an embodiment of the present application;
FIG. 7 is a schematic diagram of a light source culling result according to an embodiment of the present application;
FIG. 8 is a schematic structural diagram of a rendering engine according to an embodiment of the present application;
FIG. 9 is a schematic structural diagram of a computing device according to an embodiment of the present application.
Detailed Description
First, for ease of understanding by those skilled in the art, some terms related to the embodiments of the present application and the related art are explained.
A three-dimensional model is a polygonal representation of an object, typically displayed by a computer or other video device. The three-dimensional model may take various shapes, for example, a sphere, a cone, a curved object, a planar object, or an object with an irregular surface. The displayed object may be a real-world entity or an imaginary object. Three-dimensional models are often generated with specialized software, such as three-dimensional modeling tools, but may also be generated by other means. As a set of points and other component data, a three-dimensional model may be created manually or generated by an algorithm.
A patch in the embodiments of the present application may also be referred to as a mesh face; patches divide the surface of a three-dimensional model into numerous tiny planes. These planes may be arbitrary polygons, with triangles and quadrilaterals being the most common. The patch shapes of different three-dimensional models may also differ; for example, the faces of a sphere and the faces of a curved object may be quite different. The size of a patch may be set as needed: the higher the accuracy requirement for the rendered image, the smaller the patch should be. The position, shape, size, and the like of a patch in the three-dimensional model can be described by geometric parameters.
A texel point in the embodiments of the present application may also be referred to as a texel, and is the minimum constituent unit of each of a plurality of two-dimensional planes obtained when the surface of a three-dimensional model is mapped to those planes. For example, the surface of the three-dimensional model may be UV-unwrapped to obtain a plurality of two-dimensional planes, where U refers to the horizontal axis of the two-dimensional plane and V refers to the vertical axis. For example, referring to FIG. 1, the car seat in FIG. 1 (a) is UV-unwrapped as shown in FIG. 1 (b). The unfolded two-dimensional plane may be referred to as a texture, which consists of texels.
A point cloud in the embodiments of the present application is a data set obtained by mapping the surface of a three-dimensional model; a point is the smallest constituent unit of the point cloud.
It should be noted that patches, texels, and points in a point cloud are simply different representations of the surface of a three-dimensional model; the embodiments of the present application do not limit how they are obtained or implemented.
A cache point in the embodiments of the present application may also be referred to as a cache unit, a surface cache point, or the like. Cache points are located on the surface of the three-dimensional model and are used to store operation results produced during rendering; the operation results may include light source culling results, shading results, lighting results, and the like. The embodiments of the present application use the storage of light source culling results as an example. It should be appreciated that different three-dimensional models may have different numbers of cache points.
Optionally, a cache point may be one or more of the following: a patch, a texel, or a point in a point cloud. For example, if the cache point is a patch, the patch may store the light source culling result of that patch, and the culling result can then be obtained through the patch. The same applies when the cache point is a texel or a point in a point cloud: the texel or point stores its own culling result, through which the result can be retrieved.
It should be noted that a cache point storing a light source culling result can also be understood as there being a correspondence (or association) between the cache point and the culling result, so that the culling result of the cache point can be obtained through that correspondence (or association).
A viewpoint in the embodiments of the present application may be understood as the focus of an observer (for example, a camera or a human eye). A viewing angle in the embodiments of the present application may be understood as the angle at which the observer observes the virtual scene. It should be noted that in the embodiments of the present application, "viewpoint" and "viewing angle" may be used interchangeably.
The embodiments of the present application involve one or more light sources. Light generated by a light source illuminates the three-dimensional model. The light source may be a point light source, a line light source, a surface light source, or the like. When light generated by the light source reaches the surface of the three-dimensional model, it may undergo refraction, reflection, or diffuse reflection at each contact point of the surface before finally entering the eyes of the user.
Shading calculation (also called coloring calculation) refers to calculating, for the line-of-sight direction (view ray) and the light direction (light ray) in the rendering process, the color result of light that travels from the light source to the shading point along the light ray direction and is then emitted toward the camera along the view ray direction.
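As a rough illustration of this definition (not the patent's own formula), the sketch below shades one point under a single point light with a Lambertian model; the diffuse reflectance model and the light values are assumptions made for the example. For a diffuse surface the reflected value does not depend on the view ray direction, so only the light ray direction enters the formula.

```cpp
#include <cmath>
#include <cstdio>

struct Vec3 { float x, y, z; };

Vec3 sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
Vec3 normalize(Vec3 v) {
    float n = std::sqrt(dot(v, v));
    return {v.x / n, v.y / n, v.z / n};
}

// Lambertian shading of one point: light travels from the light source to
// the shading point along the light ray; the resulting value then travels
// to the camera along the view ray.
float shade(Vec3 point, Vec3 normal, Vec3 lightPos, float lightIntensity) {
    Vec3 lightDir = normalize(sub(lightPos, point));       // toward the light
    float nDotL = std::fmax(0.0f, dot(normal, lightDir));  // facing term
    return lightIntensity * nDotL;
}

int main() {
    Vec3 p{0, 0, 0};         // the shading point
    Vec3 n{0, 1, 0};         // its surface normal
    Vec3 lightPos{0, 2, 1};  // an assumed point light position
    std::printf("shaded value: %.3f\n", shade(p, n, lightPos, 1.0f));
    return 0;
}
```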
In the present application, "at least one" means one or more, and "a plurality of" means two or more. In addition, it should be understood that in the description of the present application, terms such as "first" and "second" are used merely to distinguish between descriptions, and do not indicate or imply relative importance or order. For example, the first viewpoint and the second viewpoint mentioned below are used to distinguish different viewpoints and do not indicate an order or priority between them.
In rendering technologies, screen-space-based light source culling and camera-space-based light source culling are two common means of implementing deferred rendering, and both can reduce the amount of shading computation. FIG. 2 is a schematic diagram of screen-space-based light source culling. As shown in FIG. 2, the screen space is divided into a plurality of two-dimensional grid cells, and light source culling is performed at the granularity of these cells. On the one hand, the screen space lacks depth information, and different pixels in the same two-dimensional cell may be illuminated by different light sources; for example, the chair and the small room behind it in FIG. 2 may be illuminated by different light sources with different illumination results, so the culling result contains errors, which is unfavorable for subsequent shading. On the other hand, the screen space is generated per viewing angle or viewpoint, and the culling result must be calculated for each two-dimensional cell at every viewing angle or viewpoint, so the culling computation load is large. FIG. 3 is a schematic diagram of camera-space-based light source culling. As shown in FIG. 3, the camera space is divided into a plurality of three-dimensional grid cells, and light source culling is performed at the granularity of these cells. Because the camera space is likewise generated per viewing angle or viewpoint, the culling result must also be calculated for each three-dimensional cell at every viewing angle or viewpoint, so the culling computation load is large.
In view of this, the embodiments of the present application provide a light source culling method in which culling is performed at the granularity of cache points. The culling result at the same cache point can therefore be shared across different viewing angles or viewpoints and does not need to be recalculated for each one, which reduces the computation load of light source culling.
Next, a scheme provided by the embodiment of the present application is described in detail in connection with an application scenario.
The rendering system to which the present application applies is described first. Referring to FIG. 4, FIG. 4 is a schematic structural diagram of a rendering system according to the present application. The rendering system is used to render a two-dimensional image, namely a rendered image, from a three-dimensional (or two-dimensional) model of a virtual scene. The rendering system of the present application may include one or more terminal devices 100, a network device 200, and a rendering platform 300. The rendering platform 300 may be deployed on a cloud server, although the embodiments of the application are not limited in this regard. The rendering platform 300 and the terminal device 100 are typically deployed in different data centers.
The terminal device 100 may be a device that needs to display rendered images in real time; for example, it may be a virtual reality (VR) device for flight training, a computer for virtual games, a smartphone for a virtual mall, and so on, which is not specifically limited here. The terminal device may be a high-configuration, high-performance device (for example, multi-core, high clock frequency, large memory), or a low-configuration, low-performance device (for example, single-core, low clock frequency, small memory). In a specific embodiment, the terminal device 100 may include hardware, an operating system, and a rendering application client.
The network device 200 is used to transfer data between the terminal device 100 and the rendering platform 300 via a communication network using any communication mechanism or communication standard. The communication network may be a wide area network, a local area network, a point-to-point connection, or any combination thereof.
The rendering platform 300 includes one or more rendering nodes (three rendering nodes are illustrated in FIG. 4). The rendering platform 300 may be implemented by one or more computing devices, and a plurality of computing devices may constitute a computing device cluster. The functionality of a rendering node may be implemented by one or more computing devices together. From bottom to top, a rendering node may include rendering hardware, virtualization services, a rendering engine, a rendering application server, and the like. The rendering hardware includes computing resources, storage resources, and network resources. The computing resources may employ a heterogeneous computing architecture, such as a central processing unit (CPU) + graphics processing unit (GPU) architecture, a CPU + AI chip architecture, or a CPU + GPU + AI chip architecture, which is not specifically limited here. The storage resources may include memory, video memory, and other storage devices. The network resources may include network cards, port resources, address resources, and the like. The virtualization service virtualizes the resources of the rendering node into vCPUs and the like through virtualization technology and flexibly isolates mutually independent resources according to user needs to run user applications. Typically, the virtualization service may include a virtual machine (VM) service and a container service, and the VM or container may run the rendering engine and the rendering application server. The rendering engine is used to implement rendering algorithms. The rendering application server is used to call the rendering engine to complete the rendering of an image.
The rendering application client on the terminal device 100 and the rendering application server on the rendering platform 300 may be collectively referred to as a rendering application. Common rendering applications include game applications, VR applications, movie special effects, animations, and the like. A user inputs an operation instruction through the rendering application client, the client sends the instruction to the rendering application server, the server calls the rendering engine to generate a rendering result and sends it back to the client, and the client converts the rendering result into an image presented to the user. In one possible implementation, the user may input an illumination intensity threshold through the rendering application client; the client sends the threshold set by the user to the rendering application server; the server schedules the rendering engine and configures the threshold; the rendering engine performs light source culling according to the threshold, generates a rendering result according to the culling result, and sends the rendering result to the client; and the client converts the rendering result into an image presented to the user.
In a specific embodiment, the rendering application server and the rendering application client may be provided by a rendering application provider, and the rendering engine may be provided by a cloud service provider. For example, the rendering application may be a game application: the game developer installs the game application server on a rendering platform provided by a cloud service provider and provides the game application client to users for download over the Internet and installation on their terminal devices. In addition, the cloud service provider provides the rendering engine that supplies computing power to the game application. In another specific embodiment, the rendering application client, the rendering application server, and the rendering engine may all be provided by the cloud service provider.
In some embodiments, the rendering system may further include a management device (not shown in FIG. 4). The management device may be a terminal device of a user, or a device provided by a third party outside the cloud service provider's rendering platform 300. For example, the management device may be a device provided by a game developer, through which the game developer manages the rendering application. It should be understood that the management device may be deployed on the rendering platform 300 or outside it, which is not specifically limited here.
The light source culling method of the present application is described in detail below. The method provided by the embodiments of the present application may be executed by the rendering platform 300, by a rendering node in the rendering platform 300, or by a rendering engine in a rendering node. For ease of description, the components of the rendering system are not further distinguished below.
FIG. 5 is a schematic flowchart of a light source culling method according to an embodiment of the present application.
S501, the rendering platform obtains related information of the rendering application. The rendering application includes one or more three-dimensional models and one or more light sources. For ease of understanding, the embodiments of the present application take a plurality of models and a plurality of light sources as an example. Each of the plurality of three-dimensional models includes one or more cache points (a plurality of cache points is used as an example below). The related information of the rendering application may include light source parameters of the plurality of light sources and information about each three-dimensional model, for example, the cache points mapped by the three-dimensional model.
A cache point in the embodiments of the present application may be one or more of a patch, a texel, or a point in a point cloud. When the cache point is a patch, the information about each three-dimensional model may include how the surface of the model is divided into patches, the number of patches, the geometric parameters of the patches, and the like. When the cache point is a texel, the information may include the texture mapping of the model surface, and so on. When the cache point is a point in a point cloud, the information may include the point cloud mapping of the model surface, and the like.
In a possible implementation, the related information of the rendering application may be sent to the rendering platform by the terminal device, or by the management device; FIG. 5 takes the case where the terminal device sends the information to the rendering platform as an example.
S502, the rendering platform separately calculates a plurality of illumination intensity values of the plurality of light sources at the target cache point. The plurality of cache points include the target cache point.
For example, the rendering platform may perform surface cache initialization on each of the plurality of three-dimensional models to obtain the plurality of cache points of each model; then, for each three-dimensional model, the rendering platform calculates the illumination intensity values of the plurality of light sources at the target cache point to be rendered. There may be one or more target cache points.
In one possible implementation, to reduce the amount of intensity computation, the rendering platform may determine which light sources illuminate the target cache point and calculate the illumination intensity values of only those light sources at the target cache point. The embodiments of the present application take the case where the target cache point is illuminated by a plurality of light sources as an example. For example, the rendering platform may establish a bitmap for the target cache point, where the bitmap corresponds to the plurality of light sources in the environment of the target cache point; the rendering platform determines, according to the bitmap, the light sources illuminating the target cache point, and calculates their illumination intensity values at the target cache point.
S503, the rendering platform determines at least one target light source from the plurality of light sources according to the plurality of illumination intensity values and the illumination intensity threshold value.
The illumination intensity threshold may be predefined by the rendering system, for example, determined by the rendering system from historical rendering results. Alternatively, the threshold may be set by the user; for example, the rendering system may include a configuration interface operable to receive the user-set illumination intensity threshold. Optionally, the terminal device may send the user-set threshold to the rendering platform together with the related information of the rendering application.
The rendering platform compares the illumination intensity values with the illumination intensity threshold and determines the at least one target light source from the plurality of light sources. The illumination intensity value of each target light source at the target cache point is greater than or equal to the threshold. For example, the rendering platform may traverse the light sources, calculate the illumination intensity value of each light source at the target cache point, and compare it with the threshold: if the value is greater than or equal to the threshold, the bit corresponding to that light source on the bitmap is set to a first value (for example, 1); if the value is smaller than the threshold, the bit corresponding to that light source is set to a second value (for example, 0). After traversing the plurality of light sources used in rendering (or the plurality of light sources illuminating the target cache point), the light sources whose bits carry the first value are taken as the target light sources.
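The following sketch illustrates S502 and S503 under stated assumptions: an inverse-square intensity model stands in for the unspecified illumination computation, a fixed-width std::bitset stands in for the bitmap, and the threshold value is illustrative.

```cpp
#include <bitset>
#include <cstddef>
#include <cstdio>
#include <vector>

constexpr int kMaxLights = 64;  // bitmap width (assumed)

struct Light { float x, y, z, intensity; };

struct CachePoint {
    float x, y, z;
    std::bitset<kMaxLights> lightBitmap;  // bit i set: light i survives culling
};

// Illumination intensity value of a light at a cache point (assumed model).
float intensityAt(const Light& l, const CachePoint& c) {
    float dx = l.x - c.x, dy = l.y - c.y, dz = l.z - c.z;
    return l.intensity / (1.0f + dx * dx + dy * dy + dz * dz);
}

// S502/S503: traverse the light sources, compare each illumination intensity
// value against the threshold, and record the comparison in the bitmap.
void cullLights(CachePoint& c, const std::vector<Light>& lights, float threshold) {
    for (std::size_t i = 0; i < lights.size(); ++i)
        c.lightBitmap[i] = (intensityAt(lights[i], c) >= threshold);
}

int main() {
    std::vector<Light> lights = {{0, 2, 0, 4.0f}, {30, 0, 0, 1.0f}};
    CachePoint c{0, 0, 0, {}};
    cullLights(c, lights, /*threshold=*/0.05f);
    for (std::size_t i = 0; i < lights.size(); ++i)
        std::printf("light %zu: %s\n", i + 1, c.lightBitmap[i] ? "kept" : "culled");
    return 0;
}
```

Nothing in cullLights depends on a camera or viewpoint, which is what allows the bitmap, computed once per cache point, to be shared, as the next paragraph explains.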
In the embodiments of the present application, the light source culling result is at the granularity of cache points, and a cache point may be one or more of a patch, a texel, or a point in a point cloud. Patches, texels, and points in a point cloud are generated independently of the viewpoint or viewing angle; in other words, cache points are viewpoint-independent. The culling result can therefore be shared by a plurality of viewpoints or viewing angles: if the illumination intensity value of a target light source at the target cache point is greater than or equal to the illumination intensity threshold, this holds under different viewpoints and different viewing angles.
S504, the rendering platform performs shading calculation on the target cache point based on the at least one target light source to obtain a rendering result corresponding to the first viewpoint. The visual space corresponding to the first viewpoint includes the target cache point.
In the shading calculation stage, the rendering platform may perform shading calculation on the target cache point based on the at least one target light source, without considering the influence of the other light sources on the target cache point. For example, the rendering platform may shade the target cache point according to the light sources whose bits on the bitmap of the target cache point carry the first value. Take a scene with light source 1, light source 2, light source 3, and light source 4 as an example, where the illumination intensity values of light source 1 and light source 2 are greater than or equal to the illumination intensity threshold and those of light source 3 and light source 4 are smaller than the threshold, as shown in FIG. 6; the rendering platform may then perform shading calculation on the target cache point according to light source 1 and light source 2.
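A minimal sketch of this shading step follows; the per-light intensity term reuses the assumed inverse-square model, and the bitmap values mirror the light source 1 to light source 4 example above.

```cpp
#include <cstddef>
#include <cstdio>
#include <vector>

struct Light { float x, y, z, intensity; };

// Assumed stand-in for a per-light shading term at the cache point.
float intensityAt(const Light& l, float px, float py, float pz) {
    float dx = l.x - px, dy = l.y - py, dz = l.z - pz;
    return l.intensity / (1.0f + dx * dx + dy * dy + dz * dz);
}

int main() {
    // Light sources 1..4; the culling bitmap records that only light source 1
    // and light source 2 met the illumination intensity threshold.
    std::vector<Light> lights = {
        {0, 2, 0, 4.0f}, {1, 2, 0, 3.0f}, {40, 0, 0, 1.0f}, {0, 50, 0, 1.0f}};
    std::vector<bool> bitmap = {true, true, false, false};

    float px = 0, py = 0, pz = 0;  // the target cache point
    float shaded = 0.0f;
    for (std::size_t i = 0; i < lights.size(); ++i) {
        if (!bitmap[i]) continue;                      // culled: never evaluated
        shaded += intensityAt(lights[i], px, py, pz);  // shading contribution
    }
    std::printf("shading result at the cache point: %.3f\n", shaded);
    return 0;
}
```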
In the embodiments of the present application, the rendering results of the same cache point at different viewpoints or viewing angles may be the same or different, but the light source culling result of the same cache point at different viewpoints or viewing angles is the same. FIG. 5 takes the first viewpoint as an example: the visual space corresponding to the first viewpoint includes the target cache point, and the rendering platform performs shading calculation on the target cache point based on the at least one target light source to obtain the rendering result corresponding to the first viewpoint.
S505, the rendering platform stores the correspondence between the target cache point and the at least one target light source.
For example, the rendering platform stores the correspondence between the target cache point and the at least one target light source in a memory. The memory may be a volatile memory, such as a random access memory (RAM). The memory may also be a non-volatile memory, such as a read-only memory (ROM), a flash memory, a hard disk drive (HDD), or a solid state disk (SSD).
In some embodiments, the memory may be disposed outside the rendering platform, or may be disposed inside the rendering platform. For example, the memory may be disposed inside the rendering engine.
For example, the rendering platform may build a light source culling result table for the plurality of cache points corresponding to each three-dimensional model. The table may include the number of each cache point and the numbers of the one or more light sources whose illumination intensity values at that cache point are greater than or equal to the illumination intensity threshold. For example, the light source culling result table may be as shown in Table 1. It should be noted that the rendering platform may uniquely identify a cache point of a three-dimensional model by a number or an index, or by coordinates, parameters, or the like, which is not limited in the embodiments of the present application.
TABLE 1

Cache point number        Light source culling result
1                         Light source 1, light source 2
2                         Light source 3
……                        ……
N                         Light source 1, ……, light source M
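One possible in-memory layout for Table 1 is sketched below. The patent does not prescribe a data structure; the hash map from cache point number to light source numbers is an assumption for illustration.

```cpp
#include <cstdio>
#include <unordered_map>
#include <vector>

int main() {
    // Cache point number -> numbers of the light sources whose illumination
    // intensity values at that point met the threshold (Table 1's two columns).
    std::unordered_map<int, std::vector<int>> cullingTable;
    cullingTable[1] = {1, 2};  // cache point 1: light source 1, light source 2
    cullingTable[2] = {3};     // cache point 2: light source 3

    // A later shading pass reads the stored result instead of recomputing it.
    for (int lightId : cullingTable.at(1))
        std::printf("cache point 1 is shaded by light source %d\n", lightId);
    return 0;
}
```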
It should be noted that the execution order of S504 and S505 is merely an example, and the embodiments of the present application are not limited thereto. For example, the rendering platform may first store the correspondence between the target cache point and the at least one target light source and then perform shading calculation on the target cache point to obtain the rendering result corresponding to the first viewpoint; or the rendering platform may store the correspondence while performing the shading calculation.
In this embodiment, the rendering platform performs light source culling at the granularity of cache points, so the culling result at the same cache point can be shared across different viewing angles or viewpoints and does not need to be recalculated for each one, which reduces the computation load of light source culling.
Referring to FIG. 7, FIG. 7 is a schematic flowchart of a light source culling method according to an embodiment of the present application. As shown in FIG. 7, the method may include the following steps.
S701, the rendering platform receives a rendering request. The rendering request may come from the terminal device or from the management device; FIG. 7 takes a rendering request from the terminal device as an example. The rendering request may include related information of a second viewpoint, for requesting a rendering result of the second viewpoint. The related information of the second viewpoint may be, for example, the viewing direction of the second viewpoint; the rendering platform may determine the user's second viewpoint according to this information. The rendering request may also include an identifier of the rendering application, which uniquely identifies the rendering application in the rendering system. In some scenarios, when the rendering system includes only one rendering application, the rendering request may omit the identifier.
S701 is an optional step, represented by a dashed line in FIG. 7. That is, the rendering platform may obtain the rendering result of the second viewpoint in response to the rendering request, in which case the first viewpoint may be the same as or different from the second viewpoint; or the rendering platform may obtain the rendering result of the second viewpoint after obtaining the rendering result of the first viewpoint, so as to obtain rendering results for different viewpoints, in which case the first viewpoint differs from the second viewpoint.
The visual space corresponding to the second viewpoint includes the target cache point from the flow shown in FIG. 5.
S702, the rendering platform acquires at least one target light source corresponding to the target cache point.
The rendering platform can obtain the light source culling result of the target cache point. For example, the rendering platform obtains the at least one target light source corresponding to the target cache point by querying the light source culling result table. For example, the rendering platform may send a read request to the memory, where the read request is used to read the culling result of the target cache point, and the memory returns the at least one target light source corresponding to the target cache point in response to the read request.
S703, the rendering platform performs shading calculation on the target cache point based on the at least one target light source to obtain the rendering result of the second viewpoint.
In the embodiment shown in FIG. 7, the rendering platform performs shading calculation on the target cache point according to the at least one target light source read from the memory to obtain the rendering result of the second viewpoint. The light source culling result of the target cache point obtained at the first viewpoint is thus shared, the culling result does not need to be recalculated at the second viewpoint, and the computation load of light source culling is reduced.
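This cross-viewpoint reuse can be sketched as follows, again with the assumed table layout from the previous sketch: the shading pass of the second viewpoint only queries the stored result and never re-runs culling.

```cpp
#include <cstdio>
#include <unordered_map>
#include <vector>

// Stored culling results, shared by all viewpoints (assumed layout).
using CullingTable = std::unordered_map<int, std::vector<int>>;

// S702/S703: for a viewpoint whose visual space contains the cache point,
// fetch the target light sources from the table; no per-viewpoint culling runs.
void shadeForViewpoint(const char* viewpoint, int cachePointId,
                       const CullingTable& table) {
    auto it = table.find(cachePointId);
    if (it == table.end()) return;  // no stored result for this cache point
    std::printf("%s: shading cache point %d with", viewpoint, cachePointId);
    for (int lightId : it->second) std::printf(" light source %d", lightId);
    std::printf("\n");
}

int main() {
    CullingTable table = {{1, {1, 2}}};  // computed once, at the first viewpoint
    shadeForViewpoint("first viewpoint", 1, table);
    shadeForViewpoint("second viewpoint", 1, table);  // same result, reused
    return 0;
}
```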
Based on the same technical concept as the foregoing method, an embodiment of the present application further provides a rendering engine. Referring to FIG. 8, the rendering engine 800 is used for a rendering application, where the rendering application includes a plurality of three-dimensional models and a plurality of light sources, and each three-dimensional model includes a plurality of cache points; the rendering engine 800 includes a processing unit 801 and a storage unit 802.
The processing unit 801 is configured to: obtain the target cache point from the storage unit 802; separately calculate a plurality of illumination intensity values of the plurality of light sources at the target cache point, where the plurality of cache points include the target cache point, and the visual space corresponding to the first viewpoint includes the target cache point; determine at least one target light source from the plurality of light sources according to the plurality of illumination intensity values and the illumination intensity threshold; and perform shading calculation on the target cache point based on the at least one target light source to obtain a rendering result corresponding to the first viewpoint.
In a possible implementation, the visual space corresponding to the second viewpoint includes the target cache point, and the processing unit 801 is further configured to perform shading calculation on the target cache point based on the at least one target light source to obtain a rendering result corresponding to the second viewpoint.
In one possible implementation, after the at least one target light source is determined from the plurality of light sources according to the plurality of illumination intensity values and the illumination intensity threshold, the storage unit 802 is further configured to store the correspondence between the target cache point and the at least one target light source.
In one possible implementation, the cache point is one or more of the following: patches, texels, points in a point cloud.
In a possible implementation manner, the cache points are located on the surface of the three-dimensional model, and the number of the cache points included in different three-dimensional models is different.
In a possible implementation, the rendering engine 800 may further comprise a configuration interface 803 (indicated with dashed lines in fig. 8), which configuration interface 803 is arranged to receive said illumination intensity threshold set by the user.
In one possible embodiment, the illumination intensity value of each of the at least one target light source at the target cache point is greater than or equal to the illumination intensity threshold.
The present application also provides a computing device 900. As shown in FIG. 9, the computing device 900 includes a bus 902, a processor 904, and a memory 906. Optionally, the computing device 900 may also include a communication interface 908 (shown in dashed lines in FIG. 9). The processor 904, the memory 906, and the communication interface 908 communicate via the bus 902. The computing device 900 may be a server or a terminal device. It should be understood that the present application does not limit the number of processors and memories in the computing device 900.
The bus 902 may be a peripheral component interconnect (PCI) bus, an extended industry standard architecture (EISA) bus, or the like. Buses may be divided into address buses, data buses, control buses, and so on. For ease of illustration, only one line is shown in FIG. 9, but this does not mean that there is only one bus or one type of bus. The bus 902 may include a path for transferring information between the components of the computing device 900 (for example, the memory 906, the processor 904, and the communication interface 908).
The processor 904 may include any one or more of a central processing unit (CPU), a graphics processing unit (GPU), a microprocessor (MP), or a digital signal processor (DSP).
In some possible implementations, the processor 904 may include one or more graphics processors. The processor 904 is configured to execute instructions stored in the memory 906 to implement the methods described in the foregoing embodiments corresponding to fig. 5 or fig. 7.
In some possible implementations, the processor 904 may include one or more central processors and one or more graphics processors. The processor 904 is configured to execute instructions stored in the memory 906 to implement the methods described in the foregoing embodiments corresponding to fig. 5 or fig. 7.
The memory 906 may include a volatile memory, such as a random access memory (RAM). The memory 906 may also include a non-volatile memory, such as a read-only memory (ROM), a flash memory, a hard disk drive (HDD), or a solid state disk (SSD). The memory 906 stores executable program code, and the processor 904 executes the executable program code to implement the method described in the embodiment corresponding to FIG. 5 or FIG. 7. Specifically, the memory 906 stores instructions for the rendering node to execute the method described in the embodiment corresponding to FIG. 5 or FIG. 7.
Communication interface 908 enables communication between computing device 900 and other devices or communication networks using a transceiver module such as, but not limited to, a network interface card, transceiver, etc.
The embodiments of the present application also provide a computer-readable storage medium. The computer-readable storage medium may be any available medium that a computing device can store, or a data storage device, such as a data center, containing one or more available media. The available media may be magnetic media (for example, a floppy disk, a hard disk, or a magnetic tape), optical media (for example, a DVD), semiconductor media (for example, a solid state disk), or the like. The computer-readable storage medium includes instructions that instruct a computing device to perform the method described above in the embodiment corresponding to FIG. 5 or FIG. 7.
The embodiments of the present application also provide a computer program product containing instructions. The computer program product may be software or a program product that contains instructions and can run on a computing device or be stored in any usable medium. When the computer program product runs on at least one computing device, the at least one computing device is caused to perform the method described above in the embodiment corresponding to FIG. 5 or FIG. 7.
It will be appreciated by those skilled in the art that embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
It will be apparent to those skilled in the art that various modifications and variations can be made to the embodiments of the present application without departing from their scope. The present application is intended to cover such modifications and variations, provided that they fall within the scope of the claims and their equivalents.

Claims (17)

1. A light source culling method, the method being applied to a rendering application, the rendering application comprising a plurality of three-dimensional models and a plurality of light sources, each three-dimensional model comprising a plurality of cache points, the method comprising:
calculating, respectively, a plurality of illumination intensity values of the plurality of light sources at a target cache point, wherein the plurality of cache points comprise the target cache point, and a visual space corresponding to a first viewpoint comprises the target cache point;
determining at least one target light source from the plurality of light sources according to the plurality of illumination intensity values and the illumination intensity threshold;
and performing coloring calculation on the target cache point based on the at least one target light source, to obtain a rendering result corresponding to the first viewpoint.
2. The method of claim 1, wherein the target cache point is included in a visual space corresponding to a second viewpoint, the method further comprising:
performing coloring calculation on the target cache point based on the at least one target light source, to obtain a rendering result corresponding to the second viewpoint.
3. The method according to claim 1 or 2, wherein after the determining at least one target light source from the plurality of light sources according to the plurality of illumination intensity values and the illumination intensity threshold, the method further comprises:
storing a correspondence between the target cache point and the at least one target light source.
4. A method according to any one of claims 1 to 3, wherein the cache points are one or more of the following:
patches, texels, or points in a point cloud.
5. The method of any one of claims 1 to 4, wherein the cache points are located on a surface of the three-dimensional model, and wherein different three-dimensional models include different numbers of cache points.
6. The method according to any one of claims 1 to 5, further comprising:
providing a configuration interface, wherein the configuration interface is configured to receive the illumination intensity threshold set by a user.
7. The method of any one of claims 1 to 6, wherein an illumination intensity value of each of the at least one target light source at the target cache point is greater than or equal to the illumination intensity threshold.
8. A rendering engine, applied to a rendering application, wherein the rendering application comprises a plurality of three-dimensional models and a plurality of light sources, each three-dimensional model comprises a plurality of cache points, and the rendering engine comprises a processing unit and a storage unit;
the processing unit is configured to: acquire a target cache point from the storage unit; calculate, respectively, a plurality of illumination intensity values of the plurality of light sources at the target cache point, wherein the plurality of cache points comprise the target cache point, and the target cache point is included in a visual space corresponding to a first viewpoint; determine at least one target light source from the plurality of light sources according to the plurality of illumination intensity values and the illumination intensity threshold; and perform coloring calculation on the target cache point based on the at least one target light source, to obtain a rendering result corresponding to the first viewpoint.
9. The rendering engine of claim 8, wherein the target cache point is included in a visual space corresponding to a second viewpoint, and the processing unit is further configured to:
perform coloring calculation on the target cache point based on the at least one target light source, to obtain a rendering result corresponding to the second viewpoint.
10. The rendering engine of claim 8 or 9, wherein after the at least one target light source is determined from the plurality of light sources according to the plurality of illumination intensity values and the illumination intensity threshold, the storage unit is further configured to:
store a correspondence between the target cache point and the at least one target light source.
11. The rendering engine of any one of claims 8 to 10, wherein the cache points are one or more of:
patches, texels, or points in a point cloud.
12. The rendering engine of any one of claims 8 to 11, wherein the cache points are located on a surface of the three-dimensional model, different three-dimensional models comprising different numbers of cache points.
13. The rendering engine of any one of claims 8 to 12, further comprising a configuration interface for receiving the illumination intensity threshold set by a user.
14. The rendering engine of any of claims 8 to 13, wherein an illumination intensity value of each of the at least one target light source at the target cache point is greater than or equal to the illumination intensity threshold.
15. A computer program product comprising instructions which, when executed by a cluster of computer devices, cause the cluster of computer devices to perform the method of any of claims 1 to 7.
16. A computer readable storage medium comprising computer program instructions which, when executed by a cluster of computing devices, perform the method of any of claims 1 to 7.
17. A cluster of computing devices, comprising at least one computing device, each computing device comprising a processor and a memory;
the processor of the at least one computing device is configured to execute instructions stored in the memory of the at least one computing device to cause the cluster of computing devices to perform the method of any one of claims 1 to 7.
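For orientation only, the following Python sketch illustrates the light source culling recited in claims 1 to 3 and 7: per-cache-point intensity evaluation, threshold-based culling, caching of the point-to-light correspondence, and reuse of that correspondence for a later viewpoint. All names, the inverse-square attenuation model, and the simplified coloring step are assumptions made for illustration; they are not taken from the specification.

```python
from dataclasses import dataclass, field


@dataclass
class PointLight:
    position: tuple[float, float, float]
    power: float  # source strength; units are an assumption


@dataclass
class CachePoint:
    position: tuple[float, float, float]  # lies on a model surface (claim 5)
    target_lights: list = field(default_factory=list)  # cached correspondence (claim 3)


def intensity_at(light: PointLight, point: CachePoint) -> float:
    # Illumination intensity of one light at one cache point. Inverse-square
    # falloff is an assumed attenuation model; the claims do not prescribe one.
    dist_sq = sum((a - b) ** 2 for a, b in zip(light.position, point.position))
    return light.power / max(dist_sq, 1e-8)


def cull_lights(point: CachePoint, lights: list[PointLight],
                threshold: float) -> list[PointLight]:
    # Claims 1 and 7: keep only lights whose intensity at the cache point is
    # greater than or equal to the user-set threshold (claim 6), and store
    # the correspondence on the point (claim 3).
    point.target_lights = [l for l in lights if intensity_at(l, point) >= threshold]
    return point.target_lights


def shade(point: CachePoint, lights: list[PointLight], threshold: float) -> float:
    # Coloring calculation over the surviving (target) lights only. A
    # correspondence already cached for an earlier viewpoint is reused
    # instead of being recomputed (claims 2 and 3).
    targets = point.target_lights if point.target_lights else cull_lights(
        point, lights, threshold)
    # A real shader would evaluate a BRDF; summing intensities keeps the sketch minimal.
    return sum(intensity_at(l, point) for l in targets)
```

In this reading, a frame renderer would call shade for every cache point inside the first viewpoint's visual space; when the camera moves to a second viewpoint whose visual space still contains the point, the cached target_lights list is reused, which is where the claimed reduction in culling computation comes from.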
CN202211480728.XA 2022-11-23 2022-11-23 Light source eliminating method and rendering engine Pending CN118115644A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202211480728.XA CN118115644A (en) 2022-11-23 2022-11-23 Light source eliminating method and rendering engine
PCT/CN2023/101625 WO2024109006A1 (en) 2022-11-23 2023-06-21 Light source elimination method and rendering engine

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211480728.XA CN118115644A (en) 2022-11-23 2022-11-23 Light source eliminating method and rendering engine

Publications (1)

Publication Number Publication Date
CN118115644A true CN118115644A (en) 2024-05-31

Family

ID=91195123

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211480728.XA Pending CN118115644A (en) 2022-11-23 2022-11-23 Light source eliminating method and rendering engine

Country Status (2)

Country Link
CN (1) CN118115644A (en)
WO (1) WO2024109006A1 (en)

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103226839A (en) * 2013-04-22 2013-07-31 浙江大学 Three-channel high-reality-sense rendering method of transparent material
CN111260766B (en) * 2020-01-17 2024-03-15 网易(杭州)网络有限公司 Virtual light source processing method, device, medium and electronic equipment
CN111739074B (en) * 2020-06-03 2023-07-18 福建数***信息科技有限公司 Scene multi-point light source rendering method and device
WO2022159494A2 (en) * 2021-01-19 2022-07-28 Krikey, Inc. Three-dimensional avatar generation and manipulation

Also Published As

Publication number Publication date
WO2024109006A1 (en) 2024-05-30

Similar Documents

Publication Publication Date Title
CN108154548B (en) Image rendering method and device
US7940266B2 (en) Dynamic reallocation of processing cores for balanced ray tracing graphics workload
US8619078B2 (en) Parallelized ray tracing
US8243073B2 (en) Tree insertion depth adjustment based on view frustum and distance culling
US9842424B2 (en) Volume rendering using adaptive buckets
WO2021228031A1 (en) Rendering method, apparatus and system
US20170186219A1 (en) Method for 360-degree panoramic display, display module and mobile terminal
US10269166B2 (en) Method and a production renderer for accelerating image rendering
CN109992103B (en) Method, system, and storage medium for adjusting angular sampling rate during rendering
US11790594B2 (en) Ray-tracing with irradiance caches
WO2022063260A1 (en) Rendering method and apparatus, and device
US11165848B1 (en) Evaluating qualitative streaming experience using session performance metadata
US11823321B2 (en) Denoising techniques suitable for recurrent blurs
CN113298924A (en) Scene rendering method, computing device and storage medium
CN109410309A (en) Weight illumination method and device, electronic equipment and computer storage medium
CN116758208A (en) Global illumination rendering method and device, storage medium and electronic equipment
KR20140000170A (en) Method for estimating the quantity of light received by a participating media, and corresponding device
US20230351555A1 (en) Using intrinsic functions for shadow denoising in ray tracing applications
WO2023088047A1 (en) Rendering method and apparatus
CN110838167B (en) Model rendering method, device and storage medium
CN114596401A (en) Rendering method, device and system
CN116883576A (en) TBR+PT-based collaborative rendering method and device
CN118115644A (en) Light source eliminating method and rendering engine
CN115761105A (en) Illumination rendering method and device, electronic equipment and storage medium
CN115690284A (en) Rendering method, device and storage medium

Legal Events

Date Code Title Description
PB01 Publication