CN116152422A - Illumination data processing method and device and electronic equipment - Google Patents


Info

Publication number
CN116152422A
Authority
CN
China
Prior art keywords: illumination, block, probe, information, virtual scene
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211584390.2A
Other languages
Chinese (zh)
Inventor
高浩然
刘勇成
胡志鹏
刘星
程龙
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Netease Hangzhou Network Co Ltd
Original Assignee
Netease Hangzhou Network Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Netease Hangzhou Network Co Ltd
Priority to CN202211584390.2A
Publication of CN116152422A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T15/50 Lighting effects
    • G06T15/506 Illumination models
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T15/005 General purpose rendering architectures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/20 Finite element generation, e.g. wire-frame surface description, tesselation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T15/04 Texture mapping
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02B CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO BUILDINGS, e.g. HOUSING, HOUSE APPLIANCES OR RELATED END-USER APPLICATIONS
    • Y02B20/00 Energy efficient lighting technologies, e.g. halogen lamps or gas discharge lamps
    • Y02B20/40 Control techniques providing energy savings, e.g. smart controller or presence detection

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Graphics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Image Generation (AREA)

Abstract

The application discloses an illumination data processing method and apparatus, an electronic device, and a computer-readable storage medium. The processing method comprises the following steps: dividing a virtual scene into a plurality of blocks, acquiring block information of each block, and arranging a plurality of illumination probes in each block; determining probe information of each illumination probe arranged on each block, and storing the probe information of each illumination probe in a first illumination map corresponding to the block on which the illumination probe is arranged; and loading, according to the block information of each block, the first illumination map corresponding to a first target block in the virtual scene. By establishing an association between the probe information of the illumination probes (including the illumination data) and the block information of the virtual scene, the method realizes block-by-block loading of illumination data and partitioned rendering of the virtual scene, solving the technical problem that the prior art cannot render a virtual scene by partition.

Description

Illumination data processing method and device and electronic equipment
Technical Field
The present disclosure relates to the field of computer technologies, and in particular, to a method and an apparatus for processing illumination data, an electronic device, and a computer readable storage medium.
Background
An illumination probe is an illumination rendering technique that measures illumination at scene positions during light baking and stores the corresponding illumination data. When the scene runs, the lighting effect cast on a virtual object can be rendered by computing with the illumination data stored by the illumination probes nearest to that virtual object.
Currently, illumination effect rendering is mainly realized with the LPPV (Light Probe Proxy Volume) component tool of the Unity engine. This component tool generates a three-dimensional grid of illumination probes in the virtual scene and stores the illumination data of the probes (namely, spherical harmonic coefficients) in the three-dimensional grid; when the scene runs, the illumination data in the three-dimensional grid are loaded into the virtual scene to render the illumination effect. Because the illumination data in the three-dimensional grid are fully loaded into the virtual scene at runtime, this method can only render the whole virtual scene synchronously and cannot render the virtual scene by partition.
Disclosure of Invention
The application provides an illumination data processing method and apparatus, an electronic device, and a computer-readable storage medium, and aims to solve the technical problem that the prior art can only render the whole virtual scene synchronously and cannot render the virtual scene by partition.
The embodiment of the application provides a processing method of illumination data, which comprises the following steps:
dividing a virtual scene into a plurality of blocks, and acquiring block information of each block, wherein the block information comprises block position information of the block in the virtual scene and size information of the block, and each block is provided with a plurality of illumination probes;
determining probe information of each illumination probe arranged on each block, and storing the probe information of each illumination probe in a first illumination map corresponding to the block in which the illumination probe is arranged, wherein the probe information comprises probe position information of the illumination probe in the block and illumination data stored by the illumination probe;
and loading the first illumination map corresponding to a first target block in the virtual scene according to the block information of each block, wherein the first target block comprises at least one block occupied by a first target virtual object in the virtual scene.
The embodiment of the application also provides a device for processing illumination data, which comprises: the system comprises a block information acquisition unit, a probe information processing unit and an illumination map loading unit;
The block information acquisition unit is used for dividing a virtual scene into a plurality of blocks and acquiring block information of each block, wherein the block information comprises block position information of the block in the virtual scene and size information of the block, and each block is provided with a plurality of illumination probes;
the probe information processing unit is used for determining probe information of each illumination probe laid on each block, and storing the probe information of each illumination probe in a first illumination map corresponding to the block where the illumination probe is laid, wherein the probe information comprises probe position information of the illumination probe in the block and illumination data of the illumination probe;
the illumination map loading unit is configured to load, according to the block information of each block, the first illumination map corresponding to a first target block in the virtual scene, where the first target block includes at least one block occupied by a first target virtual object in the virtual scene.
The embodiment of the application also provides electronic equipment, which comprises: a memory and a processor;
The memory is used for storing one or more computer instructions;
the processor is configured to execute the one or more computer instructions to implement the method described above.
Embodiments of the present application also provide a computer-readable storage medium having stored thereon one or more computer instructions that are executed by a processor to implement the above-described method.
Compared with the prior art, the illumination data processing method provided by the application comprises the following steps: dividing a virtual scene into a plurality of blocks and acquiring block information of each block, wherein the block information comprises block position information of the block in the virtual scene and size information of the block, and a plurality of illumination probes are arranged in each block; determining probe information of each illumination probe arranged on each block, and storing the probe information of each illumination probe in a first illumination map corresponding to the block on which the illumination probe is arranged, wherein the probe information comprises probe position information of the illumination probe in the block and the illumination data stored by the illumination probe; and loading, according to the block information of each block, the first illumination map corresponding to a first target block in the virtual scene, wherein the first target block comprises at least one block occupied by a first target virtual object in the virtual scene. By partitioning the virtual scene into blocks and assigning the illumination probes arranged in the virtual scene to their corresponding blocks, the method stores the illumination data of the probes block by block. Therefore, when the virtual scene runs, the illumination data can be loaded block by block according to the rendering requirements of different blocks. In other words, the method establishes an association between the probe information of the illumination probes (including the illumination data) and the block information of the virtual scene, realizing block-by-block loading of illumination data and partitioned rendering of the virtual scene, and solving the technical problem that the prior art can only render the whole virtual scene synchronously and cannot render it by partition.
Drawings
Fig. 1 is a schematic diagram of light effect rendering on a virtual scene in the prior art provided in the embodiments of the present application;
fig. 2 is an application system diagram of a processing method of illumination data provided in an embodiment of the present application;
fig. 3 is a flowchart of a method for processing illumination data according to the first embodiment of the present application;
fig. 4 is a schematic diagram of dividing a virtual scene into a plurality of blocks according to a first embodiment of the present application;
FIG. 5 is a schematic diagram of dividing a virtual scene into a plurality of blocks according to the first embodiment of the present application;
FIG. 6 is a comparative graph of illumination effects provided by the first embodiment of the present application;
FIG. 7 is a schematic diagram of a block-loading illumination data provided in a first embodiment of the present application;
FIG. 8 is a flowchart of a method for processing illumination data according to a second embodiment of the present application;
FIG. 9 is an application schematic diagram of loading and unloading illumination data by block according to a second embodiment of the present application;
fig. 10 is a schematic structural diagram of a processing device for illumination data according to a third embodiment of the present application;
fig. 11 is a schematic structural diagram of an electronic device according to a fourth embodiment of the present application.
Detailed Description
In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present application. This application is, however, susceptible of embodiment in many other ways than those herein described and similar generalizations can be made by those skilled in the art without departing from the spirit of the application and the application is therefore not limited to the specific embodiments disclosed below.
An illumination probe (Light Probe) is an illumination rendering technique for measuring (detecting) illumination at scene positions during light baking and storing the illumination data of the virtual scene. In practical applications, a certain number of illumination probes are placed in a virtual scene and the illumination arriving from all directions around each probe is baked, so that each probe stores the illumination data of the virtual scene around it. When the scene runs, the system searches for the illumination probes near a virtual object and renders the illumination effect of that virtual object in the virtual scene using the illumination data stored by those nearby probes.
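To make the nearest-probe idea concrete, the following is a minimal sketch, not engine code, of how a runtime might pick the probe whose baked data shades a virtual object; the dictionary layout and field names are assumptions for illustration.

```python
import math

def nearest_probe(obj_pos, probes):
    """Return the probe whose position is closest to obj_pos.

    probes: list of dicts like {"pos": (x, y, z), "sh": [...]}, where "sh"
    holds the baked spherical-harmonic illumination data.
    """
    return min(probes, key=lambda p: math.dist(obj_pos, p["pos"]))

probes = [
    {"pos": (0.0, 0.0, 0.0), "sh": [0.8] * 12},
    {"pos": (4.0, 0.0, 0.0), "sh": [0.3] * 12},
]
# The object at x=3 is shaded with the data of the probe at x=4.
print(nearest_probe((3.0, 0.0, 0.0), probes)["pos"])  # -> (4.0, 0.0, 0.0)
```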
Currently, illumination effect rendering is mainly implemented with the LPPV (Light Probe Proxy Volume) component tool of the Unity engine, which enables large dynamic game objects that cannot use baked light maps to use more lighting information. Based on the illumination probes, the tool generates a three-dimensional grid of probes in the virtual scene and stores the probes' illumination data, represented as the spherical harmonic coefficients of spherical harmonic functions, in the grid. When the scene runs, the illumination data in the three-dimensional grid are loaded into the virtual scene, and the illumination effect in the virtual scene is rendered by computing on these data. Because the illumination data in the three-dimensional grid are fully loaded into the virtual scene at runtime, this method can only render the whole virtual scene synchronously and cannot render it by partition. In this case, if the illumination effects of different areas in the virtual scene need to be rendered separately, the spatial partitioning of the illumination probes must be redone, a process that is complex and time-consuming; and if the illumination data of some areas are directly and forcibly loaded or unloaded, the illumination data as a whole become disordered. Hence the technical problem that the prior art can only render the whole virtual scene synchronously and cannot render the virtual scene by partition.
Fig. 1 is a schematic diagram of light effect rendering on a virtual scene in the prior art according to an embodiment of the present application.
As shown in fig. 1, a virtual object 101 and a virtual object 102 are arranged in a virtual scene, a three-dimensional grid 103 of an illumination probe is generated in the virtual scene by an LPPV assembly tool of a Unity engine, and illumination data of the illumination probe is stored in the three-dimensional grid. When the virtual scene runs, illumination data in the three-dimensional grid are loaded in the virtual scene, and the virtual scene illumination effect is rendered by further calculating the illumination data, wherein the virtual object 101 and the virtual object 102 obtain corresponding illumination effects. However, if the illumination effect rendering of the virtual object 101 is required to be implemented in the virtual scene according to the requirement, and then the illumination effect rendering of the virtual object 102 is required to be implemented, in the prior art, since illumination data is synchronously loaded in the virtual scene, partition rendering of the virtual object 101 and the virtual object 102 cannot be implemented.
In view of this, the present application provides a method for loading or unloading illumination data by partition based on the different blocks of a virtual scene. The method partitions the virtual scene into blocks and assigns the illumination probes arranged in the virtual scene to their corresponding blocks, so that the illumination data stored by the probes can be stored block by block. This achieves the aim of loading illumination data for different blocks according to their respective rendering requirements and rendering illumination effects by partition.
The method, apparatus, electronic device and computer readable storage medium for processing illumination data described in the present application are described in further detail below with reference to specific embodiments and accompanying drawings.
Fig. 2 is an application system diagram of an illumination data processing method according to an embodiment of the present application. As shown in fig. 2, the system includes a user side 201 and a server side 202, which are communicatively connected through a network. The user side 201 may be a touch terminal, such as a smart phone, a tablet computer, or a personal digital assistant (PDA), or a computer terminal, such as a notebook or desktop computer. The server side 202 is configured to deploy the illumination data processing method provided in the application. An application developer sends an illumination effect rendering request to the server side 202 through the user side 201; using the deployed processing method, the server side 202 stores the illumination data of the illumination probes in blocks according to the different blocks of the virtual scene, and loads or unloads the illumination data in the virtual scene block by block according to the rendering request. The server side 202 may be a module that, together with the user side 201, belongs to one illumination effect rendering device and provides illumination data processing services for that user side, or it may be an independent server providing illumination data processing services for a plurality of user sides 201.
The first embodiment of the application provides a processing method of illumination data.
Fig. 3 is a flowchart of a processing method of illumination data provided in the present embodiment. The following describes the processing method of illumination data provided in this embodiment in detail with reference to fig. 3. The embodiments referred to in the following description are used for explaining the technical solutions of the present application, and are not limiting for practical use.
As shown in fig. 3, the processing method of illumination data provided in this embodiment includes the following steps:
in step S301, the virtual scene is divided into a plurality of blocks, and block information of each block is obtained, wherein the block information includes block position information of the block in the virtual scene and size information of the block, and a plurality of illumination probes are arranged in each block.
Dividing the virtual scene into a plurality of blocks may be understood as dividing the virtual scene into a plurality of stereoscopic regions on a dividing plane, taking a certain plane in the virtual scene as the dividing plane. For example, if the virtual scene is divided into N blocks with the horizontal plane as the dividing plane, each block is a stereoscopic region bounded by the block boundaries drawn on the horizontal plane and extending upward from the bottom layer of the virtual scene to its top layer. Of course, besides this primary division on the dividing plane, the resulting primary blocks (a primary block being a block obtained by the primary division of the virtual scene on the dividing plane) may be further subdivided. For example, each stereoscopic block divided as above may be further cut into a plurality of region segments, with each region segment serving as a divided block.
The basis of the block division may be the size of the virtual scene, such as: dividing the virtual scene into a plurality of blocks with the same size according to the size of the virtual scene; the number of illumination probes deployed in the virtual scene may also be as follows: according to the arrangement quantity of the illumination probes in different areas in the virtual scene, dividing the area with more illumination probes into smaller-sized areas, and dividing the area with less illumination probes into larger-sized areas; the setting condition of the virtual object in the virtual scene can also be that: and dividing the region where the same virtual object is positioned into a block according to the virtual object arranged in the virtual scene.
The partitioned blocks may be of any shape, any size, such as: the virtual scene is divided into cuboid blocks taking 2×2 square meters square as a basic surface, and the following steps are: the virtual scene is divided into cuboid blocks taking rectangles of 2×4 square meters as a basic surface, and the following steps are: the virtual scene is divided into cylindrical blocks with a circle of 2 meters in diameter as a base surface. The same virtual scene can be divided into a plurality of blocks with the same size and shape, or can be divided into a plurality of blocks with different sizes and shapes, and all the blocks are combined to be capable of completely covering the virtual scene so as to prevent missing any virtual object or at least part of the virtual object in the virtual scene. The specific manner of block division is not limited herein.
After the virtual scene is partitioned, the block information of each block needs to be acquired, wherein the block information comprises the position information of the block in the virtual scene and the size information of the block. The size information of the block may be represented by volume data representing the volume of the block, or may be represented by area data representing the dividing area of the block on a dividing plane. The position information of the block may be represented by column and row data of the block in the virtual scene, or may be represented by identification codes of the block in the X direction and the Y direction of the division plane.
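As an illustration of block division and block information, here is a minimal sketch under the assumption of a uniform grid on the dividing plane; the function and field names are hypothetical, not the patent's implementation.

```python
def divide_scene(scene_w, scene_h, block_size):
    """Divide a scene of scene_w x scene_h meters into square blocks and
    return each block's information: position codes on the dividing plane
    and size (dividing area, in square meters)."""
    blocks = {}
    for ix in range(scene_w // block_size):       # identification code in X
        for iy in range(scene_h // block_size):   # identification code in Y
            label = f"X{ix + 1},Y{iy + 1}"
            blocks[label] = {
                "position": (ix + 1, iy + 1),
                "size": block_size * block_size,  # square meters
            }
    return blocks

# A 6 m x 6 m scene with 2 m x 2 m blocks yields the 9 blocks of Fig. 4.
print(len(divide_scene(6, 6, 2)))  # -> 9
```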
Fig. 4 is a schematic diagram of dividing a virtual scene into a plurality of blocks according to this embodiment. The figure shows the division of the virtual scene on the dividing plane; it does not show that each block is a stereoscopic region extending from the bottom layer of the virtual scene to its top layer.
As shown in fig. 4, taking an arbitrary plane as the dividing plane, the virtual scene is divided into 9 cuboid blocks of the same size, each with a 2×2-square-meter square as its base surface. In an alternative implementation provided in this embodiment, the identification codes of each block in the X direction and Y direction of the dividing plane represent the block position information of each block, and the dividing area of each block on the dividing plane represents its size information; the block information corresponding to the blocks shown in fig. 4 may therefore be as shown in Table 1:
Table 1. Block information table 1

[Table 1 appears only as an image in the original publication; it lists, for each of the nine blocks of Fig. 4, the block label, the block position information (X, Y), and the size information in square meters.]
Fig. 5 is a schematic diagram of dividing a virtual scene into a plurality of blocks according to this embodiment. The figure shows the division of the virtual scene on the dividing plane; it does not show that each block is a stereoscopic region extending from the bottom layer of the virtual scene to its top layer.
As shown in fig. 5, taking an arbitrary plane as the dividing plane and according to the arrangement of virtual objects in the virtual scene, the virtual scene is divided on that plane into 5 cuboid blocks of the same size with a 2×2-square-meter square as the base surface and 2 cuboid blocks of the same size with a 2×4-square-meter rectangle as the base surface. In an alternative implementation provided in this embodiment, the identification codes of each block in the X direction and Y direction of the dividing plane represent the block position information of each block, and the dividing area of each block on the dividing plane represents its size information; the block information corresponding to the blocks shown in fig. 5 may therefore be as shown in Table 2:
Table 2. Block information table 2

| Block label (see FIG. 5) | Block position information (X, Y) | Size information (square meters) |
| --- | --- | --- |
| 501 | (X1, Y3) | 4 |
| 502 | (X1, Y2) | 4 |
| 503 | (X3, Y3) | 4 |
| 504 | (X3, Y2) | 4 |
| 505 | (X3, Y1) | 4 |
| 506 | (X2, Y23) | 8 |
| 507 | (X12, Y1) | 8 |
As shown in Tables 1 and 2, the block information table provides not only the block position information and the size information of each block in the virtual scene but also each block's orientation. For example, the block position information of block 506 is (X2, Y23), which indicates that the length of block 506 in the X direction is smaller than its length in the Y direction, i.e., block 506 is a rectangular block oriented along the Y axis; the block position information of block 507 is (X12, Y1), which indicates that the length of block 507 in the X direction is greater than its length in the Y direction, i.e., block 507 is a rectangular block oriented along the X axis.
Step S302, determining probe information of each illumination probe laid on each block, and storing the probe information of each illumination probe in a first illumination map corresponding to the block where the illumination probe is laid, where the probe information includes probe position information of the illumination probe in the block and illumination data stored by the illumination probe.
After the virtual scene is divided into a plurality of blocks, all illumination probes arranged in the virtual scene can be distinguished according to the blocks, and probe information of all illumination probes included in each block is stored in one illumination map corresponding to each block, and in this embodiment, the illumination map corresponding to the block is defined as a first illumination map.
The probe information may include probe position information of the illumination probe in the block, and illumination data stored by the illumination probe.
The probe position information can be understood as the position of the illumination probe within its block. Since a block is a stereoscopic region in the virtual scene and an illumination probe may be located anywhere in that region, the probe position information is a three-dimensional array: it records not only the probe's positions in the X and Y directions of a plane of the block but also its position in the Z direction perpendicular to that plane. If only the positions in the X and Y directions of a plane of the block were used as probe position information, illumination probes located at the same X and Y positions but arranged at different heights could not be distinguished. For example: illumination probes A and B are both located in block Z; both have position 5 in the X direction and position 8 in the Y direction of the horizontal plane of block Z, while their positions in the Z direction perpendicular to that plane are 3 and 7 respectively. With two-dimensional position data, the position information of probe A is (X5, Y8) and that of probe B is also (X5, Y8), so A and B would be treated as the same probe. With three-dimensional position data, the position information of probe A is (X5, Y8, Z3) and that of probe B is (X5, Y8, Z7), and the exact position of each probe within block Z can be located from its position information.
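The conflation described above can be checked with a small sketch: keying probes by (X, Y) alone merges probes A and B from the example, while an (X, Y, Z) key keeps them distinct.

```python
# Keying probes only by (X, Y) conflates A and B; a 3-D key does not.
probes_2d = {(5, 8): "A"}
probes_2d[(5, 8)] = "B"                 # probe A is silently overwritten
print(len(probes_2d))                   # -> 1: A and B are conflated

probes_3d = {(5, 8, 3): "A", (5, 8, 7): "B"}
print(len(probes_3d))                   # -> 2: both probes can be located
```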
The illumination data may be understood as the data obtained by baking the illumination arriving from all directions around the illumination probe after the probe is arranged in the virtual scene. The illumination data are stored in the illumination probe; when the scene runs, the illumination effect in the virtual scene can be rendered by loading and computing the illumination data stored in the illumination probes.
The illumination data are typically represented by spherical harmonic coefficients, i.e., the coefficients of a spherical harmonic function formed by the weighted sum of a plurality of basis functions; the spherical harmonic coefficients are therefore a data set made up of several values. The more basis functions a spherical harmonic function contains, the more complex it is and the stronger its expressive power.
The order is generally used to express the complexity of the spherical harmonic function: the higher the order, the more basis functions it includes, the larger the corresponding amount of illumination data, and the more faithfully the real illumination effect can be reproduced. However, the larger the amount of illumination data, the more storage space it occupies, which affects the running performance of the application. A balance therefore usually has to be struck between illumination effect and running performance according to the specifics of the business application. For example, in a game application, if the order of the spherical harmonics is too high, the illumination data contain too many values; although the game picture looks good, game performance may suffer, with stuttering gameplay or discontinuous frames. If the order is too low, the illumination data contain too few values, and the game picture appears distinctly unrealistic.
Currently, the Unity engine uses third-order spherical harmonics to describe the illumination arriving from different directions around an illumination probe. The spherical harmonic coefficients of a third-order spherical harmonic comprise 9 values: the first-order coefficient L0 comprises one value, the second-order coefficients L1 comprise three values, and the third-order coefficients L2 comprise five values. Because the illumination data of an illumination probe cover the three RGB color channels, and each color channel corresponds to one set of third-order spherical harmonic coefficients comprising 9 values, the illumination data of one probe form a 3×9 matrix, each row holding the 9 values of one color channel. The illumination data expressed as third-order spherical harmonic coefficients are shown in Table 3:
TABLE 3 illumination data (third order spherical harmonic coefficients)
| | L0 | L1 | L2 |
| --- | --- | --- | --- |
| Red channel (R) | L00 | L1-1, L10, L11 | L2-2, L2-1, L20, L21, L22 |
| Green channel (G) | L00 | L1-1, L10, L11 | L2-2, L2-1, L20, L21, L22 |
| Blue channel (B) | L00 | L1-1, L10, L11 | L2-2, L2-1, L20, L21, L22 |
As described above, illumination data represented by third-order spherical harmonic coefficients contain 27 values in total, a large amount of data. Terminal devices with good performance can support the storage and computation of large amounts of data, so illumination data with many values will not overly affect the running of the application; on devices with poor performance, however, such illumination data degrade the application's running performance. For example, for a PC (client) game, the computer's memory module has a large capacity and its computing module is powerful, so illumination data expressed as third-order spherical harmonic coefficients can easily be computed during game running without affecting game performance. For a mobile game, however, owing to the performance limits of the phone, illumination data with a large number of values occupy much of the computing module's capacity during game running, causing stuttering or discontinuous game pictures.
Based on this, the present embodiment provides an alternative implementation of illumination data, i.e. using a second order spherical harmonic to describe illumination from different directions around the illumination probe, the illumination data being represented by a second order spherical harmonic coefficient.
A second-order spherical harmonic corresponds to 4 spherical harmonic coefficients: the first-order coefficient L0 comprises one value, and the second-order coefficients L1 comprise three values. Because the illumination data of an illumination probe cover the three RGB color channels, and each color channel corresponds to one set of second-order spherical harmonic coefficients comprising 4 values, the illumination data of one probe form a 3×4 matrix, each row holding the 4 values of one color channel. The illumination data expressed as second-order spherical harmonic coefficients are shown in Table 4:
TABLE 4 illumination data (second order spherical harmonic coefficients)
| | L0 | L1 |
| --- | --- | --- |
| Red channel (R) | L00 | L1-1, L10, L11 |
| Green channel (G) | L00 | L1-1, L10, L11 |
| Blue channel (B) | L00 | L1-1, L10, L11 |
As shown above, illumination data represented by second-order spherical harmonic coefficients contain only 12 values in total, a greatly reduced amount of data.
Fig. 6 is a comparison chart of illumination effects provided by the present embodiment.
As shown in fig. 6, 601 is the illumination effect rendered using second-order spherical harmonic coefficients, and 602 is the illumination effect rendered using third-order spherical harmonic coefficients. The second-order coefficients contain 15 fewer values than the third-order coefficients and give a more linear color transition, but for low-frequency light such as global illumination, the effect rendered with second-order coefficients is difficult to distinguish from that rendered with third-order coefficients with the naked eye.
Therefore, representing the illumination data with second-order spherical harmonic coefficients has little influence on the rendered illumination effect but compresses the data volume of the illumination data and improves the running performance of the application.
In summary, the method for representing illumination data by the second-order spherical harmonic coefficient provided by the embodiment achieves the balance between illumination effect and operation performance.
Since the spherical harmonics describe illumination from different directions around the illumination probe, the illumination data stored in the illumination probe should include illumination data corresponding to different types of illumination around the illumination probe, i.e. the illumination data is a collection comprising sub-illumination data corresponding to a plurality of illumination types.
The illumination data in the illumination probes are mainly used to compute the indirect light cast onto each virtual object in the virtual scene, so the illumination types in this embodiment mainly include: sky light, indirect sunlight, and indirect static light.
Sky light (sky light) can be understood as reflected sunlight and belongs to indirect light; with different degrees of light-and-shadow tracing, it is closer either to a point light source or to an area light source.
Indirect sunlight (sun light indirect) can be understood as sunlight that reaches the illumination probe after reflection.
Indirect static light (static light indirect) can be understood as light that reaches the illumination probe after reflection off static objects.
The corresponding sub-illumination data comprise at least: the sky light illumination data, the indirect sunlight illumination data, and the indirect static light illumination data. In an optional implementation of this embodiment, the sky light illumination data is one set of spherical harmonic coefficients, the indirect sunlight illumination data is three sets of spherical harmonic coefficients, and the indirect static light illumination data is three sets of spherical harmonic coefficients.
The three sets of spherical harmonic coefficients of the indirect sunlight illumination data, and likewise of the indirect static light illumination data, correspond to the three RGB color channels respectively, while the single set of spherical harmonic coefficients of the sky light illumination data corresponds to one monochrome channel formed by compressing the three RGB color channels. The illumination data therefore include 7 sets of spherical harmonic coefficients. If third-order spherical harmonics are used to describe the illumination around a probe, the illumination data stored by one probe contain 63 values (7×9=63); if second-order spherical harmonics are used, they contain 28 values (7×4=28). These data are divided into three groups (sky light illumination data, indirect sunlight illumination data, and indirect static light illumination data) and are baked and stored into one illumination probe in three separate passes.
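The per-probe data counts stated above can be verified with a few lines; the names below are illustrative only.

```python
# 7 sets of spherical harmonic coefficients per probe:
# 1 set for sky light (compressed mono channel) plus 3 sets each (RGB)
# for indirect sunlight and indirect static light.
SETS = 1 + 3 + 3

print(SETS * 9)   # third-order:  7 sets x 9 coefficients -> 63
print(SETS * 4)   # second-order: 7 sets x 4 coefficients -> 28
```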
In the illumination data processing method provided in this embodiment, probe information of all illumination probes arranged on each block is stored in one illumination map, and in this embodiment, the illumination map corresponding to the block is defined as a first illumination map. That is, the first illumination map corresponding to each block stores probe information of all illumination probes arranged on the block, wherein the probe information includes position information of the illumination probes in the block and illumination data stored by the illumination probes.
As previously described, the probe position information of an illumination probe in its block is three-dimensional data comprising the probe's positions in the X and Y directions of a plane of the block and in the Z direction perpendicular to that plane, and the illumination data stored by the probe is a data set comprising at least 7 sets of spherical harmonic coefficients. The probe information stored in the first illumination map therefore includes at least: the probe position information, the one set of spherical harmonic coefficients corresponding to sky light, the three sets corresponding to indirect sunlight, and the three sets corresponding to indirect static light. The probe information of an illumination probe may be as shown in Table 5:
TABLE 5 Probe information of illumination probes
| Identification code | Probe information composition item |
| --- | --- |
| 1 | Sky light illumination data (compressed monochrome channel) |
| 2 | Indirect sunlight illumination data, red channel (R) |
| 3 | Indirect sunlight illumination data, green channel (G) |
| 4 | Indirect sunlight illumination data, blue channel (B) |
| 5 | Indirect static light illumination data, red channel (R) |
| 6 | Indirect static light illumination data, green channel (G) |
| 7 | Indirect static light illumination data, blue channel (B) |
| 8 | Probe position information (X, Y, Z) |
The numbers in Table 5 are the identification codes of the constituent items of the probe information; for example, identification code "1" denotes the sky light illumination data, identification code "5" denotes the indirect static light illumination data of the red channel, and identification code "8" denotes the probe position information. The items may of course also be denoted by letters, symbols, and the like; the specific notation is not limited here.
Table 5 shows the probe information of one illumination probe: it comprises 8 composition items and is stored in the first illumination map using 8 map pixels. The probe information of all illumination probes arranged in one block is stored in one first illumination map, so the size of the first illumination map is N×8, where N is the number of illumination probes arranged in the block.
The probe position information of each illumination probe comprises 3 values, one each for the X, Y, and Z directions. Assuming that 10 illumination probes are arranged in block Z, then:
if the third-order spherical harmonics are used to describe the illumination around the illumination probe, the first illumination map contains the following data amounts:
(7×9+3)×10=660
if the second order spherical harmonic is used to describe the illumination around the illumination probe, the first illumination map contains the following data:
(7×4+3)×10=310
It follows that using a second order spherical harmonic to describe the illumination around the illumination probe will greatly reduce the amount of data contained in the first illumination map.
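The following sketch, with an assumed helper name, reproduces the map-size and data-amount arithmetic above for a block of 10 probes.

```python
def first_map_stats(num_probes, coeffs_per_set, sets=7):
    """Each probe occupies 8 map pixels (7 coefficient sets + 1 position
    item); the data count adds 3 position values (X, Y, Z) per probe to
    the coefficient data."""
    pixels = num_probes * 8                            # map size: N x 8
    data = (sets * coeffs_per_set + 3) * num_probes
    return pixels, data

print(first_map_stats(10, 9))  # third-order  -> (80, 660)
print(first_map_stats(10, 4))  # second-order -> (80, 310)
```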
In an optional implementation of this embodiment, after the step of storing the probe information of each illumination probe in the first illumination map corresponding to the block where the probe is placed, the method further includes: storing the first illumination map corresponding to the block in an illumination data storage module corresponding to that block, the illumination data storage module being a module set up for each block to store the probe information of the illumination probes arranged on the block.
After the virtual scene is divided into a plurality of blocks, an illumination data storage module is set up for each block to store the first illumination map corresponding to that block. Through the illumination data storage module, the block information of a block and the probe information of its illumination probes are bound to each other, i.e., the illumination data are bound to the block where the probes are arranged, which facilitates the subsequent block-by-block loading of the illumination data corresponding to different blocks.
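A minimal sketch of such a per-block storage module might look as follows; the class and field names are assumptions, not the patent's implementation.

```python
class BlockIlluminationStore:
    """Illumination data storage module set up for one block; it binds the
    block's information to that block's first illumination map."""

    def __init__(self, block_info):
        self.block_info = block_info   # block position + size information
        self.first_map = None          # first illumination map, once baked

    def store_map(self, first_map):
        self.first_map = first_map

# One storage module per block, keyed by the block's position codes.
stores = {"(X1,Y1)": BlockIlluminationStore({"position": (1, 1), "size": 4})}
stores["(X1,Y1)"].store_map({"probe_info": []})
```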
Step S303, loading the first illumination map corresponding to a first target block in the virtual scene according to the block information of each block, where the first target block includes at least one block occupied by a first target virtual object in the virtual scene.
Through step S302, the illumination data stored by the illumination probes are stored in the first illumination maps corresponding to the blocks where the probes are placed. When the scene runs, the first illumination maps corresponding to the blocks with illumination rendering requirements can be loaded into the virtual scene according to the scene's specific rendering requirements, without fully loading the illumination data stored by all illumination probes in the virtual scene; this realizes block loading and streaming loading of the illumination data.
In an optional implementation manner of this embodiment, according to the lighting rendering requirement of the virtual object in the virtual scene, the first lighting map corresponding to the first target block occupied by the first target virtual object in the virtual scene is loaded in the virtual scene.
The first target virtual object may be understood as a virtual object with illumination rendering requirements when the scene is running, and the virtual object may occupy one block of the virtual scene or may occupy multiple blocks of the virtual scene.
The first target block may be understood as a block occupied by the first target virtual object, if the first target virtual object occupies one block of the virtual scene, the first target block is one block, and if the first target virtual object occupies a plurality of blocks of the virtual scene, the first target block is a plurality of blocks.
Fig. 7 is a schematic diagram of the block loading illumination data provided in the present embodiment.
In the virtual scene, there are virtual objects 711 and 712, the division of the blocks of the virtual scene has been completed, and the first illumination map corresponding to each block is obtained. If the virtual object 711 is the first target virtual object to be rendered, then the blocks 724, 725, 727, 728 occupied by the virtual object 711 are the first target blocks. According to the block information of the blocks 724, 725, 727, 728, four first illumination maps corresponding to the blocks 724, 725, 727, 728 occupied by the virtual object 711 can be loaded in the virtual scene.
As described above, in an alternative implementation manner of this embodiment, after dividing the virtual scene into a plurality of blocks, an illumination data storage module is configured for each block, and is configured to store a first illumination map corresponding to each block. Based on this, this embodiment provides an alternative implementation manner of loading, in a virtual scene, a first illumination map corresponding to a first target block according to block information of each block, including the following steps:
Step S303-1, according to the block information of each block, obtaining the first illumination map corresponding to the first target block from the illumination data storage module corresponding to the first target block.
Step S303-2, loading the first illumination map corresponding to the first target block in the virtual scene.
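Steps S303-1 and S303-2 might be sketched as follows, with plain dictionaries standing in for the storage modules and the running scene; all names are illustrative.

```python
stores = {  # per-block storage modules: block label -> first illumination map
    "(X3,Y2)": {"probe_info": ["..."]},
    "(X3,Y3)": {"probe_info": ["..."]},
}

def load_target_maps(target_blocks, stores):
    """S303-1: fetch each target block's first map from its storage module;
    S303-2: load it into the running virtual scene (modelled as a dict)."""
    loaded_in_scene = {}
    for label in target_blocks:
        loaded_in_scene[label] = stores[label]
    return loaded_in_scene

loaded = load_target_maps(["(X3,Y2)", "(X3,Y3)"], stores)
print(sorted(loaded))  # -> ['(X3,Y2)', '(X3,Y3)']
```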
The first embodiment provides an optional processing method of illumination data, and realizes block loading of the illumination data. It should be noted that the exemplary description of the first embodiment is only for facilitating understanding of the method described in the present embodiment, and is not intended to be limiting, and the specific implementation is not limited herein.
The second embodiment of the application provides a processing method of illumination data.
Fig. 8 is a flowchart of a processing method of illumination data provided in the present embodiment.
As shown in fig. 8, the processing method of illumination data provided in this embodiment includes the following steps:
in step S801, a virtual scene is divided into a plurality of blocks, and block information of each block is obtained, wherein the block information includes block position information of the block in the virtual scene and size information of the block, and a plurality of illumination probes are arranged in each block.
The division of the virtual scene into a plurality of blocks may be understood as dividing the virtual scene into a plurality of stereoscopic regions, and may be a division performed by taking a certain plane in the virtual scene as a division plane. The basis of the block division can be the size of the virtual scene, the number of illumination probes distributed in the virtual scene, and the setting condition of virtual objects in the virtual scene. The same virtual scene can be divided into a plurality of blocks with the same size and shape, or can be divided into a plurality of blocks with different sizes and shapes, and all the blocks are combined to be capable of completely covering the virtual scene so as to prevent missing any virtual object or at least part of the virtual object in the virtual scene.
After the virtual scene is subjected to block division, block information of each block can be obtained, wherein the block information comprises block position information of the block in the virtual scene and size information of the block. The size information may be represented by volume data representing the volume of the block, or may be represented by area data representing the dividing area of the block on the dividing plane. The block position information may be represented by a two-dimensional array in which the positions of the blocks in the X direction and the Y direction of the dividing plane are recorded.
Specifically, the block dividing method and the block information obtaining method are substantially the same as those described in step S301 of the first embodiment of the present application, and the relevant points may be referred to the description of the first embodiment of the present application in step S301, which is not repeated herein.
Step S802, determining probe information of each illumination probe laid on each block, and storing the probe information of each illumination probe in a first illumination map corresponding to the block where the illumination probe is laid, where the probe information includes probe position information of the illumination probe in the block and illumination data stored by the illumination probe.
After the virtual scene is divided into a plurality of blocks, all illumination probes distributed in the virtual scene can be distinguished according to the blocks, and probe information of each illumination probe included in each block is stored in a first illumination map corresponding to each block, so that illumination data is divided and stored according to the blocks.
The probe information of the illumination probe stored in the first illumination map includes probe position information of the illumination probe in the block and illumination data stored by the illumination probe. The probe position information can be represented by a three-dimensional array, and the positions of the illumination probe in the X direction and the Y direction of a certain plane of the block and the positions of the illumination probe in the Z direction perpendicular to the plane are recorded in the array. The illumination data is generally represented by spherical harmonic coefficients of spherical harmonic functions, and the higher the order of the spherical harmonic functions, the more data are contained in the spherical harmonic coefficients, the larger the corresponding illumination data quantity is, and the more the approximately real illumination effect can be restored. However, the larger the amount of illumination data, the larger the occupied storage space, and the running performance of the application is affected.
Based on the above, the embodiment provides an illumination data compression method for representing illumination data by using second-order spherical harmonic coefficients, the spherical harmonic coefficients representing the illumination data are compressed from 9 data to 4 data, and on the basis of little influence on the rendered illumination effect, the operation performance of the application is improved, and the balance between the illumination effect and the operation performance is achieved.
Specifically, the compression method of the illumination data and the acquisition method of the first illumination map are basically the same as those described in step S302 of the first embodiment of the present application, and the relevant points may be referred to the description of the first embodiment of the present application in step S302, which is not repeated herein.
Step 803, loading the first illumination map corresponding to a first target block in the virtual scene according to the block information of each block, where the first target block includes at least one block occupied by a first target virtual object in the virtual scene.
The first target virtual object may be understood as a virtual object with illumination rendering requirements when the scene is running, and the virtual object may occupy one block of the virtual scene or may occupy multiple blocks of the virtual scene.
The first target block may be understood as a block occupied by or related to the first target virtual object, if the first target virtual object occupies one block of the virtual scene, the first target block is one block, and if the first target virtual object occupies a plurality of blocks of the virtual scene, the first target block is a plurality of blocks.
When the scene runs, the first illumination map corresponding to the first target block can be independently loaded in the virtual scene, so that the block loading and the stream loading of illumination data are realized.
Specifically, the method for loading the illumination data in blocks is basically the same as the method described in step S303 of the first embodiment of the present application, and the relevant points may be referred to the description of the first embodiment of the present application in step S303, which is not repeated here.
Step S804, integrating the first illumination maps corresponding to the blocks in the first target block into a second illumination map, and rendering, according to the second illumination map, the illumination effect of the virtual scene cast on the first target virtual object.
The second illumination map refers to an illumination map formed by integrating a plurality of first illumination maps, and if the first target block only includes one block corresponding to one first illumination map, the second illumination map is the first illumination map. The second illumination map stores probe information of all illumination probes arranged in the first target block, namely probe position information and illumination data of all illumination probes arranged in the first target block.
When the scene runs, the first illumination maps corresponding to the blocks related to the first target object are loaded into the virtual scene; the rendering pipeline integrates these first illumination maps into one large second illumination map, gathers the corresponding illumination data according to world-space coordinates, computes global illumination, and finally completes the illumination effect rendering of the first target virtual object.
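A minimal sketch of this integration step, assuming each first illumination map is a list of probe entries, could look like this:

```python
def integrate_second_map(first_maps):
    """Concatenate the probe entries of several first illumination maps
    into one second illumination map."""
    second_map = []
    for first_map in first_maps:      # one first map per loaded block
        second_map.extend(first_map)
    return second_map

maps = [
    [{"pos": (1.0, 1.0, 0.0), "sh": [0.5] * 28}],  # e.g. block 724's map
    [{"pos": (3.0, 1.0, 0.0), "sh": [0.6] * 28}],  # e.g. block 725's map
]
print(len(integrate_second_map(maps)))  # -> 2 probe entries for shading
```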
Step S805, unloading the first illumination map corresponding to a second target block from the virtual scene according to the block information of each block, where the second target block includes at least one block occupied by a second target virtual object in the virtual scene.
The second target virtual object may be understood as a virtual object whose illumination effect has already been rendered while the scene is running and for which illumination rendering now needs to be cancelled.
The second target block may be understood as the block or blocks occupied by, or related to, the second target virtual object; it may be one block or a plurality of blocks.
When the scene runs, there is not only a requirement to render illumination on a certain virtual object or a certain region of the scene, but also a requirement to cancel such rendering. For example, in a game selection scene, a plurality of virtual characters are displayed at the same time; as the player moves a selection control, the virtual character within the selection cursor displays the illumination effect while the other virtual characters hide it. To achieve this, the illumination data related to the virtual character within the selection cursor may need to be loaded into the virtual scene as the cursor moves, and the illumination data related to a virtual character no longer within the selection cursor may need to be unloaded from the virtual scene.
In an optional implementation provided in this embodiment, the illumination data related to the second target virtual object may be unloaded by removing, from the virtual scene, the first illumination map corresponding to the second target block occupied by or related to that object.
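Continuing the earlier hypothetical sketch, unloading then amounts to dropping the corresponding first illumination maps from the set of loaded maps:

```python
def unload_first_illumination_maps(obj_bounds, blocks, loaded_maps):
    """Drop the first illumination maps of the blocks the object occupies."""
    for b in blocks_for_object(obj_bounds, blocks):  # from the earlier sketch
        loaded_maps.pop(b.block_id, None)            # no-op if not loaded
```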
Fig. 9 is an application schematic diagram of loading and unloading illumination data by blocks according to the present embodiment.
In a game selection scene, there are virtual characters 941, 942 and 943. The player moves the selection control so that the virtual character within the selection cursor displays the illumination effect while the other virtual characters hide it. The processing method of illumination data provided by this embodiment can meet this requirement, specifically through the following steps:
Step S901, the game selection scene is divided into 16 blocks of the same size, and the block information of each block is obtained, including the block position information and the size information of each block in the scene. The blocks and the block information of the game selection scene are shown in Table 6:
table 6 block information table
Block label (see FIG. 9) Block position information (X, Y) Size information (square meters)
911 (X12,Y4) 8
912 (X34,Y4) 8
913 (X56,Y4) 8
914 (X78,Y4) 8
915 (X12,Y3) 8
916 (X34,Y3) 8
917 (X56,Y3) 8
918 (X78,Y3) 8
919 (X12,Y2) 8
920 (X34,Y2) 8
921 (X56,Y2) 8
922 (X78,Y2) 8
923 (X12,Y1) 8
924 (X34,Y1) 8
925 (X56,Y1) 8
926 (X78,Y1) 8
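For illustration, the block information of Table 6 could be generated programmatically as follows; the dictionary layout and function name are assumptions of this sketch:

```python
def divide_selection_scene():
    """Rebuild the block information of Table 6 as a dict keyed by block label."""
    blocks = {}
    label = 911
    for y in ("Y4", "Y3", "Y2", "Y1"):          # rows, top to bottom as in Fig. 9
        for x in ("X12", "X34", "X56", "X78"):  # columns, left to right
            blocks[label] = {"position": (x, y), "size_m2": 8}
            label += 1
    return blocks

# Spot check against Table 6: block 915 sits at (X12, Y3) and covers 8 m^2.
assert divide_selection_scene()[915] == {"position": ("X12", "Y3"), "size_m2": 8}
```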
In step S902, the probe information of all illumination probes arranged in each block is determined and stored in the first illumination map corresponding to that block. Taking block 915 as an example, assume that 6 illumination probes are arranged in block 915 and that second order spherical harmonic coefficients represent the illumination data stored in each probe. As described above, the illumination data is a data set including sunlight illumination data, indirect sunlight illumination data and indirect static light illumination data, where the sunlight illumination data is one set of spherical harmonic coefficients, the indirect sunlight illumination data is three sets of spherical harmonic coefficients, and the indirect static light illumination data is three sets of spherical harmonic coefficients. The probe information of one illumination probe therefore includes 7 sets of spherical harmonic coefficients, each set containing 4 coefficients, plus one set of three-dimensional position data, so the first illumination map corresponding to block 915 contains 186 values, calculated as follows:
(7×4+3)×6=186
The probe information stored in the first illumination map corresponding to block 915 is shown in Table 7:
TABLE 7 Probe information of illumination probes
(Table 7 is reproduced as an image in the original publication; for each of the six illumination probes in block 915 it lists the probe position information and the seven sets of second order spherical harmonic coefficients.)
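The 186-value count, and the flat per-probe layout it implies, can be checked with a short sketch; the layout (3 position values followed by 7 × 4 coefficients per probe) is an assumption consistent with the formula above:

```python
import numpy as np

SETS_PER_PROBE = 7   # 1 (sunlight) + 3 (indirect sunlight) + 3 (indirect static light)
COEFFS_PER_SET = 4   # one set of second order spherical harmonic coefficients
POSITION_VALUES = 3  # three-dimensional probe position

def first_map_value_count(num_probes):
    """(7 x 4 + 3) values per probe, times the number of probes in the block."""
    return (SETS_PER_PROBE * COEFFS_PER_SET + POSITION_VALUES) * num_probes

assert first_map_value_count(6) == 186  # block 915 with 6 illumination probes

def pack_probe(position, sh_sets):
    """Flatten one probe record: 3 position values then 7 x 4 coefficients."""
    return np.concatenate([np.asarray(position, float),
                           np.asarray(sh_sets, float).reshape(-1)])
```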
In step S903, the first illumination map corresponding to each block is stored in the illumination data storage module corresponding to each block.
In step S904, when the player moves the selection cursor to the virtual character 941, the blocks related to that character are blocks 915, 916, 919 and 920, so the system obtains the first illumination maps corresponding to these blocks from their illumination data storage modules and loads the four first illumination maps into the virtual scene. The four loaded first illumination maps are integrated into a second illumination map, and the illumination effect cast on the virtual character 941 in the virtual scene is rendered according to the second illumination map.
In step S905, when the player moves the selection cursor to the virtual character 942, the blocks related to that character are blocks 916, 917, 920 and 921, so the system obtains the first illumination maps corresponding to these blocks from their illumination data storage modules and loads them into the virtual scene. At the same time, the four first illumination maps corresponding to blocks 915, 916, 919 and 920 are unloaded from the virtual scene. The four loaded first illumination maps are integrated into a second illumination map, and the illumination effect cast on the virtual character 942 in the virtual scene is rendered according to the second illumination map.
In step S906, when the player moves the selection cursor to the virtual character 943, the blocks related to that character are blocks 917, 918, 921 and 922, so the system obtains the first illumination maps corresponding to these blocks from their illumination data storage modules and loads them into the virtual scene. At the same time, the four first illumination maps corresponding to blocks 916, 917, 920 and 921 are unloaded from the virtual scene. The four loaded first illumination maps are integrated into a second illumination map, and the illumination effect cast on the virtual character 943 in the virtual scene is rendered according to the second illumination map.
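Steps S904 to S906 follow one pattern: compute the block set needed by the newly selected character, load the missing first illumination maps, and unload those no longer needed. A minimal sketch of that update step; the block sets are taken from the steps above, and all other names are hypothetical:

```python
BLOCKS_FOR_CHARACTER = {        # block sets taken from steps S904 to S906
    941: {915, 916, 919, 920},
    942: {916, 917, 920, 921},
    943: {917, 918, 921, 922},
}

def on_cursor_moved(character_id, loaded, storage, loaded_maps):
    """Load newly needed first illumination maps and unload stale ones."""
    needed = BLOCKS_FOR_CHARACTER[character_id]
    for block_id in needed - loaded:   # e.g. 917 and 921 when moving 941 -> 942
        loaded_maps[block_id] = storage[block_id]
    for block_id in loaded - needed:   # e.g. 915 and 919 when moving 941 -> 942
        loaded_maps.pop(block_id, None)
    return needed                      # becomes `loaded` for the next move
```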
The method realizes block-wise loading and unloading of the illumination data, thereby meeting the application requirement that the virtual character within the selection cursor displays the illumination effect while the other virtual characters hide it.
The second embodiment provides an optional processing method of illumination data and realizes block-wise loading of the illumination data. It should be noted that the exemplary description in this embodiment is only intended to facilitate understanding of the method and is not limiting; the specific implementation is not limited thereto.
The third embodiment of the application provides a processing device for illumination data. Fig. 10 is a schematic structural diagram of a processing device for illumination data according to the present embodiment.
As shown in fig. 10, the processing device for illumination data provided in this embodiment includes: a block information acquisition unit 1001, a probe information processing unit 1002, and an illumination map loading unit 1003.
The block information obtaining unit 1001 is configured to divide a virtual scene into a plurality of blocks and obtain block information of each block, where the block information includes block position information of the block in the virtual scene and size information of the block, and each block is provided with a plurality of illumination probes.
The probe information processing unit 1002 is configured to determine probe information of each illumination probe disposed on each block, and store the probe information of each illumination probe in a first illumination map corresponding to the block in which the illumination probe is disposed, where the probe information includes probe position information of the illumination probe in the block and illumination data of the illumination probe.
Optionally, the illumination data is represented by spherical harmonic coefficients, and is a set of sub-illumination data corresponding to multiple illumination types.
Optionally, the illumination types include at least sunlight, indirect sunlight and indirect static light, and the sub-illumination data corresponding to the multiple illumination types includes at least:
sunlight illumination data, indirect sunlight illumination data and indirect static light illumination data, where the sunlight illumination data is one set of spherical harmonic coefficients, the indirect sunlight illumination data is three sets of spherical harmonic coefficients, and the indirect static light illumination data is three sets of spherical harmonic coefficients.
Optionally, the probe information stored in the first illumination map includes at least:
the probe position information, the set of spherical harmonic coefficients corresponding to the sunlight, the three sets of spherical harmonic coefficients corresponding to the indirect sunlight, and the three sets of spherical harmonic coefficients corresponding to the indirect static light.
Optionally, the spherical harmonic coefficient is a second order spherical harmonic coefficient.
Optionally, after the step of determining the probe information of each illumination probe disposed on each block and storing the probe information of each illumination probe in the first illumination map corresponding to the block in which the illumination probe is disposed, the apparatus is further configured to:
and storing the first illumination map corresponding to the block in an illumination data storage module corresponding to the block, wherein the illumination data storage module is a module which is arranged for each block and used for storing probe information of the illumination probes distributed on the block.
The illumination map loading unit 1003 is configured to load, in the virtual scene, the first illumination map corresponding to a first target block according to the block information of each block, where the first target block includes at least one block occupied by a first target virtual object in the virtual scene.
Optionally, the loading the first illumination map corresponding to the first target block in the virtual scene according to the block information of each block includes:
according to the block information of each block, the first illumination map corresponding to the first target block is obtained from the illumination data storage module corresponding to the first target block;
and loading the first illumination map corresponding to the first target block in the virtual scene.
Optionally, the device is further configured to:
integrating the first illumination map corresponding to each block in the first target block into a second illumination map;
and rendering the illumination effect of the virtual scene, which irradiates on the first target virtual object, according to the second illumination map.
Optionally, the device is further configured to:
And unloading the first illumination map corresponding to a second target block from the virtual scene according to the block information of each block, wherein the second target block comprises at least one block occupied by a second target virtual object in the virtual scene.
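For orientation only, the three units of Fig. 10 might be composed as in the sketch below; the class, method and attribute names are hypothetical, and the patent prescribes no programming interface:

```python
class IlluminationDataProcessingDevice:
    """Hypothetical composition of the three units shown in Fig. 10."""

    def __init__(self, block_info_unit, probe_info_unit, map_loading_unit):
        self.block_info_unit = block_info_unit    # divides the scene, yields block info
        self.probe_info_unit = probe_info_unit    # bakes probe info into first maps
        self.map_loading_unit = map_loading_unit  # streams maps per target block

    def prepare_and_load(self, scene, first_target_object):
        blocks = self.block_info_unit.divide(scene)
        first_maps = self.probe_info_unit.build_first_maps(blocks)
        self.map_loading_unit.load(scene, first_target_object, blocks, first_maps)
```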
A fourth embodiment of the present application provides an electronic device. Fig. 11 is a schematic structural diagram of an electronic device provided in the present embodiment.
As shown in fig. 11, the electronic device provided in this embodiment includes: a memory 1101 and a processor 1102.
The memory 1101 is configured to store computer instructions for executing a processing method of illumination data.
The processor 1102 is configured to execute computer instructions stored in the memory 1101 and perform the following operations:
dividing a virtual scene into a plurality of blocks, and acquiring block information of each block, wherein the block information comprises block position information of the block in the virtual scene and size information of the block, and each block is provided with a plurality of illumination probes;
determining probe information of each illumination probe arranged on each block, and storing the probe information of each illumination probe in a first illumination map corresponding to the block in which the illumination probe is arranged, wherein the probe information comprises probe position information of the illumination probe in the block and illumination data stored by the illumination probe;
And loading the first illumination map corresponding to a first target block in the virtual scene according to the block information of each block, wherein the first target block comprises at least one block occupied by a first target virtual object in the virtual scene.
Optionally, the following operations are also performed:
integrating the first illumination map corresponding to each block in the first target block into a second illumination map;
and rendering the illumination effect of the virtual scene, which irradiates on the first target virtual object, according to the second illumination map.
Optionally, the following operations are also performed:
and unloading the first illumination map corresponding to a second target block from the virtual scene according to the block information of each block, wherein the second target block comprises at least one block occupied by a second target virtual object in the virtual scene.
Optionally, the illumination data is represented by spherical harmonic coefficients, and is a set of sub-illumination data corresponding to multiple illumination types.
Optionally, the illumination types include at least sunlight, indirect sunlight and indirect static light, and the sub-illumination data corresponding to the multiple illumination types includes at least:
sunlight illumination data, indirect sunlight illumination data and indirect static light illumination data, where the sunlight illumination data is one set of spherical harmonic coefficients, the indirect sunlight illumination data is three sets of spherical harmonic coefficients, and the indirect static light illumination data is three sets of spherical harmonic coefficients.
Optionally, the probe information stored in the first illumination map includes at least:
the probe position information, the set of spherical harmonic coefficients corresponding to the sunlight, the three sets of spherical harmonic coefficients corresponding to the indirect sunlight, and the three sets of spherical harmonic coefficients corresponding to the indirect static light.
Optionally, the spherical harmonic coefficient is a second order spherical harmonic coefficient.
Optionally, after the step of determining the probe information of each illumination probe disposed on each block and storing the probe information of each illumination probe in the first illumination map corresponding to the block in which the illumination probe is disposed, the following operations are further performed:
and storing the first illumination map corresponding to the block in an illumination data storage module corresponding to the block, wherein the illumination data storage module is a module which is arranged for each block and used for storing probe information of the illumination probes distributed on the block.
Optionally, the loading the first illumination map corresponding to the first target block in the virtual scene according to the block information of each block includes:
according to the block information of each block, the first illumination map corresponding to the first target block is obtained from the illumination data storage module corresponding to the first target block;
and loading the first illumination map corresponding to the first target block in the virtual scene.
A fifth embodiment of the present application provides a computer-readable storage medium comprising computer instructions which, when executed by a processor, are configured to implement the methods described in the embodiments of the present application.
It is noted that relational terms such as "first" and "second" are used solely to distinguish one entity or action from another and do not necessarily require or imply any actual such relationship or order between those entities or actions. Furthermore, the terms "comprise," "have," "include," and similar terms are intended to be inclusive and open-ended: one or more items following any of these terms are neither an exhaustive list nor limited to only the items enumerated.
As used herein, unless expressly stated otherwise, the term "or" includes all possible combinations except where infeasible. For example, if it is stated that a database may include A or B, then, unless specifically stated otherwise or infeasible, the database may include A, or B, or A and B. As a second example, if it is stated that a database may include A, B or C, then, unless specifically stated otherwise or infeasible, the database may include A, or B, or C, or A and B, or A and C, or B and C, or A and B and C.
It is noted that the above-described embodiments may be implemented by hardware, by software (program code), or by a combination of hardware and software. If implemented by software, the software may be stored in the computer-readable medium described above and, when executed by a processor, may perform the methods disclosed above. The computing units and other functional units described in this disclosure may likewise be implemented by hardware, software, or a combination of the two. Those of ordinary skill in the art will also appreciate that several of the above-described modules/units may be combined into one module/unit, and that each of the above-described modules/units may be further divided into a plurality of sub-modules/sub-units.
In the foregoing detailed description, embodiments have been described with reference to numerous specific details that may vary from implementation to implementation. Certain adaptations and modifications of the described embodiments can be made, and other embodiments will be apparent to those skilled in the art from consideration of the specification disclosed herein. The specification and examples are intended to be illustrative only, with the true scope and spirit of the application being indicated by the following claims. The order of steps shown in the figures is likewise illustrative only and is not limited to any particular order; those skilled in the art will recognize that the steps may be performed in a different order while implementing the same method.
In the drawings and detailed description of the present application, exemplary embodiments are disclosed. Many variations and modifications may be made to these embodiments. Accordingly, although specific terms are employed, they are used in a generic and descriptive sense only and not for purposes of limitation.

Claims (12)

1. A method of processing illumination data, the method comprising:
dividing a virtual scene into a plurality of blocks, and acquiring block information of each block, wherein the block information comprises block position information of the block in the virtual scene and size information of the block, and each block is provided with a plurality of illumination probes;
Determining probe information of each illumination probe arranged on each block, and storing the probe information of each illumination probe in a first illumination map corresponding to the block in which the illumination probe is arranged, wherein the probe information comprises probe position information of the illumination probe in the block and illumination data stored by the illumination probe;
and loading the first illumination map corresponding to a first target block in the virtual scene according to the block information of each block, wherein the first target block comprises at least one block occupied by a first target virtual object in the virtual scene.
2. The method according to claim 1, wherein the method further comprises:
integrating the first illumination map corresponding to each block in the first target block into a second illumination map;
and rendering the illumination effect of the virtual scene, which irradiates on the first target virtual object, according to the second illumination map.
3. The method according to claim 1, wherein the method further comprises:
and unloading the first illumination map corresponding to a second target block from the virtual scene according to the block information of each block, wherein the second target block comprises at least one block occupied by a second target virtual object in the virtual scene.
4. The method of claim 1, wherein the illumination data is represented by spherical harmonic coefficients and is a set of sub-illumination data corresponding to a plurality of illumination types.
5. The method of claim 4, wherein the illumination types include at least sunlight, indirect sunlight and indirect static light, and the sub-illumination data corresponding to the plurality of illumination types includes at least:
sunlight illumination data, indirect sunlight illumination data and indirect static light illumination data, wherein the sunlight illumination data is one set of spherical harmonic coefficients, the indirect sunlight illumination data is three sets of spherical harmonic coefficients, and the indirect static light illumination data is three sets of spherical harmonic coefficients.
6. The method of claim 5, wherein the probe information stored in the first illumination map comprises at least:
the probe position information, the set of spherical harmonic coefficients corresponding to the sunlight, the three sets of spherical harmonic coefficients corresponding to the indirect sunlight, and the three sets of spherical harmonic coefficients corresponding to the indirect static light.
7. The method of any of claims 4-6, wherein the spherical harmonic coefficients are second order spherical harmonic coefficients.
8. The method according to claim 1, wherein after the step of determining the probe information of each illumination probe placed on each block and saving the probe information of each illumination probe in the first illumination map corresponding to the block where the illumination probe is placed, the method further comprises:
and storing the first illumination map corresponding to the block in an illumination data storage module corresponding to the block, wherein the illumination data storage module is a module which is arranged for each block and used for storing probe information of the illumination probes distributed on the block.
9. The method of claim 8, wherein loading the first illumination map corresponding to the first target block in the virtual scene according to the block information of each block comprises:
according to the block information of each block, the first illumination map corresponding to the first target block is obtained from the illumination data storage module corresponding to the first target block;
and loading the first illumination map corresponding to the first target block in the virtual scene.
10. An apparatus for processing illumination data, the apparatus comprising: a block information acquisition unit, a probe information processing unit and an illumination map loading unit;
the block information acquisition unit is used for dividing a virtual scene into a plurality of blocks and acquiring block information of each block, wherein the block information comprises block position information of the block in the virtual scene and size information of the block, and each block is provided with a plurality of illumination probes;
the probe information processing unit is used for determining probe information of each illumination probe laid on each block, and storing the probe information of each illumination probe in a first illumination map corresponding to the block where the illumination probe is laid, wherein the probe information comprises probe position information of the illumination probe in the block and illumination data of the illumination probe;
the illumination map loading unit is configured to load, according to the block information of each block, the first illumination map corresponding to a first target block in the virtual scene, where the first target block includes at least one block occupied by a first target virtual object in the virtual scene.
11. An electronic device, comprising: a memory and a processor;
the memory is used for storing one or more computer instructions;
the processor being configured to execute the one or more computer instructions to implement the method of any of claims 1-9.
12. A computer readable storage medium having stored thereon one or more computer instructions executable by a processor to implement the method of any of claims 1-9.
CN202211584390.2A 2022-12-09 2022-12-09 Illumination data processing method and device and electronic equipment Pending CN116152422A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211584390.2A CN116152422A (en) 2022-12-09 2022-12-09 Illumination data processing method and device and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211584390.2A CN116152422A (en) 2022-12-09 2022-12-09 Illumination data processing method and device and electronic equipment

Publications (1)

Publication Number Publication Date
CN116152422A true CN116152422A (en) 2023-05-23

Family

ID=86355349

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211584390.2A Pending CN116152422A (en) 2022-12-09 2022-12-09 Illumination data processing method and device and electronic equipment

Country Status (1)

Country Link
CN (1) CN116152422A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117272698A (en) * 2023-11-21 2023-12-22 北京格如灵科技有限公司 Weather change simulation method and system
CN118052923A (en) * 2024-04-16 2024-05-17 深圳海拓时代科技有限公司 Object rendering method, device and storage medium



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination