CN116524102A - Cartoon second-order direct illumination rendering method, device and system - Google Patents

Cartoon second-order direct illumination rendering method, device and system

Info

Publication number
CN116524102A
Authority
CN
China
Prior art keywords
illumination
shadow
color
shadow point
cartoon
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310402953.XA
Other languages
Chinese (zh)
Inventor
王英 (Wang Ying)
陈若含 (Chen Ruohan)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
4u Beijing Technology Co ltd
Original Assignee
4u Beijing Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 4u Beijing Technology Co ltd filed Critical 4u Beijing Technology Co ltd
Priority to CN202310402953.XA
Publication of CN116524102A

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T15/50Lighting effects
    • G06T15/506Illumination models
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T13/00Animation
    • G06T13/20 3D [Three Dimensional] animation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T15/50Lighting effects
    • G06T15/60Shadow generation
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02BCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO BUILDINGS, e.g. HOUSING, HOUSE APPLIANCES OR RELATED END-USER APPLICATIONS
    • Y02B20/00Energy efficient lighting technologies, e.g. halogen lamps or gas discharge lamps
    • Y02B20/40Control techniques providing energy savings, e.g. smart controller or presence detection

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Graphics (AREA)
  • Image Generation (AREA)

Abstract

The application provides a rendering method, device and system for cartoon second-order direct illumination. The method comprises the following steps: for each shadow point of the cartoon character to be rendered under direct illumination, determining the maximum value t1 and the minimum value t0 of that shadow point's gradation range; calculating the current gradient position based on the maximum value, the minimum value, the illumination direction and the normal direction; calculating the color of each shadow point based on the current gradient position and the illumination color; and rendering the shadows of the cartoon character under direct illumination based on the color of each shadow point.

Description

Cartoon second-order direct illumination rendering method, device and system
Technical Field
The application relates to the technical field of cartoon animation, in particular to a rendering method, device and system of cartoon second-order direct illumination.
Background
Conventional illumination models are typically computed from three directions: the normal direction, the illumination direction, and the line-of-sight direction. Such a model determines the brightness and color of each pixel from the angle between the surface normal and the illumination direction and from the angle between the line of sight and the surface normal. To simulate the roughness of an object's surface, conventional illumination models also introduce roughness parameters, where roughness is determined from the angle between the line-of-sight direction and the reflection vector. In scenes with low performance requirements, such as mobile games, unimportant scene models, or platforms with limited rendering capability, conventional lighting models are still widely used.
However, conventional illumination models have drawbacks when computing realistic illumination: they cannot simulate effects such as shadows, refraction, and scattering well. Scenes that require high-quality rendering therefore need more complex techniques, such as physically based rendering (PBR) models.
PBR is a more realistic graphics rendering technique grounded in real physical phenomena: it renders by simulating the scattering, reflection, transmission, and absorption of light at object surfaces. PBR describes the appearance and behavior of a material with accurate physical parameters such as refractive index, roughness, and metalness, making graphics look more realistic. Because PBR computes the reflection and transmission of light at a surface with a physically based ambient illumination model, it can achieve a more realistic lighting effect.
However, PBR has problems with direct illumination when rendering cartoon characters, mainly because its accurate simulation of light is too realistic to fully convey the simple, flat, bright character of cartoons. The cartoon style emphasizes simplicity of color and abstraction of form, whereas PBR rendering relies heavily on physically realistic light simulation; it is therefore not flexible enough and struggles to express cartoon characteristics accurately. Under purely realistic PBR rendering, a cartoon-style object loses its original bright, flat, pure coloring and exhibits overly realistic shadow and lighting changes, so the overall style becomes too photorealistic.
In view of the above problems, no effective solution has been proposed at present.
Disclosure of Invention
The embodiments of the invention provide a rendering method, a rendering device and a rendering system for cartoon second-order direct illumination, which at least solve the problem in the prior art that PBR's accurate simulation of direct illumination is too realistic to fully represent cartoon characteristics.
According to one aspect of the embodiments of the invention, there is provided a rendering method of cartoon second-order direct illumination, comprising: for each shadow point of the cartoon character to be rendered under direct illumination, determining the maximum value t1 and the minimum value t0 of the gradation range of that shadow point; calculating the current position x of the fade based on the maximum value t1, the minimum value t0, the illumination direction L and the normal direction N; calculating the color Pcolor of each shadow point based on the current position x of the fade and the illumination color; and rendering the shadows of the cartoon character to be rendered under direct illumination based on the color Pcolor of each shadow point.
According to another aspect of the embodiments of the present invention, there is also provided a rendering device for cartoon second-order direct illumination, including: a determining module configured to determine, for each shadow point of the cartoon character to be rendered under direct illumination, the maximum value t1 and the minimum value t0 of the gradation range of that shadow point; a position calculation module configured to calculate the current position x of the fade based on the maximum value t1, the minimum value t0, the illumination direction L, and the normal direction N; a color calculation module configured to calculate the color Pcolor of each shadow point based on the current position x of the fade and the illumination color; and a rendering module configured to render the shadows of the cartoon character to be rendered under direct illumination based on the color Pcolor of each shadow point.
According to still another aspect of the embodiment of the present invention, there is also provided a rendering system for cartoon second order direct illumination, including: a cartoon second-order direct illumination rendering device as described above.
In the embodiments of the invention, for each shadow point of the cartoon character to be rendered under direct illumination, the maximum value t1 and the minimum value t0 of the gradation range of that shadow point are determined; the current position x of the fade is calculated based on the maximum value t1, the minimum value t0, the illumination direction L and the normal direction N; the color Pcolor of each shadow point is calculated based on the current position x of the fade and the illumination color; and the shadows of the cartoon character to be rendered under direct illumination are rendered based on the color Pcolor of each shadow point. This solves the technical problem in the prior art that PBR's accurate simulation of light is too realistic to fully represent the second-order characteristics of a cartoon character under direct illumination.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and constitute a part of it, illustrate embodiments of the application and, together with the description, serve to explain the application without unduly limiting it. In the drawings:
FIG. 1 is a flow chart of a method of rendering cartoon second order direct illumination according to an embodiment of the present application;
FIG. 2 is a flow chart of another cartoon second order direct illumination rendering method according to an embodiment of the present application;
FIG. 3 is a flow chart of yet another cartoon second order direct illumination rendering method according to an embodiment of the present application;
FIG. 4 is a flow chart of a method of creating a view frustum according to an embodiment of the present application;
FIG. 5 is a flow chart of a method of computing a minimum bounding box according to an embodiment of the present application;
FIG. 6 is a flow chart of a method of computing depth according to an embodiment of the present application;
FIG. 7 is a flow chart of a method of generating a shadow map according to an embodiment of the present application;
FIG. 8 is a schematic structural diagram of a cartoon second order direct illumination rendering device according to an embodiment of the present application;
FIG. 9 is a schematic structural diagram of an electronic device suitable for implementing embodiments of the present disclosure.
Detailed Description
It should be noted that, in the case of no conflict, the embodiments and features in the embodiments may be combined with each other. The present application will be described in detail below with reference to the accompanying drawings in conjunction with embodiments.
It is noted that the terminology used herein is for the purpose of describing particular embodiments only and is not intended to limit example embodiments according to the present application. As used herein, the singular forms are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms "comprises" and/or "comprising", when used in this specification, specify the presence of the stated features, steps, operations, devices, components, and/or combinations thereof.
The relative arrangement of components and steps, the numerical expressions, and the numerical values set forth in these embodiments do not limit the scope of the present application unless specifically stated otherwise. Meanwhile, it should be understood that, for convenience of description, the sizes of the parts shown in the drawings are not drawn to actual scale. Techniques, methods, and apparatus known to one of ordinary skill in the relevant art may not be discussed in detail but should be considered part of the specification where appropriate. In all examples shown and discussed herein, any specific value should be construed as merely illustrative and not limiting; other examples of the exemplary embodiments may therefore have different values. It should be noted that like reference numerals and letters denote like items in the following figures, so once an item is defined in one figure it need not be discussed further in subsequent figures.
Example 1
Cartoon rendering (toon shading) is a rendering technique that aims to make three-dimensional computer graphics look like hand-drawn cartoon animation. Unlike conventional rendering techniques, cartoon rendering emphasizes lines and simple shadows, often using bright, vivid colors and flat textures.
The embodiment of the application provides a rendering method of cartoon second-order direct illumination, as shown in fig. 1, comprising the following steps:
Step S102, for each shadow point of the cartoon character to be rendered under direct illumination, determining the maximum value t1 and the minimum value t0 of that shadow point's gradation range.
For example, the angle between each shadow point and the light source for direct illumination is calculated from the position of that shadow point; the maximum value t1 and the minimum value t0 of the gradation range are then determined from this angle.
Calculating the maximum and minimum values in this way enhances the contour lines of the object, which are very important in cartoon rendering: determining the fade range makes the color gradient of the shadow region more natural, strengthens the contour lines, and makes the cartoon character clearer. The illumination effect is also enhanced: cartoon rendering typically uses a simplified illumination model, of which direct illumination is the most basic, and determining the fade range strengthens the lighting so the cartoon character looks more convincing. Finally, expressiveness is enhanced: cartoon characters often need to convey specific emotions or meanings, and determining the fade range makes them richer and more expressive, so the audience more easily grasps the intended meaning.
Step S104, calculating the current position x of gradual change based on the maximum value t1, the minimum value t0, the illumination direction L and the normal direction N.
In some examples, the illumination direction L and the normal direction N of each shadow point may be calculated based on the position of that shadow point and the position of the light source for direct illumination; the current position x of the fade is then calculated based on the maximum value t1, the minimum value t0, the unit vector of the illumination direction L, and the unit vector of the normal direction N. For example, the current position x of the fade is calculated as: x = (L · N − t0) / (t1 − t0), where L and N denote the unit vectors of the illumination direction and the normal direction.
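For illustration, a minimal Python sketch of this computation is given below; the helper names and the clamping of x to [0, 1] are assumptions of the sketch, not something the embodiment specifies:

```python
def dot(a, b):
    """Inner product of two 3-component vectors."""
    return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]

def gradient_position(L, N, t0, t1):
    """Current position x of the fade: x = (L . N - t0) / (t1 - t0).

    L and N are assumed to be unit vectors (illumination direction and
    surface normal). The clamp to [0, 1] is an assumption that keeps x
    inside the fade range when L . N falls outside [t0, t1].
    """
    x = (dot(L, N) - t0) / (t1 - t0)
    return max(0.0, min(1.0, x))
```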
This embodiment can produce a cartoon-like rendering effect with a simple, fast method: the illumination direction and normal direction of each shadow point are calculated, and the current gradient position is then obtained from the maximum value, the minimum value, the illumination direction and the normal direction, generating the shadow's gradient effect and increasing the stereoscopic impression of the scene. The approach is also easy to implement and cheap to compute, since it involves only simple vector arithmetic and interpolation, with no complex ray tracing or shadow mapping; this enables real-time rendering on mobile devices and low-power devices.
Step S106, calculating the color Pcolor of each shadow point based on the current position x of the gradation and the color of the illumination.
For example, the color Pcolor of each shadow point may be calculated based on the following formula:
Pcolor = x²(3 − 2x) × (illumination color).
In this embodiment, computing the color of each shadow point with a smoothstep (cubic Hermite) interpolation function produces a smooth color gradient, yielding a cartoon- or hand-drawn-style rendering effect that increases the readability, artistry, stereoscopic impression and visual appeal of the scene. The method is also computationally efficient, involving only simple arithmetic and color interpolation with no complex texture mapping or shading techniques, which makes it suitable for real-time rendering on mobile devices and low-power devices.
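A hedged sketch of this interpolation in Python, assuming the illumination color is an RGB 3-tuple with components in [0, 1]:

```python
def shadow_point_color(x, light_color):
    """Pcolor = x^2 (3 - 2x) * illumination color.

    x^2 (3 - 2x) is the smoothstep Hermite polynomial: the weight eases
    in and out at the ends of the fade range instead of changing linearly.
    """
    w = x * x * (3.0 - 2.0 * x)
    return (w * light_color[0], w * light_color[1], w * light_color[2])
```

For example, shadow_point_color(0.5, (1.0, 0.9, 0.8)) weights a warm light by 0.5, since the polynomial equals 0.5 at the midpoint of the fade range.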
Step S108, rendering the shadows of the cartoon character to be rendered under direct illumination based on the color Pcolor of each shadow point.
In this embodiment, shadow rendering is computed from the illumination color and the gradient function, so it stays consistent with the lighting and coloring of the cartoon character; this produces a more convincing cartoon rendering effect and enhances the character's stereoscopic and depth impressions.
Example 2
The embodiment of the application provides another cartoon second-order direct illumination rendering method, which is shown in fig. 2 and comprises the following steps:
step S202, a light source is determined.
First, it is necessary to determine the light sources in the scene and calculate the color and intensity of the light sources. The light source may be a point light source, a spotlight, or ambient light, etc. Typically, the color of the light source may be represented in RGB format and the intensity may be represented in illuminance or radiance.
In step S204, the illumination direction is calculated.
For each rendered pixel, the illumination direction between each point on the pixel's surface and the light source must be calculated. This is obtained from the vector between the surface point and the light source position. The illumination direction L can be expressed as a unit vector:
L=(P-S)/||P-S||
where P represents a point on the pixel surface, S represents the position of the light source, and ||P−S|| represents the modulus (length) of the vector P−S.
In step S206, a first reflection is calculated.
The first reflection of the surface is calculated using the direction of illumination and the surface normal. This reflection is typically calculated based on a specular reflection model to determine the highlight of the surface. The calculation formula of the specular reflection is:
R=2(N·L)N–L
where N represents the surface normal, L represents the illumination direction, and R represents the reflection direction. The dot product symbol (·) represents the inner product of the two vectors.
In step S208, a second reflection is calculated.
The second reflection of the surface is calculated using the illumination direction and the surface normal. This reflection is typically calculated based on a diffuse reflection model to determine the shadow area of the surface. The calculation formula of diffuse reflection is:
I=Kd(N·L)
wherein Kd represents the diffuse reflection coefficient of the surface, N represents the normal direction, L represents the illumination direction, and I represents the diffuse reflection intensity of the surface.
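Steps S204 through S208 reduce to a few vector operations. A self-contained Python sketch under the three formulas above follows; the vector helpers and function names are assumptions of the sketch:

```python
import math

def sub(a, b):
    return (a[0] - b[0], a[1] - b[1], a[2] - b[2])

def dot(a, b):
    return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]

def normalize(v):
    m = math.sqrt(dot(v, v))
    return (v[0] / m, v[1] / m, v[2] / m)

def illumination_direction(P, S):
    """L = (P - S) / ||P - S||, per the formula in step S204."""
    return normalize(sub(P, S))

def specular_reflection(N, L):
    """R = 2 (N . L) N - L: mirror reflection of L about the normal N."""
    d = 2.0 * dot(N, L)
    return (d * N[0] - L[0], d * N[1] - L[1], d * N[2] - L[2])

def diffuse_intensity(Kd, N, L):
    """I = Kd (N . L). Clamping at zero is an assumption of the sketch,
    so that back-facing points receive no diffuse light."""
    return Kd * max(0.0, dot(N, L))
```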
In step S210, a color is calculated.
The color of the surface is calculated from its highlights and shadows. In general, the highlight region is brighter and more vivid in color than the shadow region. Cartoon rendering is typically implemented with distinct palettes to produce strong color contrast.
1) For each shadow point of the cartoon character to be rendered under direct illumination, determine the maximum value t1 and the minimum value t0 of that shadow point's fade range.
Before determining the color of each shadow point, the position of that point within the shadow transition zone must be determined. This is done by finding the maximum value t1 and the minimum value t0 of the fade range of each shadow point: t1 represents the distance from the shadow center to the shadow edge, and t0 the distance from the shadow edge to the non-shadow region.
2) The current position x of the fade is calculated based on the maximum t1, the minimum t0, the illumination direction L and the normal direction N.
After determining the fade position of each shadow point, the current position x must be calculated so that the shadow color of that point can be derived from the illumination color. The value range of x is [0, 1], representing the relative position within the fade range. x is calculated as: x = (L · N − t0) / (t1 − t0), where N is the unit vector of the normal direction, L is the unit vector of the illumination direction, and · denotes the dot product.
3) The color Pcolor of each shadow point is calculated based on the current position x of the fade and the color of the illumination.
The color Pcolor of each shadow point can then be calculated from the current position x and the illumination color. Typically, the shadow color is interpolated within the fade range according to x to obtain a gradient color value; linear interpolation or another interpolation scheme may be used.
4) The shadows of the cartoon character to be rendered under direct illumination are rendered based on the color Pcolor of each shadow point.
Finally, the shadows of the cartoon character under direct illumination can be rendered from the color Pcolor of each shadow point. In general, the shadow region is darker and shows a gradual transition, while the region outside the shadow shows the base color of the cartoon character.
Step S212, rendering is performed and an image is output.
Based on the calculated color, rendering the shadow of the cartoon character under direct illumination, and outputting the rendering result as an image.
The embodiment has the following beneficial effects:
1) Better illumination effect. The effect of real illumination can be better simulated by using the second-order direct illumination model, so that the cartoon image looks more real.
2) Better color contrast. Distinct palettes produce strong color contrast, making the colors of the cartoon character more vivid and bright.
3) Better shadow effect. By calculating the illumination direction of each point on each pixel surface to the light source and calculating the color of each shadow point according to the gradient range and position of the shadow, the shadow effect in the real world can be better simulated.
Example 3
The embodiment of the application provides a further rendering method of cartoon second-order direct illumination. This embodiment improves the resolution and quality of shadows by dividing the scene into multiple cascade layers and using a different shadow map for each cascade layer, thereby improving the rendering efficiency and visual effect of the shadows. As shown in FIG. 3, the method includes the following steps:
In step S302, a view frustum is created.
A view frustum is created that covers the camera's field of view, with a fixed distance between its far section and the light source position. This frustum is referred to as a "cascade" and may contain multiple cascade layers. The method for creating the view frustum is shown in FIG. 4 and includes the following steps:
In step S3021, the view frustum of the camera is determined.
A view frustum is the geometric volume visible within the camera's viewing angle. Typically it consists of six planes: a near plane, a far plane, a left plane, a right plane, a top plane and a bottom plane. The near and far planes are defined by the near and far clipping planes of the camera frustum; the left, right, top and bottom planes are calculated from the camera position and field of view. The intersections of these planes define the eight corner points of the frustum.
In step S3022, the light source position and direction are determined.
The light source may be a point light source or a directional light source. The position of the point light source is a fixed point in space. The directional light source has no position and only a direction.
Step S3023, determining the far-section position of each cascade according to the number of cascades and their distances.
For each cascade, the distance from the far clipping plane of the camera frustum is fixed. Typically, this distance is calculated from the camera position and the scene size.
Step S3024, determining each cascade's view frustum according to the far-section position.
Each cascade's frustum is calculated from the camera position and orientation. Its near plane is the far plane of the previous cascade, and its far plane is the far section of this cascade.
In step S3025, for each cascade, an axis-aligned bounding box (AABB) that encloses the entire scene is calculated.
The axis-aligned bounding box AABB contains all visible objects and is derived from the cascade's far-section position and the scene AABB.
Step S3026, saving each cascade's view frustum.
Finally, each cascade's frustum is saved for use in later steps: it is used to cull objects in the scene and to generate the shadow map.
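The embodiment does not fix how the far-section distances are chosen; a common convention in cascaded shadow mapping is the "practical split scheme" that blends uniform and logarithmic spacing, sketched below under that assumption:

```python
def cascade_splits(near, far, count, lam=0.5):
    """Far-section distance of each cascade between the camera's near
    and far clipping planes. lam blends uniform splits (lam = 0) with
    logarithmic splits (lam = 1); lam = 0.5 is a conventional default."""
    splits = []
    for i in range(1, count + 1):
        f = i / count
        uniform = near + (far - near) * f
        logarithmic = near * (far / near) ** f
        splits.append(lam * logarithmic + (1.0 - lam) * uniform)
    return splits

# e.g. cascade_splits(0.1, 100.0, 4) -> the far sections of four cascades
```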
In the embodiment, the scene is divided into a plurality of cascade layers, and each cascade layer uses a different shadow map, so that the resolution of the shadow map can be effectively reduced, and the rendering performance and quality are improved. In addition, the calculation of the far cross-sectional position and AABB of each cascade layer also helps to optimize the shadow rendering process, making it more efficient and accurate.
Step S304, calculating a minimum bounding box.
For each cascade, a minimum bounding box (AABB) containing all objects in the scene is calculated. The steps, shown in FIG. 5, are as follows:
in step S3042, the position and direction of the light source are determined.
The light source may be a point light source or a directional light source. The position of the point light source is a fixed point in space. The directional light source has no position and only a direction. The position and orientation of the light source is determined according to the type of light source and the specific scene. In the case of a point light source, the position of the point light source needs to be determined. If it is a directional light source, the direction of the directional light source needs to be determined.
In step S3043, an observation matrix is calculated.
The observation matrix describes the position and orientation of the scene relative to the light source. The observation matrix is calculated from the light source position and orientation. The observation matrix transforms the scene coordinate system into the light source space.
In step S3044, a projection matrix is calculated.
The projection matrix is used to project objects in the scene into the shadow map. It is calculated from the observation matrix and the width and height of the shadow map.
In step S3045, the view projection matrix of the light source is obtained by multiplying the observation matrix and the projection matrix.
The view projection matrix transforms the scene coordinate system into a shadow map coordinate system.
In step S3046, a minimum bounding box is calculated for each cascade that contains all objects in the scene.
The minimum bounding box for each cascade containing all objects in the scene is calculated using the view projection matrix of the light sources.
Through the above steps, the cascaded-shadow-mapping algorithm determines the position and direction of the light source and computes the light source's observation matrix, projection matrix and view-projection matrix. These matrices are used in later steps to compute the shadow map and to project scene objects into it; a sketch of their construction follows the next paragraph.
In this embodiment, computing in each cascade the minimum bounding box (AABB) containing all objects in the scene improves the efficiency and accuracy of the algorithm: the bounding box determines which objects need shadow computation, reducing unnecessary work. Through the observation and projection matrices, the position of each pixel in the shadow map can be determined accurately, so the shadows of objects can be computed.
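As an illustration of steps S3043 through S3045, the observation and projection matrices can be built as below for a directional light; the OpenGL-style conventions (right-handed look-at, clip range [−1, 1]) and the use of an orthographic projection are assumptions of the sketch:

```python
import numpy as np

def look_at(eye, target, up=(0.0, 1.0, 0.0)):
    """Observation (view) matrix: transforms world space into light space."""
    f = np.asarray(target, float) - np.asarray(eye, float)
    f /= np.linalg.norm(f)
    s = np.cross(f, up)
    s /= np.linalg.norm(s)
    u = np.cross(s, f)
    m = np.eye(4)
    m[0, :3], m[1, :3], m[2, :3] = s, u, -f
    m[:3, 3] = -m[:3, :3] @ np.asarray(eye, float)
    return m

def ortho(left, right, bottom, top, near, far):
    """Orthographic projection matrix, as used for a directional light."""
    m = np.eye(4)
    m[0, 0] = 2.0 / (right - left); m[0, 3] = -(right + left) / (right - left)
    m[1, 1] = 2.0 / (top - bottom); m[1, 3] = -(top + bottom) / (top - bottom)
    m[2, 2] = -2.0 / (far - near); m[2, 3] = -(far + near) / (far - near)
    return m

# View-projection matrix of the light (step S3045):
# light_vp = ortho(...) @ look_at(light_pos, scene_center)
# The ortho extents would be fitted to the cascade's AABB in light space.
```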
Step S306, determining a clipping plane.
The near plane of each cascade's view frustum is taken as the clipping plane, and this plane is used to clip all objects in the scene to ensure that only objects visible in the current cascade generate shadows.
Step S308, rendering textures.
For each cascade, the scene depth within its frustum is rendered into a texture. This texture, the shadow map, stores the depth values of objects in the current cascade as seen from the light source's perspective. Objects are then projected into the shadow map: objects in the scene are rendered into it using each cascade's projection matrix. During this process, a depth test is required to ensure that only the nearest surface is stored in the shadow map.
In this embodiment, the above steps improve rendering performance: by projecting objects into the shadow map, the number of objects considered during scene rendering is reduced, since only objects projected into the current cascade matter and objects in other cascades can be ignored. The shadow effect is also improved: the shadow map stores the depth values of objects in the current cascade as seen from the light source, and the intensity and position of the shadow can be computed from those depth values together with the position and direction of the light source. In addition, shadows of distant objects are supported: because objects lie at different distances from the light source, different cascades handle shadows at different distances, so using multiple cascades supports shadow effects for distant objects. Finally, the size and position of each cascade are adaptively computed from distance and scene size, which ensures that each cascade contains only objects in the current scene, improving both rendering performance and shadow quality.
In step S310, a matrix is calculated.
Each cascade's camera projection matrix and the projection matrix of the light source's view are calculated and stored in a transformation-matrix array.
First, a large shadow-map texture is created to store the shadow information of all cascades. Each cascade's shadow map is copied into the large shadow map; this can be done by drawing each cascade's shadow map into a different region of the large shadow-map texture.
Next, the offset of each cascade is calculated. Since each cascade's shadow map differs in size and position, the offset of each cascade relative to the large shadow-map texture must be calculated so that shadow information can be fetched correctly in subsequent rendering. Each cascade's offset is then stored in a constant buffer for later use; a sketch of this bookkeeping follows the next paragraph.
Through the above steps, each cascade of shadow maps is merged into one large shadow map texture, and the offset of each cascade relative to the large shadow map texture is calculated. This information will be used in subsequent renderings to obtain the correct shadow information.
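A sketch of the offset bookkeeping, assuming the cascade tiles are packed row by row into a square atlas (the layout itself is an assumption; the embodiment only requires that each cascade's offset be recorded):

```python
def cascade_atlas_offsets(cascade_count, tile_size, atlas_size):
    """Texel offset and UV scale of each cascade tile inside one large
    shadow-map texture, packed row by row."""
    tiles_per_row = atlas_size // tile_size
    offsets = []
    for i in range(cascade_count):
        x = (i % tiles_per_row) * tile_size
        y = (i // tiles_per_row) * tile_size
        offsets.append({"offset": (x, y), "uv_scale": tile_size / atlas_size})
    return offsets

# e.g. cascade_atlas_offsets(4, 1024, 2048) places four 1024x1024 tiles
# in a 2048x2048 atlas; the offsets would be uploaded to a constant buffer.
```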
These steps yield high shadow quality: separating the scene into cascades gives each cascade higher resolution and more accurate shadow information, and the camera projection matrix and the light-view projection matrix make the shadow information more precise. Rendering performance also improves: merging multiple shadow maps into one large shadow map reduces rendering calls. Moreover, adaptively computing the projection matrix from each cascade's position and size avoids wasting shadow-map space on distant, low-detail regions, further improving performance. Finally, the approach is self-adaptive: by computing each cascade's offset and storing it in a constant buffer, the shadow information can adapt in later rendering to changes in the positions of the camera and of objects in the scene, making the algorithm flexible across a variety of scenarios.
In summary, the present embodiment can improve shadow quality and rendering performance by dividing a scene into a plurality of cascades, adaptively calculating a projection matrix and an offset, and merging shadow map textures, and the like, and adapt to requirements of different scenes.
In step S312, the depth is calculated.
For each object, its depth from the light source's viewpoint is calculated in each cascade and compared with the depth in the shadow map to determine whether the object is covered by shadow. In this step, shadow information is computed using the merged shadow-map texture and each cascade's offset, and is applied to objects in the scene. As shown in FIG. 6, the step includes:
in step S3122, for each pixel, its coordinates in the large shadow map are calculated.
For each pixel, its coordinates in the large shadow-map texture are calculated and converted to texture coordinates. This can be done by multiplying the pixel's coordinates by a texture-coordinate scaling factor and adding the offset of its cascade.
In step S3124, a depth value is calculated.
In the large shadow map, the depth values of several surrounding pixels are fetched. These pixels lie around the current pixel, so a two-dimensional convolution filter can be used to gather their depth values.
In step S314, a shadow map is generated.
For objects covered by shadow, shadow maps are generated in each cascade and combined to form the final shadow map. This embodiment applies a filtering algorithm to reduce the jagged edges of shadows and enhance their smoothness. As shown in FIG. 7, the method includes the following steps:
in step S3142, an average value between the depth values is calculated.
The average of the depth values of the current pixel and the surrounding pixels is calculated, by summing the depth values of all the pixels and dividing the result by the number of pixels.
In step S3144, the distance between the current pixel and the light source is calculated.
The distance between the current pixel and the light source is calculated. This may be done by converting the coordinates of the current pixel into the light source space and calculating its distance to the light source position.
In step S3146, the deviation is calculated.
The deviation between the current pixel's depth and the depth value in the shadow map is calculated. If the current pixel's depth is greater than the depth value stored in the shadow map, the pixel is in shadow; otherwise it is in the lit region.
Step S3148, performing blurring processing, and achieving a shadow effect.
The shadow information is blurred according to the magnitude of the deviation and the configured filter radius. This can be achieved with a blur filtering algorithm such as Gaussian blur or box (average) blur. The shadow information is then applied to objects in the scene: for pixels in shadow, the color may be set to black, or a degree of shadow transparency may be used to achieve the shadow effect.
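Steps S3142 through S3148 amount to comparing the receiver's depth against nearby shadow-map texels and averaging the results (percentage-closer filtering). A sketch, in which the 3x3 kernel and the bias value are assumptions rather than values fixed by the embodiment:

```python
def shadow_factor(shadow_map, u, v, receiver_depth, radius=1, bias=0.002):
    """Fraction of sampled texels that light the pixel. shadow_map is a
    2-D list of depths indexed shadow_map[y][x]; (u, v) are the pixel's
    integer texel coordinates; receiver_depth is its depth from the light.
    Returns 1.0 for fully lit, 0.0 for fully shadowed; intermediate
    values give the soft, blurred shadow edge."""
    h, w = len(shadow_map), len(shadow_map[0])
    lit = total = 0
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            x = min(max(u + dx, 0), w - 1)  # clamp to the map borders
            y = min(max(v + dy, 0), h - 1)
            if receiver_depth - bias <= shadow_map[y][x]:
                lit += 1
            total += 1
    return lit / total
```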
For example, for each shadow point of the cartoon character to be rendered under direct illumination, the maximum value t1 and the minimum value t0 of its gradation range are determined; the current position x of the fade is calculated from the maximum value t1, the minimum value t0, the illumination direction L and the normal direction N; and the color Pcolor of each shadow point is calculated from the current position x of the fade and the illumination color.
In some embodiments, to make the shadow points better match the characteristics of cartoon animation, more comprehensive factors may also be considered when rendering the cartoon character, such as the color Cl of the light source, the reflectivity K of the object surface, the color Ca and intensity Ia of the ambient light, the color Cs of the shadow, the roughness R of the object surface, and the reflectivity Ka of the ambient light.
In calculating the shadow spot color, the effect of the color of the light source on the shadow spot may be considered and a factor may be used to adjust the effect of the color of the light source on the shadow spot color. The reflectivity K of the object surface also affects the color of the shadow spot, and a factor can be used to adjust the effect of the reflectivity of the object surface on the color of the shadow spot. The color and intensity of the ambient light also affects the color of the shadow spot, and a factor can be used to adjust the effect of the color and intensity of the ambient light on the color of the shadow spot. The shadows of the cartoon character are usually black or dark grey, but a factor can be used to adjust the shade of the shadow color to achieve different effects. The roughness R of the object surface also affects the color of the shadow spot, and a coefficient can be used to adjust the effect of the roughness of the object surface on the color of the shadow spot. In addition, the reflectance Ka of ambient light also affects the color of the shadow spot, and a factor can be used to adjust the effect of the reflectance of ambient light on the color of the shadow spot.
Pcolor = (Il × Cl × K × (1 − x) + Ia × Ca × Ka) × (1 − R) × Cs
where Pcolor is the color of the shadow point; Il is the intensity of the light source; Cl is the color of the light source; K is the reflectivity of the object surface; x is the current position of the fade; Ia is the intensity of the ambient light; Ca is the color of the ambient light; Ka is the reflectivity of the ambient light; R is the roughness of the object surface; and Cs is the color of the shadow.
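Transcribing the formula directly, a sketch that assumes Il, Ia, K, Ka, R and x are scalars and Cl, Ca and Cs are RGB 3-tuples:

```python
def extended_shadow_color(Il, Cl, K, x, Ia, Ca, Ka, R, Cs):
    """Pcolor = (Il*Cl*K*(1 - x) + Ia*Ca*Ka) * (1 - R) * Cs,
    evaluated per RGB channel."""
    return tuple(
        (Il * Cl[i] * K * (1.0 - x) + Ia * Ca[i] * Ka) * (1.0 - R) * Cs[i]
        for i in range(3)
    )
```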
Through the above method, the cartoon style can be enhanced: cartoon animation generally has a distinctive visual style, one element of which is dark shadow points, and computing the shadow-point color in this way fits that style better and strengthens the visual style of the animation. The realism of the picture can also be increased: although the visual style of cartoon animation departs from reality, adding realism to the scene immerses the audience more deeply in the story, and this method brings the shadow points closer to real-world shadow effects. Finally, the expressiveness of the picture is enhanced: cartoon animation often carries strong emotional expression, and the color of the picture is an important means of conveying emotion, so computing the shadow-point color in this way makes the picture more expressive.
Example 4
The embodiment of the application provides a cartoon second-order direct illumination rendering device, as shown in fig. 8, comprising: a determination module 82, a location calculation module 84, a color calculation module 86, and a rendering module 88.
The determining module 82 is configured to determine, for each shadow point of the cartoon character to be rendered under direct illumination, a maximum value t1 and a minimum value t0 of a gradation range of said each shadow point; a position calculation module 84 configured to calculate a current position x of the fade based on the maximum value t1, the minimum value t0, the illumination direction L, and the normal direction N; a color calculation module 86 configured to calculate a color Pcolor for each shadow point based on the current position x of the fade and the color of the illumination; the rendering module 88 is configured to render shadows of the cartoon character to be rendered under direct illumination based on the color Pcolor of each shadow point.
It should be noted that: the rendering device for direct illumination at the second level provided in the above embodiment is only exemplified by the division of the above functional modules, and in practical application, the above functional allocation may be completed by different functional modules according to needs, that is, the internal structure of the device is divided into different functional modules, so as to complete all or part of the functions described above. In addition, the live broadcast device of the virtual anchor provided in the above embodiment and the live broadcast method embodiment of the virtual anchor belong to the same concept, and the specific implementation process of the live broadcast device of the virtual anchor is detailed in the method embodiment, which is described herein.
Example 5
Fig. 9 shows a schematic structural diagram of an electronic device suitable for use in implementing embodiments of the present disclosure. It should be noted that the electronic device shown in fig. 9 is only an example, and should not impose any limitation on the functions and the application scope of the embodiments of the present disclosure.
As shown in fig. 9, the electronic apparatus includes a Central Processing Unit (CPU) 1001 that can perform various appropriate actions and processes according to a program stored in a Read Only Memory (ROM) 1002 or a program loaded from a storage section 1008 into a Random Access Memory (RAM) 1003. In the RAM 1003, various programs and data required for system operation are also stored. The CPU1001, ROM 1002, and RAM 1003 are connected to each other by a bus 1004. An input/output (I/O) interface 1005 is also connected to bus 1004.
The following components are connected to the I/O interface 1005: an input section 1006 including a keyboard, a mouse, and the like; an output portion 1007 including a Cathode Ray Tube (CRT), a Liquid Crystal Display (LCD), etc., and a speaker, etc.; a storage portion 1008 including a hard disk or the like; and a communication section 1009 including a network interface card such as a LAN card, a modem, or the like. The communication section 1009 performs communication processing via a network such as the internet. The drive 1010 is also connected to the I/O interface 1005 as needed. A removable medium 1011, such as a magnetic disk, an optical disk, a magneto-optical disk, a semiconductor memory, or the like, is installed as needed in the drive 1010, so that a computer program read out therefrom is installed as needed in the storage section 1008.
In particular, according to embodiments of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer-readable medium, the computer program containing program code for performing the methods shown in the flowcharts. In such an embodiment, the computer program may be downloaded and installed from a network via the communication portion 1009, and/or installed from the removable medium 1011. When the computer program is executed by the central processing unit (CPU) 1001, it performs the various functions defined in the methods and apparatus of the present application. In some embodiments, the electronic device may further include an AI (Artificial Intelligence) processor for processing computing operations related to machine learning.
It should be noted that the computer readable medium shown in the present disclosure may be a computer readable signal medium or a computer readable storage medium, or any combination of the two. The computer readable storage medium can be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this disclosure, a computer-readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In the present disclosure, however, the computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, with the computer-readable program code embodied therein. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: wireless, wire, fiber optic cable, RF, etc., or any suitable combination of the foregoing.
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams or flowchart illustration, and combinations of blocks in the block diagrams or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units involved in the embodiments of the present disclosure may be implemented by means of software, or may be implemented by means of hardware, and the described units may also be provided in a processor. Wherein the names of the units do not constitute a limitation of the units themselves in some cases.
As another aspect, the present application also provides a computer-readable medium that may be contained in the electronic device described in the above embodiment; or may exist alone without being incorporated into the electronic device.
The computer-readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to implement the methods described in the above embodiments. For example, the electronic device may implement the steps of the method embodiments described above, and so on.
The integrated units in the above embodiments may be stored in the above-described computer-readable storage medium if implemented in the form of software functional units and sold or used as separate products. Based on such understanding, the technical solution of the present application may be embodied in essence or a part contributing to the prior art or all or part of the technical solution in the form of a software product stored in a storage medium, including several instructions to cause one or more computer devices (which may be personal computers, servers or network devices, etc.) to perform all or part of the steps of the methods described in the various embodiments of the present application.
In the foregoing embodiments of the present application, the descriptions of the embodiments are emphasized, and for a portion of this disclosure that is not described in detail in this embodiment, reference is made to the related descriptions of other embodiments.
In the several embodiments provided in the present application, it should be understood that the disclosed terminal device may be implemented in other manners. The apparatus embodiments described above are merely exemplary; for example, the division of the units is merely a logical function division, and other divisions are possible in actual implementation: multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. Alternatively, the couplings or direct couplings or communication connections shown or discussed may be realized through interfaces, units or modules, and may be electrical or take other forms.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in each embodiment of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The foregoing is merely a preferred embodiment of the present application and it should be noted that modifications and adaptations to those skilled in the art may be made without departing from the principles of the present application and are intended to be comprehended within the scope of the present application.

Claims (10)

1. A rendering method of cartoon second-order direct illumination, characterized by comprising the following steps:
determining, for each shadow point of the cartoon image to be rendered under direct illumination, the maximum value and the minimum value of the gradual change range of that shadow point;
calculating the current position of gradual change based on the maximum value, the minimum value, the illumination direction and the normal direction;
calculating the color of each shadow point based on the current position of the gradation and the color of illumination;
and rendering the shadows of the cartoon images to be rendered under direct illumination based on the color of each shadow point.
2. The method of claim 1, wherein determining the maximum and minimum values of the fade range for each shadow point comprises:
calculating an angle between each shadow point and a light source for direct illumination based on the position of each shadow point;
the maximum and minimum values of the fade range for each shadow point are determined based on the angle between the shadow point and the light source for direct illumination.
3. The method of claim 1, wherein calculating the current position of the fade based on the maximum value, the minimum value, the illumination direction, and the normal direction comprises:
calculating the illumination direction and the normal direction of each shadow point and the light source based on the position of each shadow point and the position of the light source for direct illumination;
and calculating the current position of the gradual change based on the maximum value, the minimum value, the unit vector of the illumination direction and the unit vector of the normal direction.
4. A method according to claim 3, wherein calculating the current position of the fade based on the maximum value, the minimum value, a unit vector of illumination direction, and a unit vector of normal direction comprises: the current position of the fade is calculated based on the following formula:
x = (L · N − t0) / (t1 − t0);
wherein L represents the unit vector of the illumination direction, N represents the unit vector of the normal direction, t1 represents the maximum value, and t0 represents the minimum value.
5. The method of claim 4, wherein calculating the color of each shadow point based on the current location of the fade and the color of illumination comprises: the color of each shadow point is calculated based on the following formula:
Pcolor = x²(3 − 2x) × (the color of the illumination).
6. A cartoon second order direct illumination rendering device, comprising:
a determining module configured to determine, for each shadow point of the cartoon image to be rendered under direct illumination, the maximum value and the minimum value of the gradation range of that shadow point;
a position calculation module configured to calculate a current position of the fade based on the maximum value, the minimum value, the illumination direction, and the normal direction;
a color calculation module configured to calculate a color of each shadow point based on a current position of the fade and a color of illumination;
and the rendering module is configured to render the shadow of the cartoon image to be rendered under direct illumination based on the color of each shadow point.
7. The apparatus of claim 6, wherein the determination module is further configured to:
calculating an angle between each shadow point and a light source for direct illumination based on the position of each shadow point;
the maximum and minimum values of the fade range for each shadow point are determined based on the angle between the shadow point and the light source for direct illumination.
8. The apparatus of claim 6, wherein the location calculation module is further configured to:
calculating the illumination direction and the normal direction of each shadow point and the light source based on the position of each shadow point and the position of the light source for direct illumination;
and calculating the current position of the gradual change based on the maximum value, the minimum value, the unit vector of the illumination direction and the unit vector of the normal direction.
9. A cartoon second-order direct illumination rendering system, comprising: the device of any one of claims 6 to 8.
10. A computer readable storage medium, having stored thereon a program, which, when run, causes a computer to perform the method of any of claims 1 to 5.
CN202310402953.XA 2023-04-14 2023-04-14 Cartoon second-order direct illumination rendering method, device and system Pending CN116524102A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310402953.XA CN116524102A (en) 2023-04-14 2023-04-14 Cartoon second-order direct illumination rendering method, device and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310402953.XA CN116524102A (en) 2023-04-14 2023-04-14 Cartoon second-order direct illumination rendering method, device and system

Publications (1)

Publication Number Publication Date
CN116524102A (en) 2023-08-01

Family

ID=87389524

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310402953.XA Pending CN116524102A (en) 2023-04-14 2023-04-14 Cartoon second-order direct illumination rendering method, device and system

Country Status (1)

Country Link
CN (1) CN116524102A (en)

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination