CN114119847A - Graph processing method and device, computer equipment and storage medium - Google Patents

Graph processing method and device, computer equipment and storage medium

Info

Publication number
CN114119847A
Authority
CN
China
Prior art keywords
map
processing
sampling
information
dimensional model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202111471456.2A
Other languages
Chinese (zh)
Other versions
CN114119847B (en)
Inventor
宋田骥
刘欢
陈烨
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Zitiao Network Technology Co Ltd
Original Assignee
Beijing Zitiao Network Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Zitiao Network Technology Co Ltd filed Critical Beijing Zitiao Network Technology Co Ltd
Priority to CN202111471456.2A priority Critical patent/CN114119847B/en
Publication of CN114119847A publication Critical patent/CN114119847A/en
Priority to PCT/CN2022/127456 priority patent/WO2023098344A1/en
Application granted granted Critical
Publication of CN114119847B publication Critical patent/CN114119847B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 15/00 3D [Three Dimensional] image rendering
    • G06T 15/10 Geometric effects
    • G06T 15/20 Perspective computation
    • G06T 15/205 Image-based rendering
    • G06T 15/04 Texture mapping
    • G06T 15/50 Lighting effects

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Graphics (AREA)
  • General Physics & Mathematics (AREA)
  • Computing Systems (AREA)
  • Geometry (AREA)
  • Image Generation (AREA)

Abstract

The present disclosure provides a graphics processing method, apparatus, computer device, and storage medium. The method comprises: acquiring a texture map for rendering a target three-dimensional model, the texture map comprising a color map, a normal map reflecting normal directions, and a brush map reflecting the drawing characteristics of a brush; performing illumination processing on the normal map based on preset illumination direction information to obtain an illuminated normal map; performing first sampling processing on the illuminated normal map, the color map, and the brush map to obtain a first sampling layer; in response to input contour line information of the target three-dimensional model, performing contour tracing processing on the first sampling layer based on the contour line information to obtain a second sampling layer; and rendering the target three-dimensional model based on the second sampling layer to obtain the rendered target three-dimensional model. The rendered target three-dimensional model is more vivid and closer to a watercolor drawing effect.

Description

Graph processing method and device, computer equipment and storage medium
Technical Field
The present disclosure relates to the field of computer graphics technologies, and in particular, to a method and an apparatus for processing graphics, a computer device, and a storage medium.
Background
With the development of computer graphics technology, three-dimensional model rendering is widely applied in scenes such as movies and games. The styles of three-dimensional models in virtual scenes also vary across animation and game types, and are not limited to realistic styles. In particular, inspired by painting techniques such as watercolor, charcoal sketching, and cartoon drawing, attempts have been made to render three-dimensional models with non-photorealistic styles such as watercolor. However, in three-dimensional scenes rendered in a cartoon style, the prior art lacks vividness and a convincing watercolor effect when performing watercolor rendering on a three-dimensional model.
Disclosure of Invention
The embodiment of the disclosure at least provides a graphic processing method and device, computer equipment and a storage medium.
In a first aspect, an embodiment of the present disclosure provides a graphics processing method, including:
acquiring a texture map for rendering a target three-dimensional model; the texture map comprises a color map corresponding to the target three-dimensional model, a normal map reflecting the normal direction of each point in the target three-dimensional model, and a brush map reflecting the drawing characteristics of a brush;
based on preset illumination direction information, performing illumination processing on the normal map to obtain an illuminated normal map;
performing first sampling processing on the normal map, the color map and the brush map after the illumination processing to obtain a first sampling map layer;
responding to the input contour line information of the target three-dimensional model, and performing contour tracing processing on the first sampling layer based on the contour line information to obtain a second sampling layer;
and rendering the target three-dimensional model based on the second sampling layer to obtain a rendered target three-dimensional model.
In an optional implementation manner, after obtaining a first sampled image layer, before performing contouring on the first sampled image layer, the method further includes:
performing brush trace processing on the first sampling image layer based on the input brush drawing trace information in response to the input brush drawing trace information to obtain a first sampling image layer after the brush trace processing;
the responding to the input contour line information of the target three-dimensional model, performing contour tracing processing on the first sampling image layer based on the contour line information to obtain a second sampling image layer comprises:
and responding to the input contour line information of the target three-dimensional model, and performing contour tracing processing on the first sampling layer subjected to the brush trace processing based on the contour line information to obtain a second sampling layer.
In an optional embodiment, the brush drawing trace information includes at least one of brush texture intensity information, brush lateral distortion information, brush longitudinal distortion information, and highlight range information.
In an alternative embodiment, the texture map further comprises a noise map reflecting brush noise;
after obtaining a first sampling layer and before performing contouring on the first sampling layer, the method further includes:
performing brush noise processing on the first sampling layer based on brush noise information in the noise map to obtain a first sampling layer after the brush noise processing;
the responding to the input contour line information of the target three-dimensional model, performing contour tracing processing on the first sampling image layer based on the contour line information to obtain a second sampling image layer comprises:
and responding to the input contour line information of the target three-dimensional model, and performing contour tracing processing on the first sampling layer subjected to the brush noise processing based on the contour line information to obtain a second sampling layer.
In an optional embodiment, the texture map further includes a color level map reflecting the color change characteristics of the light receiving area in the target three-dimensional model;
after obtaining a first sampling layer and before performing contouring on the first sampling layer, the method further includes:
performing color gradation processing on the first sampling layer based on the color gradation information in the color gradation mapping to obtain a first sampling layer after color gradation processing; the color level information comprises color information corresponding to each color level, the proportion information of each color level in a light receiving area of the target three-dimensional model and color fusion information of adjacent color levels in each color level;
the responding to the input contour line information of the target three-dimensional model, performing contour tracing processing on the first sampling image layer based on the contour line information to obtain a second sampling image layer comprises:
and responding to the input contour line information of the target three-dimensional model, and performing contour tracing processing on the first sampling layer subjected to the color gradation processing based on the contour line information to obtain a second sampling layer.
In an optional embodiment, the texture map further includes a channel map reflecting an area to be subjected to color gradation rendering in the target three-dimensional model;
the performing color gradation processing on the first sampling layer based on the color gradation information in the color gradation map to obtain a first sampling layer after color gradation processing includes:
determining a local sampling layer corresponding to the region to be subjected to the color level rendering in the first sampling layer based on the channel map;
performing color gradation processing on the local sampling layer based on the color gradation information in the color gradation mapping to obtain a color gradation processed local sampling layer;
the responding to the input contour line information of the target three-dimensional model, performing contour tracing processing on the first sampling image layer based on the contour line information to obtain a second sampling image layer comprises:
and responding to the input contour line information of the target three-dimensional model, and performing contour tracing processing on the local sampling layer subjected to the color gradation processing based on the contour line information to obtain a second sampling layer.
In an alternative embodiment, the color gradation map is determined by:
performing second sampling processing on the normal map subjected to the illumination processing and the brush map to obtain a third sampling map layer;
obtaining the (N + 1) th order color information based on the nth order color information in the color level map and the third sampling layer; wherein N is a positive integer greater than or equal to 1; the 1 st order color information is determined according to the information in the third sampling layer.
In an optional implementation manner, the performing a first sampling process on the normal map, the color map, and the brush map after the illumination process to obtain a first sampling map layer includes:
performing the fusion processing on the color map of each patch of the target three-dimensional model, the normal map after the illumination processing and the brush map to obtain sub-sampling map layers corresponding to the patches respectively;
and integrating the sub-sampling layers respectively corresponding to the patches to obtain the first sampling layer.
In a second aspect, an embodiment of the present disclosure further provides a graphics processing apparatus, including:
the acquisition module is used for acquiring a texture map for rendering the target three-dimensional model; the texture map comprises a color map corresponding to the target three-dimensional model, a normal map reflecting the normal direction of each point in the target three-dimensional model, and a brush map reflecting the drawing characteristics of a brush;
the first processing module is used for carrying out illumination processing on the normal map based on preset illumination direction information to obtain the normal map after the illumination processing;
the second processing module is used for performing first sampling processing on the normal map, the color map and the brush map after the illumination processing to obtain a first sampling map layer;
the third processing module is used for responding to the input contour line information of the target three-dimensional model and carrying out contour tracing processing on the first sampling layer on the basis of the contour line information to obtain a second sampling layer;
and the fourth processing module is used for rendering the target three-dimensional model based on the second sampling layer to obtain a rendered target three-dimensional model.
In a third aspect, an embodiment of the present disclosure further provides a computer device, including: a processor, a memory and a bus, the memory storing machine-readable instructions executable by the processor, the processor and the memory communicating via the bus when the computer device is running, the machine-readable instructions when executed by the processor performing the steps of the first aspect described above, or any possible implementation of the first aspect.
In a fourth aspect, an embodiment of the present disclosure further provides a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, performs the steps in the first aspect or any one of the possible implementation manners of the first aspect.
According to the graphic processing method provided by the embodiment of the disclosure, normal map and color map after illumination processing for rendering a target three-dimensional model and brush map reflecting brush drawing characteristics are subjected to first sampling processing; then carrying out contour tracing processing on the first sampling image layer by adopting contour line information; by the rendering method, the target three-dimensional model can achieve brush texture and stereoscopic impression, and the obtained second sampling layer can more highlight the brush drawing effect of the target three-dimensional model, so that the target three-dimensional model obtained by rendering is more vivid and closer to the watercolor drawing effect.
In order to make the aforementioned objects, features and advantages of the present disclosure more comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present disclosure, the drawings required in the embodiments are briefly described below. The drawings here are incorporated in and form a part of the specification; they illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain the technical solutions of the present disclosure. It should be appreciated that the following drawings depict only certain embodiments of the disclosure and are therefore not to be considered limiting of its scope; those of ordinary skill in the art may derive additional related drawings from them without inventive effort.
FIG. 1 is a flow chart illustrating a method of graphics processing provided by an embodiment of the present disclosure;
FIG. 2 is a schematic diagram illustrating an effect of a brush map provided by an embodiment of the present disclosure;
FIG. 3 is a schematic diagram illustrating an effect of a noise mapping provided by an embodiment of the present disclosure;
FIG. 4 is a schematic diagram illustrating an effect of a color gradation map provided by an embodiment of the present disclosure;
FIG. 5 is a schematic diagram illustrating an effect of a rendered three-dimensional model of a target according to an embodiment of the disclosure;
FIG. 6 is a schematic diagram of a graphics processing apparatus provided by an embodiment of the present disclosure;
fig. 7 shows a schematic diagram of a computer device provided by an embodiment of the present disclosure.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present disclosure more clear, the technical solutions of the embodiments of the present disclosure will be described clearly and completely with reference to the drawings in the embodiments of the present disclosure, and it is obvious that the described embodiments are only a part of the embodiments of the present disclosure, not all of the embodiments. The components of the embodiments of the present disclosure, generally described and illustrated in the figures herein, can be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the present disclosure, presented in the figures, is not intended to limit the scope of the claimed disclosure, but is merely representative of selected embodiments of the disclosure. All other embodiments, which can be derived by a person skilled in the art from the embodiments of the disclosure without making creative efforts, shall fall within the protection scope of the disclosure.
The process of rendering a three-dimensional model mainly comprises the following steps: first, the three-dimensional model is created; then a texture artist produces maps with a certain material effect; the maps are then applied to the three-dimensional model to simulate texture effects; finally, the three-dimensional model with the simulated effects is rendered, shaded, and displayed in the three-dimensional scene, so that the model displayed in the scene has a corresponding style, such as hand-painted, cartoon, or sketch. In three-dimensional scenes rendered in a cartoon style, watercolor rendering of some three-dimensional models lacks vividness and a convincing watercolor effect.
Based on the research, the present disclosure provides a graphics processing method, which may enable a target three-dimensional model to achieve texture and stereoscopic impression of a brush by performing a first sampling process on a normal map, a color map, and a brush map that reflects drawing characteristics of the brush, which are used for rendering the target three-dimensional model, after illumination processing; and then carrying out contour tracing processing on the first sampling layer by contour line information for carrying out contour tracing on the target three-dimensional model, wherein the obtained second sampling layer can more highlight the brush drawing effect of the target three-dimensional model, so that the rendering effect of the target three-dimensional model is more vivid and is closer to the watercolor drawing effect.
The defects in the above solutions, and the solutions proposed below, are results the inventor obtained through practice and careful study. Therefore, both the discovery of the above problems and the solutions the present disclosure proposes for them should be regarded as the inventor's contributions to the present disclosure.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, it need not be further defined and explained in subsequent figures.
To facilitate understanding of the present embodiment, the graphics processing method disclosed in the embodiments of the present disclosure is first described in detail. The description below takes a server as the execution subject of the graphics processing method provided by the embodiments of the present disclosure.
Referring to fig. 1, a flowchart of a graphics processing method provided by the embodiment of the present disclosure is shown, where the method includes S101 to S105, where:
s101: acquiring a texture map for rendering a target three-dimensional model; the texture map comprises a color map corresponding to the target three-dimensional model, a normal map reflecting the normal direction of each point in the target three-dimensional model, and a brush map reflecting the drawing characteristics of a brush.
In this disclosure, the target three-dimensional model may be a three-dimensional model to be rendered that corresponds to a virtual object in a target game scene. The virtual object may be any three-dimensional virtual object, such as a virtual character or a virtual item. The target three-dimensional model may be produced using animation rendering and production software, such as 3D Studio Max (3DS Max or 3D Max for short) or the Maya three-dimensional modeling software.
After the target three-dimensional model is manufactured, the manufactured target three-dimensional model can be unfolded to obtain a two-dimensional image under a UV coordinate system (U can represent a transverse coordinate axis under the UV coordinate, and V can represent a longitudinal coordinate axis under the UV coordinate). Each UV coordinate value in the resulting two-dimensional image may correspond to each point on the surface of the target three-dimensional model.
The texture map may store rendering information for rendering the target three-dimensional model. The texture map may be created by drawing software.
Specifically, the Color Map (Color Map) may contain Color information of the virtual object itself corresponding to the target three-dimensional model. Specifically, the color information included in the color map corresponds to the UV coordinate value, and the color map may include color information at each UV coordinate value in the two-dimensional image obtained by expanding the target three-dimensional model. The color map can be drawn by, for example, drawing software Photoshop or other drawing software.
The Normal Map may include the normal direction of each point in the target three-dimensional model; the normal directions may be encoded in the Red Green Blue (RGB) color channels. By applying the normal map to the surface of the target three-dimensional model, the normal direction of each point included in the normal map can convey the concave-convex (bump) effect of the model. The normal map can be created with drawing software such as ZBrush or Maya.
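The patent contains no code, but the RGB encoding of normal directions it refers to is conventional and can be sketched as follows; the function name and the use of NumPy are illustrative assumptions, not part of the disclosure:

```python
import numpy as np

def decode_normal(rgb):
    """Map an 8-bit normal-map texel from [0, 255] to a unit vector
    in [-1, 1]^3 (the common tangent-space RGB encoding)."""
    n = np.asarray(rgb, dtype=np.float64) / 255.0 * 2.0 - 1.0
    return n / np.linalg.norm(n)

# A texel of (128, 128, 255) encodes a normal pointing almost straight
# out of the surface, i.e. approximately (0, 0, 1).
n = decode_normal((128, 128, 255))
```

A mostly-blue normal map thus represents a mostly-flat surface, which is why the bump detail reads as slight color variation around blue.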
The Brush Map (Brush Map) may include Brush drawing characteristics information such as material, drawing shape, drawing continuity, and filling degree of the pigment of the Brush. In the effect diagram of a brush map shown in fig. 2, the drawing shape of the brush included in the brush map can be visually seen.
Here, the drawn color map, normal map, and brush map may be added to an operation interface of the game engine, and the game engine may perform processing according to information included in the color map, normal map, and brush map to obtain rendering information for rendering the target three-dimensional model.
The following describes in detail the steps of processing the above-mentioned texture maps to obtain rendering information for rendering the target three-dimensional model.
S102: and based on preset illumination direction information, carrying out illumination processing on the normal map to obtain the normal map after illumination processing.
In the embodiment of the present disclosure, the preset illumination direction may be parallel natural light cast from a preset direction. Here, when the dot product of the normal direction at a point in the normal map and the preset illumination direction is greater than 0 and less than 1, the point receives light; when the dot product equals 1, the point directly faces the light source and is brightest; when the dot product is greater than -1 and less than 0, the point is backlit; when the dot product equals -1, the point faces directly away from the light source and is darkest. The normal map after illumination processing may include not only the normal direction of each point but also the illumination information received by each point. After the target three-dimensional model is rendered using the illuminated normal map, each point on the model can present a lit or backlit visual effect.
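The dot-product rule described above can be sketched in a few lines; this is a minimal illustration under the assumption of unit vectors, with names chosen for clarity rather than taken from the patent:

```python
import numpy as np

def light_normal_map(normals, light_dir):
    """Per-point dot product between unit normals and a unit light
    direction: 1 means the point directly faces the light (brightest),
    -1 means it faces directly away (darkest)."""
    l = np.asarray(light_dir, dtype=np.float64)
    l = l / np.linalg.norm(l)
    return np.einsum('...k,k->...', np.asarray(normals, dtype=np.float64), l)

# Two sample normals: one facing the light, one facing away from it.
shading = light_normal_map([[0.0, 0.0, 1.0], [0.0, 0.0, -1.0]],
                           (0.0, 0.0, 1.0))
```

The resulting per-point value in [-1, 1] is the "illumination information" that the illuminated normal map carries alongside the normal directions.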
S103: and carrying out first sampling processing on the normal map subjected to the illumination processing, the color map and the brush map to obtain a first sampling map layer.
Here, the process of performing the first sampling processing on the normal map, the color map, and the brush map after the illumination processing may be: and fusing the normal direction and the illumination information of each point contained in the normal map subjected to illumination processing, the color information of each point contained in the color map and the brush drawing characteristic information contained in the brush map. The obtained first sampling layer may include the first rendering information after the information fusion. After the target three-dimensional model is rendered by utilizing the first sampling layer, the rendered target three-dimensional model can present a color rendering effect drawn by a brush in a watercolor style.
The surface of the target three-dimensional model is composed of a plurality of patches. In an embodiment, when the normal map, the color map, and the brush map after the illumination processing are subjected to the first sampling processing, the color map of each patch of the target three-dimensional model may be respectively fused with the normal map and the brush map after the illumination processing, so as to obtain sub-sampling map layers corresponding to each patch. And then integrating the sub-sampling layers respectively corresponding to the patches to obtain a first sampling layer.
The sub-color map corresponding to each patch contains color information of a corresponding position in the virtual object corresponding to the patch. The sub-sampling layer corresponding to each patch may include first rendering information obtained by performing first sampling processing on the normal map, the color map and the brush map of the patch after illumination processing and fusing the normal map, the color map and the brush map. And integrating the sub-sampling layers corresponding to the patches according to the positions of the patches in the target three-dimensional model respectively to obtain a first sampling layer.
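The patent does not fix the exact fusion formula for the first sampling processing; one plausible reading, sketched below with illustrative names, is a multiplicative blend of albedo, remapped lighting, and the brush mask, followed by assembling the per-patch results:

```python
import numpy as np

def first_sampling(color, lit_normal, brush):
    """One plausible per-patch fusion (an assumption, not the patent's
    formula): remap the lighting term from [-1, 1] to [0, 1] and use it,
    together with the brush-stroke mask, to modulate the albedo."""
    shade = (np.asarray(lit_normal, float) + 1.0) / 2.0
    return (np.asarray(color, float)
            * shade[..., None]
            * np.asarray(brush, float)[..., None])

def assemble(sub_layers):
    """Integrate the per-patch sub-sampling layers into one layer."""
    return np.concatenate(sub_layers, axis=0)

# A fully lit white patch drawn with a half-strength brush mask.
patch = first_sampling(np.ones((2, 2, 3)), np.ones((2, 2)),
                       np.full((2, 2), 0.5))
layer = assemble([patch, patch])
```

In a real pipeline the assembly step would place each sub-layer at its patch's position in UV space rather than simply concatenating.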
S104: and responding to the input contour line information of the target three-dimensional model, and performing contour tracing processing on the first sampling layer based on the contour line information to obtain a second sampling layer.
Here, it may be in response to contour line information input on the operation interface of the game engine. The contour information may include an outer contour and an inner contour of the three-dimensional model of the object. Wherein the outer contour line may be a boundary between the target three-dimensional model and the background, or between the target three-dimensional model and the other three-dimensional model. The inner contour line may be a boundary line between different portions in the target three-dimensional model, such as a boundary line between an upper garment and a lower garment of the three-dimensional model of a character.
The second sampling layer obtained by performing contour tracing on the first sampling layer may include the first rendering information in the first sampling layer and the second rendering information obtained after the contour line information is fused. After the target three-dimensional model is rendered by utilizing the second sampling layer, the target three-dimensional model can present a rendering effect with clear outline in the watercolor style.
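The patent only states that contour line information is supplied as input; it does not name an algorithm. A common silhouette heuristic, flagging points whose normal is nearly perpendicular to the view direction, can stand in for it in a sketch (all names and the threshold are illustrative):

```python
import numpy as np

def silhouette_mask(normals, view_dir, threshold=0.2):
    """Flag points whose normal is nearly perpendicular to the view
    direction as outline pixels -- a common heuristic, not necessarily
    the contour line information the patent describes."""
    v = np.asarray(view_dir, dtype=np.float64)
    v = v / np.linalg.norm(v)
    facing = np.abs(np.einsum('...k,k->...', np.asarray(normals, float), v))
    return facing < threshold

def apply_outline(layer, mask, outline_color=(0.1, 0.1, 0.1)):
    """Paint outline pixels over the first sampling layer."""
    out = np.array(layer, dtype=np.float64)
    out[mask] = outline_color
    return out

# A front-facing point is kept; a grazing (silhouette) point is outlined.
mask = silhouette_mask([[0.0, 0.0, 1.0], [1.0, 0.0, 0.0]], (0.0, 0.0, 1.0))
outlined = apply_outline(np.ones((2, 3)), mask)
```

Inner contour lines (e.g. garment boundaries) would come from the supplied contour information rather than from this view-dependent test.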
S105: and rendering the target three-dimensional model based on the second sampling layer to obtain a rendered target three-dimensional model.
When the target three-dimensional model is rendered, the second rendering information contained in the second sampling layer can be used for directly rendering, and the obtained target three-dimensional model is fused with color information corresponding to each point of the target three-dimensional model stored in the color map, the normal direction of each point stored in the normal map, the preset illumination direction information and brush drawing characteristic information stored in the brush map, so that the rendered target three-dimensional model can present a watercolor style rendering effect.
In order to make the watercolor style rendering effect of the target three-dimensional model more vivid, in an embodiment, after the first sampling layer is obtained and before the first sampling layer is subjected to contour tracing, the first sampling layer may be subjected to brush trace processing based on brush trace information in response to input brush trace information, so as to obtain the first sampling layer after the brush trace processing. And then responding to the input contour line information of the target three-dimensional model, and performing contour tracing processing on the first sampling layer subjected to the brush trace processing based on the contour line information to obtain a second sampling layer.
Here, the trace information may be drawn in response to a brush input on an operation interface of the game engine. The trace information drawn by the brush can increase the trace direction of the brush, the texture effect of the trace of the brush, the highlight effect and the like, so that the brush drawing effect of the rendered target three-dimensional model is more real. Therefore, in still another mode, the brush drawing trace information may include at least one of brush texture intensity information, brush lateral distortion information, brush longitudinal distortion information, highlight range information.
In a specific implementation, the texture intensity of the brush traces in the target three-dimensional model can be adjusted based on the brush texture intensity information. Brush trace processing is performed on the first sampling layer using the brush texture intensity information, and the resulting brush-trace-processed first sampling layer fuses that information. After the target three-dimensional model is rendered with the brush-trace-processed first sampling layer, the texture of the brush traces in the model is stronger, and the model presents a watercolor drawing effect with more pronounced brush traces.
In implementations, the lateral twist direction and/or the longitudinal twist direction of the brush traces in the target three-dimensional model can also be adjusted based on the brush lateral distortion information and/or the brush longitudinal distortion information. Brush trace processing is performed on the first sampling layer using this distortion information, and the resulting brush-trace-processed first sampling layer fuses it. After the target three-dimensional model is rendered with the brush-trace-processed first sampling layer, the twist direction of the brush traces in the model is more evident, so that the model presents a watercolor drawing effect with a more pronounced stroke direction.
In a specific implementation, the highlight range of the brush traces in the target three-dimensional model can also be adjusted based on the highlight range information. Brush trace processing is performed on the first sampling layer by using the highlight range information, and the highlight range is fused into the resulting brush-trace-processed first sampling layer. After the target three-dimensional model is rendered with this layer, the highlight range of the brush traces in the target three-dimensional model is more obvious, that is, the model presents a watercolor drawing effect in which the highlight range of the brush traces is more prominent.
In a specific implementation process, multiple types of the brush drawing trace information can be used together to perform brush trace processing on the first sampling layer, thereby enriching the watercolor drawing effect of the target three-dimensional model and enhancing the authenticity and vividness of that effect.
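The disclosure does not express the brush trace processing as code; as an illustrative sketch only, the four kinds of brush drawing trace information could be modeled as parameters that warp the sampling UV coordinates and weight the brush texture. All function and parameter names below (`brush_trace_process`, `lateral_twist`, the sine-based warp, the highlight threshold) are assumptions of this sketch, not the patent's actual operations:

```python
import numpy as np

def brush_trace_process(layer_uv, texture_intensity=1.0,
                        lateral_twist=0.0, longitudinal_twist=0.0,
                        highlight_range=0.5):
    """Apply hypothetical brush-trace parameters to a sampled layer.

    layer_uv: (H, W, 2) UV coordinates used to sample the brush map.
    Returns distorted UVs plus per-pixel intensity and highlight weights.
    """
    u = layer_uv[..., 0]
    v = layer_uv[..., 1]
    # Lateral/longitudinal distortion: offset each UV axis with a sine warp
    # whose amplitude is the distortion parameter (an assumed model).
    u_twisted = u + lateral_twist * np.sin(2 * np.pi * v)
    v_twisted = v + longitudinal_twist * np.sin(2 * np.pi * u)
    # Texture intensity scales how strongly the brush texture is blended in.
    intensity = np.full_like(u, texture_intensity)
    # Highlight range: a simple threshold on the distorted V coordinate
    # stands in for the real highlight computation.
    highlight = (v_twisted > 1.0 - highlight_range).astype(np.float32)
    return np.stack([u_twisted, v_twisted], axis=-1), intensity, highlight
```

The distorted UVs would then be used when sampling the brush map, so that stronger distortion parameters produce more visibly twisted brush traces.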
In order to enhance the vividness of the watercolor drawing effect of the target three-dimensional model, the rendering position, the number of renderings and the like of the brush can be randomized. Illustratively, in one embodiment, the texture map may further include a noise map reflecting brush noise. After the first sampling layer is obtained and before contour tracing is performed on it, brush noise processing may be performed on the first sampling layer based on the brush noise information in the noise map to obtain a noise-processed first sampling layer. Then, in response to the input contour line information of the target three-dimensional model, contour tracing processing is performed on the noise-processed first sampling layer based on the contour line information to obtain a second sampling layer.
Here, the noise map may include brush noise information corresponding to attributes such as the rendering position and the number of renderings. As shown in the effect diagram of a noise map in fig. 3, the noise map may include randomly distributed noise textures and noise positions.
After brush noise processing is performed on the first sampling layer by using the brush noise information in the noise map, the brush positions, the number of brush strokes and similar attributes on the target three-dimensional model are randomly distributed, thereby enhancing the authenticity and vividness of the watercolor drawing effect. For the process of performing contour tracing on the noise-processed first sampling layer by using the contour line information to obtain the second sampling layer, reference may be made to step S105, which is not described again here.
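As a hedged illustration of how a noise map can randomize brush positions, the sketch below shifts each row of the first sampling layer by a noise-derived offset. The roll-based jitter model and the `strength` parameter are assumptions of this sketch, not the patent's actual noise operation:

```python
import numpy as np

def apply_brush_noise(first_layer, noise_map, strength=0.05):
    """Jitter the sampled layer with a noise map so brush positions look random.

    first_layer: (H, W, 3) RGB layer from the first sampling pass.
    noise_map:   (H, W) values in [0, 1] read from the noise texture.
    """
    h, w, _ = first_layer.shape
    # Convert each row's mean noise to a pixel offset in
    # [-strength * w, strength * w]; 0.5 noise means no shift.
    offsets = ((noise_map.mean(axis=1) - 0.5) * 2 * strength * w).astype(int)
    out = np.empty_like(first_layer)
    for row, off in enumerate(offsets):
        out[row] = np.roll(first_layer[row], off, axis=0)
    return out
```

A real implementation would likely perturb the brush-map sampling coordinates per pixel inside the shader rather than shifting whole rows; the sketch only conveys the idea of noise-driven randomization.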
In order to enhance the stereoscopic impression of the watercolor rendering effect of the target three-dimensional model, in one embodiment, the texture map further comprises a color level map reflecting the color change characteristics of the light-receiving region in the target three-dimensional model. After the first sampling layer is obtained and before contour tracing is performed on it, color gradation processing is performed on the first sampling layer based on the color gradation information in the color level map to obtain a gradation-processed first sampling layer. The color gradation information comprises the color information corresponding to each color level, the proportion information of each color level in the light-receiving region of the target three-dimensional model, and the color fusion information of adjacent color levels. Then, in response to the input contour line information of the target three-dimensional model, contour tracing processing is performed on the gradation-processed first sampling layer based on the contour line information to obtain a second sampling layer.
Here, the color level map may be a map created by drawing software to reflect the color change characteristics of the light-receiving region in the target three-dimensional model. The color level map may comprise a plurality of color levels, arranged in order of varying color depth. As shown in the effect diagram of a color level map in fig. 4, the map may include 4 color levels arranged from left to right in order from dark to light. The proportion of each color level in the entire map may differ, and the color values of two adjacent color levels at their boundary may be the fusion of the two levels' color values.
Color gradation processing is performed on the first sampling layer by using the color gradation information in the color level map, and the resulting gradation-processed first sampling layer is fused with that color gradation information. After the target three-dimensional model is rendered by using the gradation-processed first sampling layer, the color of the light-receiving region on the target three-dimensional model presents a gradual change, which avoids an abrupt color jump from the shadow region to the light-receiving region and improves the stereoscopic visual effect and authenticity of the target three-dimensional model. For the process of performing contour tracing processing on the gradation-processed first sampling layer by using the contour line information to obtain the second sampling layer, reference may be made to step S105, which is not described again here.
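The color gradation processing described above is essentially a posterization of the lit intensity into discrete levels whose widths follow the proportion information, with fused colors at level boundaries. A minimal sketch under those assumptions (the level encoding and the `blend` band width are illustrative, not taken from the disclosure):

```python
import numpy as np

def tone_level_process(lit_intensity, level_colors, level_ratios, blend=0.02):
    """Quantize a lit layer into discrete color levels with soft boundaries.

    lit_intensity: (H, W) light intensity in [0, 1].
    level_colors:  (K, 3) one RGB color per level, dark to light.
    level_ratios:  (K,) fraction of the lit range each level occupies (sums to 1).
    blend:         width of the fused band between adjacent levels.
    """
    bounds = np.cumsum(level_ratios)[:-1]          # K-1 interior boundaries
    idx = np.searchsorted(bounds, lit_intensity)   # level index per pixel
    out = level_colors[idx].astype(np.float64)
    # Fuse the colors of adjacent levels near each boundary, mimicking the
    # "color fusion information of adjacent color levels".
    for k, b in enumerate(bounds):
        near = np.abs(lit_intensity - b) < blend
        fused = (level_colors[k] + level_colors[k + 1]) / 2.0
        out[near] = fused
    return out
```

With four levels arranged dark to light, this reproduces the stepped-but-smoothed shading that fig. 4 illustrates.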
In some cases, only a target region in the target three-dimensional model, such as the legs of a virtual character, needs to be color-level rendered. Therefore, a channel map reflecting the region to be color-level rendered in the target three-dimensional model can be obtained, and color-level rendering can be applied to the region indicated by the region information stored in the channel map.
In a specific embodiment, a local sampling layer corresponding to the region to be color-level rendered in the first sampling layer may be determined based on the channel map. Then, color gradation processing is performed on the local sampling layer based on the color gradation information in the color level map to obtain a gradation-processed local sampling layer. Then, in response to the input contour line information of the target three-dimensional model, contour tracing processing is performed on the gradation-processed local sampling layer based on the contour line information to obtain a second sampling layer.
Here, the local sampling layer may include the region information to be color-level rendered and the first rendering information corresponding to the region indicated by that region information.
By using the channel map, color-level rendering can be applied only to the region that requires it, so that the region presents a color gradient effect, enhancing the stereoscopic visual effect and authenticity of the target three-dimensional model. For the process of performing contour tracing on the gradation-processed local sampling layer by using the contour line information to obtain the second sampling layer, reference may be made to step S105, which is not described again here.
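Restricting processing to the region a channel map marks can be sketched as a simple mask, assuming the channel encodes the region as values above a threshold (the encoding and the `threshold` value are assumptions of this sketch):

```python
import numpy as np

def local_tone_render(first_layer, channel_map, tone_fn, threshold=0.5):
    """Restrict tone-level processing to the region marked by a channel map.

    first_layer: (H, W, 3) layer from the first sampling pass.
    channel_map: (H, W) single channel; values above `threshold` mark the
                 region to be color-level rendered.
    tone_fn:     function applied to the masked sub-layer, e.g. a quantizer.
    """
    mask = channel_map > threshold
    out = first_layer.copy()
    # Only the "local sampling layer" (masked pixels) is processed;
    # the rest of the first sampling layer is left untouched.
    out[mask] = tone_fn(first_layer[mask])
    return out
```

In a shader this would typically be a per-pixel branch or lerp on the channel value rather than an explicit mask, but the effect is the same: color-level rendering limited to, for example, a character's legs.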
In a specific implementation, the color level map used in rendering the target three-dimensional model is applied to the light-receiving region, so the color information in the color level map is related to the preset illumination direction of the target three-dimensional model, the normal directions stored in the normal map, and the brush map. Thus, in one embodiment, the color level map may be obtained as follows: first, second sampling processing is performed on the illumination-processed normal map and the brush map to obtain a third sampling layer; then, the (N+1)-th order color information is obtained based on the N-th order color information in the color level map and the third sampling layer, where N is a positive integer greater than or equal to 1, and the first-order color information is determined according to the information in the third sampling layer.
Here, the third sampling layer may include third rendering information obtained by fusing the normal directions and the preset illumination information contained in the illumination-processed normal map with the brush drawing characteristic information contained in the brush map.
The color level map comprises a plurality of color levels. The 1 st order color information is determined according to the information in the third sampling layer. After determining the 1 st order color information, the 2 nd order color information may be determined based on the 1 st order color information and information in the third sampling layer. After determining the 2 nd order color information, 3 rd order color information may be determined based on the 2 nd order color information and information in the third sampling layer. And repeating the steps until the color information of each color level in the color level mapping is determined.
In a specific implementation, the second sampling processing may be performed on the illumination-processed normal map, the brush map, the noise map, and the brush texture intensity information in the brush drawing trace information to obtain a third sampling layer. The values of the R channel and the G channel of the third sampling layer are then used as UV coordinates into the color level map to obtain the color information of each color level in the color level map.
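Using the R and G channels of a sampled layer as UV coordinates into the color level map can be sketched as a texture lookup. The nearest-neighbour filtering and the value ranges below are assumptions of this sketch (a real engine would normally use its built-in sampler):

```python
import numpy as np

def lookup_tone_map(third_layer_rgb, tone_map):
    """Sample a color level map using a layer's R and G channels as UVs.

    third_layer_rgb: (H, W, 3) with R and G values in [0, 1].
    tone_map:        (Th, Tw, 3) color level texture, dark to light.
    """
    th, tw, _ = tone_map.shape
    # R channel -> horizontal (U) texel index, G channel -> vertical (V).
    u = np.clip((third_layer_rgb[..., 0] * (tw - 1)).round().astype(int), 0, tw - 1)
    v = np.clip((third_layer_rgb[..., 1] * (th - 1)).round().astype(int), 0, th - 1)
    return tone_map[v, u]
```

Because the third sampling layer already fuses illumination, brush texture, and noise, brighter (higher R) pixels land on lighter columns of the color level map, which is how the per-level colors are read out.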
In the embodiment of the present disclosure, the obtained texture maps may be processed by algorithms for watercolor rendering to obtain the rendering information for rendering the target three-dimensional model. The algorithms used here mainly include an illumination coloring algorithm and a contour line algorithm.
In specific implementation, a normal map of the target three-dimensional model can be obtained, and then the normal map is subjected to illumination processing by using an illumination coloring algorithm based on a preset illumination direction to obtain the normal map subjected to illumination processing.
Then, the color map is introduced into the illumination coloring algorithm, and the color map, the preset illumination direction and the normal map are fused to obtain a first layer. The target three-dimensional model is rendered by using the first layer.
Then, the brush map, the noise map and the brush texture intensity information included in the brush drawing trace information are introduced into the illumination coloring algorithm and fused with the first layer to obtain a second layer. The information values under the R channel and the G channel of the second layer are then used as UV coordinate values to obtain the color level map.
Specifically, the first-order color information in the color level map is obtained from the rendering information contained in the second layer; the second-order color information is obtained by using the first-order color information, the rendering information contained in the second layer, and the brush lateral distortion information and brush longitudinal distortion information contained in the brush drawing trace information; and so on, until the color information of every color level in the color level map is obtained.
Then, the highlight range information included in the brush drawing trace information and the region information under the R channel of the channel map are introduced into the illumination coloring algorithm, and the preset illumination direction, the normal map, the highlight range information and the R-channel region information are fused to obtain a third layer.
In addition, the color level map and the color map can be combined in the illumination coloring algorithm and fused to obtain a fourth layer.
And then, carrying out fusion processing on the third image layer and the fourth image layer by using an illumination coloring algorithm to obtain a fifth image layer.
Finally, a contour line corresponding to the target three-dimensional model is obtained, and the contour line algorithm performs contour tracing on the fifth layer with it to obtain a sixth layer. The target three-dimensional model is then rendered with the sixth layer to obtain the rendered target three-dimensional model, for example the peach model in the effect diagram of fig. 5. The peach model may be a watercolor model in which the region enclosed by the contour line is rendered using the fused information of the color map, normal map, brush map, noise map, color level map and the like, where the leaves have a first color and the fruit has a second color, and the first and second colors may differ.
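The layer pipeline of this embodiment (first through sixth layers) can be summarized in a compressed sketch. The blend weights and the collapse of the third through fifth layers into one quantization step are illustrative simplifications, not the disclosure's exact operations:

```python
import numpy as np

def watercolor_pipeline(color_map, lit_normal, brush_map, noise_map,
                        tone_colors, outline_mask):
    """End-to-end sketch of the described layer pipeline.

    All maps are (H, W, 3) arrays in [0, 1]; tone_colors is (K, 3) dark to
    light; outline_mask is (H, W) with 1 on contour pixels.
    """
    # Layer 1: fuse the color map with the illumination-processed normal map.
    layer1 = color_map * lit_normal
    # Layer 2: blend in brush texture and noise (weights are assumptions).
    layer2 = 0.7 * layer1 + 0.2 * brush_map + 0.1 * noise_map
    # Layers 3-5 are collapsed here into a tone-level quantization of layer 2.
    idx = np.clip((layer2.mean(axis=-1) * len(tone_colors)).astype(int),
                  0, len(tone_colors) - 1)
    layer5 = tone_colors[idx]
    # Layer 6: contour tracing - outline pixels are drawn in black.
    out = layer5.copy()
    out[outline_mask.astype(bool)] = 0.0
    return out
```

The point of the sketch is the ordering: shading first, brush and noise fusion second, level quantization third, and contour tracing last, matching the first-through-sixth-layer sequence in the text.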
It will be understood by those skilled in the art that, in the method of the present disclosure, the order in which the steps are written does not imply a strict execution order or any limitation on the implementation; the specific execution order of the steps should be determined by their functions and possible internal logic.
Based on the same inventive concept, the embodiment of the present disclosure further provides a graphics processing apparatus corresponding to the graphics processing method, and since the principle of the apparatus in the embodiment of the present disclosure for solving the problem is similar to the graphics processing method described above in the embodiment of the present disclosure, the implementation of the apparatus may refer to the implementation of the method, and repeated details are not described again.
Referring to fig. 6, there is shown a schematic architecture diagram of a graphics processing apparatus according to an embodiment of the present disclosure, the apparatus includes: an obtaining module 601, a first processing module 602, a second processing module 603, a third processing module 604, and a fourth processing module 605; wherein:
an obtaining module 601, configured to obtain a texture map for rendering a target three-dimensional model; the texture map comprises a color map corresponding to the target three-dimensional model, a normal map reflecting the normal direction of each point in the target three-dimensional model, and a brush map reflecting the drawing characteristics of a brush;
a first processing module 602, configured to perform illumination processing on the normal map based on preset illumination direction information to obtain an illumination-processed normal map;
a second processing module 603, configured to perform a first sampling process on the normal map, the color map, and the brush map after the illumination process, so as to obtain a first sampling map layer;
a third processing module 604, configured to perform contour tracing on the first sampling layer based on the contour line information in response to the input contour line information of the target three-dimensional model, to obtain a second sampling layer;
a fourth processing module 605, configured to perform rendering processing on the target three-dimensional model based on the second sampling layer, so as to obtain a rendered target three-dimensional model.
In an alternative embodiment, the apparatus further comprises:
a fifth processing module, configured to perform, in response to input brush drawing trace information, brush trace processing on the first sampling layer based on the brush drawing trace information, to obtain a brush-trace-processed first sampling layer;
the third processing module 604 is specifically configured to perform contour tracing on the first sampling layer after the brush trace processing based on the contour line information in response to the input contour line information of the target three-dimensional model, so as to obtain a second sampling layer.
In an optional embodiment, the brush drawing trace information includes at least one of brush texture intensity information, brush lateral distortion information, brush longitudinal distortion information, and highlight range information.
In an alternative embodiment, the texture map further comprises a noise map reflecting brush noise;
the device further comprises:
a sixth processing module, configured to perform brush noise processing on the first sampling layer based on brush noise information in the noise map, to obtain a first sampling layer after the brush noise processing;
the third processing module 604 is specifically configured to perform contour tracing on the first sampling layer after the brush noise processing based on the contour line information in response to the input contour line information of the target three-dimensional model, so as to obtain a second sampling layer.
In an optional embodiment, the texture map further includes a color level map reflecting the color change characteristics of the light receiving area in the target three-dimensional model;
the device further comprises:
a seventh processing module, configured to perform, based on the color gradation information in the color level map, color gradation processing on the first sampling layer to obtain a gradation-processed first sampling layer; the color gradation information comprises the color information corresponding to each color level, the proportion information of each color level in the light-receiving region of the target three-dimensional model, and the color fusion information of adjacent color levels;
the third processing module 604 is specifically configured to perform contour tracing on the first sampling layer after the color gradation processing based on the contour line information in response to the input contour line information of the target three-dimensional model, so as to obtain a second sampling layer.
In an optional embodiment, the texture map further includes a channel map reflecting a region to be color-level rendered in the target three-dimensional model;
the seventh processing module is configured to determine, based on the channel map, a local sampling layer corresponding to the region to be color-level rendered in the first sampling layer;
performing color gradation processing on the local sampling layer based on the color gradation information in the color gradation mapping to obtain a color gradation processed local sampling layer;
the third processing module 604 is specifically configured to perform contour tracing on the local sampling layer after the color gradation processing based on the contour line information in response to the input contour line information of the target three-dimensional model, so as to obtain the second sampling layer.
In an alternative embodiment, the tone mapping is determined by:
performing second sampling processing on the normal map subjected to the illumination processing and the brush map to obtain a third sampling map layer;
obtaining the (N + 1) th order color information based on the nth order color information in the color level map and the third sampling layer; wherein N is a positive integer greater than or equal to 1; the 1 st order color information is determined according to the information in the third sampling layer.
In an optional implementation manner, the second processing module is configured to fuse the color map of each patch of the target three-dimensional model with the illumination-processed normal map and the brush map, to obtain sub-sampling layers respectively corresponding to the patches;
and integrating the sub-sampling layers respectively corresponding to the patches to obtain the first sampling layer.
The description of the processing flow of each module in the device and the interaction flow between the modules may refer to the related description in the above method embodiments, and will not be described in detail here.
Based on the same technical concept, the embodiment of the disclosure also provides a computer device. Referring to fig. 7, a schematic structural diagram of a computer device 700 provided in the embodiment of the present disclosure includes a processor 701, a memory 702, and a bus 703. The memory 702 is used for storing execution instructions and includes an internal memory 7021 and an external memory 7022. The internal memory 7021 temporarily stores operation data in the processor 701 and data exchanged with the external memory 7022, such as a hard disk; the processor 701 exchanges data with the external memory 7022 through the internal memory 7021. When the computer device 700 runs, the processor 701 communicates with the memory 702 through the bus 703, so that the processor 701 executes the following instructions:
acquiring a texture map for rendering a target three-dimensional model; the texture map comprises a color map corresponding to the target three-dimensional model, a normal map reflecting the normal direction of each point in the target three-dimensional model, and a brush map reflecting the drawing characteristics of a brush;
based on preset illumination direction information, performing illumination processing on the normal map to obtain an illuminated normal map;
performing first sampling processing on the normal map, the color map and the brush map after the illumination processing to obtain a first sampling map layer;
responding to the input contour line information of the target three-dimensional model, and performing contour tracing processing on the first sampling layer based on the contour line information to obtain a second sampling layer;
and rendering the target three-dimensional model based on the second sampling layer to obtain a rendered target three-dimensional model.
The embodiments of the present disclosure also provide a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the computer program performs the steps of the graphics processing method described in the above method embodiments. The storage medium may be a volatile or non-volatile computer-readable storage medium.
The embodiments of the present disclosure also provide a computer program product, where the computer program product carries a program code, and instructions included in the program code may be used to execute the steps of the graphics processing method described in the foregoing method embodiments, which may be referred to specifically in the foregoing method embodiments, and are not described herein again.
The computer program product may be implemented by hardware, software or a combination thereof. In an alternative embodiment, the computer program product is embodied in a computer storage medium, and in another alternative embodiment, the computer program product is embodied in a Software product, such as a Software Development Kit (SDK), or the like.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the system and the apparatus described above may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again. In the several embodiments provided in the present disclosure, it should be understood that the disclosed system, apparatus, and method may be implemented in other ways. The above-described embodiments of the apparatus are merely illustrative, and for example, the division of the units is only one logical division, and there may be other divisions when actually implemented, and for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection of devices or units through some communication interfaces, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present disclosure may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a non-volatile computer-readable storage medium executable by a processor. Based on such understanding, the technical solution of the present disclosure may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present disclosure. And the aforementioned storage medium includes: various media capable of storing program codes, such as a usb disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
Finally, it should be noted that: the above-mentioned embodiments are merely specific embodiments of the present disclosure, which are used for illustrating the technical solutions of the present disclosure and not for limiting the same, and the scope of the present disclosure is not limited thereto, and although the present disclosure is described in detail with reference to the foregoing embodiments, those skilled in the art should understand that: any person skilled in the art can modify or easily conceive of the technical solutions described in the foregoing embodiments or equivalent technical features thereof within the technical scope of the present disclosure; such modifications, changes or substitutions do not depart from the spirit and scope of the embodiments of the present disclosure, and should be construed as being included therein. Therefore, the protection scope of the present disclosure shall be subject to the protection scope of the claims.

Claims (11)

1. A method of graphics processing, comprising:
acquiring a texture map for rendering a target three-dimensional model; the texture map comprises a color map corresponding to the target three-dimensional model, a normal map reflecting the normal direction of each point in the target three-dimensional model, and a brush map reflecting the drawing characteristics of a brush;
based on preset illumination direction information, performing illumination processing on the normal map to obtain an illuminated normal map;
performing first sampling processing on the normal map, the color map and the brush map after the illumination processing to obtain a first sampling map layer;
responding to the input contour line information of the target three-dimensional model, and performing contour tracing processing on the first sampling layer based on the contour line information to obtain a second sampling layer;
and rendering the target three-dimensional model based on the second sampling layer to obtain a rendered target three-dimensional model.
2. The method according to claim 1, wherein after obtaining the first sampling layer and before performing contour tracing processing on the first sampling layer, the method further comprises:
in response to input brush drawing trace information, performing brush trace processing on the first sampling layer based on the brush drawing trace information to obtain a brush-trace-processed first sampling layer;
the responding to the input contour line information of the target three-dimensional model, performing contour tracing processing on the first sampling image layer based on the contour line information to obtain a second sampling image layer comprises:
and responding to the input contour line information of the target three-dimensional model, and performing contour tracing processing on the first sampling layer subjected to the brush trace processing based on the contour line information to obtain a second sampling layer.
3. The method of claim 2, wherein the brush drawing trace information comprises at least one of brush texture intensity information, brush lateral distortion information, brush longitudinal distortion information, highlight range information.
4. The method of claim 1, wherein the texture map further comprises a noise map reflecting brush noise;
after obtaining the first sampling layer and before performing contour tracing processing on the first sampling layer, the method further includes:
performing brush noise processing on the first sampling layer based on brush noise information in the noise map to obtain a first sampling layer after the brush noise processing;
the responding to the input contour line information of the target three-dimensional model, performing contour tracing processing on the first sampling image layer based on the contour line information to obtain a second sampling image layer comprises:
and responding to the input contour line information of the target three-dimensional model, and performing contour tracing processing on the first sampling layer subjected to the brush noise wave processing based on the contour line information to obtain a second sampling layer.
5. The method according to claim 1, wherein the texture map further comprises a color level map reflecting the color change characteristics of the light receiving area in the target three-dimensional model;
after obtaining the first sampling layer and before performing contour tracing processing on the first sampling layer, the method further includes:
performing color gradation processing on the first sampling layer based on the color gradation information in the color gradation mapping to obtain a first sampling layer after color gradation processing; the color level information comprises color information corresponding to each color level, the proportion information of each color level in a light receiving area of the target three-dimensional model and color fusion information of adjacent color levels in each color level;
the responding to the input contour line information of the target three-dimensional model, performing contour tracing processing on the first sampling image layer based on the contour line information to obtain a second sampling image layer comprises:
and responding to the input contour line information of the target three-dimensional model, and performing contour tracing processing on the first sampling layer subjected to the color gradation processing based on the contour line information to obtain a second sampling layer.
6. The method of claim 5, wherein the texture map further comprises a channel map reflecting an area of the target three-dimensional model to be subjected to color level rendering;
the performing color level processing on the first sampling layer based on the color level information in the color level map to obtain a color-level-processed first sampling layer comprises:
determining, based on the channel map, a local sampling layer corresponding to the area to be subjected to the color level rendering in the first sampling layer;
performing color level processing on the local sampling layer based on the color level information in the color level map to obtain a color-level-processed local sampling layer;
the performing, in response to input contour line information of the target three-dimensional model, contour tracing processing on the first sampling layer based on the contour line information to obtain a second sampling layer comprises:
performing, in response to the input contour line information of the target three-dimensional model, contour tracing processing on the color-level-processed local sampling layer based on the contour line information to obtain the second sampling layer.
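Restricting the color level processing to the channel-marked region, as claim 6 recites, is a straightforward mask-and-select. In this sketch the 0.5 threshold on the channel map is an assumption; the claim only says the channel map marks the area to be rendered:

```python
import numpy as np

def masked_levels(layer, processed, channel_map, threshold=0.5):
    """Keep the color-level result only inside the channel-marked region.

    layer       : HxWx3 first sampling layer (unprocessed).
    processed   : HxWx3 color-level-processed version of the same layer.
    channel_map : HxW array; values above `threshold` mark the local
                  region to be color-level rendered (assumed convention).
    """
    mask = (channel_map > threshold)[..., None]   # broadcast over RGB channels
    return np.where(mask, processed, layer)       # processed inside, original outside
```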
7. The method of claim 5, wherein the color level map is determined by:
performing second sampling processing on the normal map subjected to the illumination processing and the brush map to obtain a third sampling layer;
obtaining (N+1)th-order color information based on Nth-order color information in the color level map and the third sampling layer, wherein N is a positive integer greater than or equal to 1, and the 1st-order color information is determined according to information in the third sampling layer.
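The iterative construction in claim 7 (seed the 1st-order color from the third sampling layer, then derive each (N+1)th order from the Nth) can be sketched as below. The claim does not disclose the derivation operator, so the mean-color seed and the fixed brightening `step` are purely illustrative assumptions:

```python
import numpy as np

def build_level_colors(third_layer, n_levels, step=0.15):
    """Derive an ordered list of level colors iteratively.

    third_layer : HxWx3 third sampling layer; its mean color seeds the
                  1st-order color (seeding rule is an assumption).
    n_levels    : total number of color levels to produce.
    step        : per-order brightening applied to get the next order
                  (assumed operator, not taken from the claim).
    """
    seed = np.asarray(third_layer, dtype=float).reshape(-1, 3).mean(axis=0)
    colors = [seed]
    for _ in range(n_levels - 1):
        # each (N+1)th-order color is derived from the Nth-order color
        colors.append(np.clip(colors[-1] + step, 0.0, 1.0))
    return colors
```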
8. The method according to claim 1, wherein the performing first sampling processing on the normal map after the illumination processing, the color map, and the brush map to obtain a first sampling layer comprises:
performing fusion processing on the color map, the normal map after the illumination processing, and the brush map of each patch of the target three-dimensional model to obtain sub-sampling layers respectively corresponding to the patches;
integrating the sub-sampling layers respectively corresponding to the patches to obtain the first sampling layer.
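Claim 8's two steps — fuse each patch's three maps into a sub-layer, then integrate the sub-layers — can be sketched as follows. The multiplicative fusion and the side-by-side integration are assumed choices; the claim fixes neither the fusion operator nor the integration scheme:

```python
import numpy as np

def fuse_patch(color, lit_normal, brush):
    """Fuse one patch's color map, lit normal map, and brush map into a
    sub-sampling layer (multiplicative blend is an assumed operator)."""
    # color: HxWx3, lit_normal/brush: HxW scalar maps broadcast over RGB
    return color * lit_normal[..., None] * brush[..., None]

def first_sampling(patches):
    """Integrate per-patch sub-layers into the first sampling layer.

    `patches` is a list of (color, lit_normal, brush) tuples; stitching the
    sub-layers side by side stands in for atlas/UV integration.
    """
    return np.concatenate([fuse_patch(*p) for p in patches], axis=1)
```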
9. A graphics processing apparatus, comprising:
an acquisition module configured to acquire a texture map for rendering a target three-dimensional model, wherein the texture map comprises a color map corresponding to the target three-dimensional model, a normal map reflecting a normal direction of each point in the target three-dimensional model, and a brush map reflecting drawing characteristics of a brush;
a first processing module configured to perform illumination processing on the normal map based on preset illumination direction information to obtain a normal map after the illumination processing;
a second processing module configured to perform first sampling processing on the normal map after the illumination processing, the color map, and the brush map to obtain a first sampling layer;
a third processing module configured to perform, in response to input contour line information of the target three-dimensional model, contour tracing processing on the first sampling layer based on the contour line information to obtain a second sampling layer;
a fourth processing module configured to render the target three-dimensional model based on the second sampling layer to obtain a rendered target three-dimensional model.
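The first processing module's illumination step — lighting the normal map from a preset direction — is conventionally a Lambert dot product. The claim names no specific lighting model, so the sketch below assumes that standard choice:

```python
import numpy as np

def light_normals(normal_map, light_dir):
    """Light a decoded normal map with a fixed light direction.

    normal_map : HxWx3 array of world-space normals (already decoded
                 from [0, 255] texture values to vectors).
    light_dir  : the preset illumination direction, any 3-vector.
    Returns an HxW intensity map via the Lambert dot product (assumed model).
    """
    n = normal_map / np.linalg.norm(normal_map, axis=-1, keepdims=True)
    l = np.asarray(light_dir, dtype=float)
    l = l / np.linalg.norm(l)
    return np.clip(n @ l, 0.0, 1.0)   # clamp back-facing normals to zero
```

The resulting intensity map is what the color level processing of claim 5 would then quantize into bands.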
10. A computer device, comprising: a processor, a memory, and a bus, wherein the memory stores machine-readable instructions executable by the processor, the processor communicates with the memory over the bus when the computer device is running, and the machine-readable instructions, when executed by the processor, perform the steps of the graphics processing method according to any one of claims 1 to 8.
11. A computer-readable storage medium having stored thereon a computer program which, when executed by a processor, carries out the steps of the graphics processing method according to any one of claims 1 to 8.
CN202111471456.2A 2021-12-05 2021-12-05 Graphic processing method, device, computer equipment and storage medium Active CN114119847B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202111471456.2A CN114119847B (en) 2021-12-05 2021-12-05 Graphic processing method, device, computer equipment and storage medium
PCT/CN2022/127456 WO2023098344A1 (en) 2021-12-05 2022-10-25 Graphic processing method and apparatus, computer device, and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111471456.2A CN114119847B (en) 2021-12-05 2021-12-05 Graphic processing method, device, computer equipment and storage medium

Publications (2)

Publication Number Publication Date
CN114119847A true CN114119847A (en) 2022-03-01
CN114119847B CN114119847B (en) 2023-11-07

Family

ID=80366484

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111471456.2A Active CN114119847B (en) 2021-12-05 2021-12-05 Graphic processing method, device, computer equipment and storage medium

Country Status (2)

Country Link
CN (1) CN114119847B (en)
WO (1) WO2023098344A1 (en)


Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109685869A (en) * 2018-12-25 2019-04-26 网易(杭州)网络有限公司 Dummy model rendering method and device, storage medium, electronic equipment
CN109993822A (en) * 2019-04-10 2019-07-09 阿里巴巴集团控股有限公司 A kind of wash painting style method and apparatus
CN111402381A (en) * 2020-03-17 2020-07-10 网易(杭州)网络有限公司 Model rendering method and device and readable storage medium
CN112116692A (en) * 2020-08-28 2020-12-22 北京完美赤金科技有限公司 Model rendering method, device and equipment
CN112967363A (en) * 2021-02-24 2021-06-15 北京盛世顺景文化传媒有限公司 8K three-dimensional ink-wash animation production method
CN113012185A (en) * 2021-03-26 2021-06-22 影石创新科技股份有限公司 Image processing method, image processing device, computer equipment and storage medium
CN113064540A (en) * 2021-03-23 2021-07-02 网易(杭州)网络有限公司 Game-based drawing method, game-based drawing device, electronic device, and storage medium
CN113240783A (en) * 2021-05-27 2021-08-10 网易(杭州)网络有限公司 Stylized rendering method and device, readable storage medium and electronic equipment

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104966312B (en) * 2014-06-10 2017-07-21 腾讯科技(深圳)有限公司 A kind of rendering intent, device and the terminal device of 3D models
CN111127596B (en) * 2019-11-29 2023-02-14 长安大学 Incremental Voronoi sequence-based layered oil painting brush drawing method
CN112051959B (en) * 2020-09-02 2022-05-27 北京字节跳动网络技术有限公司 Method, device and equipment for generating image drawing process and storage medium
CN112070854B (en) * 2020-09-02 2023-08-08 北京字节跳动网络技术有限公司 Image generation method, device, equipment and storage medium
CN114119847B (en) * 2021-12-05 2023-11-07 北京字跳网络技术有限公司 Graphic processing method, device, computer equipment and storage medium


Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023098344A1 (en) * 2021-12-05 2023-06-08 北京字跳网络技术有限公司 Graphic processing method and apparatus, computer device, and storage medium
CN114596400A (en) * 2022-05-09 2022-06-07 山东捷瑞数字科技股份有限公司 Method for batch generation of normal map based on three-dimensional engine
CN114596400B (en) * 2022-05-09 2022-08-02 山东捷瑞数字科技股份有限公司 Method for batch generation of normal map based on three-dimensional engine

Also Published As

Publication number Publication date
WO2023098344A1 (en) 2023-06-08
CN114119847B (en) 2023-11-07

Similar Documents

Publication Publication Date Title
CN112215934B (en) Game model rendering method and device, storage medium and electronic device
US11257286B2 (en) Method for rendering of simulating illumination and terminal
CN108564646B (en) Object rendering method and device, storage medium and electronic device
CN111009026B (en) Object rendering method and device, storage medium and electronic device
WO2023098344A1 (en) Graphic processing method and apparatus, computer device, and storage medium
CN104392479B (en) Method of carrying out illumination coloring on pixel by using light index number
CN106898040B (en) Virtual resource object rendering method and device
CN109741438B (en) Three-dimensional face modeling method, device, equipment and medium
CN110115841B (en) Rendering method and device for vegetation object in game scene
Argudo et al. Single-picture reconstruction and rendering of trees for plausible vegetation synthesis
WO2023098358A1 (en) Model rendering method and apparatus, computer device, and storage medium
CN111402373A (en) Image processing method and device, electronic equipment and storage medium
Lopez-Moreno et al. Non-photorealistic, depth-based image editing
CN110610504A (en) Pencil drawing generation method and device based on skeleton and tone
CN115063330A (en) Hair rendering method and device, electronic equipment and storage medium
CN113936080A (en) Rendering method and device of virtual model, storage medium and electronic equipment
Seo et al. Interactive painterly rendering with artistic error correction
CN113838155A (en) Method and device for generating material map and electronic equipment
Cheok et al. Humanistic Oriental art created using automated computer processing and non-photorealistic rendering
CN112669437B (en) Role model coloring method, coloring device, equipment and storage medium
CN117931979B (en) Building display method and related device in electronic map
Li Rendering technology of 3D digital Chinese ink-wash landscape paintings based on maya
CN115131493A (en) Dynamic light special effect display method and device, computer equipment and storage medium
CN106056550A (en) Rendering method and device based on high dynamic range image
CN114998505A (en) Model rendering method and device, computer equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant