CN115814415A - Model rendering method and device for dynamic effect, electronic equipment and storage medium


Info

Publication number
CN115814415A
Authority
CN
China
Prior art keywords
map
coordinate information
texture coordinate
dynamic
offset
Legal status
Pending
Application number
CN202211648469.7A
Other languages
Chinese (zh)
Inventor
曹保勇
莫芷馨
Current Assignee
Netease Hangzhou Network Co Ltd
Original Assignee
Netease Hangzhou Network Co Ltd
Application filed by Netease Hangzhou Network Co Ltd filed Critical Netease Hangzhou Network Co Ltd
Priority to CN202211648469.7A
Publication of CN115814415A

Landscapes

  • Processing Or Creating Images (AREA)

Abstract

The application provides a method and a device for rendering a model with a dynamic effect, an electronic device, and a storage medium. The method comprises the following steps: in response to a start operation on a target rendering option, acquiring an element map of a target model to be rendered, wherein the target rendering option is any one of a plurality of different rendering options; processing the element map with a rendering algorithm corresponding to the target rendering option to obtain a corresponding dynamic map; and rendering the target model based on the dynamic map to obtain a dynamic effect of the target model. With only one element map, a variety of different dynamic effects can be obtained by selecting different rendering options, which improves the generation efficiency of dynamic effects while saving map resources and storage space.

Description

Model rendering method and device for dynamic effect, electronic equipment and storage medium
Technical Field
The present application relates to the field of computer graphics technologies, and in particular, to a method and an apparatus for rendering a model with a dynamic effect, an electronic device, and a storage medium.
Background
In a game scene or an animated video scene, it is often necessary to present a model with a dynamic effect. Taking a racing game as an example, the game may provide a player with several different series of virtual vehicles, and vehicles of the same series also need to show different dynamic effects depending on the player's game level; that is, vehicles of the same series corresponding to different player levels share the same dynamic elements, but the dynamic effects generated by those elements differ.
In the prior art, when a model with a dynamic effect is produced, a large number of maps must be drawn for each dynamic effect, so generation efficiency is low. Moreover, each map is generally applicable to only one dynamic effect, so map utilization is low and considerable resources are wasted.
It is noted that the information disclosed in the above background section is only for enhancement of understanding of the background of the present disclosure and therefore may include information that does not constitute prior art that is already known to a person of ordinary skill in the art.
Disclosure of Invention
In view of the above, the present application is proposed to provide a model rendering method and apparatus, an electronic device, and a storage medium for dynamic effects that overcome or at least partially solve the above problems, including:
a method for model rendering of dynamic effects by running an application to display a graphical user interface on a display screen of a terminal device, the graphical user interface comprising a plurality of different rendering options, the method comprising:
in response to a start operation on a target rendering option, acquiring an element map of a target model to be rendered, wherein the target rendering option is any one of the plurality of different rendering options, and the element map is used to determine dynamic elements of the target model;
processing the element map with a target rendering algorithm corresponding to the target rendering option to obtain a corresponding dynamic map, wherein the dynamic map contains the relationship of the position of the dynamic elements changing over time;
and rendering the target model based on the dynamic map to obtain the dynamic effect of the target model.
An apparatus for model rendering of dynamic effects by running an application to display a graphical user interface on a display screen of a terminal device, the graphical user interface including a plurality of different rendering options, the apparatus comprising:
a rendering mode determining module, configured to, in response to a start operation for a target rendering option, obtain an element map of a target model to be rendered, where the target rendering option is any one of the multiple different rendering options, and the element map is used to determine a dynamic element of the target model;
a dynamic map generation module, configured to process the element map with a target rendering algorithm corresponding to the target rendering option to obtain a corresponding dynamic map, wherein the dynamic map contains the relationship of the position of the dynamic elements changing over time;
and the dynamic effect rendering module is used for rendering the target model based on the dynamic map to obtain the dynamic effect of the target model.
An electronic device comprising a processor, a memory and a computer program stored on the memory and capable of running on the processor, the computer program, when executed by the processor, implementing a method of model rendering of dynamic effects as described above.
A computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements a method of model rendering of dynamic effects as described above.
The application has the following advantages:
in the embodiments of the application, a graphical user interface is displayed on the display screen of the terminal device by running an application program, and the graphical user interface includes a plurality of different rendering options, so that multiple rendering modes are integrated into the same graphical user interface and a user can conveniently produce models with different dynamic effects. In response to a start operation on a target rendering option, an element map of the target model to be rendered is acquired, wherein the target rendering option is any one of the plurality of different rendering options, and the element map is used to determine dynamic elements of the target model; the element map is processed with a target rendering algorithm corresponding to the target rendering option to obtain a corresponding dynamic map, wherein the dynamic map contains the relationship of the position of the dynamic elements changing over time; and the target model is rendered based on the dynamic map to obtain the dynamic effect of the target model. With only one element map, a variety of different dynamic effects are obtained by selecting different rendering options, which improves the generation efficiency of dynamic effects while saving map resources and storage space.
Drawings
In order to more clearly illustrate the technical solutions of the present application, the drawings required to be used in the description of the present application will be briefly described below, and it is apparent that the drawings in the following description are only some embodiments of the present application, and it is obvious for those skilled in the art that other drawings may be obtained according to these drawings without inventive labor.
Fig. 1 is a flowchart illustrating steps of a method for rendering a dynamic model according to an embodiment of the present disclosure;
FIG. 2 is a schematic diagram of a graphical user interface of an example of the present application;
FIG. 3 is a schematic diagram of an exemplary dynamic mask of the present application;
FIG. 4 is a first channel map illustration of an example of the present application;
FIG. 5 is a second channel map illustration of an example of the present application;
FIG. 6 is a schematic diagram of a dynamic map obtained by processing the images of FIGS. 4 and 5 when the target rendering option is the first rendering option in an example of the present application;
FIG. 7 is a first channel map illustration of an example of the present application;
FIG. 8 is a second channel map illustration of an example of the present application;
FIG. 9 is a schematic diagram of a dynamic map obtained by processing the images of FIGS. 7 and 8 when the target rendering option is a second rendering option in an example of the present application;
FIG. 10 is a fourth channel map illustration of an example of the present application;
FIG. 11 is a color channel map schematic of an example of the present application;
FIG. 12 is a schematic diagram of a dynamic map obtained by processing the images of FIGS. 10 and 11 when the target rendering option is a third rendering option in an example of the present application;
FIG. 13 is a schematic diagram of a dynamic map obtained when a target rendering option is a fourth rendering option according to an example of the present application;
FIG. 14 is a diagram illustrating a difference between adjacent pixels according to an embodiment of the present disclosure;
fig. 15 is a block diagram of a structure of a model rendering apparatus for dynamic effects according to an embodiment of the present application.
Detailed Description
In order to make the aforementioned objects, features and advantages of the present application more comprehensible, the present application is described in further detail with reference to the accompanying drawings and the detailed description. It is to be understood that the embodiments described are only a few embodiments of the present application and not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
In a game scene or an animated video scene, it is often necessary to present a model with a dynamic effect. For example, in some super-realistic racing games, rich, cool, personalized car paint is clearly an indispensable feature for racing games. The transition of vehicle levels and differentiation of identity are typically communicated to the player through different styling, material rendering, skill special effects, and the like.
In the prior art, when models with dynamic effects are produced, a large number of maps must be drawn for each dynamic effect, so generation efficiency is low; each map is generally applicable to only one dynamic effect, so map utilization is low and considerable resources are wasted. A racing game may have nearly one hundred stock cars, and on average each car needs two to three different static paint schemes and dynamic effects, which is clearly difficult to achieve with prior-art production schemes.
In view of this, the embodiments of the present application provide a model rendering method for dynamic effects, in which an element map is processed by a rendering algorithm to obtain a corresponding dynamic map, and the target model is then rendered based on the dynamic map to obtain the dynamic effect of the target model. A corresponding dynamic effect can be generated from only one element map, which improves the generation efficiency of dynamic effects. Furthermore, multiple rendering modes are integrated into the same graphical user interface, and models with different dynamic effects can be produced from a single element map, improving generation efficiency while greatly saving map resources and storage space.
The dynamic effect model rendering method provided by the embodiment of the application can be operated on local terminal equipment or a server. When the model rendering method for the dynamic effect is run on the server, the model rendering method for the dynamic effect can be implemented and executed based on a cloud interaction system, wherein the cloud interaction system comprises the server and the client device.
In an optional embodiment, various cloud applications can run under the cloud interaction system. In the running mode of a cloud application, the body that runs the application program is separated from the body that presents its pictures: the storage and execution of the model rendering method for the dynamic effect are completed on a cloud game server, while the client device is used for receiving and sending data and presenting pictures. For example, the client device can be a display device with a data transmission function close to the user side, such as a terminal device, a television, a computer, or a handheld computer; however, the device performing the model rendering method for the dynamic effect is the cloud server. When the cloud application runs, the user operates the client device to send an operation instruction to the cloud server; the cloud server runs the application according to the operation instruction, encodes and compresses data such as the corresponding pictures, and returns them to the client device through the network; finally, the client device decodes the data and outputs the corresponding pictures.
Referring to fig. 1, a flowchart illustrating the steps of a model rendering method for dynamic effects according to an embodiment of the present application is shown. In the embodiment of the present application, a graphical user interface is displayed on a display screen of a terminal device by running an application, where the graphical user interface includes a plurality of different rendering options. It can be understood that the application program may run on the terminal device and display the graphical user interface on the display screen of the terminal device, or may run on a server, in which case the server causes the graphical user interface to be displayed on the display screen of the terminal device by interacting with the terminal device. One rendering option corresponds to one set of rendering algorithms, and different rendering options correspond to different rendering algorithms. The method may comprise the following steps:
step 101, in response to a starting operation for a target rendering option, obtaining an element map of a target model to be rendered, where the target rendering option is any one of the multiple different rendering options, and the element map is used to determine a dynamic element of the target model.
Fig. 2 is a schematic diagram of a graphical user interface in an example of the present application. The graphical user interface includes a menu bar, a view area, and a parameter control area, where the menu bar shows entries to most functions, the view area displays the target model to be rendered and the effect of the target model after rendering, and the parameter control area displays parameter items for the user to select or modify. As shown in fig. 2, the parameter control area may display multiple rendering options, such as a first rendering option, a second rendering option, a third rendering option, and a fourth rendering option.
The user can select any one rendering option to process the element map by using a rendering algorithm corresponding to the rendering option, and render the target model to obtain a corresponding dynamic effect.
Upon detecting a start operation for a target rendering option, an element map of the target model to be rendered may be obtained. The start operation may be the user activating the function of the corresponding control by pressing one or more preset physical keys; for example, when a pointer points at the target rendering option, pressing one or more preset physical keys activates the function of the target rendering option. When the display screen of the terminal device is a touch screen, the user may also trigger the function of the corresponding control with operations such as clicking or long-pressing at the position of the control; for example, when the user clicks the target rendering option, the function of the target rendering option is activated.
Optionally, in the graphical user interface, the target rendering option corresponding to the start operation may be displayed differently from the other rendering options, or a corresponding start identifier may be added to it. As shown in fig. 2, the display effect of the box corresponding to the first rendering option differs from that of the boxes corresponding to the other rendering options, indicating that the currently selected target rendering option is the first rendering option.
The target model can be imported by a user, and can also be read from the model library by an application program according to a preset sequence. For example, the model library may include a plurality of models to be rendered, the plurality of models to be rendered may be sorted according to generation time or according to model names, a preset order of the plurality of models to be rendered is obtained, and the application program reads out the models that need to be processed currently from the model library in sequence according to the preset order as the target models.
The element map is used to determine the dynamic elements of the target model, where a dynamic element refers to the original information used to generate each pixel of the dynamic map; it can be understood that, by processing the dynamic elements in the element map, dynamic elements whose positions and pixel values change over time are presented in the dynamic map. Preferably, the element map is a four-way continuous (seamlessly tileable) map, so that no obvious seams appear where the texture repeats. The element map can be imported into the application program by the user, or the element map of the target model can be selected from an element map library associated with the application program. In an optional embodiment, the graphical user interface may further include an element map option associated with an element map library, and the user may open the element map library through the element map option and then select the element map of the target model from it. That is, the method may further include:
in response to a triggering operation aiming at the element map option, displaying an element map library interface in the graphical user interface, wherein the element map library interface comprises a plurality of selectable candidate element maps;
and responding to the selection operation aiming at any one candidate element map, and determining the candidate element map corresponding to the selection operation as the element map of the target model.
The candidate element maps in the element map library can be updated according to the element maps imported by the user.
It should be noted that, in the process of rendering the target model, the order in which the user performs the operation of starting the target rendering option and the operation of selecting or importing the element map of the target model is not limited.
And 102, processing the element map by adopting a target rendering algorithm corresponding to the target rendering option to obtain a corresponding dynamic map, wherein the dynamic map comprises the relation of the position of the dynamic element changing along with time.
Different rendering options correspond to different rendering algorithms; it can be understood that different rendering algorithms process the element map differently. In response to the start operation for the target rendering option, the rendering algorithm corresponding to the target rendering option is also determined, and the element map is then processed with that rendering algorithm to obtain the corresponding dynamic map.
In some optional embodiments of the present application, the element map includes a plurality of channel maps, and in a process of processing the element map by using a target rendering algorithm, at least two channel maps in the element map may be processed, where one channel map is used to store a dynamic element, and the other channel maps are used to affect a display effect of the dynamic element, including affecting a display brightness, a moving speed, a moving direction, a scaling direction, and the like of the dynamic element.
The dynamic map contains the relation of the position of the dynamic element changing along with the time, and the target model is rendered by using the dynamic map subsequently, so that the target model can present a corresponding dynamic effect.
And 103, rendering the target model based on the dynamic map to obtain a dynamic effect of the target model.
And after the dynamic map of the target model is obtained, rendering the target model based on the dynamic map, so that the surface of the rendered target model can present the effect of dynamic change of dynamic elements.
In the embodiment of the application, a user can select any one rendering option to process the element map by using a rendering algorithm corresponding to the rendering option, and perform rendering processing on the target model to obtain a corresponding dynamic effect. The corresponding dynamic effect can be obtained only by one element map, the manufacturing cost of the map is reduced, and the rendering efficiency of the dynamic effect is improved; and multiple rendering modes are integrated into the same graphical user interface, and models with different dynamic effects can be manufactured by only one element map, so that the generation efficiency of the dynamic effects is improved, and map resources and storage space are greatly reduced.
Further, consider that in some scenarios, the target model has a corresponding original texture map, which may be understood as the original surface texture when the target model does not need to render dynamic effects. In some optional embodiments of the present application, the rendering the target model based on the dynamic map to obtain a dynamic effect of the target model may further include:
acquiring an original texture map and a dynamic mask of the target model; each pixel in the dynamic mask is used for determining the fusion weight of each pixel of the original texture map and the pixel corresponding to the dynamic map;
fusing the original texture map and the dynamic map according to the dynamic mask to obtain a fused map;
and rendering the dynamic area based on the fusion map to obtain the dynamic effect of the target model.
In the embodiment, the fusion weight of the original texture map and the dynamic map is determined through the dynamic mask, so that the rendering of the dynamic effect on the partial region of the target model can be realized, and the weight of the dynamic effect can be controlled to meet the rendering requirements of different scenes.
Illustratively, a dynamic mask edit entry may also be included in the graphical user interface, and the user may open the dynamic mask edit interface by triggering the dynamic mask edit entry to produce or modify the dynamic mask.
Fig. 3 is a schematic diagram of a dynamic mask according to an example of the present application, in which the pixel value of each pixel lies between 0 and 1. In the first model region of the target model corresponding to the region of the dynamic mask with pixel value 0 (i.e. the black region of the dynamic mask), the influence of the dynamic map is 0: the fusion weight of the dynamic map in the first model region is 0 and the fusion weight of the original texture map is 1, so the first model region shows the original texture map. In the second model region corresponding to the region of the dynamic mask with pixel value 1 (i.e. the white region of the dynamic mask), the influence of the dynamic map is 1 and the fusion weight of the original texture map is 0, so the second model region shows the dynamic map. In the third model region corresponding to the region of the dynamic mask with pixel value greater than 0 and less than 1 (i.e. the gray region of the dynamic mask), the influence of the dynamic map equals the pixel value; for example, when the pixel value is 0.5, the fusion weight of the dynamic map in the third model region is 50% and the fusion weight of the original texture map is 50%, so the third model region is affected by both the original texture map and the dynamic map.
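A minimal HLSL-style sketch of this fusion is given below; the texture, sampler and function names (tBaseColorMap, tDynamicMap, tDynamicMask, sLinearSampler, BlendDynamicEffect) are illustrative assumptions rather than identifiers taken from this application. The dynamic mask value simply drives a per-pixel interpolation between the original texture map and the dynamic map.

    Texture2D    tBaseColorMap;   // original texture map of the target model
    Texture2D    tDynamicMap;     // dynamic map produced by the selected rendering algorithm
    Texture2D    tDynamicMask;    // dynamic mask, pixel values in [0, 1]
    SamplerState sLinearSampler;

    float4 BlendDynamicEffect(float2 uv)
    {
        float4 baseColor    = tBaseColorMap.Sample(sLinearSampler, uv);
        float4 dynamicColor = tDynamicMap.Sample(sLinearSampler, uv);
        float  maskWeight   = tDynamicMask.Sample(sLinearSampler, uv).r;

        // maskWeight = 0 keeps the original texture, 1 shows only the dynamic map,
        // and intermediate values mix the two in proportion to the mask pixel value
        return lerp(baseColor, dynamicColor, maskWeight);
    }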
Further, in order to improve the diversity of the dynamic element colors, in some optional embodiments of the present application, an element color option for a user to adjust the dynamic element colors may be further included in the graphical user interface, and the method may further include:
acquiring element colors of the target model; the element color is used for adjusting the color of the dynamic element;
and processing the dynamic map according to the element colors to update the dynamic map.
In this embodiment, a user may adjust the element color through an element color option provided by the graphical user interface, for example, after the user triggers the element color option, a color panel may be displayed on the graphical user interface, where the color panel includes multiple colors, and the user may select any one of the colors as the element color. After the dynamic map is generated, the dynamic map can be updated by acquiring the element color of the target model and processing the dynamic map according to the element color, and the updated dynamic map is influenced by the element color.
Processing the dynamic map according to the element color means that the weight of the dynamic map is determined from the pixel value of each pixel in the dynamic map, and the element color and the dynamic map are fused according to that weight, so that the updated dynamic map is obtained.
Taking a black-and-white dynamic map as an example, for a region with a pixel value of 0 in the dynamic map (i.e. a black region of the dynamic map), the weight of the dynamic map is 0, that is, the region is fully affected by the element color; assuming the element color is red, the region appears red. For a region with a pixel value of 1 in the dynamic map (i.e. a white region of the dynamic map), the weight of the dynamic map is 1, that is, the region is not affected by the element color and appears white regardless of which element color is chosen.
In this embodiment, the user may select an element color through the element color option, and realize a dynamic effect of a dynamic element of different colors by selecting different element colors.
It should be noted that, in some optional embodiments of the present application, in the process of processing the dynamic map according to the element color, corresponding color mixing algorithms may also be configured for different target rendering algorithms, and adjustable parameters such as brightness and mixing weight may be exposed to influence the result of color mixing. For example, an interpolation function lerp() may be employed to realize color mixing.
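A comparable sketch of the element-color mixing described above, again with assumed names (cElementColor, cBrightness, TintDynamicMap): the pixel value of the black-and-white dynamic map acts as the lerp() weight between the user-selected element color and white, and an optional brightness factor illustrates the kind of adjustable parameter mentioned above.

    float3 cElementColor;   // element color chosen through the element color option
    float  cBrightness;     // assumed adjustable brightness parameter for color mixing

    float3 TintDynamicMap(float dynamicValue)
    {
        // dynamicValue = 0 (black region) is fully tinted by the element color;
        // dynamicValue = 1 (white region) stays white regardless of the element color
        float3 tinted = lerp(cElementColor, float3(1.0, 1.0, 1.0), dynamicValue);
        return tinted * cBrightness;
    }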
In the present exemplary embodiment, a process of processing the element map by using a target rendering algorithm corresponding to the target rendering option to obtain a corresponding dynamic map will be further described below.
The element map includes a first channel map, a second channel map, a third channel map, and a fourth channel map, where the first channel map, the second channel map, and the third channel map respectively represent three color channel maps of the element map, for example, the first channel map may refer to a red channel map of the element map, i.e., an R channel map, the second channel map may refer to a green channel map of the element map, i.e., a G channel map, the third channel map may refer to a blue channel map of the element map, i.e., a B channel map, and the fourth channel map may refer to a transparency channel map of the element map, i.e., an a channel map. And the first channel map, the second channel map and the third channel map together form a color channel map of the element map.
It is understood that the first channel map, the second channel map, the third channel map, and the fourth channel map are used to store black-and-white information, i.e., grayscale information, and the color channel map is used to store color information.
In an optional embodiment of the present application, when the target rendering option is a first rendering option, the target rendering algorithm corresponding to the first rendering option may be a rendering algorithm for implementing highly controllable random texture variation.
In this embodiment, the rendering algorithm corresponding to the first rendering option may only use the first channel map and the second channel map of the element map, and the process of processing the element map by using the rendering algorithm corresponding to the target rendering option to obtain the corresponding dynamic map may include:
carrying out zooming processing on the texture coordinate information of the target model according to a first zooming parameter, and carrying out offset processing on the texture coordinate information according to a first offset to obtain first texture coordinate information; the first offset is obtained according to a first offset parameter and time information;
sampling the first channel map according to the first texture coordinate information to obtain a first sampling result;
zooming the texture coordinate information according to a second zooming parameter and the first sampling result, and offsetting the texture coordinate information according to a second offset to obtain second texture coordinate information; the second offset is obtained according to a second offset parameter and time information;
and sampling the second channel map according to the second texture coordinate information to obtain a dynamic map corresponding to the first rendering option.
The target model has texture coordinate information, also called UV information, which is index information connecting the map and the three-dimensional model and indicates the correspondence between a part of the map and a surface of the three-dimensional model. The texture coordinate information of the target model may be denoted as in.texcoord.xy. The first scaling parameter is a two-dimensional parameter, which may be denoted as detailuv.xy; scaling the texture coordinate information according to the first scaling parameter may be denoted as in.texcoord.xy × detailuv.xy, i.e. the scaling associated with the first scaling parameter is a multiplication by the first scaling parameter. The first offset parameter is a two-dimensional parameter, which may be denoted as detailuv.zw; the first offset is obtained from the first offset parameter and time information and may be denoted as detailuv.zw × TIME, where TIME denotes the time information. Offsetting the texture coordinate information according to the first offset may be denoted as in.texcoord.xy + detailuv.zw × TIME, i.e. the offset associated with the first offset is an addition of the first offset.
In an example, the above scaling the texture coordinate information of the target model according to the first scaling parameter, and shifting the texture coordinate information according to the first shift amount to obtain the first texture coordinate information may be represented as: flowTC1= in.texcoord.xy × detailuv.xy + detailuv.zw × TIME, where flowTC1 is first texture coordinate information, that is, the first texture coordinate information is equal to the texture coordinate information multiplied by a first scaling parameter and then added to a first offset, so that the first texture coordinate information may change with TIME; correspondingly, the first channel map is sampled according to the first texture coordinate information, and the obtained first sampling result also changes along with the change of time.
Taking the first channel map being the R channel map as an example, the first channel map is sampled according to the first texture coordinate information to obtain a first sampling result, which may be represented as: detailMask = tCarDetailMap.SampleBias(sCarDetailMapSampler, flowTC1, sampleMipBias).r, where detailMask represents the first sampling result, tCarDetailMap represents the element map, SampleBias() represents the sampling function, .r represents the R channel, and the expression as a whole represents sampling the first channel map of the element map according to the first texture coordinate information. The first sampling result is two-dimensional information, that is, the first sampling result can be regarded as the result of the scaled first channel map moving over time: the degree of scaling is determined by the first scaling parameter, and the moving speed and moving direction are determined by the first offset parameter.
Since the first channel map is used for storing black and white information, a first sampling result obtained by sampling the first channel map can be represented as a first black and white map.
In other examples, scaling the texture coordinate information of the target model according to the first scaling parameter and offsetting the texture coordinate information according to the first offset to obtain the first texture coordinate information may also be represented as flowTC1 = (in.texcoord.xy + detailuv.zw × TIME) × detailuv.xy, that is, the first texture coordinate information is equal to the texture coordinate information plus the first offset, multiplied by the first scaling parameter. The first texture coordinate information thus obtained also changes over time, and consequently the first sampling result obtained by sampling the first channel map according to the first texture coordinate information also changes over time. The first sampling result is likewise two-dimensional information, which can be represented by a corresponding two-dimensional picture.
The second scaling parameter is a two-dimensional parameter, which may be denoted as disturb.xy; scaling the texture coordinate information according to the second scaling parameter may be denoted as in.texcoord.xy × disturb.xy, i.e. the scaling associated with the second scaling parameter is a multiplication by the second scaling parameter. The second offset parameter is a two-dimensional parameter, which may be denoted as disturb.zw; the second offset is obtained from the second offset parameter and time information and may be denoted as disturb.zw × TIME, where TIME denotes the time information. Offsetting the texture coordinate information according to the second offset may be denoted as in.texcoord.xy + disturb.zw × TIME, i.e. the offset associated with the second offset is an addition of the second offset.
In an example, scaling the texture coordinate information according to the second scaling parameter and the first sampling result, and offsetting the texture coordinate information according to the second offset, to obtain the second texture coordinate information, may be represented as: flowTC2 = in.texcoord.xy × disturb.xy × detailMask + disturb.zw × TIME, where flowTC2 represents the second texture coordinate information; that is, the second texture coordinate information is equal to the texture coordinate information multiplied by the second scaling parameter and the first sampling result, and then added to the second offset, so that the second texture coordinate information changes over time. Correspondingly, the sampling result obtained by sampling the second channel map according to the second texture coordinate information also changes over time. It can be understood that the first sampling result obtained by sampling the first channel map affects the presentation of the pixels of the second channel map, that is, the information stored in the first channel map can affect the presentation of the dynamic elements stored in the second channel map; for example, the dynamic elements stored in the second channel map may show light-and-dark variation. The sampling result here is two-dimensional information, which can be represented by a corresponding two-dimensional picture, and this picture is the dynamic map corresponding to the first rendering option.
Taking the second channel map being the G channel map as an example, sampling the second channel map according to the second texture coordinate information to obtain the dynamic map may be represented as: detailMask2 = tCarDetailMap.SampleBias(sCarDetailMapSampler, flowTC2, sampleMipBias).g, where detailMask2 represents the result of sampling the second channel map with the target rendering algorithm corresponding to the first rendering option, i.e. the dynamic map, which is likewise a two-dimensional black-and-white picture.
Fig. 4 is a schematic diagram of a first channel map of an example of the present application, fig. 5 is a schematic diagram of a second channel map of the example, and fig. 6 is a schematic diagram of how the dynamic map obtained by the rendering algorithm corresponding to the first rendering option, based on the first channel map of fig. 4 and the second channel map of fig. 5, changes over time. It can be understood that the rendering algorithm corresponding to the first rendering option samples the element map twice: the first pass samples the first channel map of the element map, the second pass samples the second channel map, and when the two sampling results are superimposed and collide each frame, different texture states are produced in real time, that is, the dynamic map presents different texture states in real time, thereby realizing a dynamic effect.
In this embodiment, the first scaling parameter and the first offset parameter are used to control the sampling result of the first channel map, i.e. the first sampling result. The first sampling result, the second scaling parameter and the second offset parameter are used for controlling the sampling result of the second channel map so as to obtain the corresponding dynamic map.
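Putting the two sampling passes together, a minimal HLSL-style sketch of the first rendering option might look as follows. The texture, sampler and parameter names (tCarDetailMap, sCarDetailMapSampler, detailUV, disturb, sampleMipBias, TIME) are reconstructions of the garbled identifiers quoted above and should be read as assumptions rather than the exact names used in this application.

    Texture2D    tCarDetailMap;        // element map (R stores the first channel, G the second)
    SamplerState sCarDetailMapSampler;

    float4 detailUV;       // xy: first scaling parameter, zw: first offset parameter
    float4 disturb;        // xy: second scaling parameter, zw: second offset parameter
    float  sampleMipBias;
    float  TIME;           // time information, advanced every frame

    float SampleFirstOption(float2 texcoord)
    {
        // first pass: scale and offset the texture coordinates, then sample the R channel
        float2 flowTC1    = texcoord * detailUV.xy + detailUV.zw * TIME;
        float  detailMask = tCarDetailMap.SampleBias(sCarDetailMapSampler, flowTC1, sampleMipBias).r;

        // second pass: the first sampling result scales the coordinates used for the G channel
        float2 flowTC2     = texcoord * disturb.xy * detailMask + disturb.zw * TIME;
        float  detailMask2 = tCarDetailMap.SampleBias(sCarDetailMapSampler, flowTC2, sampleMipBias).g;

        return detailMask2;   // dynamic map value for the first rendering option
    }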
The first scaling parameter, the first offset parameter, the second scaling parameter and the second offset parameter can be set by system default or customized by the user, where customization includes modifying the parameters set by system default. Illustratively, the graphical user interface includes a plurality of parameter options, including at least a first scaling parameter option, a first offset parameter option, a second scaling parameter option and a second offset parameter option, and each parameter option has a corresponding parameter input box. When the dynamic effect generated with the system-default parameters does not meet the user's requirements, the user can adjust the corresponding parameter through its parameter input box, so that the application program processes the element map based on the new parameters adjusted by the user.
It is understood that the above method may further include:
in response to an adjustment operation for a parameter in at least one parameter input box, determining a new parameter corresponding to the adjustment operation;
and based on the new parameters, processing the element map by adopting a rendering algorithm corresponding to the target rendering option to obtain a corresponding dynamic map.
Since the first scaling parameter, the first offset parameter, the second scaling parameter and the second offset parameter are all two-dimensional parameters, and the first scaling parameter is denoted as detailuv.xy, it can be understood that the first scaling parameter is determined by detailuv.x and detailuv.y; therefore, when the first scaling parameter is set or adjusted, detailuv.x and/or detailuv.y can be set or adjusted. Similarly, when the first offset parameter, the second scaling parameter or the second offset parameter is set or adjusted, at least one dimension of the parameter is set or adjusted.
In another optional embodiment of the present application, when the target rendering option is a second rendering option, the target rendering algorithm corresponding to the second rendering option may be a rendering algorithm for implementing texture perturbation.
In this embodiment, the rendering algorithm corresponding to the second rendering option may only use the first channel map and the second channel map of the element map, and the process of processing the element map by using the rendering algorithm corresponding to the target rendering option to obtain the corresponding dynamic map may include:
carrying out zooming processing on the texture coordinate information of the target model according to the first zooming parameter, and carrying out offset processing on the texture coordinate according to the first offset to obtain first texture coordinate information; the first offset is obtained according to a first offset parameter and time information;
sampling the first channel map according to the first texture coordinate information to obtain a first sampling result;
zooming the texture coordinate information according to the second zooming parameter, and offsetting the texture coordinate information according to the second offset and the disturbance information to obtain third texture coordinate information; the second offset is obtained according to a second offset parameter and time information; the disturbance information is obtained according to the disturbance intensity control parameter and the first sampling result;
and sampling the second channel map according to the third texture coordinate information to obtain a dynamic map corresponding to the second rendering option.
In this embodiment, a process of obtaining the first sampling result is similar to a process of obtaining the first sampling result by a rendering algorithm corresponding to the first rendering option, which is specifically referred to the description of the foregoing embodiment and is not repeated herein.
The second scaling parameter is a two-dimensional parameter, which may be denoted as disturb.xy; scaling the texture coordinate information according to the second scaling parameter may be denoted as in.texcoord.xy × disturb.xy, i.e. the scaling associated with the second scaling parameter is a multiplication by the second scaling parameter. The second offset parameter is a two-dimensional parameter, which may be denoted as disturb.zw; the second offset is composed of the second offset parameter and time information and may be denoted as disturb.zw × TIME, where TIME denotes the time information. Offsetting the texture coordinate information according to the second offset may be denoted as in.texcoord.xy + disturb.zw × TIME, i.e. the offset associated with the second offset is an addition of the second offset.
The disturbance information is obtained from the disturbance intensity control parameter and the first sampling result, that is, the disturbance information is determined by the first sampling result and the disturbance intensity control parameter, and may be expressed as the product of the disturbance intensity control parameter and the first sampling result, i.e. cDarkDgree × detailMask, where cDarkDgree denotes the disturbance intensity control parameter and is used to adjust the brightness of the first sampling result.
In an example, scaling the texture coordinate information according to the second scaling parameter and offsetting the texture coordinate information according to the second offset and the disturbance information, to obtain the third texture coordinate information, may be represented as: flowTC3 = in.texcoord.xy × disturb.xy + disturb.zw × TIME + cDarkDgree × detailMask, where flowTC3 is the third texture coordinate information; that is, the third texture coordinate information is equal to the texture coordinate information multiplied by the second scaling parameter, plus the second offset, plus the product of the disturbance intensity control parameter and the first sampling result, so that the third texture coordinate information changes over time. Correspondingly, the sampling result obtained by sampling the second channel map according to the third texture coordinate information also changes over time. It can be understood that the first sampling result obtained by sampling the first channel map affects the presentation of the pixels of the second channel map, that is, the information stored in the first channel map can affect the presentation of the dynamic elements stored in the second channel map; for example, the dynamic elements stored in the second channel map may show distortion and disturbance effects. The sampling result here is two-dimensional information, which can be represented by a corresponding two-dimensional picture, and this picture is the dynamic map corresponding to the second rendering option.
Taking the second channel map being the G channel map as an example, sampling the second channel map according to the third texture coordinate information to obtain the dynamic map may be represented as: detailMask3 = tCarDetailMap.SampleBias(sCarDetailMapSampler, flowTC3, sampleMipBias).g, where detailMask3 indicates the result of sampling the second channel map with the target rendering algorithm corresponding to the second rendering option, i.e. the dynamic map, which is likewise a two-dimensional black-and-white picture, and .g indicates the G channel.
Fig. 7 is a schematic diagram of a first channel map of an example of the present application, fig. 8 is a schematic diagram of a second channel map of the example, and fig. 9 is a schematic diagram of how the dynamic map obtained by the target rendering algorithm corresponding to the second rendering option, based on the first channel map of fig. 7 and the second channel map of fig. 8, changes over time. It can be understood that the rendering algorithm corresponding to the second rendering option samples the element map twice: the first pass samples the first channel map of the element map, the second pass samples the second channel map, and when the two sampling results are superimposed and collide each frame, they present distorted, disturbed artistic effects. These effects are particularly suitable for figurative pattern textures, such as line distortion, water-surface ripples, and schools of fish swimming.
In this embodiment, the first scaling parameter and the first offset parameter are used to control the sampling result of the first channel map, i.e. the first sampling result. The first sampling result, the disturbance intensity control parameter, the second scaling parameter and the second offset parameter are used for controlling the sampling result of the second channel map so as to obtain the corresponding dynamic map. The disturbance intensity control parameter is used for controlling the brightness of the picture corresponding to the first sampling result, and when the disturbance intensity control parameter is increased, the brightness of the picture corresponding to the first sampling result is increased, so that the distortion and disturbance degrees of the two sampling results are increased when each frame is overlapped and collided. The user may control the dynamic effect of the dynamic elements of the dynamic map by modifying the disturbance intensity control parameter.
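Under the same naming assumptions, and reusing the texture, sampler and parameter declarations from the earlier sketch, the second rendering option differs only in how the first sampling result enters the second pass: it perturbs the coordinates additively instead of scaling them. A hedged sketch:

    float cDarkDgree;   // disturbance intensity control parameter

    float SampleSecondOption(float2 texcoord)
    {
        // first pass is identical to the first rendering option
        float2 flowTC1    = texcoord * detailUV.xy + detailUV.zw * TIME;
        float  detailMask = tCarDetailMap.SampleBias(sCarDetailMapSampler, flowTC1, sampleMipBias).r;

        // second pass: the first sampling result offsets (perturbs) the coordinates
        float2 flowTC3     = texcoord * disturb.xy + disturb.zw * TIME + cDarkDgree * detailMask;
        float  detailMask3 = tCarDetailMap.SampleBias(sCarDetailMapSampler, flowTC3, sampleMipBias).g;

        return detailMask3;   // dynamic map value for the second rendering option
    }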
The first scaling parameter, the first offset parameter, the second scaling parameter, the second offset parameter and the disturbance intensity control parameter may be set by system default or customized by the user, where customization includes modifying the parameters set by system default; for details, reference may be made to the foregoing description, which is not repeated here.
In another optional embodiment of the present application, when the target rendering option is a third rendering option, the rendering algorithm corresponding to the third rendering option may be a rendering algorithm for implementing random texture change and texture disturbance on the color image.
In this embodiment, the process of processing the element map by using the target rendering algorithm corresponding to the target rendering option to obtain the corresponding dynamic map may include:
carrying out zooming processing on the texture coordinate information of the target model according to a first zooming parameter, and carrying out offset processing on the texture coordinate information according to a first offset to obtain first texture coordinate information; the first offset is obtained according to a first offset parameter and time information;
sampling the fourth channel map according to the first texture coordinate information to obtain a second sampling result;
zooming the texture coordinate information according to the rolling information and the second zooming parameter, and offsetting the texture coordinate information according to the second offset and the warping information to obtain fourth texture coordinate information; the rolling information is obtained according to the second sampling result and the rolling strength control parameter, the second offset is obtained according to a second offset parameter and time information, and the warping information is obtained according to the second sampling result and the warping strength control parameter;
and sampling the color channel map according to the fourth texture coordinate information to obtain a dynamic map corresponding to the third rendering option.
In this embodiment, a process of obtaining the first texture coordinate information is similar to a process of obtaining the first texture coordinate information by a rendering algorithm corresponding to the first rendering option, which is specifically referred to the description of the foregoing embodiment and is not repeated herein.
Different from the first rendering option, the rendering algorithm corresponding to the third rendering option, after obtaining the first texture coordinate information, samples the fourth channel map of the element map according to the first texture coordinate information. It can be understood that, in this embodiment, the color channel map is used to store the dynamic elements, and the information stored in the fourth channel map is used to influence the presentation of the dynamic elements stored in the color channel map.
Sampling the fourth channel map according to the first texture coordinate information to obtain the second sampling result may be represented as: detailMask4 = tCarDetailMap.SampleBias(sCarDetailMapSampler, flowTC1, sampleMipBias).a, where detailMask4 represents the second sampling result and .a represents the transparency channel. That is, the second sampling result can be regarded as the result of the scaled fourth channel map moving over time: the degree of scaling is determined by the first scaling parameter, and the moving speed and moving direction are determined by the first offset parameter.
Since the fourth channel map is used for storing black-and-white information, a second sampling result obtained by sampling the fourth channel map can be represented as a second black-and-white map.
The scroll information is obtained from the second sampling result and the scroll strength control parameter, that is, the scroll information is determined by the second sampling result and the scroll strength control parameter, and may be represented as: flow = detailMask4 × flowStrength + (1 − flowStrength), where flow denotes the scroll information and flowStrength denotes the scroll strength control parameter. It can be understood that the scroll strength control parameter is used to adjust the brightness of the second black-and-white map corresponding to the second sampling result. In the process of sampling the color channel map, the scroll information is used to scale the texture coordinate information, which means multiplying by the scroll information; it can be considered that the scroll information affects the light-and-dark effect of the dynamic elements in the color channel map, and thus the dynamic effect of the dynamic map.
The warping information is obtained from the second sampling result and the warping strength control parameter, that is, the warping information is determined by the second sampling result and the warping strength control parameter, and may be expressed as the product of the warping strength control parameter and the second sampling result, i.e. twistStrength × detailMask4, where twistStrength denotes the warping strength control parameter. It can be understood that the warping strength control parameter is used to adjust the brightness of the second black-and-white map corresponding to the second sampling result. In the process of sampling the color channel map, the warping information is used to offset the texture coordinate information, which means adding the warping information to the texture coordinate information. The warping strength control parameter is used to control the strength of the twisting and disturbance.
The second scaling parameter is a two-dimensional parameter, which may be denoted as disturb.xy, and the scaling process performed on the texture coordinate information according to the second scaling parameter may be denoted as in.texcoord.xy. Disturb.xy, i.e. the scaling process associated with the second scaling parameter indicates a multiplication with the second scaling parameter. The second offset parameter is a two-dimensional parameter and may be represented by disturb.zw, and the second offset amount is composed of the second offset parameter and TIME information and may be represented by disturb.zw.time, where TIME represents the TIME information, and the texture coordinate information is offset according to the second offset parameter and may be represented by in.texcoord.xy + disturb.zw.time, that is, the offset process associated with the second offset parameter represents the addition of the second offset parameter.
In an example, scaling the texture coordinate information according to the scroll information and the second scaling parameter, and offsetting it according to the second offset and the warp information, yields fourth texture coordinate information that may be represented as: flowTC4 = IN.TexCoord.xy * flow * Disturb.xy + Disturb.zw * TIME + twistStrength * detailMask4, where flowTC4 is the fourth texture coordinate information. That is, the fourth texture coordinate information equals the texture coordinate information multiplied by the scroll information and the second scaling parameter, plus the second offset, plus the product of the warp strength control parameter and the second sampling result; it therefore varies with time, and accordingly the result of sampling the color channel map according to the fourth texture coordinate information also varies with time. It can be understood that the information stored in the fourth channel map affects how the color channel map is presented. The sampling result here is two-dimensional information that can be represented by a corresponding two-dimensional picture; since the color channel map stores color information, the picture obtained by sampling it is colored, that is, the dynamic map corresponding to the third rendering option is a color map.
Sampling the color channel map according to the fourth texture coordinate information to obtain the dynamic map may be expressed as: colorMap = tCarDetailMap.SampleBias(sCarDetailMapSampler, flowTC4, SampleMipBias), where colorMap denotes the result of sampling the color channel map, i.e., the dynamic map is a color picture.
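For readability, the sampling flow of the third rendering option can be gathered into a single HLSL-style fragment as follows. The resource and parameter names follow the notation used above; reading the fourth channel map from the alpha channel of the element map, and the declarations themselves, are assumptions made for illustration rather than a definitive implementation of the present application.

// Assumed declarations (names follow the notation used in this application).
Texture2D tCarDetailMap;        // element map: rgb stores the color channel map, a stores the fourth channel map (assumption)
SamplerState sCarDetailMapSampler;
float4 DetailUV;                // xy: first scaling parameter, zw: first offset parameter
float4 Disturb;                 // xy: second scaling parameter, zw: second offset parameter
float flowStrength;             // scroll strength control parameter
float twistStrength;            // warp strength control parameter
float SampleMipBias;
float TIME;                     // time information

float4 SampleThirdOption(float2 texcoord)
{
    // First texture coordinate information: scaled by DetailUV.xy, offset by DetailUV.zw * TIME.
    float2 flowTC = texcoord * DetailUV.xy + DetailUV.zw * TIME;
    // Second sampling result: black-and-white value read from the fourth channel.
    float detailMask4 = tCarDetailMap.SampleBias(sCarDetailMapSampler, flowTC, SampleMipBias).a;
    // Scroll information and warp information derived from the second sampling result.
    float flow = detailMask4 * flowStrength + (1.0 - flowStrength);
    float twist = twistStrength * detailMask4;
    // Fourth texture coordinate information, varying with time.
    float2 flowTC4 = texcoord * flow * Disturb.xy + Disturb.zw * TIME + twist;
    // Dynamic map corresponding to the third rendering option: a color picture.
    return tCarDetailMap.SampleBias(sCarDetailMapSampler, flowTC4, SampleMipBias);
}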
Fig. 10 is a schematic diagram of a fourth channel map in an example of the present application, fig. 11 is a schematic diagram of a color channel map in an example of the present application, and fig. 12 is a schematic diagram of a dynamic map, varying with time, obtained by the rendering algorithm corresponding to the third rendering option based on the fourth channel map shown in fig. 10 and the color channel map shown in fig. 11. It can be understood that the rendering algorithm corresponding to the third rendering option samples the element map twice: first the fourth channel map of the element map is sampled, and then the color channel map; in the process of sampling the color channel map, the texture coordinates are multiplied by and added with quantities derived from the second sampling result.
In this embodiment, the first scaling parameter and the first offset parameter are used to control the sampling result of the fourth channel map, i.e., the second sampling result; the second sampling result, the scroll strength control parameter, the warp strength control parameter, the second scaling parameter and the second offset parameter are then used to control the sampling result of the color channel map, which yields the corresponding dynamic map. The user can control the dynamic effect of the dynamic elements of the dynamic map by modifying the scroll strength control parameter and the warp strength control parameter.
The first scaling parameter, the first offset parameter, the second scaling parameter, the second offset parameter, the scroll strength control parameter and the warp strength control parameter may be set by default by the system, or may be set in a user-defined manner, where the user-defined setting includes modifying the parameters set by default by the system; reference may be made to the foregoing description, which is not repeated here.
Because the rendering algorithm corresponding to the third rendering option multiplies the texture coordinates by the scroll information and adds the warp information when sampling the element map to obtain the dynamic map, it requires more parameters than the rendering algorithms corresponding to the first and second rendering options. Adding parameters to the graphical user interface means more memory bandwidth, and the calculation amount of the GPU (Graphics Processing Unit) increases accordingly. To address this problem, the inventors considered that the parameters could be reasonably compressed, covering the actually required parameters with the minimum number of panel parameters to achieve performance optimization. Considering that the dynamic map obtained by the third rendering option is a color map, the rendering algorithm corresponding to the third rendering option may be divided into two phases, i.e., a sampling phase and a color mixing phase.
In the sampling phase, the scroll strength control parameter and the warp strength control parameter already produce an obvious effect when changed by as little as 0.01, so they are clearly not suitable for compression and merging. In the color mixing phase, the parameters to be used include the overall coating color superposition parameter BaseColor, the static coating color brightness parameter ColorIntensity, the fourth channel map color parameter CarDetailColor, the fourth channel map brightness parameter CarDetailIntensity, and the like. The inventors found that small variations in the static coating color brightness parameter produce no obvious change, so in some optional embodiments of the present application the static coating color brightness parameter ColorIntensity and the scroll strength control parameter FlowStrength are merged, that is, one parameter input box is used to carry both parameters: specifically, the integer part of the value in the parameter input box may be used as the static coating color brightness parameter and the fractional part as the scroll strength control parameter; alternatively, the whole value in the parameter input box may be used as the static coating color brightness parameter while its fractional part is used as the scroll strength control parameter. The static coating color brightness parameter is used to control the brightness of the static coating color, which can be understood as the brightness of the color channel map; by controlling the brightness of the color channel map, the presentation effect of the generated dynamic map can also be changed.
When defining the scroll intensity control parameter, it can be expressed as:
#define flowStrength (frac(cDiffColRange))
where cDiffColRange is the heterochromatic range parameter, which can be obtained through the corresponding parameter input box in the graphical user interface. In this example, after cDiffColRange is acquired, its fractional part can be extracted as the scroll strength control parameter.
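As a minimal sketch of this parameter merging, assuming cDiffColRange is the raw value read from the shared parameter input box, the two logical parameters could be recovered as follows; the colorIntensity macro name is illustrative only.

// Hypothetical unpacking of one panel value into two logical parameters.
float cDiffColRange;                           // value from the shared parameter input box
#define flowStrength (frac(cDiffColRange))     // fractional part: scroll strength control parameter
#define colorIntensity (floor(cDiffColRange))  // integer part: static coating color brightness parameter (illustrative)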
In another optional embodiment of the present application, when the target rendering option is a fourth rendering option, the target rendering algorithm corresponding to the fourth rendering option may be a rendering algorithm for implementing a dual-layer streaming effect.
In this embodiment, the process of processing the element map by using the rendering algorithm corresponding to the target rendering option to obtain the corresponding dynamic map may include:
zooming the texture coordinate information of the target model according to the second zooming parameter, and offsetting the texture coordinate information according to the second offset to obtain fifth texture coordinate information; the second offset is obtained according to a second offset parameter and time information;
sampling the fourth channel map according to the fifth texture coordinate information to obtain a third sampling result;
determining sixth texture coordinate information according to the third sampling result and the parallax offset texture coordinate information, wherein the parallax offset texture coordinate information is obtained by performing parallax offset on the texture coordinate information;
and sampling the color channel map according to the sixth texture coordinate information to obtain a dynamic map corresponding to the fourth rendering option.
The target model has texture coordinate information, also called UV information, which is index information connecting the map and the three-dimensional model and indicates the correspondence between a part of the map and a surface of the three-dimensional model. The texture coordinate information of the target model may be denoted as IN.TexCoord.xy.
The second scaling parameter is a two-dimensional parameter, which may be denoted as Disturb.xy, and scaling the texture coordinate information according to the second scaling parameter may be denoted as IN.TexCoord.xy * Disturb.xy, i.e., the scaling associated with the second scaling parameter is a multiplication by that parameter. The second offset parameter is also a two-dimensional parameter and may be denoted as Disturb.zw; the second offset is composed of the second offset parameter and the time information and may be denoted as Disturb.zw * TIME, where TIME denotes the time information. Offsetting the texture coordinate information according to the second offset may then be denoted as IN.TexCoord.xy + Disturb.zw * TIME, i.e., the offset associated with the second offset parameter is an addition of the second offset.
In an example, scaling the texture coordinate information of the target model according to the second scaling parameter and offsetting it according to the second offset yields fifth texture coordinate information, which may be represented as: flowTC5 = IN.TexCoord.xy * Disturb.xy + Disturb.zw * TIME, where flowTC5 denotes the fifth texture coordinate information. That is, the fifth texture coordinate information equals the texture coordinate information multiplied by the second scaling parameter plus the second offset, so that it changes with time; accordingly, the third sampling result obtained by sampling the fourth channel map according to the fifth texture coordinate information also changes with time.
Sampling the fourth channel map according to the fifth texture coordinate information to obtain a third sampling result may be represented as: AlphaTex = tCarDetailMap.SampleBias(sCarDetailMapSampler, flowTC5, SampleMipBias). That is, the third sampling result can be regarded as the fourth channel map, scaled and then moving over time: the degree of scaling is determined by the second scaling parameter, and the moving speed and direction are determined by the second offset parameter.
Since the fourth channel map is used for storing black-and-white information, a third sampling result obtained by sampling the fourth channel map can be represented as a third black-and-white map.
The disparity offset texture coordinate information may be represented by offset, which is obtained by performing disparity offset on the texture coordinate information, and it is understood that the disparity offset texture coordinate information is related to the texture coordinate information and the view angle.
The sixth texture coordinate information determined according to the third sampling result and the parallax offset texture coordinate information may be represented as offset + AlphaTex, that is, the sixth texture coordinate information is obtained by adding the third sampling result and the parallax offset texture coordinate information. Since the third sampling result may change with time, and the parallax offset texture coordinate information may change with a change in a viewing angle, the obtained sixth texture coordinate information changes with time and with a change in a viewing angle, and accordingly, the color channel map is sampled according to the sixth texture coordinate information, and the obtained sampling result also changes with time and a viewing angle.
Since the color channel map is used for storing color information, the picture obtained by sampling the color channel map is also colored at this time, that is, the dynamic map corresponding to the fourth rendering option is colored.
Sampling the color channel map according to the sixth texture coordinate information to obtain the dynamic map corresponding to the fourth rendering option may be represented as: ColorTex = tCarDetailMap.SampleBias(sCarDetailMapSampler, offset + AlphaTex, SampleMipBias).rgba, where ColorTex denotes the result of sampling the color channel map, i.e., a color dynamic map.
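The two sampling passes of the fourth rendering option can be sketched as follows, reusing the declarations from the earlier HLSL-style fragment; offsetUV stands for the parallax offset texture coordinate information whose generation is described below, and the function is an illustrative approximation rather than the exact shader of the present application.

float4 SampleFourthOption(float2 texcoord, float2 offsetUV)
{
    // Fifth texture coordinate information: scaled by Disturb.xy, offset by Disturb.zw * TIME.
    float2 flowTC5 = texcoord * Disturb.xy + Disturb.zw * TIME;
    // Third sampling result: black-and-white value from the fourth channel, moving with time.
    float alphaTex = tCarDetailMap.SampleBias(sCarDetailMapSampler, flowTC5, SampleMipBias).a;
    // Sixth texture coordinate information: parallax offset coordinates shifted by the third sampling result.
    float2 flowTC6 = offsetUV + alphaTex;
    // Dynamic map corresponding to the fourth rendering option: color, varying with time and view angle.
    return tCarDetailMap.SampleBias(sCarDetailMapSampler, flowTC6, SampleMipBias);
}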
In an alternative example, the generating of the disparity offset texture coordinate information may include:
carrying out zooming processing on the texture coordinate information of the target model according to a first zooming parameter, and carrying out offset processing on the texture coordinate information according to a first offset to obtain first texture coordinate information; the first offset is obtained according to a first offset parameter and time information;
acquiring the tangent direction, the sub-tangent direction and the visual angle direction of each pixel of the target model;
determining parallax offset information according to the tangent direction, the sub-tangent direction, the view angle direction and the parallax depth control parameter;
and generating parallax offset texture coordinate information according to the parallax offset information and the first texture coordinate information.
In this example, the process of obtaining the first texture coordinate information is similar to that in the rendering algorithm corresponding to the first rendering option; reference may be made to the description of the foregoing embodiment, which is not repeated here.
The rendering process of the target model is a process of projecting the three-dimensional target model into a two-dimensional target model image. Since the pixels of the target model are the pixels constituting this image, the position on the target model corresponding to each pixel can be determined; the tangent and the normal can be obtained at that position, and the sub-tangent can be obtained from the tangent and the normal. The viewing angle direction can be determined from the relative positions of the target model and the virtual camera; it can be understood that the viewing angle direction of each pixel in world space is the viewing direction of the in-game virtual camera.
The parallax depth control parameter is used for controlling the depth of a viewing angle, and as can be understood, the parallax depth control parameter is used for controlling the distance between the original texture coordinate information and the corresponding texture coordinate information subjected to parallax offset in the direction perpendicular to the screen.
Determining the parallax offset information from the tangent direction, the sub-tangent direction, the viewing angle direction and the parallax depth control parameter may be represented as float2(dot(Tangent, View), dot(Bitangent, View)) * HoleDepth, where Tangent denotes the unit vector of the tangent direction, i.e., the tangent; View denotes the unit vector of the viewing direction, i.e., the view; Bitangent denotes the unit vector of the sub-tangent direction, i.e., the sub-tangent; dot denotes the dot product; HoleDepth denotes the parallax depth control parameter; and float2 denotes a two-dimensional data type.
Generating the parallax offset texture coordinate information from the parallax offset information and the first texture coordinate information may be expressed as: offset = float2(dot(Tangent, View), dot(Bitangent, View)) * HoleDepth + flowTC1, where offset denotes the parallax offset texture coordinate information and flowTC1 denotes the first texture coordinate information. Offsetting the texture coordinate information by parallax gives the final dynamic effect a stereoscopic impression, that is, a parallax effect is produced. Fig. 13 is a schematic diagram of a dynamic map generated by the target rendering algorithm corresponding to the fourth rendering option in an example of the present application. In this embodiment, the principle of parallax is to interpolate pixels in world coordinates so as to shift the pixels under the model's texture coordinate information, causing a pixel that is actually located at point A to be perceived by the human eye as if it were at point B.
For example, the calculation process of the tangential direction and the sub-tangential direction may include:
determining a first distance of adjacent pixels of the target model in a world coordinate system and a second distance of the adjacent pixels in a texture space;
determining the position offset of the texture space according to the first distance and the second distance;
and determining a tangent line and a secondary tangent line according to the position offset of the texture space.
The world coordinate system WorldPosition defines positions in the physical world as an objective space; the coordinate difference between two adjacent pixel blocks can be obtained by ddx(WorldPosition.xyz) and ddy(WorldPosition.xyz). The GPU generally rasterizes pixels in 2 × 2 blocks; among the four pixels shown in fig. 14, the result of ddx(p(x, y)) is the x coordinate of the right-hand neighboring pixel minus the x coordinate of the left-hand pixel, and ddy works the same way for the vertical coordinate. Together they compute the distance between two adjacent pixels in the world coordinate system.
Representing the difference of the pixels under the world coordinates by P and Q, the following can be obtained:
Float3 P=ddx(WorldPosition.xyz);
Float3 Q=ddy(WorldPosition.xyz);
wherein, float3 represents a three-dimensional data type.
The first texture coordinate information is obtained by scaling and offsetting the texture coordinate information of the target model and may be expressed as: flowTC1 = IN.TexCoord.xy * DetailUV.xy + DetailUV.zw * TIME.
Denoting the pixel-difference values of flowTC1 in texture space by C and D, the following can be obtained:
Float2 C=ddx(detailUV);
Float2 D=ddy(detailUV);
where Float2 represents a two-dimensional data type.
Using the distance formula p = sqrt((x1 - x2)^2 + (y1 - y2)^2), the texture space position offset can be derived: R = C.y * D.x - D.y * C.x, where C.y denotes the second component of the two-dimensional value C, C.x its first component, and similarly D.x and D.y denote the first and second components of the two-dimensional value D.
The texture space position offset is then normalized by taking its reciprocal, i.e., R = rcp(C.y * D.x - D.y * C.x), where rcp() returns the reciprocal of its argument.
Finally, the coordinate directions of the two tangent-space axes, the tangent and the sub-tangent, are needed to construct a parallax matrix that converts tangent space to world space; the parallax direction is then carried from world space into texture space, and the pixels are finally sampled in texture space, which produces the parallax. In other words, the pixel obtained through the parallax transformation is not actually the pixel of the corresponding vertex after the current rasterization.
The x axis and y axis of texture space are constructed from the tangent and the sub-tangent, the viewing direction of the virtual camera serves as the z axis, and the depth along the z axis is expressed by the parallax depth control parameter, so the tangent and the sub-tangent can be obtained as:
Float3 Tangent=-(P*D.y-Q*C.y)*R;
Float3 Bitangent=(P*D.x-Q*C.x)*R.
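Putting the derivative-based construction together, the generation of the parallax offset texture coordinate information could be sketched as follows. WorldPosition, View, HoleDepth and flowTC1 follow the notation above; taking the screen-space derivatives of flowTC1 as the texture space differences C and D, and the function boundary itself, are assumptions made for illustration.

// Sketch: build a tangent frame from screen-space derivatives and derive the
// parallax offset texture coordinates from it.
float2 ComputeParallaxOffset(float3 WorldPosition, float2 flowTC1, float3 View, float HoleDepth)
{
    // Differences of adjacent pixels in the world coordinate system.
    float3 P = ddx(WorldPosition);
    float3 Q = ddy(WorldPosition);
    // Differences of the texture coordinates in texture space.
    float2 C = ddx(flowTC1);
    float2 D = ddy(flowTC1);
    // Texture space position offset, normalized by its reciprocal.
    float R = rcp(C.y * D.x - D.y * C.x);
    // Tangent and sub-tangent spanning the x and y axes of texture space.
    float3 Tangent = -(P * D.y - Q * C.y) * R;
    float3 Bitangent = (P * D.x - Q * C.x) * R;
    // Project the view direction onto the tangent frame, scale by the parallax
    // depth control parameter, and shift the first texture coordinate information.
    float2 parallax = float2(dot(Tangent, View), dot(Bitangent, View)) * HoleDepth;
    return parallax + flowTC1;
}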
further, considering that the requirements for the exhibition effect of the model may be different in different scene requirements during the game, for example, in the exhibition page of the model, the model is required to have the highest model surface number and the best material effect, and in the motion state of the model, such as when the racing model is running in a lane, the requirements for the exhibition effect of the model are lower than the requirements for the exhibition effect in the exhibition page in order to optimize the performance and reduce the calculation amount.
In order to enhance the representation effect of the model, in some optional embodiments of the present application, the scaling the texture coordinate information of the target model according to the second scaling parameter, and performing offset processing on the texture coordinate information according to the second offset to obtain fifth texture coordinate information may further include:
performing offset processing on the fifth texture coordinate information according to the parallax information to update the fifth texture coordinate information; the parallax information is obtained according to the view angle direction and normal direction of each pixel of the target model and the distortion intensity control parameter.
The disparity information is obtained according to the viewing angle direction and the normal direction of each pixel of the target model and the distortion intensity control parameter, and it can be understood that the disparity information is related to the viewing angle direction, the normal direction and the distortion intensity control parameter of the pixel.
Illustratively, the disparity information may be expressed as saturate(NoV) * twistStrength, where NoV denotes the dot product of the normal direction and the viewing angle direction, saturate(NoV) denotes clamping that dot product to the range 0 to 1, and twistStrength denotes the distortion intensity control parameter.
Offsetting the fifth texture coordinate information according to the disparity information means adding the disparity information to the fifth texture coordinate information; the updated fifth texture coordinate information may be represented as IN.TexCoord.xy * Disturb.xy + Disturb.zw * TIME + saturate(NoV) * twistStrength, that is, the updated fifth texture coordinate information equals the texture coordinate information multiplied by the second scaling parameter, plus the second offset and the disparity information, so that it changes with the viewing angle. Correspondingly, the third sampling result obtained by sampling the fourth channel map according to the updated fifth texture coordinate information also changes with the viewing angle, which enhances the display effect of the dynamic map obtained by the fourth rendering option.
Further, in order to enhance the representation effect of the model, in some optional embodiments of the present application, the determining sixth texture coordinate information according to the third sampling result and the disparity-shifted texture coordinate information may further include:
adjusting the third sampling result by adopting a parallax intensity control parameter to update the third sampling result;
and determining sixth texture coordinate information according to the updated third sampling result and the parallax offset texture coordinate information.
The parallax intensity control parameter may be denoted by NovStrength and can be obtained through the corresponding parameter option; adjusting the third sampling result with the parallax intensity control parameter to update it may be represented as AlphaTex * NovStrength, so the parallax intensity control parameter is used to adjust the brightness of the third sampling result.
The sixth texture coordinate information determined from the updated third sampling result and the parallax offset texture coordinate information may be expressed as offset + AlphaTex * NovStrength, that is, the sixth texture coordinate information is obtained by multiplying the third sampling result by the parallax intensity control parameter and adding the result to the parallax offset texture coordinate information. The sixth texture coordinate information can thus be controlled by the parallax intensity control parameter, and adjusting this parameter further improves the diversity of the dynamic map, as sketched below.
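The two refinements above, the view-angle disturbance of the fifth texture coordinate information and the parallax intensity scaling of the third sampling result, can be folded into the earlier sketch as follows; Normal and NovStrength are assumed inputs added for illustration.

float NovStrength; // parallax intensity control parameter

float4 SampleFourthOptionEnhanced(float2 texcoord, float2 offsetUV, float3 Normal, float3 View)
{
    // Disparity information from the view angle direction, the normal direction and the distortion intensity.
    float NoV = saturate(dot(Normal, View));
    // Updated fifth texture coordinate information: now also changes with the view angle.
    float2 flowTC5 = texcoord * Disturb.xy + Disturb.zw * TIME + NoV * twistStrength;
    // Third sampling result from the fourth channel.
    float alphaTex = tCarDetailMap.SampleBias(sCarDetailMapSampler, flowTC5, SampleMipBias).a;
    // Sixth texture coordinate information, with the third sampling result scaled by the parallax intensity.
    float2 flowTC6 = offsetUV + alphaTex * NovStrength;
    return tCarDetailMap.SampleBias(sCarDetailMapSampler, flowTC6, SampleMipBias);
}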
The rendering algorithm corresponding to the fourth rendering option in this embodiment is particularly suitable for a top coat; combined with scenarios such as static car paint design, it achieves a three-dimensional and transparent artistic effect.
Further, in order to give the fourth channel map a strong visual impact in its color representation, in some alternative embodiments of the present application the contrast may be enhanced by raising values to a power, and the exponent may be controlled by a brightness control parameter. To save parameters and optimize performance, the panel color picker value can be multiplied by 30: selecting white yields a brightness value of 1 × 30 = 30, while selecting black yields 0 × 30 = 0. This can be expressed as: BaseColorDetail = pow(ColorTex * cCarDetailColor * 30, cDiffColRange), where BaseColorDetail denotes the base color of the element map, pow() denotes the power operation, ColorTex denotes the result of sampling the color channel map, cCarDetailColor denotes the panel picker color, and cDiffColRange denotes the heterochromatic range parameter.
Similarly, to save parameters, the factor 30 may also be pre-multiplied into the base color parameter BaseColor, and the gray value of the fourth channel map is finally used to control the blending via linear interpolation, which may be expressed as: PaintColor = lerp(ColorTex * cBaseColor * 30, BaseColorDetail, AlphaTex).
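A sketch of the color mixing stage under the notation above, where ColorTex and AlphaTex are the color channel and fourth channel sampling results and cCarDetailColor, cBaseColor and cDiffColRange are the panel picker color, the base color parameter and the heterochromatic range parameter; packaging it as a separate function is an assumption for illustration.

// Color mixing stage: boost contrast with a power curve and blend by the gray value
// of the fourth channel map; the 30x pre-multiplication saves a separate brightness parameter.
float3 MixPaintColor(float3 ColorTex, float AlphaTex, float3 cCarDetailColor, float3 cBaseColor, float cDiffColRange)
{
    float3 BaseColorDetail = pow(ColorTex * cCarDetailColor * 30.0, cDiffColRange);
    float3 PaintColor = lerp(ColorTex * cBaseColor * 30.0, BaseColorDetail, AlphaTex);
    return PaintColor;
}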
In this embodiment, the first scaling parameter, the first offset parameter, the second scaling parameter, the second offset parameter, the parallax depth control parameter, the distortion intensity control parameter, the parallax intensity control parameter and the heterochromatic range parameter may be set by default by the system, or may be set in a user-defined manner, where the user-defined setting includes modifying the system's default parameters; reference may be made to the foregoing description, which is not repeated here.
The process of obtaining the dynamic map by processing the element map with the rendering algorithm corresponding to the fourth rendering option involves the first scaling parameter, the first offset parameter, the second scaling parameter, the second offset parameter, the parallax depth control parameter, the distortion intensity control parameter, the parallax intensity control parameter, the heterochromatic range parameter and so on; as before, more parameters mean more memory bandwidth and GPU computation. To address this problem, the inventors considered that the parameters could be reasonably compressed, covering the actually required parameters with the minimum number of panel parameters to achieve performance optimization. Considering that the brightness control parameter used to control the exponent does not need fractional precision, while the parallax depth control parameter shows an obvious difference when changed in units of 0.01, the brightness control parameter and the parallax depth control parameter can be merged, that is, one parameter input box is used to carry both parameters: specifically, the integer part of the value in the corresponding parameter input box can be used as the brightness control parameter and the fractional part as the parallax depth control parameter.
Similarly, the distortion intensity control parameter and the parallax intensity control parameter are merged: the parallax intensity control parameter is represented by the tenths digit of the value in the corresponding parameter input box, and the distortion intensity control parameter by the hundredths digit, as sketched below.
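A minimal sketch of recovering the two merged parameters from one input value; the helper name and any further rescaling of the recovered digits are assumptions for illustration.

// Hypothetical unpacking: an input of 0.47, for example, yields a parallax intensity
// digit of 4 (tenths place) and a distortion intensity digit of 7 (hundredths place).
void UnpackNovAndTwist(float packedValue, out float NovStrength, out float twistStrength)
{
    float digits = round(frac(packedValue) * 100.0); // two decimal digits as an integer, e.g. 47
    NovStrength = floor(digits / 10.0);              // tenths digit: parallax intensity control parameter
    twistStrength = fmod(digits, 10.0);              // hundredths digit: distortion intensity control parameter
}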
In the embodiments of the present application, four rendering algorithms are disclosed. Each rendering algorithm is implemented by processing at least two channel maps of the element map, and the sampling of the channel map that stores the dynamic elements is influenced by the sampling results of the other channel maps in the process of obtaining the dynamic map. Diversified dynamic effects of the dynamic elements can therefore be achieved with a single element map, which saves map resources and storage space and improves the generation efficiency of the dynamic effects. In addition, the four rendering algorithms are integrated into the same graphical user interface, which makes it convenient for a user to generate models with different dynamic effects using different rendering algorithms, and also improves the generation efficiency of the dynamic effects.
It should be noted that, for simplicity of description, the method embodiments are described as a series of action combinations, but those skilled in the art will appreciate that the embodiments of the present application are not limited by the described order of actions, since some steps may be performed in other orders or concurrently according to the embodiments. Furthermore, those skilled in the art will also appreciate that the embodiments described in the specification are preferred embodiments, and the actions involved are not necessarily required by the embodiments of the present application.
Referring to fig. 15, a block diagram of a model rendering apparatus for dynamic effects according to an embodiment of the present disclosure is shown; the block diagram corresponds to the embodiments of the model rendering method for dynamic effects above. In the embodiment of the present disclosure, the apparatus displays a graphical user interface on a display screen of a terminal device by running an application, where the graphical user interface includes a plurality of different rendering options, and the apparatus may include the following modules:
a rendering mode determining module 1501, configured to, in response to a start operation for a target rendering option, obtain an element map of a target model to be rendered, where the target rendering option is any one of the multiple different rendering options, and the element map is used to determine a dynamic element of the target model;
a dynamic map generating module 1502, configured to process the element map by using a target rendering algorithm corresponding to the target rendering option to obtain a corresponding dynamic map, where the dynamic map includes a relationship that a position of the dynamic element changes with time;
and a dynamic effect rendering module 1503, configured to perform rendering processing on the target model based on the dynamic map to obtain a dynamic effect of the target model.
Further, the dynamic effect rendering module 1503 may further include:
the obtaining sub-module is used for obtaining an original texture map and a dynamic mask of the target model; each pixel in the dynamic mask is used for determining the fusion weight of each pixel of the original texture map and the pixel corresponding to the dynamic map;
the fusion sub-module is used for fusing the original texture map and the dynamic map according to the dynamic mask to obtain a fusion map;
and the rendering submodule is used for rendering the dynamic area based on the fusion map to obtain the dynamic effect of the target model.
Further, the apparatus may further include:
the color obtaining module is used for obtaining element colors of the target model; the element color is used for adjusting the color of the dynamic element;
and the color mixing module is used for processing the dynamic map according to the element colors so as to update the dynamic map.
Further, the element map includes a first channel map and a second channel map, and when the target rendering option is the first rendering option, the dynamic map generating module 1502 may include:
the first coordinate determination submodule is used for carrying out scaling processing on the texture coordinate information of the target model according to a first scaling parameter and carrying out offset processing on the texture coordinate information according to a first offset to obtain first texture coordinate information; the first offset is obtained according to a first offset parameter and time information;
the first sampling submodule is used for sampling the first channel map according to the first texture coordinate information to obtain a first sampling result;
the second coordinate determination submodule is used for carrying out scaling processing on the texture coordinate information according to a second scaling parameter and the first sampling result and carrying out offset processing on the texture coordinate information according to a second offset to obtain second texture coordinate information; the second offset is obtained according to a second offset parameter and time information;
and the first generation submodule is used for sampling the second channel map according to the second texture coordinate information to obtain a dynamic map corresponding to the first rendering option.
Further, the element map includes a first channel map and a second channel map, and when the target rendering option is a second rendering option, the dynamic map generating module 1502 may include:
the first coordinate determination submodule is used for carrying out scaling processing on the texture coordinate information of the target model according to the first scaling parameter and carrying out offset processing on the texture coordinate according to the first offset to obtain first texture coordinate information; the first offset is obtained according to a first offset parameter and time information;
the first sampling submodule is used for sampling the first channel map according to the first texture coordinate information to obtain a first sampling result;
the third coordinate determination submodule is used for carrying out scaling processing on the texture coordinate information according to the second scaling parameter and carrying out offset processing on the texture coordinate information according to the second offset and the disturbance information to obtain third texture coordinate information; the second offset is obtained according to a second offset parameter and time information; the disturbance information is obtained according to a disturbance intensity control parameter and the first sampling result;
and the second generation sub-module is used for sampling the second channel map according to the third texture coordinate information to obtain a dynamic map corresponding to the second rendering option.
Further, the element map includes a color channel map and a fourth channel map, and when the target rendering option is a third rendering option, the dynamic map generating module 1502 may include:
the first coordinate determination submodule is used for carrying out scaling processing on the texture coordinate information of the target model according to a first scaling parameter and carrying out offset processing on the texture coordinate information according to a first offset to obtain first texture coordinate information; the first offset is obtained according to a first offset parameter and time information;
the second sampling submodule is used for sampling the fourth channel map according to the first texture coordinate information to obtain a second sampling result;
the fourth coordinate determination submodule is used for carrying out zooming processing on the texture coordinate information according to the rolling information and the second zooming parameters and carrying out offset processing on the texture coordinate information according to the second offset and the distortion information to obtain fourth texture coordinate information; the rolling information is obtained according to the second sampling result and the rolling strength control parameter, the second offset is obtained according to a second offset parameter and time information, and the warping information is obtained according to the second sampling result and the warping strength control parameter;
and the third generation submodule is used for sampling the color channel map according to the fourth texture coordinate information to obtain a dynamic map corresponding to the third rendering option.
Further, the element map includes a color channel map and a fourth channel map, and when the target rendering option is a fourth rendering option, the dynamic map generating module 1502 may include:
the fifth coordinate determination submodule is used for carrying out scaling processing on the texture coordinate information of the target model according to the second scaling parameter and carrying out offset processing on the texture coordinate information according to the second offset to obtain fifth texture coordinate information; the second offset is obtained according to a second offset parameter and time information;
the third sampling sub-module is used for sampling the fourth channel map according to the fifth texture coordinate information to obtain a third sampling result;
a sixth coordinate determination submodule, configured to determine sixth texture coordinate information according to the third sampling result and the parallax offset texture coordinate information, where the parallax offset texture coordinate information is obtained by performing parallax offset on the texture coordinate information;
and the fourth generation submodule is used for sampling the color channel map according to the sixth texture coordinate information to obtain a dynamic map corresponding to the fourth rendering option.
Further, the fifth coordinate determination sub-module may be further configured to perform offset processing on the fifth texture coordinate information according to the disparity information, so as to update the fifth texture coordinate information; the parallax information is obtained according to the view direction and normal direction of each pixel of the target model and the distortion intensity control parameter.
Further, the sixth coordinate determination sub-module may further include:
the third sampling updating unit is used for adjusting the third sampling result by adopting the parallax intensity control parameter so as to update the third sampling result;
and the sixth coordinate determining unit is used for determining sixth texture coordinate information according to the updated third sampling result and the parallax offset texture coordinate information.
Further, when the target rendering option is a fourth rendering option, the dynamic map generation module 1502 may further include:
the first coordinate generation submodule is used for carrying out zooming processing on the texture coordinate information of the target model according to a first zooming parameter and carrying out offset processing on the texture coordinate information according to a first offset to obtain first texture coordinate information; the first offset is obtained according to a first offset parameter and time information;
the visual angle acquisition submodule is used for acquiring the tangent direction, the secondary tangent direction and the visual angle direction of each pixel of the target model;
the parallax offset determining submodule is used for determining parallax offset information according to the tangent direction, the sub-tangent direction, the view angle direction and the parallax depth control parameter;
and the parallax offset coordinate generating submodule is used for generating parallax offset texture coordinate information according to the parallax offset information and the first texture coordinate information.
Further, the graphical user interface comprises a plurality of parameter options, wherein the parameter options at least comprise a first zooming parameter option, a first offset parameter option, a second zooming parameter option and a second offset parameter option; each parameter option has a corresponding parameter input box, and the apparatus may further include:
the parameter updating module is used for responding to the adjusting operation aiming at the parameters in at least one parameter input box and determining new parameters corresponding to the adjusting operation;
and the map updating module is used for processing the element map by adopting a target rendering algorithm corresponding to the target rendering option based on the new parameter to obtain a corresponding dynamic map.
For the device embodiment, since it is basically similar to the method embodiment, the description is simple, and for the relevant points, refer to the partial description of the method embodiment.
The embodiment of the present application also discloses an electronic device, which includes a processor, a memory, and a computer program stored on the memory and capable of running on the processor, and when the computer program is executed by the processor, the steps of the model rendering method for dynamic effects described above are implemented, for example: displaying a graphical user interface, the graphical user interface including a plurality of different rendering options;
responding to starting operation aiming at a target rendering option, acquiring an element map of a target model to be rendered, wherein the target rendering option is any one of a plurality of different rendering options, and the element map is used for determining dynamic elements of the target model;
processing the element map by adopting a target rendering algorithm corresponding to the target rendering option to obtain a corresponding dynamic map, wherein the dynamic map comprises the relation of the position of the dynamic element changing along with time;
and rendering the target model based on the dynamic map to obtain the dynamic effect of the target model.
Optionally, rendering the target model based on the dynamic map to obtain a dynamic effect of the target model, further comprising:
acquiring an original texture map and a dynamic mask of a target model; each pixel in the dynamic mask is used for determining the fusion weight of each pixel of the original texture map and the pixel corresponding to the dynamic map;
fusing the original texture map and the dynamic map according to the dynamic mask to obtain a fused map;
and rendering the dynamic area based on the fusion chartlet to obtain the dynamic effect of the target model.
Optionally, the method further comprises:
acquiring element colors of a target model; the element color is used for adjusting the color of the dynamic element;
and processing the dynamic map according to the element colors to update the dynamic map.
Optionally, the element map includes a first channel map and a second channel map, and when the target rendering option is the first rendering option, the element map is processed by using a target rendering algorithm corresponding to the target rendering option to obtain a corresponding dynamic map, including:
carrying out zooming processing on the texture coordinate information of the target model according to the first zooming parameter, and carrying out offset processing on the texture coordinate information according to the first offset to obtain first texture coordinate information; the first offset is obtained according to the first offset parameter and the time information;
sampling the first channel map according to the first texture coordinate information to obtain a first sampling result;
zooming the texture coordinate information according to the second zooming parameter and the first sampling result, and offsetting the texture coordinate information according to the second offset to obtain second texture coordinate information; the second offset is obtained according to the second offset parameter and the time information;
and sampling the second channel map according to the second texture coordinate information to obtain a dynamic map corresponding to the first rendering option.
Optionally, the element map includes a first channel map and a second channel map, and when the target rendering option is the second rendering option, the element map is processed by using a target rendering algorithm corresponding to the target rendering option to obtain a corresponding dynamic map, including:
carrying out zooming processing on the texture coordinate information of the target model according to the first zooming parameter, and carrying out offset processing on the texture coordinate according to the first offset to obtain first texture coordinate information; the first offset is obtained according to the first offset parameter and the time information;
sampling the first channel map according to the first texture coordinate information to obtain a first sampling result;
zooming the texture coordinate information according to the second zooming parameter, and offsetting the texture coordinate information according to the second offset and the disturbance information to obtain third texture coordinate information; the second offset is obtained according to the second offset parameter and the time information; the disturbance information is obtained according to the disturbance intensity control parameter and the first sampling result;
and sampling the second channel map according to the third texture coordinate information to obtain a dynamic map corresponding to the second rendering option.
Optionally, the element map includes a color channel map and a fourth channel map, and when the target rendering option is the third rendering option, the element map is processed by using a target rendering algorithm corresponding to the target rendering option to obtain a corresponding dynamic map, including:
carrying out zooming processing on the texture coordinate information of the target model according to the first zooming parameter, and carrying out offset processing on the texture coordinate information according to the first offset to obtain first texture coordinate information; the first offset is obtained according to the first offset parameter and the time information;
sampling the fourth channel map according to the first texture coordinate information to obtain a second sampling result;
zooming the texture coordinate information according to the rolling information and the second zooming parameter, and offsetting the texture coordinate information according to the second offset and the distortion information to obtain fourth texture coordinate information; the rolling information is obtained according to the second sampling result and the rolling strength control parameter, the second offset is obtained according to the second offset parameter and the time information, and the distortion information is obtained according to the second sampling result and the distortion strength control parameter;
and sampling the color channel map according to the fourth texture coordinate information to obtain a dynamic map corresponding to the third rendering option.
Optionally, the element map includes a color channel map and a fourth channel map, and when the target rendering option is the fourth rendering option, the element map is processed by using a target rendering algorithm corresponding to the target rendering option to obtain a corresponding dynamic map, including:
zooming the texture coordinate information of the target model according to the second zooming parameter, and offsetting the texture coordinate information according to the second offset to obtain fifth texture coordinate information; the second offset is obtained according to the second offset parameter and the time information;
sampling the fourth channel map according to the fifth texture coordinate information to obtain a third sampling result;
determining sixth texture coordinate information according to the third sampling result and the parallax offset texture coordinate information, wherein the parallax offset texture coordinate information is obtained by performing parallax offset on the texture coordinate information;
and sampling the color channel map according to the sixth texture coordinate information to obtain a dynamic map corresponding to the fourth rendering option.
Optionally, the scaling processing is performed on the texture coordinate information of the target model according to the second scaling parameter, and the offset processing is performed on the texture coordinate information according to the second offset to obtain fifth texture coordinate information, further including:
performing offset processing on the fifth texture coordinate information according to the parallax information to update the fifth texture coordinate information; the parallax information is obtained from the viewing angle direction and normal direction of each pixel of the object model, and the distortion intensity control parameter.
Optionally, determining sixth texture coordinate information according to the third sampling result and the disparity-shifted texture coordinate information, further comprising:
adjusting the third sampling result by adopting the parallax intensity control parameter to update the third sampling result;
and determining sixth texture coordinate information according to the updated third sampling result and the parallax offset texture coordinate information.
Optionally, before obtaining the sixth texture coordinate information, the method further includes:
carrying out zooming processing on the texture coordinate information of the target model according to the first zooming parameter, and carrying out offset processing on the texture coordinate information according to the first offset to obtain first texture coordinate information; the first offset is obtained according to the first offset parameter and the time information;
acquiring the tangent direction, the sub-tangent direction and the visual angle direction of each pixel of the target model;
determining parallax offset information according to the tangential direction, the sub-tangential direction, the view angle direction and the parallax depth control parameter;
and generating parallax offset texture coordinate information according to the parallax offset information and the first texture coordinate information.
Optionally, the graphical user interface comprises a plurality of parameter options, the parameter options comprising at least a first zoom parameter option, a first offset parameter option, a second zoom parameter option, a second offset parameter option; each parameter option has a corresponding parameter input box, and the method further comprises:
in response to an adjustment operation for a parameter in at least one parameter input box, determining a new parameter corresponding to the adjustment operation;
and based on the new parameters, processing the element map by adopting a target rendering algorithm corresponding to the target rendering option to obtain a corresponding dynamic map.
Embodiments of the present application also disclose a computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, implements the steps of the model rendering method for dynamic effects as described above, for example: displaying a graphical user interface, the graphical user interface including a plurality of different rendering options;
responding to starting operation aiming at a target rendering option, acquiring an element map of a target model to be rendered, wherein the target rendering option is any one of a plurality of different rendering options, and the element map is used for determining dynamic elements of the target model;
processing the element map by adopting a target rendering algorithm corresponding to the target rendering option to obtain a corresponding dynamic map, wherein the dynamic map comprises the relation of the position of the dynamic element changing along with time;
and rendering the target model based on the dynamic map to obtain the dynamic effect of the target model.
Optionally, rendering the target model based on the dynamic map to obtain a dynamic effect of the target model, further comprising:
acquiring an original texture map and a dynamic mask of a target model; each pixel in the dynamic mask is used for determining the fusion weight of each pixel of the original texture map and the pixel corresponding to the dynamic map;
fusing the original texture map and the dynamic map according to the dynamic mask to obtain a fused map;
and rendering the dynamic area based on the fusion chartlet to obtain the dynamic effect of the target model.
Optionally, the method further comprises:
acquiring element colors of a target model; the element color is used for adjusting the color of the dynamic element;
and processing the dynamic map according to the element colors to update the dynamic map.
Optionally, the element map includes a first channel map and a second channel map, and when the target rendering option is the first rendering option, the element map is processed by using a target rendering algorithm corresponding to the target rendering option to obtain a corresponding dynamic map, where the method includes:
carrying out zooming processing on the texture coordinate information of the target model according to the first zooming parameter, and carrying out offset processing on the texture coordinate information according to the first offset to obtain first texture coordinate information; the first offset is obtained according to the first offset parameter and the time information;
sampling the first channel map according to the first texture coordinate information to obtain a first sampling result;
zooming the texture coordinate information according to the second zooming parameter and the first sampling result, and offsetting the texture coordinate information according to the second offset to obtain second texture coordinate information; the second offset is obtained according to the second offset parameter and the time information;
and sampling the second channel map according to the second texture coordinate information to obtain a dynamic map corresponding to the first rendering option.
Optionally, the element map includes a first channel map and a second channel map, and when the target rendering option is the second rendering option, the element map is processed by using a target rendering algorithm corresponding to the target rendering option to obtain a corresponding dynamic map, including:
carrying out zooming processing on the texture coordinate information of the target model according to the first zooming parameter, and carrying out offset processing on the texture coordinate according to the first offset to obtain first texture coordinate information; the first offset is obtained according to the first offset parameter and the time information;
sampling the first channel map according to the first texture coordinate information to obtain a first sampling result;
zooming the texture coordinate information according to the second zooming parameter, and offsetting the texture coordinate information according to the second offset and the disturbance information to obtain third texture coordinate information; the second offset is obtained according to the second offset parameter and the time information; the disturbance information is obtained according to the disturbance intensity control parameter and the first sampling result;
and sampling the second channel map according to the third texture coordinate information to obtain a dynamic map corresponding to the second rendering option.
Optionally, the element map includes a color channel map and a fourth channel map, and when the target rendering option is the third rendering option, the element map is processed by using a target rendering algorithm corresponding to the target rendering option to obtain a corresponding dynamic map, including:
scaling the texture coordinate information of the target model according to the first scaling parameter, and offsetting the texture coordinate information according to the first offset to obtain first texture coordinate information; the first offset is obtained according to the first offset parameter and the time information;
sampling the fourth channel map according to the first texture coordinate information to obtain a second sampling result;
scaling the texture coordinate information according to the rolling information and the second scaling parameter, and offsetting the texture coordinate information according to the second offset and the warping information to obtain fourth texture coordinate information; the rolling information is obtained according to the second sampling result and the rolling strength control parameter, the second offset is obtained according to the second offset parameter and the time information, and the warping information is obtained according to the second sampling result and the warping strength control parameter;
and sampling the color channel map according to the fourth texture coordinate information to obtain a dynamic map corresponding to the third rendering option.
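For the third option, the fourth-channel sample drives both a per-pixel scale factor (the rolling information) and an additive offset (the warping information). Again a hedged sketch reusing the `sample` helper; the multiplicative/additive combination below is an assumption:

```python
def third_option_dynamic_map(uv, fourth_channel, color_channel, time,
                             scale1, offset_param1, scale2, offset_param2,
                             roll_strength, warp_strength):
    uv1 = uv * scale1 + np.asarray(offset_param1) * time
    s2 = sample(fourth_channel, uv1)                        # second sampling result
    roll = 1.0 + roll_strength * s2[..., None]              # rolling information
    warp = warp_strength * s2[..., None]                    # warping information
    uv4 = uv * scale2 * roll + np.asarray(offset_param2) * time + warp
    return sample(color_channel, uv4)                       # dynamic map for option 3
```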
Optionally, the element map includes a color channel map and a fourth channel map, and when the target rendering option is the fourth rendering option, the element map is processed by using a target rendering algorithm corresponding to the target rendering option to obtain a corresponding dynamic map, where the method includes:
scaling the texture coordinate information of the target model according to the second scaling parameter, and offsetting the texture coordinate information according to the second offset to obtain fifth texture coordinate information; the second offset is obtained according to the second offset parameter and the time information;
sampling the fourth channel map according to the fifth texture coordinate information to obtain a third sampling result;
determining sixth texture coordinate information according to the third sampling result and the parallax offset texture coordinate information, wherein the parallax offset texture coordinate information is obtained by performing parallax offset on the texture coordinate information;
and sampling the color channel map according to the sixth texture coordinate information to obtain a dynamic map corresponding to the fourth rendering option.
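The fourth option combines a depth-like sample with parallax-offset texture coordinates before the final colour lookup. How the third sampling result and the parallax-offset coordinates are combined is not stated, so the additive form below, optionally scaled by a parallax intensity as in the refinement described further on, is only an assumption; `sample` is the helper from the first sketch:

```python
def fourth_option_dynamic_map(uv, fourth_channel, color_channel, time,
                              scale2, offset_param2, parallax_uv,
                              parallax_intensity=1.0):
    uv5 = uv * scale2 + np.asarray(offset_param2) * time    # fifth texture coordinates
    s3 = sample(fourth_channel, uv5)                        # third sampling result
    uv6 = parallax_uv + parallax_intensity * s3[..., None]  # sixth texture coordinates
    return sample(color_channel, uv6)                       # dynamic map for option 4
```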
Optionally, the scaling of the texture coordinate information of the target model according to the second scaling parameter and the offsetting of the texture coordinate information according to the second offset to obtain the fifth texture coordinate information further includes:
performing offset processing on the fifth texture coordinate information according to the parallax information to update the fifth texture coordinate information; the parallax information is obtained according to the view direction and normal direction of each pixel of the target model and the distortion intensity control parameter.
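One plausible construction of the parallax information in the preceding step is the tangent-plane component of the view direction, scaled by the distortion intensity control parameter; the formula below is an assumption, not the application's exact definition:

```python
import numpy as np

def parallax_info(view_dir, normal, distortion_intensity):
    """view_dir, normal: unit H x W x 3 arrays; returns an H x W x 2 UV offset."""
    v_dot_n = np.sum(view_dir * normal, axis=-1, keepdims=True)
    tangent_component = view_dir - v_dot_n * normal   # part of the view in the surface plane
    return distortion_intensity * tangent_component[..., :2]
```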
Optionally, determining sixth texture coordinate information according to the third sampling result and the parallax offset texture coordinate information further comprises:
adjusting the third sampling result by using the parallax intensity control parameter to update the third sampling result;
and determining sixth texture coordinate information according to the updated third sampling result and the parallax offset texture coordinate information.
Optionally, before obtaining the sixth texture coordinate information, the method further includes:
scaling the texture coordinate information of the target model according to the first scaling parameter, and offsetting the texture coordinate information according to the first offset to obtain first texture coordinate information; the first offset is obtained according to the first offset parameter and the time information;
acquiring the tangent direction, the sub-tangent direction and the visual angle direction of each pixel of the target model;
determining parallax offset information according to the tangent direction, the sub-tangent direction, the view angle direction and the parallax depth control parameter;
and generating parallax offset texture coordinate information according to the parallax offset information and the first texture coordinate information.
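The parallax offset itself can be read as projecting the view direction onto the tangent and sub-tangent axes and scaling by the parallax depth control parameter; the sketch below follows that standard construction, which is an assumption about the application's intent:

```python
import numpy as np

def parallax_offset_uv(uv1, tangent, sub_tangent, view_dir, parallax_depth):
    """uv1: H x W x 2 first texture coordinates; direction inputs are unit H x W x 3."""
    view_t = np.sum(view_dir * tangent, axis=-1)       # view projected on the tangent
    view_b = np.sum(view_dir * sub_tangent, axis=-1)   # view projected on the sub-tangent
    offset = parallax_depth * np.stack([view_t, view_b], axis=-1)
    return uv1 + offset                                # parallax offset texture coordinates
```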
Optionally, the graphical user interface comprises a plurality of parameter options, the parameter options comprising at least a first scaling parameter option, a first offset parameter option, a second scaling parameter option, and a second offset parameter option; each parameter option has a corresponding parameter input box, and the method further comprises:
in response to an adjustment operation for a parameter in at least one parameter input box, determining a new parameter corresponding to the adjustment operation;
and based on the new parameters, processing the element map by adopting a target rendering algorithm corresponding to the target rendering option to obtain a corresponding dynamic map.
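A minimal sketch of how the parameter options might feed back into the rendering: the adjusted value simply replaces the old one and the selected rendering function is re-run with the new parameter set (all names below are hypothetical):

```python
# Hypothetical parameter set mirroring the GUI's parameter input boxes.
params = {"scale1": 2.0, "offset_param1": (0.10, 0.00),
          "scale2": 4.0, "offset_param2": (0.00, 0.25)}

def on_parameter_adjusted(params, name, value, rebuild_dynamic_map):
    params[name] = value                   # new parameter from the adjustment operation
    return rebuild_dynamic_map(**params)   # re-run the target rendering algorithm
```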
The embodiments in this specification are described in a progressive manner; each embodiment focuses on its differences from the other embodiments, and for the parts that are the same or similar among the embodiments, reference may be made to one another.
As will be appreciated by one of skill in the art, embodiments of the present application may be provided as a method, apparatus, or computer program product. Accordingly, embodiments of the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, embodiments of the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and so forth) having computer-usable program code embodied therein.
Embodiments of the present application are described with reference to flowchart illustrations and/or block diagrams of methods, terminal devices (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing terminal to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing terminal, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing terminal to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing terminal to cause a series of operational steps to be performed on the computer or other programmable terminal to produce a computer implemented process such that the instructions which execute on the computer or other programmable terminal provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While preferred embodiments of the present application have been described, additional variations and modifications of these embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including the preferred embodiment and all such alterations and modifications as fall within the true scope of the embodiments of the application.
Finally, it should also be noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or terminal that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or terminal. Without further limitation, an element defined by the phrase "comprising a..." does not exclude the presence of additional like elements in a process, method, article, or terminal device that comprises the element.
The model rendering method and apparatus for dynamic effects, the electronic device, and the storage medium provided by the present application have been described in detail above. Specific examples are used herein to explain the principles and implementation of the application, and the description of the embodiments is only intended to help in understanding the method and its core idea. Meanwhile, a person skilled in the art may, in accordance with the idea of the present application, make changes to the specific embodiments and the scope of application. In summary, the content of this specification should not be construed as limiting the present application.

Claims (14)

1. A method for rendering a model of dynamic effects, wherein a graphical user interface is displayed on a display screen of a terminal device by running an application, the graphical user interface comprising a plurality of different rendering options, the method comprising:
in response to a starting operation aiming at a target rendering option, acquiring an element map of a target model to be rendered, wherein the target rendering option is any one of the plurality of different rendering options, and the element map is used for determining a dynamic element of the target model;
processing the element map by adopting a target rendering algorithm corresponding to the target rendering option to obtain a corresponding dynamic map, wherein the dynamic map comprises the change of the position of the dynamic element over time;
and rendering the target model based on the dynamic map to obtain the dynamic effect of the target model.
2. The method of claim 1, wherein rendering the target model based on the dynamic map to obtain the dynamic effect of the target model further comprises:
acquiring an original texture map and a dynamic mask of the target model; each pixel in the dynamic mask is used for determining the fusion weight of each pixel of the original texture map and the pixel corresponding to the dynamic map;
fusing the original texture map and the dynamic map according to the dynamic mask to obtain a fused map;
and rendering the dynamic area based on the fusion map to obtain the dynamic effect of the target model.
3. The method of claim 2, further comprising:
acquiring element colors of the target model; the element color is used for adjusting the color of the dynamic element;
and processing the dynamic map according to the element colors to update the dynamic map.
4. The method of claim 3, wherein the element map comprises a first channel map and a second channel map, and when the target rendering option is a first rendering option, the processing the element map by using a target rendering algorithm corresponding to the target rendering option to obtain a corresponding dynamic map comprises:
scaling the texture coordinate information of the target model according to a first scaling parameter, and offsetting the texture coordinate information according to a first offset to obtain first texture coordinate information; the first offset is obtained according to a first offset parameter and time information;
sampling the first channel map according to the first texture coordinate information to obtain a first sampling result;
scaling the texture coordinate information according to a second scaling parameter and the first sampling result, and offsetting the texture coordinate information according to a second offset to obtain second texture coordinate information; the second offset is obtained according to a second offset parameter and time information;
and sampling the second channel map according to the second texture coordinate information to obtain a dynamic map corresponding to the first rendering option.
5. The method of claim 3, wherein the element map comprises a first channel map and a second channel map, and when the target rendering option is a second rendering option, the processing the element map by using a target rendering algorithm corresponding to the target rendering option to obtain a corresponding dynamic map comprises:
scaling the texture coordinate information of the target model according to the first scaling parameter, and offsetting the texture coordinate information according to the first offset to obtain first texture coordinate information; the first offset is obtained according to a first offset parameter and time information;
sampling the first channel map according to the first texture coordinate information to obtain a first sampling result;
scaling the texture coordinate information according to the second scaling parameter, and offsetting the texture coordinate information according to the second offset and the disturbance information to obtain third texture coordinate information; the second offset is obtained according to a second offset parameter and time information; the disturbance information is obtained according to a disturbance intensity control parameter and the first sampling result;
and sampling the second channel map according to the third texture coordinate information to obtain a dynamic map corresponding to the second rendering option.
6. The method of claim 3, wherein the element map comprises a color channel map and a fourth channel map, and when the target rendering option is a third rendering option, the processing the element map by using a target rendering algorithm corresponding to the target rendering option to obtain a corresponding dynamic map comprises:
scaling the texture coordinate information of the target model according to a first scaling parameter, and offsetting the texture coordinate information according to a first offset to obtain first texture coordinate information; the first offset is obtained according to a first offset parameter and time information;
sampling the fourth channel map according to the first texture coordinate information to obtain a second sampling result;
scaling the texture coordinate information according to the rolling information and the second scaling parameter, and offsetting the texture coordinate information according to the second offset and the warping information to obtain fourth texture coordinate information; the rolling information is obtained according to the second sampling result and the rolling strength control parameter, the second offset is obtained according to a second offset parameter and time information, and the warping information is obtained according to the second sampling result and the warping strength control parameter;
and sampling the color channel map according to the fourth texture coordinate information to obtain a dynamic map corresponding to the third rendering option.
7. The method of claim 3, wherein the element map comprises a color channel map and a fourth channel map, and when the target rendering option is a fourth rendering option, the processing the element map by using a target rendering algorithm corresponding to the target rendering option to obtain a corresponding dynamic map comprises:
scaling the texture coordinate information of the target model according to the second scaling parameter, and offsetting the texture coordinate information according to the second offset to obtain fifth texture coordinate information; the second offset is obtained according to a second offset parameter and time information;
sampling the fourth channel map according to the fifth texture coordinate information to obtain a third sampling result;
determining sixth texture coordinate information according to the third sampling result and the parallax offset texture coordinate information, wherein the parallax offset texture coordinate information is obtained by performing parallax offset on the texture coordinate information;
and sampling the color channel map according to the sixth texture coordinate information to obtain a dynamic map corresponding to the fourth rendering option.
8. The method according to claim 7, wherein the scaling of the texture coordinate information of the target model according to the second scaling parameter and the offsetting of the texture coordinate information according to the second offset to obtain the fifth texture coordinate information further comprises:
performing offset processing on the fifth texture coordinate information according to the parallax information to update the fifth texture coordinate information; the parallax information is obtained according to the view direction and normal direction of each pixel of the target model and the distortion intensity control parameter.
9. The method of claim 7, wherein determining sixth texture coordinate information according to the third sampling result and the parallax offset texture coordinate information further comprises:
adjusting the third sampling result by using a parallax intensity control parameter to update the third sampling result;
and determining sixth texture coordinate information according to the updated third sampling result and the parallax offset texture coordinate information.
10. The method according to any of claims 7-9, wherein prior to said obtaining sixth texture coordinate information, the method further comprises:
scaling the texture coordinate information of the target model according to the first scaling parameter, and offsetting the texture coordinate information according to the first offset to obtain first texture coordinate information; the first offset is obtained according to a first offset parameter and time information;
acquiring the tangent direction, the sub-tangent direction and the visual angle direction of each pixel of the target model;
determining parallax offset information according to the tangent direction, the sub-tangent direction, the view angle direction and the parallax depth control parameter;
and generating parallax offset texture coordinate information according to the parallax offset information and the first texture coordinate information.
11. The method of any of claims 1-9, wherein the graphical user interface comprises a plurality of parameter options, the parameter options comprising at least a first scaling parameter option, a first offset parameter option, a second scaling parameter option, and a second offset parameter option; each parameter option has a corresponding parameter input box, the method further comprising:
in response to an adjustment operation for a parameter in at least one of the parameter input boxes, determining a new parameter corresponding to the adjustment operation;
and processing the element map by adopting a target rendering algorithm corresponding to the target rendering option based on the new parameters to obtain a corresponding dynamic map.
12. An apparatus for model rendering of dynamic effects, wherein a graphical user interface is displayed on a display screen of a terminal device by running an application, the graphical user interface comprising a plurality of different rendering options, the apparatus comprising:
a rendering mode determining module, configured to, in response to a start operation for a target rendering option, obtain an element map of a target model to be rendered, where the target rendering option is any one of the multiple different rendering options, and the element map is used to determine a dynamic element of the target model;
the dynamic map generating module is used for processing the element map by adopting a target rendering algorithm corresponding to the target rendering option to obtain a corresponding dynamic map, wherein the dynamic map comprises the change of the position of the dynamic element over time;
and the dynamic effect rendering module is used for rendering the target model based on the dynamic map to obtain the dynamic effect of the target model.
13. An electronic device comprising a processor, a memory, and a computer program stored on the memory and capable of running on the processor, the computer program when executed by the processor implementing a method of model rendering of dynamic effects according to any of claims 1-11.
14. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, implements a method of model rendering of dynamic effects according to any one of claims 1 to 11.
CN202211648469.7A 2022-12-21 2022-12-21 Model rendering method and device for dynamic effect, electronic equipment and storage medium Pending CN115814415A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211648469.7A CN115814415A (en) 2022-12-21 2022-12-21 Model rendering method and device for dynamic effect, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211648469.7A CN115814415A (en) 2022-12-21 2022-12-21 Model rendering method and device for dynamic effect, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN115814415A (en) 2023-03-21

Family

ID=85517371

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211648469.7A Pending CN115814415A (en) 2022-12-21 2022-12-21 Model rendering method and device for dynamic effect, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN115814415A (en)

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination