CN117292032A - Method and device for generating sequence frame and electronic equipment - Google Patents


Info

Publication number
CN117292032A
CN117292032A (application CN202310976150.5A)
Authority
CN
China
Prior art keywords
map
information
sequence frame
frame
universal sequence
Prior art date
Legal status
Pending
Application number
CN202310976150.5A
Other languages
Chinese (zh)
Inventor
何菲
Current Assignee
Netease Hangzhou Network Co Ltd
Original Assignee
Netease Hangzhou Network Co Ltd
Priority date
Filing date
Publication date
Application filed by Netease Hangzhou Network Co Ltd filed Critical Netease Hangzhou Network Co Ltd
Priority to CN202310976150.5A
Publication of CN117292032A
Legal status: Pending

Links

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T13/00 Animation
    • G06T13/80 2D [Two Dimensional] animation, e.g. using sprites

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The present disclosure discloses a method, an apparatus, an electronic device, and a computer-readable storage medium for generating a sequence frame, wherein the method includes: acquiring a universal sequence frame which is used for rendering a 2D animation and does not have color information; the universal sequence frame characterizes first UV information, wherein the first UV information is used for indicating the position of each pixel point in a virtual model used for rendering the 2D animation in each frame map of the universal sequence frame; determining a color map for rendering the 2D animation; the color map characterizes second UV information, and the second UV information is used for indicating the position of each pixel point in the virtual model in the color map; and mapping the color map onto each frame map of the universal sequence frame according to the first UV information and the second UV information to obtain a sequence frame with color information. The scheme provided by the disclosure can improve the multiplexing rate of sequence frames and effectively reduce the storage space occupied by the game package corresponding to the map resources.

Description

Method and device for generating sequence frame and electronic equipment
Technical Field
The disclosure relates to the technical field of computers, and in particular relates to a method and a device for generating a sequence frame, electronic equipment and a computer readable storage medium.
Background
Currently, 2D animation in virtual games is usually implemented by playing corresponding sequence frames on corresponding models in a certain order. In general, virtual games need to render 2D animations comprising models that have the same shape and the same actions but different colors; for example, a virtual game may contain two soldier models belonging to opposing camps that need to be distinguished by different colors. For the same model, if it is to be rendered in different colors, a sequence frame corresponding to each color needs to be generated, which causes the game package corresponding to the map resource to become oversized, occupy more storage space, and reduce game performance.
Disclosure of Invention
The present disclosure provides a method, an apparatus, an electronic device, and a computer-readable storage medium for generating a sequence frame, which can increase the multiplexing rate of sequence frames and effectively reduce the storage space occupied by the game package corresponding to the map resources. The specific scheme is as follows:
in a first aspect, an embodiment of the present disclosure provides a method for generating a sequence frame, where the method includes:
acquiring a universal sequence frame which is used for rendering 2D animation and does not have color information; the universal sequence frame represents first UV information, and the first UV information is used for indicating the position of each pixel point in a virtual model used for rendering the 2D animation in each frame map of the universal sequence frame;
Determining a color map for rendering the 2D animation; the color map represents second UV information, and the second UV information is used for indicating the position of each pixel point in the virtual model in the color map;
and mapping the color map to each frame map of the universal sequence frame according to the first UV information and the second UV information to obtain the sequence frame with the color information.
In a second aspect, an embodiment of the present disclosure provides an apparatus for generating a sequence frame, the apparatus including:
an acquisition unit for acquiring a general sequence frame for rendering a 2D animation without color information; the universal sequence frame represents first UV information, and the first UV information is used for indicating the position of each pixel point in a virtual model used for rendering the 2D animation in each frame map of the universal sequence frame;
a determining unit for determining a color map for rendering the 2D animation; the color map represents second UV information, and the second UV information is used for indicating the position of each pixel point in the virtual model in the color map;
and the mapping unit is used for mapping the color mapping to each frame mapping of the universal sequence frame according to the first UV information and the second UV information to obtain the sequence frame with the color information.
In a third aspect, the present disclosure also provides an electronic device, including:
a processor; and
a memory for storing a data processing program; after the electronic device is powered on, the processor executes the program to perform the method according to the first aspect.
In a fourth aspect, embodiments of the present disclosure also provide a computer readable storage medium storing a data processing program, the program being executed by a processor to perform the method according to the first aspect.
Compared with the prior art, the method has the following advantages:
the method for generating the sequence frame provided by the disclosure comprises the following steps: acquiring a universal sequence frame which is used for rendering 2D animation and does not have color information; the universal sequence frame represents first UV information, wherein the first UV information is used for indicating the position of each pixel point in a virtual model used for rendering the 2D animation in each frame map of the universal sequence frame; determining a color map for rendering the 2D animation; the color map represents second UV information, and the second UV information is used for indicating the position of each pixel point in the virtual model in the color map; and mapping the color map onto each frame map of the universal sequence frame according to the first UV information and the second UV information to obtain the sequence frame with the color information.
As can be seen, when rendering a 2D animation, the method for generating sequence frames provided in the present disclosure selects a preset universal sequence frame. Because the universal sequence frame characterizes the first UV information, which indicates the position, in each frame map of the universal sequence frame, of each pixel point in the virtual model, and because the position of each pixel point in each frame map can indicate the action of the virtual model, it follows that when two 2D animations contain the same model performing the same actions, the corresponding first UV information is the same and the corresponding universal sequence frames are the same; the same universal sequence frame without color information can therefore be used to render both 2D animations, improving the multiplexing rate of the universal sequence frame. In addition, in the present disclosure, a required color map is selected according to actual needs. Because the color map characterizes the second UV information, which indicates the position of each pixel point in the virtual model in the color map, the color map can be mapped onto each frame map of the universal sequence frame without color information according to the first UV information and the second UV information, so that a required color is given to the universal sequence frame without color information.
According to the method for generating sequence frames provided by the present disclosure, different colors are given to the same universal sequence frame without color information. Thus, during game running, sequence frames with specific colors can be generated in real time according to game requirements for 2D animations that use the same virtual model in different colors, and there is no need to prepare sequence frames for every color in the development stage. This improves the multiplexing rate of the sequence frames, effectively reduces the volume of the game package occupied by map resources in the virtual game, and frees up storage space.
Drawings
FIG. 1 is a flow chart of a method of generating a sequence frame provided by an embodiment of the present disclosure;
fig. 2 is a schematic diagram of a terminal interface after rendering a sequence frame with transparency information and a sequence frame without transparency information in the sequence frame generating method according to an embodiment of the present disclosure;
fig. 3 is a schematic diagram of the aliasing (jagged edges) produced under different minimum subdivision value settings in the sequence frame generating method according to an embodiment of the present disclosure;
FIG. 4 is a schematic diagram of a correct UV map and a wrong UV map in a method for generating a sequence frame according to an embodiment of the present disclosure;
FIG. 5 is an interface schematic diagram of rendering UV maps in an image sampler in a method of generating sequence frames provided by embodiments of the present disclosure;
FIG. 6 is a schematic diagram of UV mapping without pixel edge expansion and UV mapping after pixel edge expansion in a method for generating a sequence frame according to an embodiment of the disclosure;
FIG. 7 is a schematic diagram of rendering a transparency map and an illumination information map in an image sampler in a method of generating a sequence frame provided by an embodiment of the present disclosure;
fig. 8 is a schematic diagram of a sequence frame generated by the method according to the embodiment of the present disclosure after the transparency map and the illumination information map are combined;
fig. 9 is a block diagram showing an example of a sequence frame generating apparatus according to an embodiment of the present disclosure;
fig. 10 is a block diagram illustrating an example of an electronic device according to an embodiment of the present disclosure.
Detailed Description
In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present disclosure. The present disclosure may, however, be embodied in many forms other than those described herein, and those skilled in the art may make similar generalizations without departing from the spirit of the disclosure; therefore, the disclosure is not limited to the specific implementations disclosed below.
It should be noted that the terms "first," "second," "third," and the like in the claims, specification, and drawings of the present disclosure are used for distinguishing between similar objects and not for describing a particular sequential or chronological order. The data so used is interchangeable under appropriate circumstances such that the embodiments of the disclosure described herein are capable of operation in sequences other than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," and their variants are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
Related concepts referred to in this disclosure are described below:
1. game software application product (virtual game): a game software application product refers to an application program developed according to game application requirements, and the types of games may include, but are not limited to, at least one of the following: two-dimensional (Two-dimensional) game applications, three-dimensional (Three-dimensional) game applications, virtual Reality (VR) game applications, augmented Reality (Augmented Reality, AR) game applications, mixed Reality (MR) game applications.
2. Game engine
The game engine, sometimes also referred to as a physics engine, refers to the core component of a compiled, editable computer game system or an interactive real-time graphics application. Such systems provide game designers with the various tools required to write games, so that designers can easily and quickly create game programs without starting from scratch.
3. Rendering (render)
Rendering refers to the process of generating a two-dimensional image given a number of conditions such as a virtual camera, three-dimensional objects, light sources, an illumination model, and textures.
4. UV coordinates
UV coordinates refer to a mapping relationship from a vertex in space to a point on a map resource during rendering; through UV coordinates, each point on an image can be accurately made to correspond to a point on the surface of a model object. A UV coordinate is a coordinate value in a UV coordinate system with two coordinate axes, U and V, where U represents the horizontal direction and V represents the vertical direction. The UV coordinates define the location information of each point on the image that is correlated with the 3D model corresponding to the image.
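As an illustration (not part of the patent), the correspondence between a UV coordinate and a pixel on a texture of a given size can be sketched as follows; the helper name `uv_to_pixel` and the V-flip convention are assumptions made for this example.

```python
# Hypothetical helper (not from the patent): map a UV coordinate in [0, 1]
# to a pixel index on a width x height texture. V is flipped because image
# rows are usually counted downward while V grows upward.
def uv_to_pixel(u, v, width, height):
    x = min(int(u * width), width - 1)
    y = min(int((1.0 - v) * height), height - 1)
    return x, y
```

Under this convention, UV (0, 1) lands on the top-left pixel of the texture and UV (1, 0) on the bottom-right pixel.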
In order to improve the multiplexing rate of sequence frames and effectively reduce the storage space occupied by the game package corresponding to the map resources, the first embodiment of the disclosure provides a method for generating a sequence frame, which is applied to an electronic device. The electronic device may be a desktop computer, a notebook computer, a mobile phone, a tablet computer, a server, a terminal device, or the like, or may be any other electronic device capable of generating sequence frames.
The following describes a method for generating a sequence frame according to an embodiment of the present disclosure with reference to fig. 1 to 8.
As shown in fig. 1, the method for generating a sequence frame provided by the present disclosure includes the following steps S101 to S103.
Step S101: acquiring a universal sequence frame which is used for rendering 2D animation and does not have color information; the universal sequence frame characterizes first UV information, and the first UV information is used for indicating positions of pixel points in a virtual model used for rendering the 2D animation in each frame map of the universal sequence frame.
This step is used to select the corresponding generic sequence frame when rendering the 2D animation.
In this step, the universal sequence frame includes multiple maps, and the universal sequence frame can be understood as a set of multi-frame maps. The map is a picture for defining each attribute of the surface of the virtual model, wherein UV coordinate information is characterized in the map, and the map can be mapped to the surface of the virtual model through the UV coordinate information. In short, the mapping can be "pasted" on the surface of the virtual model through the UV coordinates, so that the virtual model presents a more lifelike and natural effect.
In the present disclosure, the actions of the virtual model indicated by the multi-frame maps in the universal sequence frame may differ. Each frame map in the universal sequence frame may be arranged according to a certain play order; when each frame map acts on the virtual model, an animation frame of the 2D animation is obtained, and when the animation frames are played in the play order, the 2D animation is formed.
Wherein, the virtual model can be a model corresponding to a virtual object in the virtual game, and the virtual object can include, but is not limited to, a virtual character, a virtual animal, a virtual article, and the like. The virtual model can be obtained by modeling by a developer, and is typically stored in a model library of the virtual game, and a corresponding virtual object can be presented on the terminal interface through the virtual model. In this disclosure, a set of actions of a virtual model corresponds to a generic sequence frame.
The universal sequence frame in this step is a sequence frame that does not have color information and characterizes the first UV information. 2D animations in which the same virtual model performs the same action correspond to the same universal sequence frame, so when different 2D animations show the same action of the same virtual model, the same universal sequence frame can be used for rendering regardless of whether the colors of the two animations are the same.
In the present disclosure, the universal sequence frame may include first UV information, where the first UV information may be used to indicate the position, in the universal sequence frame, of each pixel point in the virtual model used for rendering the 2D animation. Each pixel point in the virtual model may be understood as a pixel point on the surface of the virtual model, that is, a pixel point on the plane obtained after the virtual model is unwrapped. The virtual model used for rendering the 2D animation may be a 3D model, and the universal sequence frame may be obtained from the virtual model. Each pixel point in the virtual model and each pixel point in each frame map of the universal sequence frame may be placed in one-to-one correspondence according to the first UV information; since each frame map in the universal sequence frame may indicate an action of the virtual model, the virtual model may present the corresponding action based on the first UV information and the universal sequence frame.
Step S102: determining a color map for rendering the 2D animation; the color map characterizes second UV information, which is used for indicating the position of each pixel point in the virtual model in the color map.
It will be appreciated that, in a virtual game, the virtual model used by an image rendered on the terminal interface corresponds to a color map indicating the color that the rendered image is to present. Since multiple (in the present disclosure, 2 or more) 2D animations may be animations in which the same virtual model renders the same action using different color maps, the same virtual model may correspond to multiple color maps.
In this disclosure, which color map of the virtual model is required for the 2D animation to be rendered may be determined according to the game logic currently to be executed while the game is running. For example, suppose the virtual game includes a soldier 1 belonging to camp a and a soldier 2 belonging to camp b, both rendered from the same soldier model, with soldier 1 blue and soldier 2 red. If soldier 1 currently needs to be displayed, the blue color map is selected to act on the soldier model to obtain the blue soldier 1; if soldier 2 currently needs to be displayed, the red color map is selected to act on the soldier model to obtain the red soldier 2.
The color map characterizes second UV information, where the second UV information is used for indicating the position of each pixel point in the corresponding virtual model in the color map, so that when the color map acts on the virtual model, the virtual model presents the corresponding color.
In particular embodiments, the color map in the present disclosure may include, but is not limited to, either a Diffuse (diffuse reflection) map or a Base Color (intrinsic color) map.
Step S103: and mapping the color map onto each frame map of the universal sequence frame according to the first UV information and the second UV information to obtain the sequence frame with the color information.
This step is used to assign corresponding colors to the universal sequence frames without color information, and further generate sequence frames with color information.
The color map characterizes second UV information, which is used for indicating the positions of the pixel points in the virtual model in the color map; the universal sequence frame characterizes first UV information, which is used for indicating the positions of the pixel points in the virtual model in each frame map of the universal sequence frame. Therefore, each position in the color map can be made to correspond to each position in each frame map of the universal sequence frame, so that the colors indicated in the color map can be given to each frame map of the universal sequence frame. In this way, the color map can be quickly mapped into the universal sequence frame according to the first UV information and the second UV information to obtain a sequence frame with color information.
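A minimal sketch of this step, under the assumption that each foreground pixel of a universal-frame map stores the (u, v) position at which the color map should be sampled and that background pixels are marked `None`; the function name `remap_colors` and the data layout are illustrative, not the patent's implementation.

```python
# Illustrative sketch: give color to one frame map of a universal sequence
# frame. Each foreground pixel carries a UV position pointing into the
# color map; background pixels carry None and stay uncolored.
def remap_colors(uv_frame, color_map):
    h, w = len(color_map), len(color_map[0])
    out = []
    for row in uv_frame:
        out_row = []
        for uv in row:
            if uv is None:            # background: no color assigned
                out_row.append(None)
                continue
            u, v = uv
            x = min(int(u * w), w - 1)
            y = min(int((1.0 - v) * h), h - 1)
            out_row.append(color_map[y][x])
        out.append(out_row)
    return out
```

Applying the same `uv_frame` to a different `color_map` yields a differently colored sequence frame from one colorless universal frame, which is the multiplexing the disclosure describes.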
The method for generating the sequence frame provided by the disclosure comprises the following steps: acquiring a universal sequence frame which is used for rendering 2D animation and does not have color information; the universal sequence frame represents first UV information, wherein the first UV information is used for indicating the position of each pixel point in a virtual model used for rendering the 2D animation in each frame map of the universal sequence frame; determining a color map for rendering the 2D animation; the color map represents second UV information, and the second UV information is used for indicating the position of each pixel point in the virtual model in the color map; and mapping the color map onto each frame map of the universal sequence frame according to the first UV information and the second UV information to obtain the sequence frame with the color information.
As can be seen, when rendering 2D animations, the method for generating sequence frames provided in the present disclosure selects a preset universal sequence frame, because the universal sequence frame characterizes the first UV information for indicating the position of each pixel point in each frame of the universal sequence frame in the virtual model, and because the position of each pixel point in each frame of the map can indicate the motion of the virtual model, it can be known that when the models included in two 2D animations are the same and the motions made by the models are the same, the corresponding first UV information is the same, the corresponding universal sequence frames are the same, and the same universal sequence frame without color information can be used to render the two 2D animations, thereby improving the multiplexing rate of the universal sequence frames. In addition, in the present disclosure, a required color map is selected according to actual needs, and because the color map characterizes second UV information for indicating a position of each pixel point in the virtual model in the color map, the color map can be mapped to each frame map of a universal sequence frame without color information according to the first UV information and the second UV information, so that a required color is given to the universal sequence frame without color information.
According to the method for generating the sequence frames, different colors are given to the same general sequence frame without color information, so that the sequence frames with specific colors corresponding to the 2D animation requirements of the same virtual model and different colors of the virtual model are generated according to the game requirements in real time in the game running process, the sequence frames corresponding to various colors are not required to be set in the development stage, the multiplexing rate of the sequence frames is improved, the volume of a game inclusion of map resources in the virtual game is effectively reduced, and the storage space is released.
In an alternative embodiment, transparency information and illumination information are also characterized in the universal sequence frame; the transparency information is used for indicating the transparency of each pixel point in the virtual model, and the illumination information is used for indicating the illumination intensity received by each pixel point of the virtual model.
The transparency information can be set through a transparency channel (Alpha channel). The transparency channel is a channel that controls the transparency and opacity of an image; it contains only transparency information and no color information. The Alpha channel stores a transparency value between 0 and 1 that determines how transparent a pixel point appears: when the transparency value of a pixel point's transparency channel is 0, the pixel point is completely transparent, and when it is 1, the pixel point is completely opaque. Typically, the transparency channel is an 8-bit grayscale channel that records transparency information in an image in 256 levels of grayscale, defining transparent, opaque, and translucent regions, where black represents transparent, white represents opaque, and gray represents translucent.
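A small sketch of how an 8-bit alpha mask controls blending of a frame pixel over the scene; this standard "over" blend and the name `blend_over` are included for illustration and are not taken from the patent.

```python
# Illustrative alpha blend: alpha8 = 0 (black in the grayscale mask) is
# fully transparent, alpha8 = 255 (white) is fully opaque, and values in
# between are translucent.
def blend_over(fg, bg, alpha8):
    a = alpha8 / 255.0
    return tuple(round(f * a + b * (1.0 - a)) for f, b in zip(fg, bg))
```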
In practical application, when transparency information is not set in a sequence frame, the background part of the sequence frame cannot be fused with the game scene when the corresponding 2D animation is rendered and displayed on the terminal device, so the background of each frame map in the sequence frame is shown on the terminal interface. When transparency information is set in the sequence frame, the background part of the sequence frame can be removed through the transparency channel and only the virtual model part is retained, so that the virtual model blends well with the game scene when displayed in the graphical interface of the terminal device; the user then perceives no sense of disconnection when playing the virtual game, which preserves the user's game experience.
As shown in fig. 2, the schematic diagram of the terminal interface after rendering a sequence frame with transparency information and a sequence frame without transparency information includes an interface (2-a) and an interface (2-b). The interface (2-a) shows the terminal interface after rendering a sequence frame without transparency information: the background of each frame of the sequence frame is drawn over the game scene, so the rendered virtual model appears severely disconnected from the game scene and cannot blend into the game background. The interface (2-b) shows the terminal interface after rendering a sequence frame with transparency information: the background of each frame of the sequence frame is removed, so the rendered virtual model blends well with the game scene.
It can be understood that the brightness of the color displayed by the same pixel point differs under different illumination intensities; therefore, the brightness of each pixel point's color can be adjusted by setting illumination information, so that each pixel point presents a different brightness effect. On this basis, the pixel points in each animation frame of a 2D animation rendered from a universal sequence frame carrying illumination information can present different color brightness, making the 2D animation more real and natural.
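The brightness adjustment described above can be sketched as scaling each color channel by the illumination intensity stored for the pixel, clamping at the 8-bit maximum; this particular scaling rule is an assumption made for illustration, not the patent's stated formula.

```python
# Illustrative sketch: scale a pixel's color by its illumination intensity.
# intensity 1.0 leaves the color unchanged; values above 1.0 brighten the
# pixel and are clamped to the 8-bit range.
def apply_light(rgb, intensity):
    return tuple(min(255, round(c * intensity)) for c in rgb)
```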
In general, the illumination in a 2D game is fixed; therefore, the color change produced by illumination at a given pixel point of a virtual character is the same no matter where the character is in the game scene. For this reason, the present disclosure sets illumination information in the universal sequence frame, so that as long as the virtual model required for rendering the 2D animation and its actions match the virtual model and the actions indicated by the universal sequence frame, the universal sequence frame carrying illumination information can be used for generation, making the generated 2D animation more realistic and natural.
In an alternative embodiment, the first UV information, the transparency information and the illumination information may indicate different information by different maps. The generic sequence frame may thus be generated by:
Acquiring a UV map, a transparency map and an illumination information map corresponding to the universal sequence frame;
and combining the UV mapping, the transparency mapping and the illumination information mapping to obtain the universal sequence frame.
The UV map is used for indicating the first UV information, the transparency map is used for indicating the transparency information, and the illumination information map is used for indicating the illumination information.
Thus, after the universal sequence frame is generated by merging the UV map, the transparency map, and the illumination information map, the universal sequence frame also characterizes the first UV information, the transparency information, and the illumination information. During game running, when there is a need to render a 2D animation, the corresponding universal sequence frame may be used for rendering.
Specifically, the color map is mapped onto each frame map in the universal sequence frame through the first UV information and the second UV information corresponding to the color map, so that the rendered 2D animation presents the corresponding colors; and because the universal sequence frame also characterizes transparency information and illumination information, the rendered 2D animation presents the color brightness corresponding to the illumination information and blends well with the game scene. The rendered 2D animation is thereby made real and natural.
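One plausible way to merge the three maps into a single universal-frame pixel is to pack them into RGBA channels (U in R, V in G, illumination in B, transparency in A). This packing, and the helper `pack_pixel`, are assumptions made for illustration; the patent does not fix a channel layout.

```python
# Hypothetical channel packing for a merged universal-sequence-frame pixel:
# all four quantities are normalized to [0, 1] and quantized to 8 bits.
def pack_pixel(u, v, light, alpha):
    to8 = lambda x: max(0, min(255, round(x * 255)))
    return (to8(u), to8(v), to8(light), to8(alpha))
```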
In an alternative embodiment, the UV map, the transparency map, and the illumination information map may be generated by an image sampler. The image sampler may include at least one of an antialiasing sampler and a global supersampler, and may also include other image samplers. After the rendering settings are turned on, one or both image samplers may be turned on under the renderer. The present disclosure does not particularly limit the type of image sampler.
The UV map, transparency map and illumination information map are described in detail below:
1. UV map
The UV map is used for indicating the positions, in each frame map of the universal sequence frame, of the pixel points of the virtual model. By generating the UV map, the colors of the pixel points in the color map can be mapped onto the universal sequence frame according to the pixel-position mapping relation indicated by the UV map, thereby obtaining a sequence frame with color information.
The UV map corresponding to the universal sequence frame may be specifically generated by: inputting the virtual model and animation data required by the virtual model to execute preset actions into an image sampler, carrying out parameter configuration on the image sampler, driving the virtual model to move through the animation data, and sampling the moving virtual model through the image sampler after parameter configuration to generate a UV map corresponding to a universal sequence frame.
Thus, the step of "obtaining a UV map corresponding to a universal sequence frame" may be implemented by:
inputting the virtual model and animation data required by the virtual model to execute preset actions into an image sampler;
configuring parameters of an image sampler;
and according to the virtual model and the animation data required by the virtual model to execute the preset action, sampling according to the configured parameters of the image sampler to obtain the UV map corresponding to the universal sequence frame.
In practice, the image sampler may also support an image filter, which can be used to reduce noise in the image and enhance its edges, softening the texture of the image and the edges of objects. Therefore, when generating a map, the parameters of the image filter can be configured on the configuration page of the image sampler, and the input data is sampled and filtered according to the configured sampling parameters of the image sampler and the configured filtering parameters of the image filter to obtain the corresponding map.
Typically, the image sampler has an original UV map composed of a red channel and a green channel, where the red channel is a horizontal gradient channel from 0 to 255 and the green channel is a vertical gradient channel from 0 to 255. The 3D virtual model is unwrapped to obtain its UV layout, and the pixel points in the UV layout are mapped onto the original UV map to obtain the UV map of the 3D virtual model.
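The original UV map described above can be sketched in a few lines. The following is a minimal illustration in Python with NumPy; the function name is an assumption for illustration, not part of the disclosure. Each pixel's (R, G) value simply encodes that pixel's own horizontal and vertical position on a 0-255 scale:

```python
import numpy as np

def make_original_uv_map(width, height):
    """Build the original UV map: red = horizontal 0-255 gradient,
    green = vertical 0-255 gradient, so (R, G) at any pixel encodes
    that pixel's (U, V) position."""
    u = np.rint(np.linspace(0, 255, width)).astype(np.uint8)
    v = np.rint(np.linspace(0, 255, height)).astype(np.uint8)
    uv = np.zeros((height, width, 2), dtype=np.uint8)
    uv[..., 0] = u[np.newaxis, :]   # red channel: horizontal gradient
    uv[..., 1] = v[:, np.newaxis]   # green channel: vertical gradient
    return uv
```

Mapping the unwrapped model's pixel points onto this gradient is what lets a later lookup recover, from any rendered pixel's red and green values, the texel it should fetch from a color map.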
The step of configuring parameters of the image sampler may specifically comprise the steps of:
configuring subdivision antialiasing parameters and image sampling output type parameters of an image sampler; the subdivision antialiasing parameter is used for indicating the accuracy of eliminating concave-convex jaggies appearing at the image edge in the picture output by the display, and the image sampling output type parameter is used for indicating the output mapping type.
When generating a map in an image sampler, antialiasing is usually set automatically to generate an antialiased map. For a given pixel, the antialiasing processing samples the pixels within a certain range around it, thereby softening the appearance of objects and eliminating jaggies.
The antialiasing setting is used to eliminate the concave-convex jaggies appearing at the edges of objects in the picture output by the display. In the image sampler, the antialiasing setting can be realized by configuring subdivision antialiasing parameters. Subdivision antialiasing can be divided into maximum subdivision and minimum subdivision, and its main processing procedure is as follows: starting from an initial control grid, new points are recursively generated according to certain rules to gradually refine the control grid; as the subdivision antialiasing proceeds, the control grid is gradually smoothed, finally producing a smooth curve or surface that interpolates or approximates the discrete points.
The maximum subdivision can be understood as the maximum number of samples taken for each pixel point; it is mainly used for sampling flat areas, and the higher the maximum subdivision value, the higher the sampling quality and the slower the sampling speed. The minimum subdivision can be understood as the minimum number of samples taken for each pixel point; it is mainly used for sampling areas containing diagonal edges, and the larger the minimum subdivision value, the higher the sampling quality, the better the antialiasing of graphic edges, and the lower the rendering speed.
In general, the antialiasing processing changes the positions of pixels in the map in order to achieve the display effect. Based on this, when the antialiasing setting is turned on, the positions of the pixels on the universal sequence frame will change, so that when the color map is mapped onto the universal sequence frame, the positions of the pixels corresponding to the colors will be shifted, causing the colors to be mapped to wrong positions. Therefore, in order to map the color map onto the correct positions of the universal sequence frame, the antialiasing settings need to be canceled when rendering the UV map.
Thus, the step of configuring the subdivision antialiasing parameter and the image-sampling output type parameter of the image sampler may be achieved by:
Configuring subdivision antialiasing parameters of the image sampler to be parameters which maximize the degree of concave-convex aliasing of each pixel point;
the image sample output type parameter of the image sampler is configured to be a parameter indicating that the output type is UVW map.
In an alternative embodiment, setting the minimum subdivision value to 1 in the image sampler may represent taking a single sample for each pixel; pixels near that pixel are not sampled, and very strong jaggies and noise will appear in the displayed picture. In this case, the parameter that maximizes the degree of concave-convex aliasing of each pixel point can be understood as setting the minimum subdivision value to 1. In other image samplers, the subdivision antialiasing parameter may be configured with other specific values to maximize the degree of jaggedness of each pixel.
Configuring the image sampling output type as a parameter indicating that the output type is a UVW map yields the corresponding UV map. It will be appreciated that when other types of maps need to be generated, such as a transparency map, the image sampling output type may be configured as a parameter indicating a transparency map.
Fig. 3 is a schematic diagram of the jaggies corresponding to different minimum subdivision value settings in the method for generating a sequence frame according to the embodiment of the present disclosure, including an interface (3-a) and an interface (3-b). The interface (3-a) shows the picture when the minimum subdivision value is set to 1; the picture in the interface (3-a) has strong jaggies and noise. The interface (3-b) shows the picture when the minimum subdivision value is set to 6; the jaggies and noise in the interface (3-b) are weaker than those in the interface (3-a).
It will be appreciated that, when rendering the UV map, antialiasing is canceled to ensure that the positions, in the UV map, of the pixel points of the virtual model indicated by the generated UV map are accurate. In this way, the positions indicated by the universal sequence frame obtained from the UV map are the correct positions, and the color map can therefore be mapped to the correct positions on the universal sequence frame.
Fig. 4 is a schematic diagram of a correct UV map and a wrong UV map in the method for generating a sequence frame provided by the embodiment of the present disclosure. As can be seen from fig. 4, since antialiasing processing is not set during rendering of the correct UV map, the correct UV map has no antialiasing effect: the color transitions of the map are absolute, and adjacent pixels at edges are not blended with each other. The wrong UV map, by contrast, has an antialiasing effect, and adjacent pixels at its edges are blended with each other.
The following describes, with reference to fig. 5, rendering a UV map in an image sampler in a method for generating a sequence frame according to an embodiment of the present disclosure:
FIG. 5 is a display interface for rendering UV maps in an image sampler, specifically as follows:
Firstly, starting an anti-aliasing type image sampler, and inputting a virtual model into the image sampler;
setting the image sampling type of the image sampler as a rendering block;
thirdly, canceling the subdivision antialiasing setting in the render-block image sampler, setting the value of the minimum subdivision to 1, and canceling the setting of the maximum subdivision; since the maximum subdivision may be selected by default in the antialiasing-type image sampler, the maximum subdivision needs to be manually unchecked to cancel the setting;
and fourthly, in the setting page of the Vray (rendering plug-in) sampling information parameter, setting the type to UVW coordinates and the UVW mode to clamp. Vray can provide rendering for a picture or animation, and the clamp mode is used to indicate that the UV map output by the image sampler is output within the selected region.
In the present disclosure, since the antialiasing setting is not turned on when the UV map is rendered, a broken-line problem will occur wherever a diagonal line appears in the UV map. For example, when a virtual weapon held by the virtual model is tilted, the virtual weapon will appear discontinuous, which makes the generated UV map look unreal and unnatural.
Therefore, in an alternative embodiment, pixel edge expansion may be performed on the generated correct UV map, and the method for generating a sequence frame provided by the embodiment of the present disclosure may further include the following steps:
and carrying out pixel edge expansion on the UV mapping to obtain the UV mapping after the pixel edge expansion, so that adjacent pixel points exist in the pixel points in the UV mapping after the pixel edge expansion.
The principle of pixel edge expansion is as follows: for each pixel in the image, check whether it has adjacent pixels around it; if not, interpolate between the value of that pixel and the nearest adjacent pixel to obtain a new pixel value. This process is repeated until the pixels at the edges of the image are all expanded to the desired width.
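One simple way to realize this principle is a neighbour-averaging pass repeated until the border reaches the desired width. The following Python/NumPy sketch is an illustration under assumptions of my own (the function name, the boolean coverage mask, and the choice of 4-neighbour averaging are not specified by the disclosure): every empty pixel that touches a filled pixel takes the average of its filled 4-neighbours, and each pass grows the filled region by one pixel.

```python
import numpy as np

def expand_edges(rgb, mask, passes=1):
    """Pixel edge expansion: fill empty pixels (mask == False) bordering
    filled ones with the average of their filled 4-neighbours; each pass
    grows the filled border by one pixel."""
    rgb = rgb.astype(np.float64)
    mask = mask.copy()
    h, w = mask.shape
    shifts = [(-1, 0), (1, 0), (0, -1), (0, 1)]
    for _ in range(passes):
        acc = np.zeros_like(rgb)
        cnt = np.zeros((h, w), dtype=np.float64)
        for dy, dx in shifts:
            sm = np.roll(mask, (dy, dx), axis=(0, 1))
            # discard rows/columns that wrapped around in np.roll
            if dy == -1: sm[-1, :] = False
            if dy == 1: sm[0, :] = False
            if dx == -1: sm[:, -1] = False
            if dx == 1: sm[:, 0] = False
            acc += np.roll(rgb, (dy, dx), axis=(0, 1)) * sm[..., None]
            cnt += sm
        new = (~mask) & (cnt > 0)          # empty pixels with filled neighbours
        rgb[new] = acc[new] / cnt[new][:, None]
        mask |= new
    return rgb, mask
```

Because only pixels outside the mask are ever written, the interior of the image is left untouched, matching the note below that the expansion acts on edge pixels rather than internal ones.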
In a specific embodiment, the pixel edge expansion can be realized through a pixel edge expansion tool, a developer can manually set parameters in the pixel edge expansion, and the effect after the pixel edge expansion is checked in a visual mode, so that an image after the pixel edge expansion is more real and natural, and no broken line condition exists.
Fig. 6 is a schematic diagram of UV mapping without pixel edge expansion and UV mapping after pixel edge expansion in the method for generating a sequence frame according to the embodiment of the disclosure, including an interface (6-a) and an interface (6-b). As shown in the interface (6-a), the UV mapping is performed without pixel edge expansion, and a breakpoint exists in the interface (6-a) at the oblique line; as shown in the interface (6-b), the UV mapping after pixel edge expansion is performed, and compared with the interface (6-a), the interface (6-b) has no break point in the oblique line.
Pixel edge expansion is mainly used to expand edge pixels in an image, and is not usually performed for internal pixels.
2. Transparency map
Similar to the UV map, the transparency map may also be generated by rendering with the image sampler. Unlike the UV map, however, when rendering the transparency map, the antialiasing setting is required, the image filter is turned on, and the desired image filter is selected.
3. Illumination information map
Similar to the transparency map, the illumination information map may also be generated by rendering by the image sampler and requires antialiasing settings and turning on the image filter.
It should be noted that the transparency map and the illumination information map may be a merged map, and the transparency information and the illumination information are included in the merged map. The merge map may be generated by one-time sampling in the image sampler, or may be generated separately as a transparency map and an illumination information map, which is not particularly limited in this disclosure.
The following describes the generation of a transparency map and an illumination information map in the method for generating a sequence frame according to the embodiment of the present disclosure through fig. 7 and fig. 8:
Fig. 7 is a schematic diagram of rendering a transparency map and an illumination information map in an image sampler in the method for generating a sequence frame according to the embodiment of the disclosure. The method comprises the following steps:
firstly, starting an anti-aliasing type image sampler, and inputting a virtual model into the image sampler;
setting the image sampling type of the image sampler as a rendering block;
third, subdivision antialiasing setting is carried out in a rendering block image sampler, the value of the minimum subdivision is set to be 1, and the maximum subdivision is set to be 100;
step four, selecting the filter "Catmull-Rom" in the image filter; Catmull-Rom is an antialiasing filter with significant edge enhancement, which can noticeably increase edge sharpness and sharpen the image.
And fifthly, setting the type as transparency information and illumination information in a setting page of the Vray sampling information parameter, and outputting a transparency map and an illumination information map.
Fig. 8 is a schematic diagram of a combined transparency map and illumination information map in the method for generating a sequence frame according to the embodiment of the disclosure. As can be seen from fig. 8, because the antialiasing setting is turned on during rendering, the noise of the merged map obtained by merging the generated transparency map and illumination information map is effectively reduced, the image details of the rendered merged map are optimized, and the layering of the images in the merged map is improved.
Therefore, the universal sequence frame generated through the transparency map and the illumination information map also has the transparency information and the illumination information, so that the image details of the universal sequence frame are more optimized, and the image is more real and natural. Based on the above, when the universal sequence frame acts on the virtual model, the virtual model overlapped with the universal sequence frame is endowed with corresponding transparency information and illumination information, so that the layering sense and the volume sense of the virtual model overlapped with the universal sequence frame have better effects, and the corresponding transparency and the color brightness under illumination are presented.
And combining the UV mapping, the illumination information mapping and the transparency mapping to obtain a universal sequence frame, and importing the universal sequence frame into a game engine so that when the 2D animation needs to be rendered, the corresponding universal sequence frame and the corresponding color mapping can be rendered.
Compared with separately setting the UV map, the illumination information map for indicating the illumination information, and the transparency map, merging the UV map, the illumination information map and the transparency map into the universal sequence frame can effectively reduce the number of corresponding maps in the virtual game, reduce the volume of the game package, and improve the game performance of the virtual game.
In addition, with this method for generating sequence frames, the same universal sequence frame without color information can be used for rendering 2D animations whose corresponding models and actions are the same, thereby improving the reuse rate of the universal sequence frame. The needed color map is mapped onto the colorless universal sequence frame according to actual demand, giving the colorless universal sequence frame the needed colors. Moreover, the method does not need to generate sequence frames in various colors, which reduces labor cost and time cost, effectively reduces the volume of the game package in the virtual game, and frees storage space.
In an alternative embodiment, when the UV map, the illumination information map and the transparency map are combined to obtain the universal sequence frame, the channels corresponding to the UV map, the illumination information map and the transparency map are substantially combined. Thus, the step of combining the UV map, the transparency map, and the illumination information map to obtain a universal sequence frame may be implemented by:
the R channel information of the UV map is used as the R channel information of the universal sequence frame, and the G channel information of the UV map is used as the G channel information of the universal sequence frame; the R channel information and the G channel information of the UV map are used for indicating the positions of all pixel points of the virtual model in the UV map;
Combining the transparency map and the illumination information map to obtain a combined map;
the B channel information of the merging map is used as the B channel information of the universal sequence frame, and the transparency channel information of the merging map is used as the transparency channel information of the universal sequence frame; the B channel information of the merging map is used for indicating the illumination intensity received by each pixel point of the virtual model; the transparency channel information of the merged map is used to indicate the transparency of each pixel point in the virtual model.
Specifically, transmitting R (Red) channel information of the UV map processed by the pixel edge-expanding algorithm to an R channel of a universal sequence frame, and transmitting G (Green) channel information of the UV map processed by the pixel edge-expanding algorithm to a G channel of the universal sequence frame; b (Blue) channel information in the merged map obtained by merging the illumination information map and the transparency map is transmitted to a B channel of the universal sequence frame, and transparency (Alpha, A) channel information in the merged map obtained by merging the illumination information map and the transparency map is transmitted to a transparency channel of the universal sequence frame.
Because the positions of the pixels in the virtual model in the R channel and the G channel of the UV map are stored, when the R channel information and the G channel information of the UV map are respectively transmitted to the R channel and the G channel of the universal sequence frame, the positions of the pixels in the virtual model in the universal sequence frame can be reflected, and thus, the color map can map the colors to the correct positions in the universal sequence frame through the R channel and the G channel of the universal sequence frame.
Because the B channel in the merged map after merging the illumination information map and the transparency map can be used for storing illumination information, the illumination information is used for adjusting the color darkness of the pixel point, and the transparency channel in the merged map after merging the illumination information map and the transparency map can be used for storing transparency information, and the transparency information is used for processing the transparency of the pixel point. In this way, when the B-channel information and the transparency channel information of the merged map obtained by merging the illumination information map and the transparency map are respectively transmitted to the B-channel and the transparency channel of the universal sequence frame, the pixel points in the universal sequence frame will have corresponding color darkness and transparency.
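The channel packing described in the steps above can be sketched as follows. This is a minimal Python/NumPy illustration; the function name and the single-channel uint8 layout of the illumination and transparency inputs are assumptions for illustration, not part of the disclosure:

```python
import numpy as np

def merge_to_universal_frame(uv_map, light_map, alpha_map):
    """Pack the maps into one RGBA universal sequence frame:
    R/G carry the UV map's pixel positions, B carries illumination
    intensity, and A carries transparency."""
    h, w = uv_map.shape[:2]
    frame = np.zeros((h, w, 4), dtype=np.uint8)
    frame[..., 0] = uv_map[..., 0]   # R channel: U position from the UV map
    frame[..., 1] = uv_map[..., 1]   # G channel: V position from the UV map
    frame[..., 2] = light_map        # B channel: illumination intensity
    frame[..., 3] = alpha_map        # A channel: transparency
    return frame
```

Packing all four pieces of information into one texture is what allows a single sequence-frame asset to replace three separate maps at render time, which is the package-size saving claimed above.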
In an alternative embodiment, the mapping of the color map onto each frame map of the universal sequence frame according to the first UV information and the second UV information in step S103 may be implemented as follows:
obtaining a position mapping relation between a first mapping pixel point in the color mapping and a second mapping pixel point in each frame mapping in the universal sequence frame according to the first UV information and the second UV information; and mapping the color map to each frame map of the universal sequence frame according to the position mapping relation.
The position mapping relationship between the first mapping pixel point in the color mapping and the second mapping pixel point in each frame mapping in the universal sequence frame can be quickly obtained through the first UV information for indicating the position of the pixel point in the virtual model in each frame mapping in the universal sequence frame and the second UV information for indicating the position of the pixel point in the virtual model in the color mapping. In this way, the color map can be mapped onto each frame map of the universal sequence frame according to the obtained positional mapping relationship. Based on this, each frame map of the universal sequence frame is given a corresponding color.
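The lookup described above, from a frame map's R/G channels into the color map, can be sketched as follows in Python/NumPy. The function name is an assumption; treating the B channel as a 0-255 illumination multiplier is one plausible reading of "color darkness" in this disclosure, and the color map is assumed to be a 256x256 RGB texture laid out to match the second UV information:

```python
import numpy as np

def apply_color_map(frame, color_map):
    """For each pixel of one universal-sequence-frame map, fetch the
    color-map texel addressed by the R/G (UV) channels, darken it by the
    B (illumination) channel, and carry the A (transparency) channel over."""
    u = frame[..., 0].astype(np.intp)           # R channel -> column in color map
    v = frame[..., 1].astype(np.intp)           # G channel -> row in color map
    rgb = color_map[v, u].astype(np.float64)    # position-mapped color lookup
    light = frame[..., 2:3].astype(np.float64) / 255.0
    out = np.empty(frame.shape, dtype=np.uint8)
    out[..., :3] = np.round(rgb * light)        # apply illumination shading
    out[..., 3] = frame[..., 3]                 # keep transparency
    return out
```

Repeating this lookup over every frame map of the universal sequence frame yields the sequence frame with color information.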
Optionally, the method for generating the sequence frame provided in the embodiment of the present disclosure may further include the following steps:
each frame mapping in the sequence frames with color information is acted on the virtual model to obtain an animation frame of the 2D animation; and playing the animation frames according to the playing sequence so as to display the 2D animation on the terminal equipment.
By this step, each frame map of the sequential frames having colors is superimposed on the virtual model, so that the animation frame of the corresponding 2D animation can be obtained, and when the animation frame is played in the playing order, the 2D animation can be displayed on the graphical user interface of the terminal device.
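The superimposing step can be read as standard "over" alpha blending of the colored frame map against the scene behind it. The sketch below is an interpretation of my own, not a method stated in the disclosure; the function name and the RGB background layout are assumptions:

```python
import numpy as np

def composite_over(frame_rgba, background_rgb):
    """'Over' alpha blending: superimpose one colored sequence-frame map
    (RGBA) onto the scene background (RGB) using the frame's A channel."""
    a = frame_rgba[..., 3:4].astype(np.float64) / 255.0
    fg = frame_rgba[..., :3].astype(np.float64)
    bg = background_rgb.astype(np.float64)
    return np.round(fg * a + bg * (1.0 - a)).astype(np.uint8)
```

Playing the composited animation frames in the playing order then displays the 2D animation on the graphical user interface of the terminal device.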
Corresponding to the method for generating a sequence frame provided in the first embodiment of the present disclosure, the second embodiment of the present disclosure further provides a device for generating a sequence frame, as shown in fig. 9, where the device 900 for generating a sequence frame includes:
an acquisition unit 901 for acquiring a general sequence frame for rendering 2D animation without color information; the universal sequence frame represents first UV information, and the first UV information is used for indicating the position of each pixel point in a virtual model used for rendering the 2D animation in each frame map of the universal sequence frame;
a determining unit 902 for determining a color map for rendering the 2D animation; the color map represents second UV information, and the second UV information is used for indicating the position of each pixel point in the virtual model in the color map;
the mapping unit 903 is configured to map the color map onto each frame map of the universal sequence frame according to the first UV information and the second UV information, so as to obtain a sequence frame with color information.
Optionally, the generating device 900 of the sequence frame further includes a generating unit, where the universal sequence frame is generated by the generating unit; the generating unit is used for:
Acquiring a UV map, a transparency map and an illumination information map corresponding to the universal sequence frame;
and merging the UV map, the transparency map and the illumination information map to obtain the universal sequence frame.
Optionally, the generating unit is specifically configured to:
taking the R channel information of the UV map as the R channel information of the universal sequence frame and the G channel information of the UV map as the G channel information of the universal sequence frame; the R channel information and the G channel information of the UV map are used for indicating the position of each pixel point of the virtual model in the UV map;
combining the transparency map and the illumination information map to obtain a combined map;
taking the B channel information of the merging map as the B channel information of the universal sequence frame and the transparency channel information of the merging map as the transparency channel information of the universal sequence frame; the B channel information of the merging map is used for indicating the illumination intensity received by each pixel point of the virtual model; the transparency channel information of the merging map is used for indicating the transparency of each pixel point in the virtual model.
Optionally, the generating unit is specifically configured to:
inputting the virtual model and animation data required by the virtual model to execute preset actions into an image sampler;
configuring parameters of the image sampler;
and according to the virtual model and animation data required by the virtual model to execute preset actions, sampling to obtain the UV map corresponding to the universal sequence frame according to parameters configuring the image sampler.
Optionally, the generating unit is specifically configured to:
configuring subdivision antialiasing parameters and image sampling output type parameters of the image sampler; the subdivision antialiasing parameter is used for indicating the accuracy of eliminating concave-convex jaggies on the image edge in the picture output by the display, and the image sampling output type parameter is used for indicating the output mapping type.
Optionally, the generating unit is specifically configured to:
configuring the subdivision antialiasing parameter of the image sampler to be a parameter that maximizes the degree of concave-convex aliasing of each pixel point;
the image sample output type parameter of the image sampler is configured to be a parameter indicating that the output type is UVW map.
Optionally, the generating unit is further configured to:
And carrying out pixel edge expansion on the UV mapping to obtain a UV mapping after pixel edge expansion, so that adjacent pixel points exist in the pixel points in the UV mapping after pixel edge expansion.
Optionally, the mapping unit 903 is specifically configured to:
obtaining a position mapping relation between a first mapping pixel point in the color mapping and a second mapping pixel point in each frame mapping in the universal sequence frame according to the first UV information and the second UV information;
and mapping the color map to each frame map of the universal sequence frame according to the position mapping relation.
Optionally, the universal sequence frame corresponds to a playing sequence; the generating device 900 of the sequence frame further includes a playing unit, where the playing unit is configured to:
applying each frame map in the sequence frames with the color information to the virtual model to obtain an animation frame of the 2D animation;
and playing the animation frames according to the playing sequence so as to display the 2D animation on the terminal equipment.
Corresponding to the method for generating the sequence frame provided in the first embodiment of the present disclosure, the third embodiment of the present disclosure further provides an electronic device for generating the sequence frame. As shown in fig. 10, the electronic device 1000 includes: a processor 1001; and a memory 1002 for storing a program of the method for generating a sequence frame. After the electronic device is powered on and the program of the method for generating a sequence frame is run by the processor, the following steps are performed:
Acquiring a universal sequence frame which is used for rendering 2D animation and does not have color information; the universal sequence frame represents first UV information, and the first UV information is used for indicating the position of each pixel point in a virtual model used for rendering the 2D animation in each frame map of the universal sequence frame;
determining a color map for rendering the 2D animation; the color map represents second UV information, and the second UV information is used for indicating the position of each pixel point in the virtual model in the color map;
and mapping the color map to each frame map of the universal sequence frame according to the first UV information and the second UV information to obtain the sequence frame with the color information.
In correspondence with the method for generating a sequence frame provided by the first embodiment of the present disclosure, a fourth embodiment of the present disclosure provides a computer-readable storage medium storing a program of the method for generating a sequence frame, the program being executed by a processor to perform the steps of:
acquiring a universal sequence frame which is used for rendering 2D animation and does not have color information; the universal sequence frame represents first UV information, and the first UV information is used for indicating the position of each pixel point in a virtual model used for rendering the 2D animation in each frame map of the universal sequence frame;
Determining a color map for rendering the 2D animation; the color map represents second UV information, and the second UV information is used for indicating the position of each pixel point in the virtual model in the color map;
and mapping the color map to each frame map of the universal sequence frame according to the first UV information and the second UV information to obtain the sequence frame with the color information.
It should be noted that, for the detailed descriptions of the apparatus, the electronic device, and the computer readable storage medium provided in the second embodiment, the third embodiment, and the fourth embodiment of the present disclosure, reference may be made to the related descriptions of the first embodiment of the present disclosure, which are not repeated here.
While the present disclosure has been described in terms of the preferred embodiments, it is not intended to limit the disclosure, and any person skilled in the art can make variations and modifications without departing from the spirit and scope of the present disclosure, so that the scope of the present disclosure shall be defined by the claims of the present disclosure.
In one typical configuration, the node devices in the blockchain include one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory in a computer-readable medium, Random Access Memory (RAM) and/or nonvolatile memory, such as Read-Only Memory (ROM) or flash memory (flash RAM). Memory is an example of computer-readable media.
Computer-readable media, including both permanent and non-permanent, removable and non-removable media, may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, modules of a program, or other data. Examples of storage media for a computer include, but are not limited to, Phase-change Memory (PRAM), Static Random Access Memory (SRAM), Dynamic Random Access Memory (DRAM), other types of Random Access Memory (RAM), Read-Only Memory (ROM), Electrically Erasable Programmable Read-Only Memory (EEPROM), flash memory or other memory technology, Compact Disc Read-Only Memory (CD-ROM), Digital Versatile Disks (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage media, or any other non-transmission media that can be used to store information that can be accessed by a computing device. As defined herein, computer-readable media does not include transitory computer-readable media (transmission media), such as modulated data signals and carrier waves.
It will be appreciated by those skilled in the art that embodiments of the present disclosure may be provided as a method, a system, or a computer program product. Accordingly, the present disclosure may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present disclosure may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, etc.) having computer-usable program code embodied therein.

Claims (13)

1. A method for generating a sequence frame, the method comprising:
acquiring a universal sequence frame which is used for rendering a 2D animation and does not have color information; the universal sequence frame represents first UV information, and the first UV information is used for indicating the position of each pixel point in a virtual model used for rendering the 2D animation in each frame map of the universal sequence frame;
determining a color map for rendering the 2D animation; the color map represents second UV information, and the second UV information is used for indicating the position of each pixel point in the virtual model in the color map;
and mapping the color map to each frame map of the universal sequence frame according to the first UV information and the second UV information to obtain a sequence frame with color information.
2. The method of claim 1, wherein the universal sequence frame further represents transparency information and illumination information; the transparency information is used for indicating the transparency of each pixel point in the virtual model, and the illumination information is used for indicating the illumination intensity received by each pixel point of the virtual model.
3. The method of claim 2, wherein the universal sequence frame is generated by:
acquiring a UV map, a transparency map and an illumination information map corresponding to the universal sequence frame;
and merging the UV map, the transparency map and the illumination information map to obtain the universal sequence frame.
4. The method of claim 3, wherein the merging the UV map, the transparency map, and the illumination information map to obtain the universal sequence frame comprises:
taking the R channel information of the UV map as the R channel information of the universal sequence frame and the G channel information of the UV map as the G channel information of the universal sequence frame; the R channel information and the G channel information of the UV map are used for indicating the position of each pixel point of the virtual model in the UV map;
combining the transparency map and the illumination information map to obtain a combined map;
taking the B channel information of the merging map as the B channel information of the universal sequence frame and the transparency channel information of the merging map as the transparency channel information of the universal sequence frame; the B channel information of the merging map is used for indicating the illumination intensity received by each pixel point of the virtual model; the transparency channel information of the merging map is used for indicating the transparency of each pixel point in the virtual model.
5. The method of claim 3, wherein the acquiring a UV map corresponding to the universal sequence frame comprises:
inputting the virtual model and animation data required by the virtual model to execute preset actions into an image sampler;
configuring parameters of the image sampler;
and sampling, according to the configured parameters of the image sampler, the virtual model and the animation data required by the virtual model to execute the preset actions, to obtain the UV map corresponding to the universal sequence frame.
6. The method of claim 5, wherein the configuring parameters of the image sampler comprises:
configuring a subdivision antialiasing parameter and an image sampling output type parameter of the image sampler; the subdivision antialiasing parameter is used for indicating the accuracy with which jagged edges of an image are eliminated in the picture output by a display, and the image sampling output type parameter is used for indicating the type of the output map.
7. The method of claim 6, wherein the configuring the subdivision antialiasing parameter and the image sampling output type parameter of the image sampler comprises:
configuring the subdivision antialiasing parameter of the image sampler to be a parameter that maximizes the degree of elimination of jagged edges for each pixel point; and
configuring the image sampling output type parameter of the image sampler to be a parameter indicating that the output type is a UVW map.
8. The method of claim 7, wherein the method further comprises:
and carrying out pixel edge expansion on the UV map to obtain an edge-expanded UV map, so that each pixel point in the edge-expanded UV map has adjacent pixel points.
9. The method of claim 1, wherein the mapping the color map to each frame map of the universal sequence frame according to the first UV information and the second UV information comprises:
obtaining a position mapping relation between a first map pixel point in the color map and a second map pixel point in each frame map of the universal sequence frame according to the first UV information and the second UV information;
and mapping the color map to each frame map of the universal sequence frame according to the position mapping relation.
10. The method of claim 1, wherein the universal sequence frame corresponds to a playing order, the method further comprising:
applying each frame map in the sequence frame with color information to the virtual model to obtain animation frames of the 2D animation;
and playing the animation frames in the playing order so as to display the 2D animation on a terminal device.
11. An apparatus for generating a sequence frame, the apparatus comprising:
an acquisition unit for acquiring a universal sequence frame which is used for rendering a 2D animation and does not have color information; the universal sequence frame represents first UV information, and the first UV information is used for indicating the position of each pixel point in a virtual model used for rendering the 2D animation in each frame map of the universal sequence frame;
a determining unit for determining a color map for rendering the 2D animation; the color map represents second UV information, and the second UV information is used for indicating the position of each pixel point in the virtual model in the color map;
and a mapping unit for mapping the color map to each frame map of the universal sequence frame according to the first UV information and the second UV information to obtain a sequence frame with color information.
12. An electronic device, comprising:
a processor; and
a memory for storing a data processing program, wherein after the electronic device is powered on, the program is run by the processor to perform the method of any one of claims 1-10.
13. A computer-readable storage medium storing a data processing program, wherein the program is run by a processor to perform the method according to any one of claims 1-10.
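Although the disclosure contains no source code, the channel layout recited in claims 4 to 7 — the U and V coordinates carried in the R and G channels, the illumination intensity in the B channel, and the transparency in the alpha channel — might be sketched as follows; the function name and array conventions are assumptions for illustration only.

```python
import numpy as np

def pack_universal_frame(uv_map, illumination, transparency):
    """Pack one frame of the universal sequence frame as an RGBA image.

    uv_map:       (H, W, 2) floats in [0, 1] — position of each model
                  pixel in the UV map (stored in the R and G channels).
    illumination: (H, W) floats in [0, 1] — received light intensity
                  (B channel of the merged map).
    transparency: (H, W) floats in [0, 1] — per-pixel transparency
                  (alpha channel of the merged map).
    Returns an (H, W, 4) float32 RGBA frame map.
    """
    h, w = illumination.shape
    frame = np.empty((h, w, 4), dtype=np.float32)
    frame[..., 0] = uv_map[..., 0]   # R: U coordinate
    frame[..., 1] = uv_map[..., 1]   # G: V coordinate
    frame[..., 2] = illumination     # B: illumination intensity
    frame[..., 3] = transparency     # A: transparency
    return frame
```

Packing all four pieces of information into one RGBA texture means a single colorless frame map carries everything needed to recolor, light, and blend the animation at render time.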
CN202310976150.5A 2023-08-03 2023-08-03 Method and device for generating sequence frame and electronic equipment Pending CN117292032A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310976150.5A CN117292032A (en) 2023-08-03 2023-08-03 Method and device for generating sequence frame and electronic equipment


Publications (1)

Publication Number Publication Date
CN117292032A true CN117292032A (en) 2023-12-26

Family

ID=89239770

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310976150.5A Pending CN117292032A (en) 2023-08-03 2023-08-03 Method and device for generating sequence frame and electronic equipment

Country Status (1)

Country Link
CN (1) CN117292032A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination