CN116188724A - Animation rendering method, device, equipment and storage medium


Info

Publication number
CN116188724A
Authority
CN
China
Prior art keywords
transformation
animation
target
mesh
transformation information
Legal status
Pending
Application number
CN202310004882.8A
Other languages
Chinese (zh)
Inventor
计澔文
Current Assignee
Beijing Dajia Internet Information Technology Co Ltd
Original Assignee
Beijing Dajia Internet Information Technology Co Ltd
Application filed by Beijing Dajia Internet Information Technology Co Ltd

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 17/00 - Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T 17/20 - Finite element generation, e.g. wire-frame surface description, tesselation
    • G06T 17/205 - Re-meshing
    • G06T 13/00 - Animation
    • G06T 13/20 - 3D [Three Dimensional] animation
    • G06T 13/40 - 3D [Three Dimensional] animation of characters, e.g. humans, animals or virtual beings
    • G06T 15/00 - 3D [Three Dimensional] image rendering
    • G06T 15/005 - General purpose rendering architectures

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The disclosure relates to an animation rendering method, apparatus, device, and storage medium, relating to the field of network technologies, which can reduce the space occupied on an electronic device and improve the space utilization of the electronic device. The scheme includes the following steps: acquiring first animation data, the first animation data including a first mesh object and first transformation information, the first transformation information being used to transform the first mesh object; creating a target bone object; performing skin binding on the first mesh object according to the target bone object to generate a second mesh object, wherein the transformation of the second mesh object corresponds to the transformation of the target bone object; performing matrix transformation processing based on the first transformation information to obtain second transformation information, wherein the second transformation information is used to adjust the target bone object; and rendering the target bone object and the second mesh object according to the second transformation information to generate a target animation.

Description

Animation rendering method, device, equipment and storage medium
Technical Field
The present disclosure relates to the field of network technologies, and in particular, to a method, an apparatus, a device, and a storage medium for animation rendering.
Background
Attribute animation is a form of animation that can extend animation objects and enrich animation effects, and can add animation effects to various elements in an interface. An attribute animation may consist of an animation sequence that includes key-frame animations, where a key-frame animation is one of the more important animations in the sequence. Key-frame animations can be classified into three types: skin animation, Transform animation, and morphing animation.
Currently, for the three types of animations described above, an electronic device may employ three different animation systems to render key-frame animations. For example, the electronic device may render a skin animation using a skin animation system. For another example, the electronic device may render a Transform animation using a Transform animation system. In this way, a large number of animation systems may be deployed in the electronic device, which occupies space on the electronic device and reduces its space utilization.
Disclosure of Invention
The present disclosure provides a method, an apparatus, a device, and a storage medium for rendering an animation, which can reduce occupied space of an electronic device and improve space utilization of the electronic device. The technical scheme of the present disclosure is as follows:
According to a first aspect of the present disclosure, there is provided a method of animation rendering, the method comprising:
acquiring first animation data, wherein the first animation data includes: a first mesh object and first transformation information, the first transformation information being used to transform the first mesh object; creating a target bone object; performing skin binding on the first mesh object according to the target bone object to generate a second mesh object, wherein the transformation of the second mesh object corresponds to the transformation of the target bone object; performing matrix transformation processing based on the first transformation information to obtain second transformation information, wherein the second transformation information is used to adjust the target bone object; and rendering the target bone object and the second mesh object according to the second transformation information to generate a target animation.
Optionally, the animation rendering method may further include: determining whether the first mesh object is a mesh object that has been skin-bound; and if the first mesh object is a mesh object that has not been skin-bound, creating the target bone object.
Optionally, the animation rendering method may further include: if the first mesh object is a mesh object that has already been skin-bound, performing skin rendering on a preset bone object according to the first transformation information to generate the target animation. In the case where the first mesh object is a mesh object that has already been skin-bound, the first animation data further includes the preset bone object, the transformation of the first mesh object corresponds to the transformation of the preset bone object, and the first transformation information is used to adjust the preset bone object.
Optionally, the animation rendering method is applied to an electronic device in which a plurality of data nodes are stored, the plurality of data nodes including a first type of data node, the first type of data node being used to store mesh objects that are not skin-bound and their transformation information. The animation rendering method may further include: determining a first-type data node from the plurality of data nodes, and acquiring the first animation data from the first-type data node.
Optionally, the animation rendering method may further include: inputting the second transformation information, the target bone object, and the second mesh object into a rendering engine, the rendering engine including a skin animation system; and invoking the skin animation system to drive the target bone object according to the second transformation information to generate the target animation.
Optionally, the animation rendering method may further include: if the first mesh object has a parent node, acquiring a transformation matrix of a target parent node, and determining a second transformation matrix according to the first transformation matrix, the transformation matrix of the target parent node, and a preset weight, where the target parent node is the parent node of the first mesh object, the first transformation matrix indicates the first transformation information, the second transformation matrix indicates the second transformation information, and the preset weight reflects the degree of influence of the second transformation information on the transformation of the second mesh object; and if the first mesh object does not have a parent node, determining the second transformation matrix according to the first transformation matrix and the preset weight.
According to a second aspect of the present disclosure, there is provided an apparatus for animation rendering, the apparatus comprising: an acquisition unit and a processing unit.
An acquisition unit configured to acquire first animation data, the first animation data including: a first mesh object and first transformation information, the first transformation information being used to transform the first mesh object. A processing unit configured to create a target bone object. The processing unit is further configured to perform skin binding on the first mesh object according to the target bone object to generate a second mesh object, the transformation of the second mesh object corresponding to the transformation of the target bone object. The processing unit is further configured to perform matrix transformation processing based on the first transformation information to obtain second transformation information, the second transformation information being used to adjust the target bone object. The processing unit is further configured to render the target bone object and the second mesh object according to the second transformation information to generate a target animation.
Optionally, the processing unit is further configured to determine whether the first mesh object is a mesh object that has been skin-bound, and to create the target bone object if the first mesh object is a mesh object that has not been skin-bound.
Optionally, the processing unit is further configured to perform skin rendering on a preset bone object according to the first transformation information to generate the target animation if the first mesh object is a mesh object that has already been skin-bound. In the case where the first mesh object is a mesh object that has already been skin-bound, the first animation data further includes the preset bone object, the transformation of the first mesh object corresponds to the transformation of the preset bone object, and the first transformation information is used to adjust the preset bone object.
Optionally, the animation rendering apparatus may be applied to an electronic device in which a plurality of data nodes are stored, the plurality of data nodes including a first type of data node used to store mesh objects that are not skin-bound and their transformation information. The processing unit is further configured to determine a first-type data node from the plurality of data nodes, and the acquisition unit is further configured to acquire the first animation data from the first-type data node.
Optionally, the processing unit is further configured to input the second transformation information, the target bone object, and the second mesh object into a rendering engine, the rendering engine including a skin animation system, and to invoke the skin animation system to drive the target bone object according to the second transformation information to generate the target animation.
Optionally, the processing unit is further configured to: if the first mesh object has a parent node, acquire a transformation matrix of a target parent node and determine a second transformation matrix according to the first transformation matrix, the transformation matrix of the target parent node, and a preset weight, where the target parent node is the parent node of the first mesh object, the first transformation matrix indicates the first transformation information, the second transformation matrix indicates the second transformation information, and the preset weight reflects the degree of influence of the second transformation information on the transformation of the second mesh object; and if the first mesh object does not have a parent node, determine the second transformation matrix according to the first transformation matrix and the preset weight.
According to a third aspect of the present disclosure, there is provided an electronic device, comprising:
a processor; and a memory for storing processor-executable instructions; wherein the processor is configured to execute the instructions to implement the optional animation rendering method of any one of the first aspects above.
According to a fourth aspect of the present disclosure, there is provided a computer-readable storage medium having instructions stored thereon which, when executed by a processor of a computer, enable the computer to perform the optional animation rendering method of any one of the first aspects above.
According to a fifth aspect of the present disclosure, there is provided a computer program product comprising a computer program which, when executed by a processor, implements the optional animation rendering method of any one of the first aspects.
According to a sixth aspect of the present disclosure, there is provided a chip comprising a processor and a communication interface, the communication interface being coupled to the processor, the processor being configured to run a computer program or instructions to implement the animation rendering method described in the first aspect or any possible implementation of the first aspect.
The technical solution provided by the present disclosure brings at least the following beneficial effects: the electronic device acquires first animation data, the first animation data including a first mesh object and first transformation information, the first transformation information being used to transform the first mesh object. The electronic device may then create a target bone object and skin-bind the first mesh object according to the target bone object, generating a second mesh object whose transformation corresponds to the transformation of the target bone object. In this way, the electronic device obtains a mesh object that matches the skin animation system. The electronic device may then generate second transformation information based on the first transformation information, and render the target bone object and the second mesh object according to the second transformation information to generate a target animation, where the second transformation information is used to adjust the target bone object. Thus, the electronic device can process the skin-bound second mesh object through skin rendering without resorting to other processing modes, which avoids deploying a large number of animation systems in the electronic device, reduces the space occupied on the electronic device, and improves the space utilization of the electronic device.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the disclosure and together with the description, serve to explain the principles of the disclosure and do not constitute an undue limitation on the disclosure.
FIG. 1 is a schematic diagram of an electronic device, shown in accordance with an exemplary embodiment;
FIG. 2 is a flowchart illustrating a method of animation rendering, according to an exemplary embodiment;
FIG. 3 is an example diagram of a mesh object, according to an example embodiment;
FIG. 4 is an example diagram of an animation data structure, according to an exemplary embodiment;
FIG. 5 is an example diagram of a skin-bound mesh object, according to an example embodiment;
FIG. 6 is a flowchart illustrating another method of animation rendering, according to an example embodiment;
FIG. 7 is a flowchart illustrating another method of animation rendering, according to an example embodiment;
FIG. 8 is a schematic diagram illustrating an apparatus for animation rendering according to an exemplary embodiment;
Fig. 9 is a schematic structural diagram of another animation rendering apparatus according to an exemplary embodiment.
Detailed Description
In order to enable those skilled in the art to better understand the technical solutions of the present disclosure, the technical solutions of the embodiments of the present disclosure will be clearly and completely described below with reference to the accompanying drawings.
It should be noted that the terms "first," "second," and the like in the description and claims of the present disclosure and in the foregoing figures are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that the embodiments of the disclosure described herein may be capable of operation in sequences other than those illustrated or described herein. The implementations described in the following exemplary examples are not representative of all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with some aspects of the present disclosure as detailed in the accompanying claims.
The user information (including but not limited to user equipment information, user personal information, etc.) and the data (including but not limited to animation data, transformation information, etc.) related to the present disclosure are information and data authorized by the user or sufficiently authorized by each party.
For ease of understanding, the terms involved in the embodiments of the present disclosure are first described below.
A key-frame animation may include key action data of the objects contained in the animation; the animation data is rendered based on the key action data to achieve a smooth animation effect.
The application scenario of the embodiments of the present disclosure is described below.
The animation rendering method of the embodiments of the present disclosure is applied to scenarios in which animation is rendered. In the related art, attribute animation is a form of animation that can extend animation objects and enrich animation effects, and can add animation effects to various elements in an interface. An attribute animation may consist of an animation sequence that includes key-frame animations, where a key-frame animation is one of the more important animations in the sequence. Key-frame animations can be classified into three types: skin animation, Transform animation, and morphing animation.
Currently, for the three types of animations described above, a three-dimensional (3D) engine in an electronic device may employ three different animation systems to render key-frame animations. For example, the electronic device may render a skin animation using a skin animation system. For another example, the electronic device may render a Transform animation using a Transform animation system. In this way, a large number of animation systems may be deployed in the electronic device, which occupies space on the electronic device and reduces its space utilization.
To solve the above-mentioned problems, an embodiment of the present disclosure provides an animation rendering method in which an electronic device acquires first animation data, the first animation data including a first mesh object and first transformation information, the first transformation information being used to transform the first mesh object. The electronic device may then create a target bone object and skin-bind the first mesh object according to the target bone object, generating a second mesh object whose transformation corresponds to the transformation of the target bone object. In this way, the electronic device obtains a mesh object that matches the skin animation system. The electronic device may then generate second transformation information based on the first transformation information, and render the target bone object and the second mesh object according to the second transformation information to generate a target animation, where the second transformation information is used to adjust the target bone object. Thus, the electronic device can process the skin-bound second mesh object through skin rendering without resorting to other processing modes, which avoids deploying a large number of animation systems in the electronic device, reduces the space occupied on the electronic device, and improves the space utilization of the electronic device.
Fig. 1 is a schematic structural diagram of an electronic device applying a method provided by the present disclosure according to an embodiment of the present disclosure. The electronic device (e.g., terminal device 10) includes a processor 101 and a memory 102.
Processor 101 may include one or more processing cores, such as a 4-core processor, an 8-core processor, etc. The processor 101 may include an application processor (application processor, AP), a modem processor, a graphics processor (graphics processing unit, GPU), an image signal processor (image signal processor, ISP), a controller, a memory, a video codec, a digital signal processor (digital signal processor, DSP), a baseband processor, and/or a neural network processor (neural-network processing unit, NPU), etc. Wherein the different processing units may be separate devices or may be integrated in one or more processors.
Memory 102 may include one or more computer-readable storage media, which may be non-transitory. Memory 102 may also include high-speed random access memory, as well as non-volatile memory, such as one or more magnetic disk storage devices, flash memory storage devices. In one implementation, a non-transitory computer readable storage medium in memory 102 is used to store at least one instruction for execution by processor 101 to implement the method of animation rendering provided by the method embodiments of the present disclosure.
In an embodiment, the terminal device 10 may further include: a peripheral interface 103 and at least one peripheral. The processor 101, memory 102, and peripheral interface 103 may be connected via buses or signal lines. The individual peripheral devices may be connected to the peripheral device interface 103 via buses, signal lines, or a circuit board. Specifically, the peripheral device includes: at least one of radio frequency circuitry 104, display screen 105, camera assembly 106, audio circuitry 107, positioning assembly 108, and power supply 109.
Peripheral interface 103 may be used to connect at least one Input/Output (I/O) related peripheral device to processor 101 and memory 102. In one implementation, processor 101, memory 102, and peripheral interface 103 are integrated on the same chip or circuit board; in some other embodiments, any one or both of the processor 101, memory 102, and peripheral interface 103 may be implemented on a separate chip or circuit board, which is not limited in this embodiment.
The radio frequency circuit 104 is configured to receive and transmit RF (Radio Frequency) signals, also known as electromagnetic signals. The radio frequency circuit 104 communicates with a communication network and other communication devices via electromagnetic signals. The radio frequency circuit 104 converts an electrical signal into an electromagnetic signal for transmission, or converts a received electromagnetic signal into an electrical signal. Optionally, the radio frequency circuit 104 includes: an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and so forth. The radio frequency circuit 104 may communicate with other terminal devices via at least one wireless communication protocol. The wireless communication protocol includes, but is not limited to: metropolitan area networks, various generations of mobile communication networks (2G, 3G, 4G, and 5G), wireless local area networks, and/or Wi-Fi (Wireless Fidelity) networks. In one implementation, the radio frequency circuit 104 may also include NFC (Near Field Communication) related circuitry, which is not limited by the present disclosure.
The display screen 105 is used to display a UI (User Interface). The UI may include graphics, text, icons, video, and any combination thereof. When the display screen 105 is a touch display screen, the display screen 105 also has the ability to collect touch signals at or above its surface. The touch signal may be input to the processor 101 as a control signal for processing. In this case, the display screen 105 may also be used to provide virtual buttons and/or a virtual keyboard, also referred to as soft buttons and/or a soft keyboard. In one embodiment, there may be one display screen 105, disposed on the front panel of the terminal device 10; the display screen 105 may be made of LCD (Liquid Crystal Display), OLED (Organic Light-Emitting Diode), or other materials.
The camera assembly 106 is used to capture images or video. Optionally, the camera assembly 106 includes a front camera and a rear camera. Typically, the front camera is disposed on a front panel of the terminal device, and the rear camera is disposed on a rear surface of the terminal device. The audio circuit 107 may include a microphone and a speaker. The microphone is used for collecting sound waves of users and environments, converting the sound waves into electric signals, and inputting the electric signals to the processor 101 for processing, or inputting the electric signals to the radio frequency circuit 104 for voice communication. For the purpose of stereo acquisition or noise reduction, a plurality of microphones may be provided at different portions of the terminal device 10, respectively. The microphone may also be an array microphone or an omni-directional pickup microphone. The speaker is used to convert electrical signals from the processor 101 or the radio frequency circuit 104 into sound waves. The speaker may be a conventional thin film speaker or a piezoelectric ceramic speaker. When the speaker is a piezoelectric ceramic speaker, not only the electric signal can be converted into a sound wave audible to humans, but also the electric signal can be converted into a sound wave inaudible to humans for ranging and other purposes. In one embodiment, the audio circuit 107 may also include a headphone jack.
The positioning component 108 is used to locate the current geographic location of the terminal device 10 to enable navigation or LBS (Location Based Service). The positioning component 108 may be based on the GPS (Global Positioning System) of the United States, the BeiDou system of China, the GLONASS system of Russia, or the Galileo system of the European Union.
The power supply 109 is used to power the various components in the terminal device 10. The power source 109 may be alternating current, direct current, disposable or rechargeable. When the power supply 109 includes a rechargeable battery, the rechargeable battery may support wired or wireless charging. The rechargeable battery may also be used to support fast charge technology.
In one embodiment, the terminal device 10 further includes one or more sensors 1010. The one or more sensors 1010 include, but are not limited to: acceleration sensor, gyroscope sensor, pressure sensor, fingerprint sensor, optical sensor, and proximity sensor.
The acceleration sensor may detect the magnitudes of accelerations on three coordinate axes of the coordinate system established with the terminal device 10. The gyro sensor may detect the body direction and the rotation angle of the terminal device 10, and the gyro sensor may cooperate with the acceleration sensor to collect the 3D motion of the user to the terminal device 10. The pressure sensor may be provided at a side frame of the terminal device 10 and/or at a lower layer of the display screen 105. When the pressure sensor is provided at the side frame of the terminal device 10, a grip signal of the user to the terminal device 10 can be detected. The fingerprint sensor is used for collecting fingerprints of a user. The optical sensor is used to collect the ambient light intensity. A proximity sensor, also called a distance sensor, is typically provided on the front panel of the terminal device 10. The proximity sensor is used to collect the distance between the user and the front face of the terminal device 10.
In one embodiment, the terminal device is used to provide voice and/or data connectivity services to the user. The terminal devices may be named differently, such as UE end, terminal unit, terminal station, mobile station, remote terminal, mobile device, wireless communication device, vehicle user equipment, terminal agent or terminal equipment, etc.
Alternatively, the terminal device may be various handheld devices, vehicle-mounted devices, wearable devices, or computers with communication functions, which are not limited in any way by the embodiments of the present disclosure. For example, the handheld device may be a smart phone. The in-vehicle device may be an in-vehicle navigation system. The wearable device may be a smart bracelet. The computer may be a personal digital assistant (personal digital assistant, PDA) computer, a tablet computer, or a laptop computer (laptop computer).
Having described the application scenario and implementation environment of the embodiments of the present disclosure, a detailed description of the method for rendering an animation provided by the embodiments of the present disclosure is provided below in connection with the implementation environment shown in fig. 1.
FIG. 2 is a flow chart illustrating a method of animation rendering according to an exemplary embodiment. As shown in fig. 2, the method may include steps 201-205.
201. The electronic device obtains first animation data.
Wherein the first animation data includes: the first mesh object and the first transformation information, the first transformation information being used to transform the first mesh object.
In an embodiment of the present disclosure, the first mesh object may be mesh data composed of a plurality of vertices.
Illustratively, as shown in FIG. 3, a mesh object 301 may include a vertex 302, a vertex 303, and a vertex 304.
In an embodiment of the present disclosure, the attributes of the first mesh object include: first transformation information and animation time. The first transformation information may include a plurality of transformation attribute information, and the plurality of transformation attribute information may include: transparency transformation information, displacement transformation information, rotation transformation information, scaling transformation information, and the like. The animation time is used to indicate the moment at which the mesh object is adjusted according to the transformation information.
It should be noted that the transparency transformation information may adjust the transparency of the first mesh object, the displacement transformation information may adjust the position of the first mesh object, the rotation transformation information may adjust the rotation angle of the first mesh object, and the scaling transformation information may adjust the size of the first mesh object.
In one possible design, the first transformation information may be attribute information transformed by the first mesh object.
For example, if the transparency transformation information is 50%, the electronic device sets the transparency attribute information of the first mesh object to 50%. If the displacement transformation information is position A, the electronic device sets the position attribute information of the first mesh object to position A (i.e., moves the first mesh object to position A). If the rotation transformation information is 203 degrees, the electronic device sets the rotation attribute information of the first mesh object to 203 degrees. If the scaling transformation information is 0.5, the electronic device sets the scaling attribute information of the first mesh object to 0.5.
In another possible design, the first transformation information may be a variation of the attributes of the first mesh object. The attributes of the first mesh object may include: location attribute information, transparency attribute information, rotation attribute information, scaling attribute information, and the like.
That is, the transformed attribute of the first mesh object may be determined from the first transformation information and the attribute of the first mesh object before transformation.
By way of example, if the transparency attribute information before the first mesh object transformation is 10% and the transparency transformation information is +30%, the transparency attribute information after the first mesh object transformation is 40%. If the scaling attribute information before the first mesh object transformation is 0.5 and the scaling transformation information is +0.3, the scaling attribute information after the first mesh object transformation is 0.8.
It should be noted that the mesh object may be a set of multiple vertices, and the vertices also have attributes (such as location attribute information, transparency attribute information, etc.). The transformation information of the mesh object may act on each vertex constituting the mesh object and produce the same transformation effect for each vertex.
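To make the per-vertex effect concrete, the following is a minimal Python sketch (not from the patent; the attribute names and the delta encoding follow the second design above and are otherwise assumptions) that applies one piece of mesh-level transformation information to every vertex:

```python
from dataclasses import dataclass

@dataclass
class Vertex:
    position: tuple       # location attribute information (x, y, z)
    transparency: float   # transparency attribute information, 0.0 to 1.0

def apply_transform_info(vertices, delta):
    """Apply the same transformation effect to each vertex of the mesh.

    'delta' encodes changes relative to the pre-transform attributes,
    e.g. {"transparency": 0.30, "offset": (1.0, 0.0, 0.0)}.
    """
    for v in vertices:
        v.transparency += delta.get("transparency", 0.0)
        dx, dy, dz = delta.get("offset", (0.0, 0.0, 0.0))
        x, y, z = v.position
        v.position = (x + dx, y + dy, z + dz)

mesh = [Vertex((0.0, 0.0, 0.0), 0.10), Vertex((1.0, 0.0, 0.0), 0.10)]
apply_transform_info(mesh, {"transparency": 0.30})  # 10% becomes 40%, as above
```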
It should be noted that the first mesh object may include one or more mesh objects, and the first transformation information may include a plurality of pieces of transformation information, where one piece of transformation information corresponds to one mesh object. In the following embodiments, one mesh object is taken as an example to describe the disclosed embodiments.
In one possible implementation, the electronic device may obtain first animation data from an animation file that is used to store the mesh object and transformation information for the mesh object.
The animation file may be, for example, a Filmbox (FBX) file.
In one possible design, the Transform animation data structure in an FBX file may include: animation stacks, animation curves, animation objects, animation layers, animation curve nodes, and animation values.
For example, as shown in fig. 4, the entry of the FBX file is a scene node, and the scene node includes a plurality of animation stacks and a plurality of animation objects (e.g., mesh objects). An animation stack may include a plurality of animation layers, and each animation layer holds at least one animation. An animation may carry various kinds of information, such as x-direction translation, y-direction translation, z-direction translation, and x-direction scaling, each of which corresponds to a segment of animation curve via an animation curve node. Through an animation layer and a curve node, the electronic device can obtain the segment of animation curve corresponding to a certain attribute of the mesh, and obtain the corresponding animation value (property) by sampling that animation curve.
For example, the Transform animation data stored in one key frame includes: float time (the animation time, i.e., real time), float value (the animation value, e.g., the scale X component), float leftTangent = 0.0f (the left tangent of the curve, controlling the trend between the previous frame and the current frame), float rightTangent = 0.0f (the right tangent of the curve, controlling the trend between the current frame and the next frame), InterpolationType type (the interpolation type), and TangentMode tangentMode (the tangent mode, e.g., spline, linear, or clamped, which together with the tangents controls the trend of the curve around the frame).
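As a rough illustration only (the enum members are assumptions; the field names mirror the text above), the per-keyframe record could be expressed like this:

```python
from dataclasses import dataclass
from enum import Enum, auto

class InterpolationType(Enum):
    CONSTANT = auto()
    LINEAR = auto()
    CUBIC = auto()

class TangentMode(Enum):
    SPLINE = auto()
    LINEAR = auto()
    CLAMPED = auto()

@dataclass
class KeyFrame:
    time: float                   # animation time (real time)
    value: float                  # animation value, e.g. the scale X component
    left_tangent: float = 0.0     # trend between the previous and current frame
    right_tangent: float = 0.0    # trend between the current and next frame
    interpolation: InterpolationType = InterpolationType.LINEAR
    tangent_mode: TangentMode = TangentMode.SPLINE
```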
It should be noted that the Transform animation data (i.e., for a mesh object not skin-bound) and the skin animation data (i.e., for a mesh object already skin-bound) each include an animation time and an animation value; that is, the data structures of the Transform animation data and the skin animation data are highly similar. Accordingly, the electronic device may render the Transform animation data through the skin animation system.
202. The electronic device creates a target bone object.
In one possible implementation, the electronic device may create a preset node object. Then, the electronic device may set an attribute of the preset node object to second information.
The second information may be a skin attribute, which is used to indicate that the preset node object is a bone object.
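A minimal sketch of step 202 follows, assuming a generic scene-node structure; the class and the "skeleton" marker are illustrative, not a specific engine's API:

```python
from dataclasses import dataclass

@dataclass
class SceneNode:
    name: str
    node_type: str = "node"   # generic node by default

def create_target_bone_object():
    node = SceneNode(name="virtual_bone")  # create a preset node object
    node.node_type = "skeleton"            # second information: mark it as a bone object
    return node
```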
203. The electronic device performs skin binding on the first mesh object according to the target bone object to generate a second mesh object.
Wherein the transformation of the second mesh object corresponds to the transformation of the target bone object.
That is, the second mesh object may transform as the target bone object transforms.
Illustratively, if the transformation information of the target bone object is a translation of 10 units to the left, the second mesh object may also be translated 10 units to the left. If the transformation information of the target bone object is a clockwise rotation of 90 degrees, the second mesh object may also be rotated 90 degrees clockwise.
In one possible implementation, the electronic device may associate some or all of the vertices in the first mesh object with the target bone object.
The properties of the vertices after skin binding are affected by the transformation information of the vertices themselves and the transformation information of the mesh object, and also by the transformation information of the bound bone object (i.e., joint).
Illustratively, as shown in FIG. 5, the grid object 501 includes: vertex 502, vertex 503, and vertex 504, a target bone object 505 may be associated with vertex 502 and vertex 503, respectively.
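Step 203 can be sketched as follows, reusing the structures from the earlier snippets; since the text binds a single bone, every vertex is associated with it at the preset weight:

```python
def skin_bind(vertices, bone, weight=1.0):
    """Associate each vertex of the first mesh object with the target bone.

    Returns (vertex index, bone, influence weight) triples; together with
    the mesh, these bindings form the second mesh object.
    """
    return [(index, bone, weight) for index, _vertex in enumerate(vertices)]
```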
204. The electronic device performs matrix transformation processing based on the first transformation information to obtain second transformation information.
The second transformation information is used to adjust the target bone object. In one possible implementation, the electronic device may determine the second transformation information according to the first transformation information and a preset coefficient.
For example, if the preset coefficient is 0.9 and the first transformation information is a move of 5 to the left, the second transformation information is a move of 4.5 to the left. If the first transformation information is a rotation of 90 degrees, the second transformation information is a rotation of 81 degrees.
In one embodiment, the transformation information may be represented by a transformation matrix. The electronic device may generate a second transformation matrix based on the first transformation matrix, the first transformation matrix being used to indicate the first transformation information, the second transformation matrix being used to indicate the second transformation information.
In an embodiment of the present disclosure, an electronic device may determine whether a parent node exists for a first mesh object. Wherein the transformation of the parent node can affect the transformation of the first mesh object.
Illustratively, if the parent node moves 1 to the left, then the child node of the parent node will also move 1 to the left.
In one possible implementation, the electronic device may determine whether there is a mesh object having an association relationship with the first mesh object. If such a mesh object exists, the electronic device determines that the first mesh object has a parent node. If no such mesh object exists, the electronic device determines that the first mesh object does not have a parent node.
If the first mesh object has a parent node, the electronic device acquires the transformation matrix of a target parent node and determines the second transformation matrix according to the first transformation matrix, the transformation matrix of the target parent node, and a preset weight, where the target parent node is the parent node of the first mesh object. The preset weight is used to reflect the degree of influence of the second transformation information on the transformation of the second mesh object.
In one possible design, the second transformation matrix may be expressed by Formula 1:
P_son-parent = P_son-model × (P_parent-model)^(-1) × N    (Formula 1)
where P_son-parent denotes the second transformation matrix (i.e., the transformation information of the first mesh object relative to the target parent node), P_son-model denotes the first transformation matrix (i.e., the transformation information of the first mesh object relative to the world coordinate system), P_parent-model denotes the transformation matrix of the target parent node (i.e., the transformation information of the target parent node relative to the world coordinate system), (P_parent-model)^(-1) denotes the inverse of the transformation matrix of the target parent node, and N denotes the preset weight.
It should be noted that the preset weight represents the degree of influence of the transformation information of the bone object (i.e., the second transformation information) on the first mesh object. The embodiments of the present disclosure do not limit the preset weight. For example, the preset weight may be 1. For another example, the preset weight may be 0.8. For another example, the preset weight may be 0.3. Typically, in the disclosed embodiments, the first mesh object binds only one bone object (i.e., the target bone object), with a preset weight of 1.
For example, if the transformation information of the first mesh object with respect to the world coordinate system is a move of 5 to the left, the transformation information of the target parent node with respect to the world coordinate system is a move of 1 to the left, and the preset weight is 1, then the transformation information of the first mesh object with respect to the target parent node is a move of 4 to the left (the parent's contribution of 1 is removed by the inverse matrix in Formula 1).
It can be appreciated that, in the case where the first mesh object has a parent node, the electronic device determines the second transformation matrix according to the first transformation matrix, the transformation matrix of the target parent node, and the preset weight. In this way, the interference of the target parent node's transformation on the first mesh object is eliminated, and the actual transformation information of the target bone object can be determined accurately, ensuring the animation rendering effect.
If the first mesh object does not have a parent node, the electronic device determines the second transformation matrix according to the first transformation matrix and the preset weight.
In one possible design, the second transformation matrix may be expressed by Formula 2:
P_son-parent = P_son-model × N    (Formula 2)
For example, if the transformation information of the first mesh object with respect to the world coordinate system is a move of 5 to the left and the preset weight is 1, the second transformation information is a move of 5 to the left.
It can be understood that, in the case where the first mesh object does not have a parent node, the transformation information of the first mesh object is not disturbed. The electronic device may determine the second transformation matrix based on the first transformation matrix and the preset weight. Thus, the actual transformation information of the target bone object can be determined accurately, ensuring the animation rendering effect.
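A sketch of step 204 with 4x4 homogeneous matrices in numpy, covering both Formula 1 (parent present) and Formula 2 (no parent); the preset weight N is applied literally as a scalar factor, as the formulas state, and with the typical value of 1 it changes nothing:

```python
import numpy as np

def second_transform_matrix(p_son_model, p_parent_model=None, n=1.0):
    """Compute P_son-parent per Formula 1 (parent present) or Formula 2."""
    if p_parent_model is not None:
        relative = p_son_model @ np.linalg.inv(p_parent_model)  # Formula 1
    else:
        relative = p_son_model                                  # Formula 2
    return n * relative

def translation(x):
    m = np.eye(4)
    m[0, 3] = x  # translation along the x axis
    return m

# Worked example from the text: mesh moved 5 left, parent moved 1 left, N = 1.
m = second_transform_matrix(translation(-5.0), translation(-1.0))
assert np.isclose(m[0, 3], -4.0)  # 4 to the left, relative to the parent node
```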
205. The electronic device renders the target bone object and the second mesh object according to the second transformation information to generate a target animation.
In one possible implementation, the electronic device may process the target bone object and the second mesh object according to the second transformation information through skin rendering to generate the target animation.
In one possible design, a skinned mesh renderer is deployed in the electronic device. The electronic device may import the second transformation information, the target bone object, and the second mesh object into the skinned mesh renderer. The electronic device can then invoke the skinned mesh renderer to process the target bone object and the second mesh object according to the second transformation information to generate the target animation.
It should be noted that a skinned mesh (skin mesh) can be used to render a character. Characters are drawn using bones, each of which affects a portion of the mesh. Multiple bones may affect the same vertex, and the bones carry weights. Skeletal animation can change the shape of the mesh, bending the character's limbs at the joints and achieving other similar effects.
In another possible design, a rendering engine (e.g., a 3D engine) is deployed in the electronic device, the rendering engine including a skin animation system. The electronic device may input the second transformation information, the target bone object, and the second mesh object into the rendering engine. In the rendering engine, the electronic device invokes the skin animation system to drive the target bone object according to the second transformation information, generating the target animation.
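The hand-off to the engine might look like the following sketch; the skin_animation_system attribute and its load/play methods are hypothetical, since the patent names no concrete engine API:

```python
def render_target_animation(engine, second_transform, target_bone, second_mesh):
    # Feed the skinning inputs to the engine's skin animation system.
    engine.skin_animation_system.load(
        bones=[target_bone],
        mesh=second_mesh,
        bone_transforms={target_bone.name: second_transform},
    )
    # Driving the bone deforms the skin-bound mesh, producing the target animation.
    return engine.skin_animation_system.play()
```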
Illustratively, if the transformation information of the target bone object is a translation of 10 units to the left, the second mesh object may also be translated 10 units to the left. If the transformation information of the target bone object is a clockwise rotation of 90 degrees, the second mesh object may also be rotated 90 degrees clockwise.
It can be understood that the electronic device processes the target bone object and the second mesh object according to the second transformation information through skin rendering to generate the target animation. Therefore, only the applications related to the skin rendering mode (such as the skin animation system) need to be deployed in the electronic device, which ensures that the electronic device can process the converted mesh object through skin rendering, reduces the occupation of storage space in the electronic device, and improves the space utilization.
The technical solution provided by this embodiment brings at least the following beneficial effects: the electronic device acquires first animation data, the first animation data including a first mesh object and first transformation information, the first transformation information being used to transform the first mesh object. The electronic device may then create a target bone object and skin-bind the first mesh object according to the target bone object, generating a second mesh object whose transformation corresponds to the transformation of the target bone object. In this way, the electronic device obtains a mesh object that matches the skin animation system. The electronic device may then generate second transformation information based on the first transformation information, and render the target bone object and the second mesh object according to the second transformation information to generate a target animation, where the second transformation information is used to adjust the target bone object. Thus, the electronic device can process the skin-bound second mesh object through skin rendering without resorting to other processing modes, which avoids deploying a large number of animation systems in the electronic device, reduces the space occupied on the electronic device, and improves the space utilization of the electronic device.
The animation data includes mesh objects that have already been skin-bound and mesh objects that have not been skin-bound. If the electronic device re-performs skin binding on a mesh object that has already been skin-bound (i.e., steps 202 and 203), the electronic device repeats these operations, which wastes the resources of the electronic device.
In one embodiment, the animation file stores a plurality of data nodes, including a first type of data node for storing mesh objects that are not skin-bound and a second type of data node for storing mesh objects that are already skin-bound.
Illustratively, the first type of data node is used to store Transform animation data, and the second type of data node is used to store skin animation data. The Transform animation data includes mesh objects that are not skin-bound and transformation information acting on the mesh objects; the skin animation data includes mesh objects that are already skin-bound and transformation information acting on bone objects.
In one possible implementation, the electronic device may determine a first type of data node from a plurality of data nodes before the electronic device obtains the first animation data.
In one possible design, each data node in the plurality of data nodes corresponds to type information, which indicates the type of the data node. The type information may be first information or second information, where the first information indicates that the data node is a first-type data node and the second information indicates that the data node is a second-type data node. The electronic device may obtain the type information of each data node, and then use the data nodes whose type information is the first information as the first-type data nodes.
Illustratively, suppose the first information is identifier A and the second information is identifier B. The plurality of data nodes includes data node A, data node B, and data node C, where the type information of data node A is identifier A, the type information of data node B is identifier B, and the type information of data node C is identifier B; then data node A is a first-type data node.
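A sketch of the selection, following the identifier-A / identifier-B example above (the type_info attribute name is an assumption):

```python
FIRST_INFO = "A"   # marks first-type nodes: mesh objects not yet skin-bound
SECOND_INFO = "B"  # marks second-type nodes: mesh objects already skin-bound

def first_type_nodes(data_nodes):
    # Keep only the nodes whose type information is the first information.
    return [node for node in data_nodes if node.type_info == FIRST_INFO]
```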
Thereafter, the electronic device may obtain the first animation data from the first-type data node, where the first animation data includes mesh objects that are not skin-bound.
In one possible design, the first-type data node stores attributes of the mesh object, which may include: transformation information, control point positions, smoothing groups, uv coordinate information for three-dimensional modeling, and the like. The electronic device may create the first mesh object in the 3D engine from the attributes of the mesh object.
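Creating the first mesh object from a first-type node could be sketched as follows; create_mesh and the node attribute names are hypothetical, mirroring the attribute list above:

```python
def create_first_mesh(engine, node):
    # Build the mesh in the 3D engine from the attributes stored in the node.
    return engine.create_mesh(
        control_points=node.control_points,      # control point positions
        smoothing_groups=node.smoothing_groups,  # smoothing groups
        uv_coords=node.uv_coords,                # uv coordinate information
        transform_info=node.transform_info,      # transformation information
    )
```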
It can be understood that, by acquiring the first animation data from the first-type data nodes, the electronic device can ensure that the acquired data are all mesh objects that have not been skin-bound. Therefore, repeated operations on mesh objects that are already skin-bound can be avoided, saving the resources of the electronic device.
Optionally, the electronic device may further determine a second-type data node from the plurality of data nodes. The electronic device may then acquire mesh objects from the second-type data node, process them through skin rendering, and generate the animation corresponding to the mesh objects acquired from the second-type data node.
It can be understood that the electronic device acquires different mesh objects from different data nodes. The electronic device then skin-binds the mesh objects acquired from the first-type data nodes. Finally, the electronic device can process the mesh objects of each data node through skin rendering to generate animations. In this way, the electronic device processes mesh objects in different ways to ensure that all of them can be processed through skin rendering, which avoids deploying other animation systems (such as a Transform animation system), reduces the occupation of the storage space of the electronic device, and improves space utilization.
In one implementation, as shown in fig. 6, the method of animation rendering may further include step 601 before step 202.
601. The electronic device determines whether the first mesh object is a mesh object that has been skin bound.
In one possible implementation, the electronic device may detect whether any vertex in the first mesh object is associated with a bone object. If such a vertex exists in the first mesh object, the electronic device determines that the first mesh object is a mesh object that has already been skin-bound. If no vertex in the first mesh object is associated with a bone object, the electronic device determines that the first mesh object is a mesh object that has not been skin-bound.
Alternatively, the electronic device may detect whether a bone object is present in the first animation data. If a bone object is present in the first animation data, the first mesh object is determined to be a mesh object that has already been skin-bound. If no bone object is present in the first animation data, the first mesh object is determined to be a mesh object that has not been skin-bound.
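Both checks of step 601 can be sketched together; the bound_bone vertex attribute and the objects list are assumptions used for illustration:

```python
def is_skin_bound(vertices, animation_data):
    """Return True if the first mesh object is already skin-bound."""
    # Check 1: does any vertex carry an association with a bone object?
    if any(getattr(v, "bound_bone", None) is not None for v in vertices):
        return True
    # Check 2 (alternative): does the animation data contain a bone object?
    return any(obj.node_type == "skeleton" for obj in animation_data.objects)
```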
In one implementation, if the first mesh object is a mesh object that is not skin-bound, the electronic device performs step 202.
In an embodiment of the disclosure, if the first mesh object is a mesh object that is not skin bound, the electronic device creates a target bone object.
It will be appreciated that if the first mesh object is a mesh object that has not been skin-bound, the electronic device creates a target bone object. In this way, the electronic device can skin-bind the mesh object that has not been skin-bound, so that the skin-bound mesh object is rendered through the skin animation system. This reduces the number of other rendering systems deployed on the electronic device, thereby reducing the occupation of storage space in the electronic device and improving space utilization.
In one implementation, if the first mesh object is a mesh object that has been skin bound, the electronic device performs step 602.
602. The electronic device performs skin rendering on the preset skeleton object according to the first transformation information to generate the target animation.
Here, when the first mesh object is a mesh object that has already been skin bound, the first animation data further includes a preset skeleton object; the transformation of the first mesh object corresponds to the transformation of the preset skeleton object, and the first transformation information is used to adjust the preset skeleton object.
In one possible implementation, the electronic device may process the preset skeleton object and the first mesh object according to the first transformation information through skin rendering to generate the target animation.
Based on this technical scheme, if the first mesh object is a mesh object that has already been skin bound, skin rendering is performed on the preset skeleton object according to the first transformation information to generate the target animation. In this case the first animation data further includes the preset skeleton object, the transformation of the first mesh object corresponds to the transformation of the preset skeleton object, and the first transformation information is used to adjust the preset skeleton object. Repeated operations on an already-bound mesh object are thereby avoided, saving resources of the electronic device.
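The branch between steps 202 and 602 can be sketched as follows, building on the is_skin_bound helper above. skin_render is a placeholder standing in for the skin-rendering operation of this disclosure, not a real API.

```python
def process_first_animation_data(mesh, animation_data):
    """Dispatch between step 602 (already bound) and step 202 (not bound)."""
    if is_skin_bound(mesh, animation_data):
        # Step 602: the preset skeleton carried with the data drives skin
        # rendering directly; no new binding is performed.
        return skin_render(animation_data.bones, mesh,
                           animation_data.transform_info)
    # Steps 202 onward: create a target skeleton, bind, convert, render
    # (see the pipeline sketch following the fig. 7 example below).
    ...
```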
Embodiments of the present disclosure are described below with a specific example. As shown in fig. 7, the electronic device may obtain static mesh data (i.e., a mesh object that has not been skin bound, such as the first mesh object) from an animation file (e.g., an fbx file) and create a virtual bone object. The electronic device may then perform skin binding using the static mesh data and the virtual bone object to obtain dynamic mesh data (i.e., a mesh object that has been skin bound, such as the second mesh object). The electronic device may also acquire the transformation information of the mesh object from the animation file and then convert that transformation information. Next, the electronic device may redirect the object to which the transformation information is applied, applying the converted transformation information to the bone object to generate the skinning animation data. Finally, the electronic device may input the dynamic mesh data and the skinning animation data into the skinning animation system, so that the skinning animation system drives the animation.
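The fig. 7 walkthrough can be condensed into the sketch below. Every function name here is a placeholder for the corresponding stage in the figure, assumed for illustration; none is an actual FBX or engine API.

```python
def build_and_drive_skin_animation(fbx_path, engine):
    static_mesh = load_static_mesh(fbx_path)         # unbound first mesh object
    virtual_bone = create_virtual_bone()             # created target bone object
    dynamic_mesh = bind_skin(static_mesh, virtual_bone)  # second mesh object
    mesh_info = load_transform_info(fbx_path)        # first transformation info
    bone_info = convert_transforms(mesh_info)        # second transformation info
    # Redirect the application target: the converted information now adjusts
    # the bone object instead of the mesh object.
    skin_anim = make_skin_animation(virtual_bone, bone_info)
    engine.skinning_system.drive(dynamic_mesh, skin_anim)  # target animation
```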
It will be appreciated that the above method may be implemented by an apparatus for animated rendering. In order to achieve the above functions, the animation rendering device includes a hardware structure and/or a software module for executing each function. Those of skill in the art will readily appreciate that the various illustrative modules and algorithm steps described in connection with the embodiments disclosed herein may be implemented as hardware or combinations of hardware and computer software. Whether a function is implemented as hardware or computer software driven hardware depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the embodiments of the present disclosure.
The embodiments of the present disclosure may divide functional modules of the apparatus for rendering an animation and the like according to the above method examples, for example, each functional module may be divided corresponding to each function, or two or more functions may be integrated into one processing module. The integrated modules may be implemented in hardware or in software functional modules. It should be noted that, in the embodiment of the present disclosure, the division of the modules is merely a logic function division, and other division manners may be implemented in actual practice.
Fig. 8 is a block diagram illustrating a structure of an apparatus for animation rendering according to an exemplary embodiment. Referring to fig. 8, the apparatus 80 for animation rendering includes an acquisition unit 801 and a processing unit 802.
An acquisition unit 801 configured to perform acquisition of first animation data including: the first mesh object and the first transformation information, the first transformation information being used to transform the first mesh object. A processing unit 802 configured to perform creating a target bone object. The processing unit 802 is further configured to perform skin binding of the first mesh object according to the target bone object, generating a second mesh object, the transformation of the second mesh object corresponding to the transformation of the target bone object. The processing unit 802 is further configured to perform a matrix transformation process based on the first transformation information, resulting in second transformation information, where the second transformation information is used to adjust the target bone object. The processing unit 802 is further configured to perform rendering processing on the target skeleton object and the second mesh object according to the second transformation information, and generate a target animation.
Optionally, the processing unit 802 is further configured to perform determining whether the first mesh object is a mesh object that has been skin bound. The processing unit 802 is further configured to execute creating the target bone object if the first mesh object is a mesh object that is not skin bound.
Optionally, the processing unit 802 is further configured to perform skin rendering on the preset skeleton object according to the first transformation information if the first mesh object is a mesh object that has already been skin bound, generating the target animation. Here, when the first mesh object is a mesh object that has already been skin bound, the first animation data further includes a preset skeleton object; the transformation of the first mesh object corresponds to the transformation of the preset skeleton object, and the first transformation information is used to adjust the preset skeleton object.
Optionally, the animation rendering device may be applied to an electronic device, where a plurality of data nodes are stored in the electronic device, where the plurality of data nodes includes a first type of data node, and the first type of data node is used to store mesh objects and transformation information that are not skin bound. The processing unit 802 is further configured to perform determining a first type of data node from the plurality of data nodes. The acquisition unit 801 is further configured to perform acquisition of first animation data from a first class data node.
Optionally, the processing unit 802 is further configured to perform inputting the second transformation information, the target skeleton object, and the second mesh object into a rendering engine, the rendering engine comprising a skinning animation system. The processing unit 802 is further configured to execute the calling skin animation system to drive the target skeleton object according to the second transformation information, and generate the target animation.
Optionally, the processing unit 802 is further configured to: if the first mesh object has a parent node, obtain a transformation matrix of a target parent node and determine a second transformation matrix according to the first transformation matrix, the transformation matrix of the target parent node, and a preset weight, where the target parent node is the parent node of the first mesh object, the first transformation matrix indicates the first transformation information, the second transformation matrix indicates the second transformation information, and the preset weight reflects the degree to which the second transformation information influences the transformation of the second mesh object. The processing unit 802 is further configured to determine the second transformation matrix according to the first transformation matrix and the preset weight if the first mesh object does not have a parent node.
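As one hedged illustration of this matrix transformation processing, the sketch below composes the target parent's transform (when present) with the first transformation matrix and blends toward the identity using the preset weight. The disclosure does not give the exact composition formula here, so the weighted blend is an assumption made purely for the example.

```python
import numpy as np

def second_transform_matrix(first_mat, preset_weight, parent_mat=None):
    """Derive the second transformation matrix (4x4) from the first."""
    # With a parent node, fold the target parent's transform in first.
    composed = first_mat if parent_mat is None else parent_mat @ first_mat
    # Blend toward identity: preset_weight reflects how strongly the second
    # transformation information influences the second mesh object.
    return (1.0 - preset_weight) * np.eye(4) + preset_weight * composed
```

Under this assumption, a preset weight of 1.0 applies the composed transform fully, while 0.0 leaves the second mesh object unchanged.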
The specific manner in which the respective modules perform the operations in the apparatus for animation rendering in the above-described embodiments has been described in detail in the embodiments related to the method, and will not be described in detail herein.
Fig. 9 is a schematic structural diagram of an apparatus 90 for animation rendering provided in the present disclosure. As shown in fig. 9, the animation rendering device 90 may include at least one processor 901 and a memory 903 for storing instructions executable by the processor 901. Wherein the processor 901 is configured to execute instructions in the memory 903 to implement the method of animation rendering in the above-described embodiments.
In addition, the animation rendering device 90 may also include a communication bus 902 and at least one communication interface 904.
Processor 901 may be a GPU, a microprocessor, an ASIC, or one or more integrated circuits for controlling the execution of programs in the solutions of the present disclosure.
Communication bus 902 may include a path to transfer information between the aforementioned components.
The communication interface 904 is any transceiver-type apparatus used to communicate with other devices or communication networks, such as Ethernet, a radio access network (RAN), or a wireless local area network (WLAN).
The memory 903 may be, but is not limited to, a read-only memory (ROM) or other type of static storage device that can store static information and instructions, a random access memory (RAM) or other type of dynamic storage device that can store information and instructions, an electrically erasable programmable read-only memory (EEPROM), a compact disc read-only memory (CD-ROM) or other optical disc storage (including compact discs, laser discs, optical discs, digital versatile discs, Blu-ray discs, etc.), magnetic disk storage media or other magnetic storage devices, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer. The memory may be stand-alone and connected to the processor through a bus, or it may be integrated with the processor, for example as a volatile storage medium within the GPU.
The memory 903 is used for storing instructions for executing the disclosed solution, and is controlled by the processor 901 to execute. The processor 901 is configured to execute instructions stored in the memory 903 to implement the functions in the methods of the present disclosure.
In a particular implementation, as one embodiment, processor 901 may include one or more GPUs, such as GPU0 and GPU1 in fig. 9.
In a particular implementation, as one embodiment, the apparatus 90 for animation rendering may include a plurality of processors, such as processor 901 and processor 907 in fig. 9. Each of these processors may be a single-core (single-GPU) processor or a multi-core (multi-GPU) processor. A processor herein may refer to one or more devices, circuits, and/or processing cores for processing data (e.g., computer program instructions).
In a specific implementation, as an embodiment, the apparatus for animation rendering 90 may further include an output device 905 and an input device 906. The output device 905 communicates with the processor 901 and may display information in a variety of ways. For example, the output device 905 may be a liquid crystal display (liquid crystal display, LCD), a light emitting diode (light emitting diode, LED) display device, a Cathode Ray Tube (CRT) display device, or a projector (projector), or the like. The input device 906 communicates with the processor 901 and may accept user input in a variety of ways. For example, the input device 906 may be a mouse, a keyboard, a touch screen device, a sensing device, or the like.
Those skilled in the art will appreciate that the structure shown in FIG. 9 does not constitute a limitation of the animation rendering apparatus 90, and may include more or fewer components than shown, or may combine certain components, or may employ a different arrangement of components.
The present disclosure also provides an electronic device including a processor; a memory for storing the processor-executable instructions; wherein the processor is configured to execute the instructions to implement the method of animation rendering provided by the embodiments of the present disclosure.
The present disclosure also provides a computer-readable storage medium having instructions stored thereon that, when executed by a processor of a computer, enable the computer to perform the method of animation rendering provided by the embodiments of the present disclosure described above.
The embodiments of the present disclosure also provide a computer program product, which contains a computer program, and when the computer program is executed by a processor, the method for rendering the animation provided by the embodiments of the present disclosure is implemented.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This disclosure is intended to cover any variations, uses, or adaptations of the disclosure following its general principles and including such departures from the present disclosure as come within known or customary practice in the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It is to be understood that the present disclosure is not limited to the precise arrangements and instrumentalities shown in the drawings, and that various modifications and changes may be effected without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (10)

1. A method of animation rendering, the method comprising:
acquiring first animation data, wherein the first animation data comprises: a first mesh object and first transformation information for transforming the first mesh object;
creating a target bone object;
performing skin binding on the first mesh object according to the target skeleton object to generate a second mesh object, wherein the transformation of the second mesh object corresponds to the transformation of the target skeleton object;
performing matrix transformation processing based on the first transformation information to obtain second transformation information, wherein the second transformation information is used for adjusting the target bone object;
and rendering the target skeleton object and the second mesh object according to the second transformation information to generate a target animation.
2. The method of claim 1, wherein prior to said creating the target bone object, the method further comprises:
determining whether the first mesh object is a mesh object already skin-bound;
the creating a target bone object comprising:
and if the first mesh object is a mesh object which has not been skin bound, creating the target skeleton object.
3. The method according to claim 2, wherein the method further comprises:
if the first mesh object is a mesh object which has already been skin bound, performing skin rendering on a preset skeleton object according to the first transformation information to generate the target animation;
wherein, when the first mesh object is a mesh object that has been skin bound, the first animation data further comprises the preset skeleton object; the transformation of the first mesh object corresponds to the transformation of the preset skeleton object, and the first transformation information is used for adjusting the preset skeleton object.
4. The method according to claim 1, applied to an electronic device, wherein a plurality of data nodes are stored in the electronic device, the plurality of data nodes including a first type of data nodes, the first type of data nodes being used for storing mesh objects and transformation information that are not skin bound; before the acquiring the first animation data, the method further includes:
determining the first type of data node from the plurality of data nodes;
the acquiring the first animation data includes:
the first animation data is obtained from the first class data node.
5. The method of any of claims 1-4, wherein rendering the target skeletal object and the second mesh object according to the second transformation information to generate a target animation comprises:
inputting the second transformation information, the target skeletal object and the second mesh object into a rendering engine, the rendering engine comprising a skinning animation system;
and calling the skin animation system to drive the target skeleton object according to the second transformation information, and generating the target animation.
6. The method according to any one of claims 1-4, wherein performing a matrix transformation process based on the first transformation information to obtain second transformation information includes:
if the first mesh object has a parent node, obtaining a transformation matrix of a target parent node, and determining a second transformation matrix according to the first transformation matrix, the transformation matrix of the target parent node and a preset weight, wherein the target parent node is the parent node of the first mesh object, the first transformation matrix is used for indicating the first transformation information, the second transformation matrix is used for indicating the second transformation information, and the preset weight is used for reflecting the influence degree of the second transformation information on the transformation of the second mesh object;
and if the first mesh object does not have a parent node, determining the second transformation matrix according to the first transformation matrix and the preset weight.
7. An apparatus for animated rendering, comprising:
an acquisition unit configured to perform acquisition of first animation data including: a first mesh object and first transformation information for transforming the first mesh object;
a processing unit configured to perform creating a target bone object;
the processing unit is further configured to perform skin binding of the first mesh object according to the target bone object, and generate a second mesh object, a transformation of which corresponds to the transformation of the target bone object;
the processing unit is further configured to perform matrix transformation processing based on the first transformation information to obtain second transformation information, wherein the second transformation information is used for adjusting the target bone object;
the processing unit is further configured to perform rendering processing on the target skeleton object and the second mesh object according to the second transformation information, and generate a target animation.
8. The apparatus of claim 7, wherein:
the processing unit is further configured to perform determining whether the first mesh object is a mesh object that has been skin bound;
the processing unit is further configured to execute creating the target bone object if the first mesh object is a mesh object that is not skin bound.
9. An electronic device, the electronic device comprising:
a processor;
a memory for storing the processor-executable instructions;
wherein the processor is configured to execute the instructions to implement the method of animation rendering of any of claims 1-6.
10. A computer readable storage medium having instructions stored thereon, which when executed by a processor of a computer, cause the computer to perform the method of animated rendering of any of claims 1-6.
CN202310004882.8A 2023-01-03 2023-01-03 Animation rendering method, device, equipment and storage medium Pending CN116188724A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310004882.8A CN116188724A (en) 2023-01-03 2023-01-03 Animation rendering method, device, equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310004882.8A CN116188724A (en) 2023-01-03 2023-01-03 Animation rendering method, device, equipment and storage medium

Publications (1)

Publication Number Publication Date
CN116188724A true CN116188724A (en) 2023-05-30

Family

ID=86443644

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310004882.8A Pending CN116188724A (en) 2023-01-03 2023-01-03 Animation rendering method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN116188724A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116894893A (en) * 2023-09-11 2023-10-17 Shandong Jerei Digital Technology Co., Ltd. Nonlinear animation regulation and control method and system based on three-dimensional engine


Similar Documents

Publication Publication Date Title
CN110599593B (en) Data synthesis method, device, equipment and storage medium
CN109754464B (en) Method and apparatus for generating information
CN111292427B (en) Bone displacement information acquisition method, device, equipment and storage medium
CN111738914B (en) Image processing method, device, computer equipment and storage medium
CN109753892B (en) Face wrinkle generation method and device, computer storage medium and terminal
CN111710035B (en) Face reconstruction method, device, computer equipment and storage medium
CN111210485A (en) Image processing method and device, readable medium and electronic equipment
CN113706440A (en) Image processing method, image processing device, computer equipment and storage medium
CN111325220B (en) Image generation method, device, equipment and storage medium
CN116188724A (en) Animation rendering method, device, equipment and storage medium
CN110956571B (en) SLAM-based virtual-real fusion method and electronic equipment
CN110717964B (en) Scene modeling method, terminal and readable storage medium
CN114416723B (en) Data processing method, device, equipment and storage medium
CN113436348B (en) Three-dimensional model processing method and device, electronic equipment and storage medium
CN114677350A (en) Connection point extraction method and device, computer equipment and storage medium
CN112714263B (en) Video generation method, device, equipment and storage medium
CN114332118A (en) Image processing method, device, equipment and storage medium
CN114723600A (en) Method, device, equipment, storage medium and program product for generating cosmetic special effect
CN114897688A (en) Video processing method, video processing device, computer equipment and medium
CN114462580A (en) Training method of text recognition model, text recognition method, device and equipment
CN113362260A (en) Image optimization method and device, storage medium and electronic equipment
CN112183217A (en) Gesture recognition method, interaction method based on gesture recognition and mixed reality glasses
CN116704101B (en) Pixel filling method and terminal based on ray tracing rendering
CN115714888B (en) Video generation method, device, equipment and computer readable storage medium
CN113658283B (en) Image processing method, device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination