CN108320322B - Animation data processing method, animation data processing device, computer equipment and storage medium

Publication number: CN108320322B (granted publication of application CN201810141715.7A; published earlier as CN108320322A)
Authority: CN (China)
Original language: Chinese (zh)
Inventors: 梁家斌, 凌飞
Applicant and assignee: Tencent Technology Chengdu Co Ltd
Legal status: Active (granted)

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T13/00 Animation
    • G06T13/20 3D [Three Dimensional] animation
    • G06T13/40 3D [Three Dimensional] animation of characters, e.g. humans, animals or virtual beings
    • G06T7/00 Image analysis
    • G06T7/20 Analysis of motion
    • G06T7/246 Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G06T7/251 Analysis of motion using feature-based methods, e.g. the tracking of corners or segments, involving models
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10016 Video; Image sequence

Abstract

The invention relates to an animation data processing method, an animation data processing device, computer equipment and a storage medium, wherein the method comprises the following steps: acquiring an original key frame data set corresponding to a current animation object to be subjected to data processing; acquiring first motion data of a forward adjacent key frame in a current transformation dimension, and acquiring second motion data of a backward adjacent key frame in the current transformation dimension; comparing the current motion data with the first motion data and the second motion data respectively; deleting the current motion data according to the current comparison result; returning to the step of acquiring the current key frame to be processed until the key frame to be processed in the original key frame data set is processed, and obtaining a current key frame data set; and deleting the current dimension motion data according to the current motion type, wherein the current dimension motion data is the motion data of each key frame in the current key frame data set in the current transformation dimension. The method reduces the occupation rate of the animation motion data on computer resources.

Description

Animation data processing method, animation data processing device, computer equipment and storage medium
Technical Field
The present invention relates to the field of computer technologies, and in particular, to a method and an apparatus for processing animation data, a computer device, and a storage medium.
Background
With the rapid development of computer technology, animation has been widely used in many fields, for example, in the field of games, game characters and actions are generally shown through animation.
At present, a large number of animated characters are used in games, and in order to improve the fidelity of the animation, each animated character carries a large amount of animation data. As a result, a large amount of computer resources is occupied when the animation data is stored or the corresponding client is run, which slows down the computer device.
Disclosure of Invention
Based on this, it is necessary to provide an animation data processing method, apparatus, computer device and storage medium for solving the above-mentioned problems, which can compare the motion data of each to-be-processed key frame of an animation object with the motion data of the front and rear adjacent key frames in the same transformation dimension according to the transformation dimension, delete the motion data of the to-be-processed key frame in the transformation dimension according to the comparison result to obtain a current key frame data set, and also delete the motion data of each key frame in the current key frame data set in the transformation dimension according to the motion type of the transformation dimension, so that invalid motion data of the key frame can be deleted, the occupation rate of computer resources is reduced while the animation precision is maintained, and the operating speed of the computer device is increased.
A method of animation data processing, the method comprising: acquiring an original key frame data set corresponding to a current animation object to be subjected to data processing, wherein the original key frame data set comprises motion data of each key frame in a current transformation dimension; acquiring a current key frame to be processed, and acquiring a forward adjacent key frame and a backward adjacent key frame of the current key frame to be processed in the current transformation dimension; acquiring first motion data of the forward adjacent key frame in the current transformation dimension, and acquiring second motion data of the backward adjacent key frame in the current transformation dimension; acquiring current motion data of the current to-be-processed key frame in the current transformation dimension, and comparing the current motion data with the first motion data and the second motion data respectively to obtain a current comparison result; deleting the current motion data according to the current comparison result; returning to the step of acquiring the current key frame to be processed until the key frame to be processed in the original key frame data set is processed, and obtaining a current key frame data set; and acquiring a current motion type corresponding to the current transformation dimension, and deleting current dimension motion data according to the current motion type, wherein the current dimension motion data is the motion data of each key frame in the current key frame data set in the current transformation dimension.
An animation data processing apparatus, the apparatus comprising: an original set acquisition module, configured to acquire an original key frame data set corresponding to a current animation object to be subjected to data processing, wherein the original key frame data set comprises motion data of each key frame in a current transformation dimension; a frame acquisition module, configured to acquire a current key frame to be processed, and acquire a forward adjacent key frame and a backward adjacent key frame of the current key frame to be processed in the current transformation dimension; a motion data obtaining module, configured to obtain first motion data of the forward adjacent key frame in the current transformation dimension, and obtain second motion data of the backward adjacent key frame in the current transformation dimension; a comparison module, configured to obtain current motion data of the current key frame to be processed in the current transformation dimension, and compare the current motion data with the first motion data and the second motion data respectively to obtain a current comparison result; a first deleting module, configured to delete the current motion data according to the current comparison result; a returning module, configured to return to the step of obtaining the current key frame to be processed until all key frames to be processed in the original key frame data set have been processed, so as to obtain a current key frame data set; and a second deleting module, configured to obtain a current motion type corresponding to the current transformation dimension, and delete current dimension motion data according to the current motion type, wherein the current dimension motion data is the motion data of each key frame in the current key frame data set in the current transformation dimension.
In an embodiment, the apparatus further includes a first skipping module, configured to skip the step of obtaining a current motion type corresponding to the current transformation dimension and deleting current dimension motion data according to the current motion type, when the current motion type is a rotational motion type.
In one embodiment, the current animation object is a skeletal animation object, and when the current motion type is a displacement motion type and/or a rotation motion type, motion data of each key frame in the original key frame data set in the current transformation dimension is relative motion data, and the relative motion data is motion data of the current animation object moving relative to a parent object of the current animation object.
In one embodiment, the apparatus further comprises a second skipping module configured to: determine whether the current animation object is in a white list; and when the current animation object is in the white list, skip the step of obtaining the current motion type corresponding to the current transformation dimension and deleting the current dimension motion data according to the current motion type.
In one embodiment, the apparatus further comprises: a decimal digit obtaining module, configured to obtain a decimal digit number of motion data of each key frame in the original key frame data set in the current transformation dimension; and the simplification module is used for simplifying the motion data of the current transformation dimension to obtain the simplified motion data when the decimal place number of the motion data of the current transformation dimension exceeds a decimal place number threshold value.
In one embodiment, the current comparison result includes consistent or inconsistent, and the comparison module is configured to: compare the current motion data with the first motion data to obtain a first comparison result; compare the current motion data with the second motion data to obtain a second comparison result; and when the first comparison result is consistent and the second comparison result is consistent, determine that the current comparison result is consistent. The first deleting module is configured to: delete the current motion data when the current comparison result is consistent.
A computer device comprising a memory and a processor, the memory having stored therein a computer program which, when executed by the processor, causes the processor to carry out the steps of the above animation data processing method.
A computer-readable storage medium having stored thereon a computer program which, when executed by a processor, causes the processor to execute the steps of the above-described animation data processing method.
According to the animation data processing method, the animation data processing device, the computer equipment and the storage medium, motion data of each key frame to be processed of the animation object can be compared with motion data of adjacent key frames in the same transformation dimension according to the transformation dimension, the motion data of the key frames to be processed in the transformation dimension can be deleted according to the current comparison result, a current key frame data set can be obtained, and the motion data of each key frame in the current key frame data set in the transformation dimension can be deleted according to the motion type of the transformation dimension, so that the invalid motion data of the key frames can be deleted, the occupation rate of computer resources is reduced while the animation precision is kept, and the running speed of the computer equipment is improved.
Drawings
FIG. 1 is a diagram of an application environment of a method for processing animation data provided in one embodiment;
FIG. 2 is a flowchart of a method of processing animation data in one embodiment;
FIG. 3 is a schematic illustration of a skeletal structure in one embodiment;
FIG. 4A is a diagram illustrating motion data for a key frame in one embodiment;
FIG. 4B is a diagram illustrating a comparison of animated images obtained using different animation data processing methods according to an embodiment;
FIG. 5 is a flowchart illustrating an embodiment of obtaining a current motion type corresponding to a current transform dimension and deleting motion data of the current dimension according to the current motion type;
FIG. 6 is a flowchart illustrating an embodiment of obtaining a current motion type corresponding to a current transform dimension and deleting motion data of the current dimension according to the current motion type;
FIG. 7 is a flowchart of an animation data processing method in one embodiment;
FIG. 8 is a flowchart of an animation data processing method in one embodiment;
FIG. 9 is a block diagram showing the construction of an animation data processing apparatus according to an embodiment;
FIG. 10 is a block diagram of a second deletion module in one embodiment;
FIG. 11 is a block diagram of a second deletion module in one embodiment;
FIG. 12 is a block diagram showing the construction of an animation data processing apparatus according to an embodiment;
FIG. 13 is a block diagram showing the construction of an animation data processing apparatus according to an embodiment;
FIG. 14 is a block diagram showing the construction of an animation data processing apparatus according to an embodiment;
FIG. 15 is a block diagram showing an internal configuration of a computer device according to an embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
It will be understood that, as used herein, the terms "first," "second," and the like may be used herein to describe various elements, but these elements are not limited by these terms unless otherwise specified. These terms are only used to distinguish one element from another. For example, the first motion data may be referred to as second motion data, and similarly, the second motion data may be referred to as first motion data, without departing from the scope of the present application.
Fig. 1 is a diagram of an application environment of the animation data processing method provided in an embodiment. As shown in fig. 1, the application environment includes a terminal 110 and a server 120. When animation motion data needs to be processed, for example when skeletal animation data imported into the Unity engine on the terminal needs to be reduced, the terminal 110 may obtain, from the server 120, an original key frame data set of the current animation object to be subjected to data processing, where the original key frame data set includes motion data of each key frame in a current transformation dimension, and then process the animation data by using the animation data processing method provided by the embodiment of the present invention, so as to reduce the data volume of the original animation data.
The Unity engine is a professional game engine developed by Unity Technologies with good cross-platform support; animation can be created, or animation data processed, through the Unity engine. The terminal 110 may be, but is not limited to, a smart phone, a tablet computer, a notebook computer, a desktop computer, and the like. The terminal 110 and the server 120 may be connected through communication connections such as Bluetooth, USB (Universal Serial Bus), or a network, which is not limited herein. The server 120 may be an independent physical server, a server cluster formed by a plurality of physical servers, or a cloud server providing basic cloud computing services such as cloud computing, cloud database, cloud storage, and CDN services. In addition, it is understood that the animation data processing method provided by the embodiment of the invention may also be executed on a server. The original key frame data set may also be stored in the terminal 110, in which case, when the animation data needs to be processed, the terminal 110 may obtain the original key frame data set from local storage.
As shown in fig. 2, in an embodiment, an animation data processing method is provided, and this embodiment is mainly illustrated by applying the method to the terminal 110 in fig. 1. The method specifically comprises the following steps:
step S202, an original key frame data set corresponding to the current animation object to be subjected to data processing is obtained, wherein the original key frame data set comprises motion data of each key frame in the current transformation dimension.
In particular, an animation gives the viewer the visual effect that an animated character changes continuously by playing a series of images in succession. The current animation object may be determined according to the particular animation. For example, for a character in an animation, the current animation object may be a hand of the animated character; for a tree in an animation, the animation object may be a branch of the tree. A key frame is a picture representing a key action of the animated character in the animation. The positions of the key frames in the animation may be set as needed. For example, the animation designer may design the pictures of the 1st, 5th and 15th frames of each second of animation according to the animation scene, so that the 1st, 5th and 15th frames are key frames, while the pictures corresponding to the other, non-key frames may be obtained by computer interpolation from the positions, action states and times of the key frames, yielding the animation data of the non-key frames between the key frames. The motion data may include one or more of displacement motion data, rotation motion data, and scaling motion data. Displacement motion data may represent the distance that the animation object moves over time. Rotation motion data may represent the angle through which the animation object rotates over time. Scaling motion data may represent the factor by which the animation object is enlarged or reduced over time. The transformation dimension represents the direction of the motion transformation of the animation object. The motion of an animation is directional, and a transformation dimension represents one motion transformation direction of one type of motion. For example, for displacement motion in a three-dimensional animation, the motion is described in a coordinate system and includes three motion directions, the X direction, the Y direction and the Z direction, i.e., three transformation dimensions. Scaling motion is likewise described in a coordinate system and includes scaling in the X, Y and Z transformation directions. The motion data in the original key frame data set may involve one or more transformation dimensions. When there are multiple transformation dimensions, the motion data of only one of them may be processed, i.e., that transformation dimension is taken as the current transformation dimension; the motion data of multiple transformation dimensions may also be processed. For example, the current transformation dimension may be the X dimension of the displacement motion, or both the X dimension of the displacement motion and the Y dimension of the scaling motion. The motion data may include the specific motion value of a key frame and the corresponding motion trend. The motion trend can be represented by the curvature of the motion curve, which can further be divided into a forward curvature and a backward curvature; curvature refers to the rate of rotation of the tangent angle at a point on the curve with respect to arc length.
In one embodiment, the current animation object is a skeletal animation object, and when the current motion type is a displacement motion type and/or a rotation motion type, the motion data of each key frame in the original key frame data set in the current transformation dimension is relative motion data, i.e., motion data of the current animation object moving relative to the parent object of the current animation object. In skeletal animation, the animated character has a skeleton structure formed by interconnected bones; each bone in the skeleton structure is an animation object, and the animation objects have a hierarchical relationship. The motion data of a bone at a child level describes its motion relative to the bone at the parent level, and the bones can be made to move, generating the animation, by controlling their displacement, rotation and scaling motion data.
For example, when the animated character is an animal such as a puppy, the skeleton structure of the puppy is shown in fig. 3, where the torso is at the first level, and the head, left arm, right arm, left leg and right leg are at the second level, the torso being the parent level of the head, left arm, right arm, left leg and right leg. A third level may be included below the second level. The displacement motion data and rotation motion data corresponding to an animation object describe motion relative to its parent level. For example, a displacement of (68, -30) in millimeters and a rotation angle of -30 for the left arm means that, relative to the torso, the left arm is translated 68 millimeters along the X-axis and -30 millimeters along the Y-axis and rotated -30 degrees.
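For illustration, the following Python sketch (not part of the original patent text) shows one possible way to represent the skeletal hierarchy and the per-key-frame relative motion data described above; all class and field names are assumptions introduced for this example.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Optional, Tuple

@dataclass
class Keyframe:
    time: float              # playing time of the key frame (seconds)
    value: float             # motion value in one transformation dimension
    curvature: float = 0.0   # curvature describing the motion trend of the curve

@dataclass
class AnimationObject:
    name: str
    parent: Optional["AnimationObject"] = None   # parent bone in the hierarchy
    # Motion curves keyed by (motion type, transformation dimension),
    # e.g. ("translation", "X") or ("scale", "Y"); displacement and rotation
    # values are relative to the parent object.
    curves: Dict[Tuple[str, str], List[Keyframe]] = field(default_factory=dict)

# Example mirroring fig. 3: the torso is the parent of the left arm, and the
# left arm's displacement and rotation values are expressed relative to the torso.
torso = AnimationObject("torso")
left_arm = AnimationObject("left_arm", parent=torso)
left_arm.curves[("translation", "X")] = [Keyframe(0.0, 68.0)]
left_arm.curves[("translation", "Y")] = [Keyframe(0.0, -30.0)]
left_arm.curves[("rotation", "Z")] = [Keyframe(0.0, -30.0)]
```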
Step S204, acquiring the current key frame to be processed, and acquiring the forward adjacent key frame and the backward adjacent key frame of the current key frame to be processed in the current transformation dimension.
Specifically, the forward adjacent key frame refers to a key frame before the current key frame to be processed in the current transformation dimension, and the backward adjacent key frame refers to a key frame after the current key frame to be processed in the current transformation dimension. The original key frame data set comprises a plurality of key frames, and the key frames in the original key frame data set can be sequentially used as current key frames to be processed according to the sequence of the key frames. In some embodiments, the first key frame and the last key frame may not be considered as current pending key frames, since the first key frame has no forward neighboring key frame and the last key frame has no backward neighboring key frame.
Step S206, acquiring first motion data of the forward adjacent key frame in the current transformation dimension, and acquiring second motion data of the backward adjacent key frame in the current transformation dimension.
Specifically, after a current key frame to be processed, a forward adjacent key frame and a backward adjacent key frame of the current key frame to be processed are obtained, motion data of the forward adjacent key frame and the backward adjacent key frame in a current transformation dimension are obtained from an original key frame data set, the motion data of the forward adjacent key frame in the current transformation dimension are used as first motion data, and the motion data of the backward adjacent key frame in the current transformation dimension are used as second motion data.
Step S208, obtaining the current motion data of the current to-be-processed key frame in the current transformation dimension, and comparing the current motion data with the first motion data and the second motion data respectively to obtain a current comparison result.
Specifically, the comparison result may be consistent or inconsistent. The criterion for consistency may be set as required; for example, two values may be regarded as consistent only when they are exactly the same, or when their difference is smaller than a certain threshold. For example, in the Unity game engine, motion data is generally stored with 9 decimal places by default, but it has been verified that keeping 3 decimal places is still sufficient for the required animation precision. Therefore, if the integer part and the first 3 decimal places of two motion values are the same, the motion values can be judged to be consistent. After the first motion data and the second motion data are obtained, the current motion data is compared with the first motion data to obtain a first comparison result, and the current motion data is compared with the second motion data to obtain a second comparison result. The first comparison result and the second comparison result are then combined to obtain the current comparison result. In one embodiment, the current comparison result is consistent when the first comparison result is consistent and the second comparison result is consistent; when either or both of the first comparison result and the second comparison result are inconsistent, the current comparison result is inconsistent.
In one embodiment, the motion data may be reduced before being compared, for example by keeping only the first 3 decimal places of the motion data.
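The consistency criterion described above can be sketched as follows; the helper names and the 3-decimal-place threshold are illustrative assumptions based on the example in this paragraph, not a prescribed implementation.

```python
DECIMAL_PLACES = 3  # assumed precision threshold; the description uses 3 as an example

def reduce(value: float, places: int = DECIMAL_PLACES) -> float:
    """Keep only the first `places` decimal places of a motion value."""
    return round(value, places)

def is_consistent(a: float, b: float, places: int = DECIMAL_PLACES) -> bool:
    """Two motion values are judged consistent when their integer part and
    first `places` decimal places are the same."""
    return reduce(a, places) == reduce(b, places)

# Example: 2.123456789 and 2.123499999 agree in the first 3 decimal places.
assert is_consistent(2.123456789, 2.123499999)
assert not is_consistent(2.123, 2.124)
```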
And step S210, deleting the current motion data according to the current comparison result.
Specifically, after the current comparison result is obtained, whether the current motion data is deleted is determined according to the current comparison result. And if the current comparison result is consistent, deleting the current motion data. And if the current comparison result is inconsistent, keeping the current motion data.
For example, as shown in fig. 4A, the abscissa represents the playing time of the animation, the ordinate represents the displacement value, the current transformation dimension is displacement in the X-axis direction, and a1 to a4 are key frames. As shown in fig. 4A, when the current key frame to be processed is a2, the motion data of a2 and of the forward adjacent key frame a1 in the current transformation dimension are inconsistent, while the motion data of a2 and of the backward adjacent key frame a3 in the current transformation dimension are consistent; the current comparison result of a2 is therefore inconsistent, and the motion data of key frame a2 in the current transformation dimension is retained. When the current key frame to be processed is a3, the motion data of a3 and of the forward adjacent key frame a2 in the current transformation dimension are consistent, and the motion data of a3 and of the backward adjacent key frame a4 in the current transformation dimension are consistent; the current comparison result of a3 is therefore consistent, and the motion data of key frame a3 in the current transformation dimension is deleted.
Step S212, the step of obtaining the current key frame to be processed is returned until the key frame to be processed in the original key frame data set is processed, and the current key frame data set is obtained.
Specifically, the original key frame data set may include a plurality of key frames to be processed. If the key frames to be processed have not all been processed, the method returns to the step of obtaining the current key frame to be processed, takes the next key frame to be processed as the current key frame to be processed, and repeats steps S204 to S210. When all key frames to be processed in the original key frame data set have been processed, the current key frame data set is obtained. For example, taking fig. 4A as an example, the key frames of the original key frame data set are a1 to a4, of which the middle key frames a2 and a3 are the key frames to be processed. First, a2 is used as the current key frame to be processed and steps S204 to S210 are performed; since the current comparison result corresponding to a2 is inconsistent, the motion data of a2 in the current transformation dimension is retained. Then, a3 is used as the current key frame to be processed and steps S204 to S210 are performed; since the current comparison result corresponding to a3 is consistent, the motion data of a3 in the current transformation dimension is deleted from the original key frame data set, and the current key frame data set is obtained.
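A minimal sketch of the loop formed by steps S204 to S212 for one transformation dimension is given below, reusing the is_consistent helper from the earlier sketch; it never treats the first or last key frame as a candidate and deletes a middle key frame only when it is consistent with both neighbours. This is an illustrative reading of the steps, not the patent's reference implementation.

```python
def remove_redundant_keyframes(keyframes):
    """Return the current key frame data set for one transformation dimension.

    `keyframes` is a list of Keyframe objects ordered by time. A middle key
    frame is deleted when its motion value is consistent with both its forward
    and backward adjacent key frames (current comparison result: consistent).
    """
    kept = list(keyframes)
    i = 1
    while i < len(kept) - 1:            # first and last key frames are never candidates
        prev_kf, cur_kf, next_kf = kept[i - 1], kept[i], kept[i + 1]
        first_ok = is_consistent(cur_kf.value, prev_kf.value)
        second_ok = is_consistent(cur_kf.value, next_kf.value)
        if first_ok and second_ok:
            del kept[i]                  # delete current motion data; neighbours re-pair
        else:
            i += 1                       # keep it and move to the next key frame to be processed
    return kept
```

Applied to the curve of fig. 4A, this keeps a1, a2 and a4 and deletes only a3, matching the example above.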
Step S214, obtaining a current motion type corresponding to the current transformation dimension, and deleting current dimension motion data according to the current motion type, wherein the current dimension motion data is motion data of each key frame in the current key frame data set in the current transformation dimension.
Specifically, the motion types may include a displacement motion type, a rotation motion type, and a zoom motion type. And after the current key frame data set is obtained, determining whether to delete the motion data of each key frame in the current key frame data set in the current transformation dimension according to the current motion type by taking the motion type corresponding to the current transformation dimension as the current motion type, namely determining whether to delete the motion data of the current dimension according to the current motion type.
In some embodiments, if the motion data of the current key frame data set in the current transformation dimension still includes the motion data of three or more key frames, this indicates that at least one key frame has motion data in the current transformation dimension that differs from that of its forward adjacent key frame or backward adjacent key frame, so the motion data of each key frame in the current key frame data set in the current transformation dimension is not deleted. For example, in fig. 4A, after the motion data of key frame a3 in the current transformation dimension is deleted, the current transformation dimension still includes the motion data of three key frames, a1, a2 and a4, so the motion data of a1, a2 and a4 in the current transformation dimension is retained.
In one embodiment, the step of deleting the motion data of the current dimension according to the current motion type comprises: and when the current dimension motion data are only the current dimension motion data corresponding to the first key frame and the current dimension motion data corresponding to the last key frame of the current animation object, deleting the current dimension motion data corresponding to the first key frame and/or the current dimension motion data corresponding to the last key frame according to the current motion type.
Specifically, when the current dimension motion data only includes the current dimension motion data corresponding to the first key frame and the current dimension motion data corresponding to the last key frame, that is, when only the motion data of the first key frame and of the last key frame remain, it must be determined according to the current motion type whether to delete the current dimension motion data corresponding to the first key frame and to the last key frame. For skeletal animation, when the current motion type is a rotation motion type, the current dimension motion data corresponding to the first key frame and to the last key frame can be retained, so that the current animation object is controlled to rotate according to the rotation motion data of the first and last key frames. When the current motion type is a displacement motion type and the current dimension motion data corresponding to the first key frame and to the last key frame are both 0, the current animation object performs no relative motion with respect to its parent level in the current transformation dimension, so the current dimension motion data corresponding to the first key frame and to the last key frame can be deleted. When the current motion type is a scaling motion type and the current dimension motion data corresponding to the first key frame and to the last key frame are the same, that is, the scaling factors are the same, the current dimension motion data corresponding to the last key frame can be deleted and the current dimension motion data corresponding to the first key frame retained, so that the animation object can be scaled according to the scaling factor of the first key frame. In implementations of the present invention, taking an animated game as an example, the data volume of animation data in the game is very large. After the current key frame data set is obtained, if the current dimension motion data consists only of the current dimension motion data corresponding to the first key frame and to the last key frame of the current animation object, whether to delete the current dimension motion data corresponding to the first key frame and/or to the last key frame can be further determined according to the current motion type. The data volume of the animation data can thus be further reduced, the memory occupied while the game application is running is smaller, and the running speed of the game application is improved.
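One possible reading of step S214 for the case where only the first and last key frames remain is sketched below, reusing the helpers from the earlier sketches; the motion-type names and the zero/equality tests are assumptions, and the curvature checks described in the later embodiments are omitted here for brevity.

```python
def prune_first_last(curve, motion_type, places=DECIMAL_PLACES):
    """Apply step S214 when only the first and last key frames remain.

    Rules sketched from the description:
      - rotation: keep both key frames;
      - translation: if both values are 0 (and, per the later embodiments, the
        curve curvatures are also 0), the object is static relative to its
        parent, so both key frames can be deleted;
      - scale: if both values are equal, keep only the first key frame.
    """
    if len(curve) != 2:
        return curve                      # rule only applies to a two-key-frame curve
    first, last = curve
    if motion_type == "rotation":
        return curve
    if motion_type == "translation":
        if reduce(first.value, places) == 0 and reduce(last.value, places) == 0:
            return []                     # relative displacement state: static
        return curve
    if motion_type == "scale":
        if is_consistent(first.value, last.value, places):
            return [first]                # relative zoom state: no zoom
        return curve
    return curve
```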
According to the animation data processing method, the motion data of each key frame to be processed of the animation object can be compared with the motion data of the key frames adjacent in the front and back in the same transformation dimension according to the transformation dimension, the motion data of the key frames to be processed in the transformation dimension is deleted according to the current comparison result, the current key frame data set is obtained, the motion data of each key frame in the current key frame data set in the transformation dimension can be deleted according to the motion type of the transformation dimension, therefore, the invalid motion data of the key frames can be deleted, the occupation rate of computer resources is reduced while the animation precision is kept, and the operation speed of computer equipment is improved.
For example, as shown in fig. 4B, the left image is an animation image schematic diagram obtained by using the animation data processing method provided by the embodiment of the present invention, and the right image is an animation image schematic diagram obtained by using an animation data processing method in the prior art.
In one embodiment, the animation data processing method may further include: and when the current motion type is the rotary motion type, skipping the step of acquiring the current motion type corresponding to the current conversion dimension and deleting the current dimension motion data according to the current motion type. I.e. when it is rotational motion data, step S214 is not executed after the current key frame data set is obtained.
In one embodiment, the animation data processing method may further include: and determining whether the current animation object is in a white list, and when the current animation object is in the white list, skipping the step of acquiring the current motion type corresponding to the current transformation dimension and deleting the motion data of the current dimension according to the current motion type.
Specifically, the animation objects in the white list may be preset, and may be specifically set according to actual needs. For example, the animated objects in the whitelist may include the left upper arm and the left leg. And if the current animation object is in the white list, skipping the step of acquiring the current motion type corresponding to the current transformation dimension and deleting the motion data of the current dimension according to the current motion type. That is, after the current key frame data set is obtained, step S214 is not executed.
In an embodiment, as shown in fig. 5, step S214 is to obtain a current motion type corresponding to a current transform dimension, and the step of deleting the motion data of the current dimension according to the current motion type may specifically include the following steps:
step S502, when the current motion type is a displacement motion type, determining the relative displacement state of the current animation object according to the current dimension motion data corresponding to the first key frame and the current dimension motion data corresponding to the last key frame.
Specifically, the first key frame refers to the earliest key frame of the current animation object, and the last key frame refers to the latest key frame. When the current motion type is a displacement motion type, the current transformation dimension may be the dimension corresponding to movement in the X-axis, Y-axis or Z-axis direction. The displacement state may be a static state or a moving state, and may be a relative displacement state or an absolute displacement state. The relative displacement state refers to the displacement state relative to the parent level. When the relative displacement state is a relatively static state, the current animation object is static with respect to the animation object of the parent level; if the animation object of the parent level is moving, the current animation object moves along with it, so the current animation object remains static relative to the animation object of the parent level. The relative displacement state of the current animation object can be determined from the current dimension motion data corresponding to the first key frame and to the last key frame. If the displacement value of the current dimension motion data corresponding to the first key frame is 0 and the curvature of the corresponding displacement curve is 0, and the displacement value of the current dimension motion data corresponding to the last key frame is 0 and the curvature of the corresponding displacement curve is 0, then the relative displacement state of the current animation object is a static state.
In step S504, when the relative displacement state is a static state, the current dimensional motion data corresponding to the first key frame and the current dimensional motion data corresponding to the last key frame are deleted.
Specifically, when the relative displacement state is a static state, it indicates that the current animation object does not move in the current transformation dimension relative to the animation object of the parent level, and therefore, the current dimension motion data corresponding to the first key frame and the current dimension motion data corresponding to the last key frame may be deleted.
In an embodiment, as shown in fig. 6, step S214 is to obtain a current motion type corresponding to a current transform dimension, and deleting the current dimension motion data according to the current motion type may specifically include the following steps:
step S602, when the current motion type is a zoom motion type, determining a relative zoom state of the current animation object according to the current dimension motion data corresponding to the first key frame and the current dimension motion data corresponding to the last key frame.
In particular, zooming may mean enlarging or reducing. The zoom state is either zoomed or not zoomed, and may be a relative zoom state or an absolute zoom state. The relative zoom state refers to the zoom state of the key frames from the first key frame to the last key frame relative to the first key frame, and the absolute zoom state refers to the zoom state relative to the original current animation object. For example, if the scaling value of the first key frame is 2 with a corresponding scaling-curve curvature of 0, and the scaling value of the last key frame is 2 with a corresponding scaling-curve curvature of 0, then the relative zoom state is no zoom, while the absolute zoom state is zoomed because the current animation object is twice as large as the original current animation object.
Step S604, when the relative zoom state is no zoom, deleting the current dimensional motion data corresponding to the last key frame, and retaining the current dimensional motion data corresponding to the first key frame.
Specifically, when the relative scaling state is no scaling, it indicates that the current animation object is not scaled in the current transformation dimension relative to the animation object of the first key frame, so that the current dimension motion data corresponding to the last key frame may be deleted, and the current dimension motion data corresponding to the first key frame may be retained.
In one embodiment, as shown in fig. 7, the step of obtaining the first motion data of the forward neighboring key frame in the current transformation dimension and obtaining the second motion data of the backward neighboring key frame in the current transformation dimension at step S206 further includes:
step S702, acquiring the decimal number of the motion data of each key frame in the original key frame data set in the current transformation dimension.
Specifically, after the original key frame data set is obtained, the decimal place number of the motion data of the current transformation dimension is obtained. For example, if the motion data is 2.123456789 meters, the decimal place is 9 digits.
Step S704, when the decimal place number of the motion data of the current transformation dimension exceeds the decimal place number threshold, the motion data of the current transformation dimension is reduced to obtain reduced motion data.
In particular, the decimal place threshold may be set according to the accuracy requirement of the animation. For example, it has been verified that 3 decimal places (in standard units) satisfy the precision requirement of the game, so the decimal place threshold may be 3. The reduction may be performed by ordinary rounding or by round-half-to-even (banker's) rounding, which is not limited here. The number of decimal places of the reduced motion data equals the decimal place threshold. It can be understood that the current motion data, the first motion data and the second motion data of the current key frame to be processed in the current transformation dimension may be motion data obtained by reducing the motion data of the original key frame data set.
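As a sketch of steps S702 and S704 (an assumption, not the patent's reference implementation), the helpers below count the decimal places of a motion value and reduce it only when the threshold is exceeded, using round-half-to-even as one of the rounding options mentioned above.

```python
from decimal import Decimal, ROUND_HALF_EVEN

DECIMAL_THRESHOLD = 3  # assumed threshold; the description uses 3 decimal places

def decimal_places(value: float) -> int:
    """Number of decimal places of a motion value, e.g. 2.123456789 -> 9."""
    text = format(Decimal(str(value)), "f")
    return len(text.split(".")[1]) if "." in text else 0

def simplify_motion_value(value: float, threshold: int = DECIMAL_THRESHOLD) -> float:
    """Reduce the motion value only when it has more decimal places than the
    threshold, here using round-half-to-even (banker's rounding)."""
    if decimal_places(value) <= threshold:
        return value
    quantum = Decimal(1).scaleb(-threshold)        # e.g. Decimal("0.001")
    return float(Decimal(str(value)).quantize(quantum, rounding=ROUND_HALF_EVEN))

print(decimal_places(2.123456789))   # 9
print(simplify_motion_value(2.123456789))  # 2.123
```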
The following describes an animation data processing method provided by an embodiment of the present invention, taking a game animation as an example, as shown in fig. 8, specifically including the following steps:
In step S802, the animation motion data of the game is imported into the Unity game animation engine. Suppose the game animation data includes 4 key frames with motion data for three motion types: displacement, rotation and scaling. The displacement motion data may include motion data in three transformation dimensions, the X-axis, Y-axis and Z-axis directions. The scaling motion data may likewise include motion data in the X-axis, Y-axis and Z-axis transformation dimensions.
Step S804, the animation motion data of the game is reduced: the motion data is rounded to 3 decimal places to obtain the reduced motion data.
Step S806, the current key frame to be processed is obtained, together with its forward adjacent key frame and backward adjacent key frame. On the first pass, the second key frame is taken as the current key frame to be processed according to the key frame order, and the first key frame and the third key frame are its forward adjacent key frame and backward adjacent key frame, respectively.
Step S808, the first motion data of the forward adjacent key frame in the current transformation dimension and the second motion data of the backward adjacent key frame in the current transformation dimension are obtained. Each motion direction of the displacement, rotation and scaling motion types is taken in turn as the current transformation dimension, and the data of each transformation dimension is processed separately. Taking the scaling motion data on the Y axis as the motion data of the current transformation dimension as an example, the scaling motion data of the first key frame on the Y axis is the first motion data, and the scaling motion data of the third key frame on the Y axis is the second motion data.
step S810, obtaining current motion data of the current to-be-processed key frame in the current transformation dimension, and comparing the current motion data with the first motion data and the second motion data respectively to obtain a current comparison result. The zoom motion data of the second key frame on the Y axis is compared with the zoom motion data of the first key frame on the Y axis and the zoom motion data of the third key frame on the Y axis, and if the zoom motion data of the second key frame on the Y axis is the same as the zoom motion data of the first key frame on the Y axis, the zoom motion data of the second key frame on the Y axis is the same as the zoom motion data of the third key frame on the Y axis, so the comparison results are consistent.
Step S812, the current motion data is deleted according to the current comparison result. Because the comparison result is consistent, the scaling motion data of the second key frame on the Y axis is deleted to obtain the current key frame data set.
Step S814, it is determined whether the key frames to be processed in the current transformation dimension have all been processed; if so, step S816 is entered, and if not, the method returns to step S806. Since there are two key frames to be processed, the second key frame and the third key frame, the method returns to step S806. The third key frame is taken as the current key frame to be processed; because the scaling motion data of the second key frame on the Y axis has been deleted, the forward adjacent key frame of the third key frame in the current transformation dimension is the first key frame and its backward adjacent key frame is the fourth key frame. Suppose the scaling motion data of the third key frame on the Y axis is the same as the scaling motion data of the first key frame and of the fourth key frame; the scaling motion data of the third key frame on the Y axis can therefore be deleted.
Step S816, the current motion type corresponding to the current transformation dimension is obtained, and the current dimension motion data is deleted according to the current motion type. For the scaling motion data in the Y-axis direction, the current key frame data set includes the motion data of the first key frame and the motion data of the fourth key frame. Assuming that the scaling motion data of the first key frame in the Y-axis direction is the same as that of the fourth key frame, the scaling motion data of the fourth key frame in the Y-axis direction can be deleted, and the scaling motion data of the first key frame in the Y-axis direction is retained.
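Pulling the earlier sketches together, the following hypothetical run reproduces the Y-axis scaling walk-through above for four key frames with identical scale values: steps S806 to S814 leave only the first and fourth key frames, and step S816 then keeps only the first. The data and helper names are the illustrative assumptions introduced in the previous sketches.

```python
# Hypothetical data for the Y-axis scale curve described above: four key
# frames whose (already reduced) scale values are all identical.
scale_y = [
    Keyframe(time=0.0, value=1.0),
    Keyframe(time=0.5, value=1.0),
    Keyframe(time=1.0, value=1.0),
    Keyframe(time=1.5, value=1.0),
]

current_set = remove_redundant_keyframes(scale_y)    # steps S806-S814
print([kf.time for kf in current_set])                # [0.0, 1.5]

final_set = prune_first_last(current_set, "scale")    # step S816
print([kf.time for kf in final_set])                  # [0.0]
```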
It should be understood that, although the steps in the flowcharts of the embodiments of the present invention are shown in sequence as indicated by the arrows, these steps are not necessarily performed in the order indicated by the arrows. Unless explicitly stated otherwise, the steps are not strictly limited to this order and may be performed in other orders. Moreover, at least some of the steps in the various embodiments may include multiple sub-steps or stages, which are not necessarily performed at the same moment but may be performed at different moments, and the order of execution of these sub-steps or stages is not necessarily sequential; they may be performed in turn or alternately with other steps or with at least some of the sub-steps or stages of other steps.
As shown in fig. 9, in an embodiment, an animation data processing apparatus is provided, which may be integrated in the terminal 110 or the server 120, and specifically may include an original set obtaining module 902, a frame obtaining module 904, a motion data obtaining module 906, a comparing module 908, a first deleting module 910, a returning module 912, and a second deleting module 914.
An original set obtaining module 902, configured to obtain an original key frame data set corresponding to a current animation object to be subjected to data processing, where the original key frame data set includes motion data of each key frame in a current transformation dimension.
A frame obtaining module 904, configured to obtain a current key frame to be processed, and obtain a forward neighboring key frame and a backward neighboring key frame of the current key frame to be processed in a current transformation dimension.
A motion data obtaining module 906, configured to obtain first motion data of a forward neighboring key frame in the current transformation dimension, and obtain second motion data of a backward neighboring key frame in the current transformation dimension.
The comparison module 908 is configured to obtain current motion data of the current to-be-processed keyframe in the current transformation dimension, and compare the current motion data with the first motion data and the second motion data, respectively, to obtain a current comparison result.
A first deleting module 910, configured to delete the current motion data according to the current comparison result.
The returning module 912 is configured to return to the step of obtaining the current key frame to be processed until all key frames to be processed in the original key frame data set have been processed, so as to obtain the current key frame data set.
The second deleting module 914 is configured to obtain a current motion type corresponding to the current transformation dimension, and delete current dimension motion data according to the current motion type, where the current dimension motion data is the motion data of each key frame in the current key frame data set in the current transformation dimension.
In one embodiment, the second deleting module 914 is configured to: when the current dimension motion data is only the current dimension motion data corresponding to the first key frame and the current dimension motion data corresponding to the last key frame of the current animation object, delete the current dimension motion data corresponding to the first key frame and/or the current dimension motion data corresponding to the last key frame according to the current motion type.
In one embodiment, the current comparison result includes consistent or inconsistent, and the comparison module 908 is configured to: compare the current motion data with the first motion data to obtain a first comparison result; compare the current motion data with the second motion data to obtain a second comparison result; and when the first comparison result is consistent and the second comparison result is consistent, determine that the current comparison result is consistent. The first deleting module 910 is configured to delete the current motion data when the current comparison result is consistent.
In one embodiment, as shown in FIG. 10, the second deletion module 914 includes:
a displacement state determining unit 1002, configured to determine, when the current motion type is a displacement motion type, a relative displacement state of the current animation object according to current dimensional motion data corresponding to the first key frame and current dimensional motion data corresponding to the last key frame.
The displacement data deleting unit 1004 deletes the current dimensional motion data corresponding to the first key frame and the current dimensional motion data corresponding to the last key frame when the relative displacement state is a static state.
In one embodiment, as shown in FIG. 11, the second deletion module 914 includes:
a scaling state determining unit 1102, configured to determine, when the current motion type is a scaling motion type, a relative scaling state of the current animation object according to current dimension motion data corresponding to the first key frame and current dimension motion data corresponding to the last key frame.
And a scaling data deleting unit 1104, configured to delete the current dimension motion data corresponding to the last key frame and retain the current dimension motion data corresponding to the first key frame when the relative scaling state is no scaling.
In one embodiment, as shown in fig. 12, the animation data processing apparatus further includes a first skipping module 1202, configured to skip the step of obtaining a current motion type corresponding to the current transformation dimension and deleting current dimension motion data according to the current motion type, when the current motion type is a rotational motion type.
In one embodiment, the current animation object is a skeletal animation object, when the current motion type is a displacement motion type and/or a rotation motion type, the motion data of each key frame in the original key frame data set in the current transformation dimension is relative motion data, and the relative motion data is motion data of the current animation object moving relative to a parent object of the current animation object.
In one embodiment, as shown in fig. 13, the animation data processing apparatus further comprises a second skipping module 1302, configured to: determine whether the current animation object is in a white list, and when the current animation object is in the white list, skip the step of obtaining the current motion type corresponding to the current transformation dimension and deleting the current dimension motion data according to the current motion type.
In one embodiment, as shown in fig. 14, the animation data processing apparatus further includes:
a decimal place number obtaining module 1402, configured to obtain the decimal place number of the motion data of each key frame in the original key frame data set in the current transformation dimension; and
a simplification module 1404, configured to simplify the motion data of the current transformation dimension to obtain simplified motion data when the decimal place number of the motion data of the current transformation dimension exceeds a decimal place number threshold.
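The decimal-place simplification can be sketched as follows. Counting decimal places via string formatting and the example threshold of four places are assumptions made here for illustration; the patent does not fix these details.

```python
def simplify_motion_value(value, decimal_place_threshold=4):
    """If the motion data of the current transformation dimension carries more
    decimal places than the threshold, round it to the threshold; otherwise
    return it unchanged."""
    text = f"{value:.10f}".rstrip("0")                 # assumed way of counting places
    places = len(text.split(".")[1]) if "." in text and not text.endswith(".") else 0
    if places > decimal_place_threshold:
        return round(value, decimal_place_threshold)
    return value
```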
FIG. 15 is a diagram showing an internal structure of a computer device in one embodiment. The computer device may specifically be the terminal 110 or the server 120 in FIG. 1. Taking the terminal as an example, as shown in FIG. 15, the computer device includes a processor, a memory, a network interface, an input device, and a display screen connected through a system bus. The memory includes a non-volatile storage medium and an internal memory. The non-volatile storage medium of the computer device stores an operating system and may further store a computer program that, when executed by the processor, causes the processor to implement the animation data processing method. The internal memory may also store a computer program that, when executed by the processor, causes the processor to perform the animation data processing method. The display screen of the computer device may be a liquid crystal display screen or an electronic ink display screen, and the input device of the computer device may be a touch layer covering the display screen, a key, a trackball or a touch pad arranged on the housing of the computer device, or an external keyboard, touch pad or mouse, or the like.
Those skilled in the art will appreciate that the architecture shown in FIG. 15 is merely a block diagram of some of the structures associated with the disclosed aspects and is not intended to limit the computer devices to which the disclosed aspects apply; a particular computer device may include more or fewer components than those shown, combine certain components, or have a different arrangement of components.
In one embodiment, the animation data processing apparatus provided in the present application may be implemented in the form of a computer program, and the computer program may run on the computer device shown in FIG. 15. The memory of the computer device may store the program modules constituting the animation data processing apparatus, such as the original set acquisition module 902, the frame acquisition module 904, the motion data acquisition module 906, the comparison module 908, the first deletion module 910, the return module 914, and the second deletion module 916 shown in FIG. 9. The computer program constituted by these program modules causes the processor to execute the steps of the animation data processing method of the embodiments of the present application described in this specification.
For example, the computer device shown in FIG. 15 may obtain, through the original set acquisition module 902 in the animation data processing apparatus shown in FIG. 9, an original key frame data set corresponding to the current animation object to be subjected to data processing, where the original key frame data set includes motion data of each key frame in the current transformation dimension. The frame acquisition module 904 obtains the current key frame to be processed, and obtains the forward adjacent key frame and the backward adjacent key frame of the current key frame to be processed in the current transformation dimension. The motion data acquisition module 906 obtains first motion data of the forward adjacent key frame in the current transformation dimension and second motion data of the backward adjacent key frame in the current transformation dimension. The comparison module 908 obtains current motion data of the current key frame to be processed in the current transformation dimension, and compares the current motion data with the first motion data and the second motion data respectively to obtain a current comparison result. The first deletion module 910 deletes the current motion data according to the current comparison result. The return module 914 returns to the step of obtaining the current key frame to be processed until the key frames to be processed in the original key frame data set are processed, so as to obtain a current key frame data set. The second deletion module 916 obtains a current motion type corresponding to the current transformation dimension, and deletes current dimension motion data according to the current motion type, where the current dimension motion data is the motion data of each key frame in the current key frame data set in the current transformation dimension.
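Putting the earlier sketches together, a hypothetical per-dimension driver in the spirit of the module walk-through above might look like the following. It reuses remove_redundant_keyframes, prune_displacement_dimension, and prune_scaling_dimension from the previous examples, and the motion-type labels and data layout remain illustrative assumptions rather than the patent's own definitions.

```python
DISPLACEMENT, ROTATION, SCALING = "displacement", "rotation", "scaling"

def process_dimension(keyframes, motion_type):
    """Process one transformation dimension: neighbour comparison first,
    then the motion-type-specific deletion of first/last key-frame data."""
    # Compare each key frame with its forward and backward neighbours.
    current_set = remove_redundant_keyframes(keyframes)

    # Motion-type-specific deletion (the second deletion step).
    if motion_type == ROTATION:
        return current_set                         # rotation: this step is skipped

    if len(current_set) == 2:                      # only first and last key frames remain
        first_value, last_value = current_set[0][1], current_set[-1][1]
        if motion_type == DISPLACEMENT:
            keep = prune_displacement_dimension(first_value, last_value)
        else:
            keep = prune_scaling_dimension(first_value, last_value)
        current_set = current_set[:len(keep)]
    return current_set

# Example: a translation channel whose value never changes collapses to nothing.
frames = [(0, 1.0), (5, 1.0), (10, 1.0)]
print(process_dimension(frames, DISPLACEMENT))     # -> []
```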
In one embodiment, a computer device is proposed, the computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the following steps when executing the computer program: acquiring an original key frame data set corresponding to a current animation object to be subjected to data processing, wherein the original key frame data set comprises motion data of each key frame in a current transformation dimension; acquiring a current key frame to be processed, and acquiring a forward adjacent key frame and a backward adjacent key frame of the current key frame to be processed in a current transformation dimension; acquiring first motion data of a forward adjacent key frame in a current transformation dimension, and acquiring second motion data of a backward adjacent key frame in the current transformation dimension; acquiring current motion data of a current key frame to be processed in a current transformation dimension, and comparing the current motion data with first motion data and second motion data respectively to obtain a current comparison result; deleting the current motion data according to the current comparison result; returning to the step of acquiring the current key frame to be processed until the key frame to be processed in the original key frame data set is processed, and obtaining a current key frame data set; and acquiring a current motion type corresponding to the current transformation dimension, and deleting current dimension motion data according to the current motion type, wherein the current dimension motion data is the motion data of each key frame in the current key frame data set in the current transformation dimension.
In one embodiment, the step, executed by the processor, of acquiring a current motion type corresponding to the current transformation dimension and deleting current dimension motion data according to the current motion type includes: when the current dimension motion data is only the current dimension motion data corresponding to the first key frame and the current dimension motion data corresponding to the last key frame of the current animation object, deleting the current dimension motion data corresponding to the first key frame and/or the current dimension motion data corresponding to the last key frame according to the current motion type.
In one embodiment, when the current dimensional motion data is only the current dimensional motion data corresponding to the first key frame and the current dimensional motion data corresponding to the last key frame of the current animation object, the step of deleting the current dimensional motion data corresponding to the first key frame and/or the current dimensional motion data corresponding to the last key frame according to the current motion type, which is executed by the processor, includes: when the current motion type is a displacement motion type, determining the relative displacement state of the current animation object according to the current dimension motion data corresponding to the first key frame and the current dimension motion data corresponding to the last key frame; and when the relative displacement state is a static state, deleting the current dimensional motion data corresponding to the first key frame and the current dimensional motion data corresponding to the last key frame.
In one embodiment, when the current dimensional motion data is only the current dimensional motion data corresponding to the first key frame and the current dimensional motion data corresponding to the last key frame of the current animation object, the step of deleting the current dimensional motion data corresponding to the first key frame and/or the current dimensional motion data corresponding to the last key frame according to the current motion type, which is executed by the processor, includes: when the current motion type is a scaling motion type, determining the relative scaling state of the current animation object according to the current dimension motion data corresponding to the first key frame and the current dimension motion data corresponding to the last key frame; and when the relative scaling state is no scaling, deleting the current dimension motion data corresponding to the last key frame and keeping the current dimension motion data corresponding to the first key frame.
In one embodiment, the processor, when executing the computer program, further performs the steps of: when the current motion type is a rotary motion type, skipping the step of acquiring the current motion type corresponding to the current transformation dimension and deleting the current dimension motion data according to the current motion type.
In one embodiment, the current animation object is a skeletal animation object, when the current motion type is a displacement motion type and/or a rotation motion type, the motion data of each key frame in the original key frame data set in the current transformation dimension is relative motion data, and the relative motion data is motion data of the current animation object moving relative to a parent object of the current animation object.
In one embodiment, the processor, when executing the computer program, further performs the steps of: determining whether the current animation object is in a white list; and when the current animation object is in the white list, skipping the step of acquiring the current motion type corresponding to the current transformation dimension and deleting the motion data of the current dimension according to the current motion type.
In one embodiment, the step, executed by the processor, of obtaining first motion data of the forward adjacent key frame in the current transformation dimension and obtaining second motion data of the backward adjacent key frame in the current transformation dimension further comprises: acquiring the decimal place number of the motion data of each key frame in the original key frame data set in the current transformation dimension; and when the decimal place number of the motion data of the current transformation dimension exceeds a decimal place number threshold, simplifying the motion data of the current transformation dimension to obtain simplified motion data.
In one embodiment, the current comparison result includes consistency or inconsistency, and the step, executed by the processor, of comparing the current motion data with the first motion data and the second motion data respectively to obtain the current comparison result includes: comparing the current motion data with the first motion data to obtain a first comparison result; comparing the current motion data with the second motion data to obtain a second comparison result; and when the first comparison result is consistent and the second comparison result is consistent, determining that the current comparison result is consistent. The step, executed by the processor, of deleting the current motion data according to the current comparison result includes: deleting the current motion data when the current comparison result is consistent.
In one embodiment, a computer readable storage medium is provided, having a computer program stored thereon, which, when executed by a processor, causes the processor to perform the steps of: acquiring an original key frame data set corresponding to a current animation object to be subjected to data processing, wherein the original key frame data set comprises motion data of each key frame in a current transformation dimension; acquiring a current key frame to be processed, and acquiring a forward adjacent key frame and a backward adjacent key frame of the current key frame to be processed in a current transformation dimension; acquiring first motion data of a forward adjacent key frame in a current transformation dimension, and acquiring second motion data of a backward adjacent key frame in the current transformation dimension; acquiring current motion data of a current key frame to be processed in a current transformation dimension, and comparing the current motion data with first motion data and second motion data respectively to obtain a current comparison result; deleting the current motion data according to the current comparison result; returning to the step of acquiring the current key frame to be processed until the key frame to be processed in the original key frame data set is processed, and obtaining a current key frame data set; and acquiring a current motion type corresponding to the current transformation dimension, and deleting current dimension motion data according to the current motion type, wherein the current dimension motion data is the motion data of each key frame in the current key frame data set in the current transformation dimension.
In one embodiment, the step, executed by the processor, of obtaining a current motion type corresponding to the current transformation dimension and deleting current dimension motion data according to the current motion type, where the current dimension motion data is the motion data of each key frame in the current key frame data set in the current transformation dimension, includes: when the current dimension motion data is only the current dimension motion data corresponding to the first key frame and the current dimension motion data corresponding to the last key frame of the current animation object, deleting the current dimension motion data corresponding to the first key frame and/or the current dimension motion data corresponding to the last key frame according to the current motion type.
In one embodiment, when the current dimensional motion data is only the current dimensional motion data corresponding to the first key frame and the current dimensional motion data corresponding to the last key frame of the current animation object, the step of deleting the current dimensional motion data corresponding to the first key frame and/or the current dimensional motion data corresponding to the last key frame according to the current motion type, which is executed by the processor, includes: when the current motion type is a displacement motion type, determining the relative displacement state of the current animation object according to the current dimension motion data corresponding to the first key frame and the current dimension motion data corresponding to the last key frame; and when the relative displacement state is a static state, deleting the current dimensional motion data corresponding to the first key frame and the current dimensional motion data corresponding to the last key frame.
In one embodiment, when the current dimensional motion data is only the current dimensional motion data corresponding to the first key frame and the current dimensional motion data corresponding to the last key frame of the current animation object, the step of deleting the current dimensional motion data corresponding to the first key frame and/or the current dimensional motion data corresponding to the last key frame according to the current motion type, which is executed by the processor, includes: when the current motion type is a scaling motion type, determining the relative scaling state of the current animation object according to the current dimension motion data corresponding to the first key frame and the current dimension motion data corresponding to the last key frame; and when the relative scaling state is no scaling, deleting the current dimension motion data corresponding to the last key frame and keeping the current dimension motion data corresponding to the first key frame.
In one embodiment, the processor, when executing the computer program, further performs the steps of: when the current motion type is a rotary motion type, skipping the step of acquiring the current motion type corresponding to the current transformation dimension and deleting the current dimension motion data according to the current motion type, wherein the current dimension motion data is the motion data of each key frame in the current key frame data set in the current transformation dimension.
In one embodiment, the current animation object is a skeletal animation object, when the current motion type is a displacement motion type and/or a rotation motion type, the motion data of each key frame in the original key frame data set in the current transformation dimension is relative motion data, and the relative motion data is motion data of the current animation object moving relative to a parent object of the current animation object.
In one embodiment, the processor, when executing the computer program, further performs the steps of: determining whether the current animation object is in a white list; and when the current animation object is in the white list, skipping the step of acquiring the current motion type corresponding to the current transformation dimension and deleting the motion data of the current dimension according to the current motion type.
In one embodiment, the step, executed by the processor, of obtaining first motion data of the forward adjacent key frame in the current transformation dimension and obtaining second motion data of the backward adjacent key frame in the current transformation dimension further comprises: acquiring the decimal place number of the motion data of each key frame in the original key frame data set in the current transformation dimension; and when the decimal place number of the motion data of the current transformation dimension exceeds a decimal place number threshold, simplifying the motion data of the current transformation dimension to obtain simplified motion data.
In one embodiment, the current comparison result includes consistency or inconsistency, and the step, executed by the processor, of comparing the current motion data with the first motion data and the second motion data respectively to obtain the current comparison result includes: comparing the current motion data with the first motion data to obtain a first comparison result; comparing the current motion data with the second motion data to obtain a second comparison result; and when the first comparison result is consistent and the second comparison result is consistent, determining that the current comparison result is consistent. The step, executed by the processor, of deleting the current motion data according to the current comparison result includes: deleting the current motion data when the current comparison result is consistent.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program instructing relevant hardware, and the program can be stored in a non-volatile computer readable storage medium; when executed, the program can include the processes of the embodiments of the methods described above. Any reference to memory, storage, database, or other medium used in the embodiments provided herein may include non-volatile and/or volatile memory, among others. Non-volatile memory can include read-only memory (ROM), Programmable ROM (PROM), Electrically Programmable ROM (EPROM), Electrically Erasable Programmable ROM (EEPROM), or flash memory. Volatile memory can include Random Access Memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms such as Static RAM (SRAM), Dynamic RAM (DRAM), Synchronous DRAM (SDRAM), Double Data Rate SDRAM (DDRSDRAM), Enhanced SDRAM (ESDRAM), Synchronous Link DRAM (SLDRAM), Rambus Direct RAM (RDRAM), direct bus dynamic RAM (DRDRAM), and memory bus dynamic RAM (RDRAM).
The technical features of the above embodiments can be combined arbitrarily. For brevity, not every possible combination of the technical features in the above embodiments has been described; however, as long as the combinations of these technical features are not contradictory, they should be considered to be within the scope of this specification.
The above-mentioned embodiments merely express several implementations of the present invention, and their descriptions are relatively specific and detailed, but they should not be construed as limiting the scope of the invention. It should be noted that, for a person of ordinary skill in the art, several variations and modifications can be made without departing from the inventive concept, and these all fall within the protection scope of the present invention. Therefore, the protection scope of this patent shall be subject to the appended claims.

Claims (20)

1. A method of animation data processing, the method comprising:
acquiring an original key frame data set corresponding to a current animation object to be subjected to data processing, wherein the original key frame data set comprises motion data of each key frame in a current transformation dimension;
acquiring a current key frame to be processed, and acquiring a forward adjacent key frame and a backward adjacent key frame of the current key frame to be processed in the current transformation dimension;
acquiring first motion data of the forward adjacent key frame in the current transformation dimension, and acquiring second motion data of the backward adjacent key frame in the current transformation dimension;
acquiring current motion data of the current to-be-processed key frame in the current transformation dimension, and comparing the current motion data with the first motion data and the second motion data respectively to obtain a current comparison result;
deleting the current motion data according to the current comparison result;
returning to the step of acquiring the current key frame to be processed until the key frame to be processed in the original key frame data set is processed, and obtaining a current key frame data set;
and acquiring a current motion type corresponding to the current transformation dimension, and deleting current dimension motion data according to the current motion type, wherein the current dimension motion data is the motion data of each key frame in the current key frame data set in the current transformation dimension.
2. The method of claim 1, wherein the step of deleting the current dimension motion data according to the current motion type comprises:
and when the current dimension motion data are only the current dimension motion data corresponding to the head key frame and the current dimension motion data corresponding to the tail key frame of the current animation object, deleting the current dimension motion data corresponding to the head key frame and/or the current dimension motion data corresponding to the tail key frame according to the current motion type.
3. The method according to claim 2, wherein, when the current dimensional motion data is only the current dimensional motion data corresponding to the head key frame and the current dimensional motion data corresponding to the tail key frame of the current animation object, the step of deleting the current dimensional motion data corresponding to the head key frame and/or the current dimensional motion data corresponding to the tail key frame according to the current motion type comprises:
when the current motion type is a displacement motion type, determining the relative displacement state of the current animation object according to the current dimension motion data corresponding to the first key frame and the current dimension motion data corresponding to the tail key frame;
and when the relative displacement state is a static state, deleting the current dimension motion data corresponding to the first key frame and the current dimension motion data corresponding to the tail key frame.
4. The method according to claim 2, wherein, when the current dimensional motion data is only the current dimensional motion data corresponding to the head key frame and the current dimensional motion data corresponding to the tail key frame of the current animation object, the step of deleting the current dimensional motion data corresponding to the head key frame and/or the current dimensional motion data corresponding to the tail key frame according to the current motion type comprises:
when the current motion type is a scaling motion type, determining the relative scaling state of the current animation object according to the current dimension motion data corresponding to the first key frame and the current dimension motion data corresponding to the last key frame;
and when the relative scaling state is no scaling, deleting the current dimension motion data corresponding to the tail key frame and reserving the current dimension motion data corresponding to the head key frame.
5. The method of claim 1, further comprising:
and when the current motion type is a rotary motion type, skipping the step of deleting the current dimension motion data according to the current motion type.
6. The method according to any one of claims 1 to 4, wherein the current animated object is a skeletal animated object, and when the current motion type is a displacement motion type and/or a rotation motion type, the motion data of each key frame in the original key frame data set in the current transformation dimension is relative motion data, and the relative motion data is motion data of the current animated object moving relative to a parent object of the current animated object.
7. The method according to any one of claims 1 to 4, further comprising:
determining whether the current animation object is in a white list;
and when the current animation object is in the white list, skipping the step of acquiring the current motion type corresponding to the current transformation dimension and deleting the current dimension motion data according to the current motion type.
8. The method of claim 1, wherein the step of obtaining the first motion data of the forward neighboring keyframe in the current transform dimension and obtaining the second motion data of the backward neighboring keyframe in the current transform dimension further comprises:
acquiring the decimal place number of the motion data of each key frame in the original key frame data set in the current transformation dimension;
and when the decimal place number of the motion data of the current transformation dimension exceeds a decimal place number threshold value, simplifying the motion data of the current transformation dimension to obtain simplified motion data.
9. The method of claim 1, wherein the current comparison result comprises consistency or inconsistency, and the step of comparing the current motion data with the first motion data and the second motion data respectively to obtain the current comparison result comprises:
comparing the current motion data with the first motion data to obtain a first comparison result;
comparing the current motion data with the second motion data to obtain a second comparison result;
when the first comparison result is consistent and the second comparison result is consistent, judging that the current comparison result is consistent;
the step of deleting the current motion data according to the current comparison result comprises:
and deleting the current motion data when the current comparison result is consistent.
10. An animation data processing apparatus, the apparatus comprising:
the system comprises an original set acquisition module, a data processing module and a data processing module, wherein the original set acquisition module is used for acquiring an original key frame data set corresponding to a current animation object to be subjected to data processing, and the original key frame data set comprises motion data of each key frame in a current transformation dimension;
the frame acquisition module is used for acquiring a current key frame to be processed and acquiring a forward adjacent key frame and a backward adjacent key frame of the current key frame to be processed in the current transformation dimension;
a motion data obtaining module, configured to obtain first motion data of the forward adjacent keyframe in the current transformation dimension, and obtain second motion data of the backward adjacent keyframe in the current transformation dimension;
the comparison module is used for acquiring current motion data of the current to-be-processed key frame in the current transformation dimension, and comparing the current motion data with the first motion data and the second motion data respectively to obtain a current comparison result;
the first deleting module is used for deleting the current motion data according to the current comparison result;
a returning module, configured to return to the step of obtaining the current key frame to be processed until the key frame to be processed in the original key frame data set is processed, so as to obtain a current key frame data set;
and the second deleting module is used for acquiring a current motion type corresponding to the current transformation dimension, and deleting current dimension motion data according to the current motion type, wherein the current dimension motion data is the motion data of each key frame in the current key frame data set in the current transformation dimension.
11. The apparatus of claim 10, wherein the second deletion module is configured to:
and when the current dimension motion data are only the current dimension motion data corresponding to the head key frame and the current dimension motion data corresponding to the tail key frame of the current animation object, deleting the current dimension motion data corresponding to the head key frame and/or the current dimension motion data corresponding to the tail key frame according to the current motion type.
12. The apparatus of claim 11, wherein the second deletion module comprises:
a displacement state determination unit, configured to determine, when the current motion type is a displacement motion type, a relative displacement state of the current animation object according to current dimensional motion data corresponding to the first key frame and current dimensional motion data corresponding to the last key frame;
and the displacement data deleting unit is used for deleting the current dimensional motion data corresponding to the first key frame and the current dimensional motion data corresponding to the tail key frame when the relative displacement state is a static state.
13. The apparatus of claim 11, wherein the second deletion module comprises:
a scaling state determining unit, configured to determine, when the current motion type is a scaling motion type, a relative scaling state of the current animation object according to current dimensional motion data corresponding to the first key frame and current dimensional motion data corresponding to the last key frame;
and the scaling data deleting unit is used for deleting the current dimension motion data corresponding to the tail key frame and reserving the current dimension motion data corresponding to the head key frame when the relative scaling state is no scaling.
14. The apparatus of claim 10, further comprising:
and the first skipping module is used for skipping the step of deleting the current dimension motion data according to the current motion type when the current motion type is the rotary motion type.
15. The apparatus according to any one of claims 10 to 13, wherein the current animated object is a skeletal animated object, and when the current motion type is a displacement motion type and/or a rotation motion type, the motion data of each key frame in the original key frame data set in the current transformation dimension is relative motion data, and the relative motion data is motion data of the current animated object moving relative to a parent object of the current animated object.
16. The apparatus according to any one of claims 10 to 13, further comprising a second skipping module for:
determining whether the current animation object is in a white list;
and when the current animation object is in the white list, skipping the step of acquiring the current motion type corresponding to the current transformation dimension and deleting the current dimension motion data according to the current motion type.
17. The apparatus of claim 10, further comprising:
a decimal place number obtaining module, configured to obtain the decimal place number of the motion data of each key frame in the original key frame data set in the current transformation dimension;
and the simplification module is used for simplifying the motion data of the current transformation dimension to obtain the simplified motion data when the decimal place number of the motion data of the current transformation dimension exceeds a decimal place number threshold value.
18. The apparatus of claim 10, wherein the current comparison result comprises consistency or inconsistency, and wherein the comparison module is configured to:
comparing the current motion data with the first motion data to obtain a first comparison result;
comparing the current motion data with the second motion data to obtain a second comparison result;
when the first comparison result is consistent and the second comparison result is consistent, judging that the current comparison result is consistent;
the first deletion module is configured to:
and deleting the current motion data when the current comparison result is consistent.
19. A computer device comprising a memory and a processor, the memory having stored therein a computer program which, when executed by the processor, causes the processor to carry out the steps of the animation data processing method as claimed in any one of claims 1 to 9.
20. A computer-readable storage medium, having stored thereon a computer program which, when executed by a processor, causes the processor to carry out the steps of the animation data processing method as claimed in any one of claims 1 to 9.
CN201810141715.7A 2018-02-11 2018-02-11 Animation data processing method, animation data processing device, computer equipment and storage medium Active CN108320322B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810141715.7A CN108320322B (en) 2018-02-11 2018-02-11 Animation data processing method, animation data processing device, computer equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810141715.7A CN108320322B (en) 2018-02-11 2018-02-11 Animation data processing method, animation data processing device, computer equipment and storage medium

Publications (2)

Publication Number Publication Date
CN108320322A CN108320322A (en) 2018-07-24
CN108320322B true CN108320322B (en) 2021-06-08

Family

ID=62902910

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810141715.7A Active CN108320322B (en) 2018-02-11 2018-02-11 Animation data processing method, animation data processing device, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN108320322B (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109544664B (en) * 2018-11-21 2023-03-28 北京像素软件科技股份有限公司 Animation data processing method and device, electronic equipment and readable storage medium
CN109725948B (en) * 2018-12-11 2021-09-21 麒麟合盛网络技术股份有限公司 Animation resource configuration method and device
CN113392163B (en) * 2020-03-12 2024-02-06 广东博智林机器人有限公司 Data processing method, action simulation method, device, equipment and medium
CN111589145B (en) * 2020-04-22 2023-03-24 腾讯科技(深圳)有限公司 Virtual article display method, device, terminal and storage medium
CN112354186A (en) * 2020-11-10 2021-02-12 网易(杭州)网络有限公司 Game animation model control method, device, electronic equipment and storage medium
CN114866802B (en) * 2022-04-14 2024-04-19 青岛海尔科技有限公司 Video stream sending method and device, storage medium and electronic device

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1369862A (en) * 2001-02-15 2002-09-18 英业达股份有限公司 Method and system for creating animation
WO2014006786A1 (en) * 2012-07-03 2014-01-09 国立大学法人大阪大学 Characteristic value extraction device and characteristic value extraction method
CN103927776A (en) * 2014-03-28 2014-07-16 浙江中南卡通股份有限公司 Animation curve optimization method
CN104881869A (en) * 2015-05-15 2015-09-02 浙江大学 Real time panorama tracing and splicing method for mobile platform
CN106097296A (en) * 2015-05-01 2016-11-09 佳能株式会社 Video generation device and image generating method
CN106504267A (en) * 2016-10-19 2017-03-15 东南大学 A kind of motion of virtual human data critical frame abstracting method
US9734615B1 (en) * 2013-03-14 2017-08-15 Lucasfilm Entertainment Company Ltd. Adaptive temporal sampling
CN107430773A (en) * 2015-03-20 2017-12-01 高通股份有限公司 Strengthen the system and method for the depth map retrieval of mobile object using active detection technology
CN107610212A (en) * 2017-07-25 2018-01-19 深圳大学 Scene reconstruction method, device, computer equipment and computer-readable storage medium

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10616552B2 (en) * 2016-03-25 2020-04-07 Intel Corporation Multi-modal real-time camera localization and environment mapping

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1369862A (en) * 2001-02-15 2002-09-18 英业达股份有限公司 Method and system for creating animation
WO2014006786A1 (en) * 2012-07-03 2014-01-09 国立大学法人大阪大学 Characteristic value extraction device and characteristic value extraction method
US9734615B1 (en) * 2013-03-14 2017-08-15 Lucasfilm Entertainment Company Ltd. Adaptive temporal sampling
CN103927776A (en) * 2014-03-28 2014-07-16 浙江中南卡通股份有限公司 Animation curve optimization method
CN107430773A (en) * 2015-03-20 2017-12-01 高通股份有限公司 Strengthen the system and method for the depth map retrieval of mobile object using active detection technology
CN106097296A (en) * 2015-05-01 2016-11-09 佳能株式会社 Video generation device and image generating method
CN104881869A (en) * 2015-05-15 2015-09-02 浙江大学 Real time panorama tracing and splicing method for mobile platform
CN106504267A (en) * 2016-10-19 2017-03-15 东南大学 A kind of motion of virtual human data critical frame abstracting method
CN107610212A (en) * 2017-07-25 2018-01-19 深圳大学 Scene reconstruction method, device, computer equipment and computer-readable storage medium

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
A C3D-Based Convolutional Neural Network for Frame Dropping Detection in a Single Video Shot;C. Long 等;《2017 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)》;20170824;第1898-1906页 *
三维运动控制技术的研究与实现;路海涛 等;《现代防御技术》;20120630;第40卷(第3期);第172-177页 *
预选策略和重建误差优化的运动捕获数据关键帧提取;蔡美玲 等;《计算机辅助设计与图形学学报》;20121130;第24卷(第11期);第1485-1492页 *

Also Published As

Publication number Publication date
CN108320322A (en) 2018-07-24

Similar Documents

Publication Publication Date Title
CN108320322B (en) Animation data processing method, animation data processing device, computer equipment and storage medium
JP7258400B6 (en) Video data processing method, video data processing device, computer equipment, and computer program
CN107890671B (en) Three-dimensional model rendering method and device for WEB side, computer equipment and storage medium
CN109493417B (en) Three-dimensional object reconstruction method, device, equipment and storage medium
US11954828B2 (en) Portrait stylization framework using a two-path image stylization and blending
CN112967381B (en) Three-dimensional reconstruction method, apparatus and medium
CN113096249B (en) Method for training vertex reconstruction model, image reconstruction method and electronic equipment
Cao et al. Ciaosr: Continuous implicit attention-in-attention network for arbitrary-scale image super-resolution
KR20230035385A (en) Animation migration method and device, apparatus, storage medium and computer program product
CN111798545A (en) Method and device for playing skeleton animation, electronic equipment and readable storage medium
CN113426112A (en) Game picture display method and device, storage medium and electronic equipment
KR20210040305A (en) Method and apparatus for generating images
CN112419183A (en) Method and device for reducing zoomed image, computer equipment and storage medium
CN112819687B (en) Cross-domain image conversion method, device, computer equipment and storage medium based on unsupervised neural network
CN110431838B (en) Method and system for providing dynamic content of face recognition camera
CN108986031B (en) Image processing method, device, computer equipment and storage medium
Somraj et al. Temporal view synthesis of dynamic scenes through 3D object motion estimation with multi-plane images
CN113419806B (en) Image processing method, device, computer equipment and storage medium
CN106548501B (en) Image drawing method and device
CN116503262A (en) Vectorization processing method and device for house type diagram and electronic equipment
Saidi et al. Implementation of a real‐time stereo vision algorithm on a cost‐effective heterogeneous multicore platform
CN114820908B (en) Virtual image generation method and device, electronic equipment and storage medium
Yu et al. Image deformation based on contour using moving integral least squares
WO2023174355A1 (en) Video super-resolution method and device
US20230364783A1 (en) Inverse kinematics computational solver system, method, and apparatus

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant