CN111773700A - Animation data processing method and device - Google Patents


Info

Publication number
CN111773700A
CN111773700A (application CN202010725151.9A)
Authority
CN
China
Prior art keywords
animation
data
source matrix
bone
original
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010725151.9A
Other languages
Chinese (zh)
Other versions
CN111773700B (en)
Inventor
Huang Zhen (黄振)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Netease Hangzhou Network Co Ltd
Original Assignee
Netease Hangzhou Network Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Netease Hangzhou Network Co Ltd
Priority to CN202010725151.9A
Publication of CN111773700A
Application granted
Publication of CN111773700B
Legal status: Active (Current)
Anticipated expiration

Classifications

    • A  HUMAN NECESSITIES
    • A63  SPORTS; GAMES; AMUSEMENTS
    • A63F  CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00  Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/50  Controlling the output signals based on the game progress
    • A63F13/52  Controlling the output signals based on the game progress involving aspects of the displayed game scene
    • G  PHYSICS
    • G06  COMPUTING; CALCULATING OR COUNTING
    • G06F  ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00  Pattern recognition
    • G06F18/20  Analysing
    • G06F18/21  Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/213  Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods
    • G  PHYSICS
    • G06  COMPUTING; CALCULATING OR COUNTING
    • G06T  IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T13/00  Animation
    • G06T13/20  3D [Three Dimensional] animation
    • G06T13/40  3D [Three Dimensional] animation of characters, e.g. humans, animals or virtual beings
    • A  HUMAN NECESSITIES
    • A63  SPORTS; GAMES; AMUSEMENTS
    • A63F  CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F2300/00  Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F2300/60  Methods for processing data by generating or executing the game program
    • A63F2300/66  Methods for processing data by generating or executing the game program for rendering three dimensional images
    • A63F2300/6607  Methods for processing data by generating or executing the game program for rendering three dimensional images for animating game characters, e.g. skeleton kinematics

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • Multimedia (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Evolutionary Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Artificial Intelligence (AREA)
  • Processing Or Creating Images (AREA)

Abstract

An embodiment of the invention provides a method and a device for processing animation data. Original animation data are obtained, and a corresponding full-frame-number animation is generated from them. A source matrix for the original animation data is then generated from the full-frame-number animation and the bone data of each bone, where each element of the source matrix represents the bone data of one bone in one frame of the full-frame-number animation. The original animation data are then compressed according to the source matrix to generate an animation file. By baking a full-frame-number animation and deriving the source matrix from it, features of the original animation data are extracted through dimensionality reduction, and the animation data are compressed according to the resulting source matrix. While compressing the animation data, this noticeably improves the compression ratio and greatly reduces the file size of the compressed file.

Description

Animation data processing method and device
Technical Field
The present invention relates to the field of data processing technologies, and in particular, to a method and an apparatus for processing animation data.
Background
In games, artists routinely produce animation files that contain all of the animation's data, for example the frame data, tangent data, and time data of every animation frame. A finished animation file can be large and must be compressed before being imported into the game engine for rendering; otherwise it easily causes high memory usage and low rendering efficiency in the engine.
Currently, compression schemes for animation data include the following:
1. removing key frames whose data do not change;
2. reducing the data accessed by the animation curves;
3. reducing the floating-point precision (number of bits) used for time or value data.
However, although the above schemes can compress animation data, their compression ratio is limited, and their effect is poor on large volumes of animation data. A more efficient way of compressing animation data is therefore needed.
Disclosure of Invention
The embodiment of the invention provides a method for processing animation data, which aims to solve the problems of low compression ratio and poor compression effect of the animation data in the prior art.
Correspondingly, the embodiment of the invention also provides a processing device of the animation data, which is used for ensuring the realization and the application of the method.
In order to solve the above problem, an embodiment of the present invention discloses a method for processing animation data, including:
acquiring original animation data, wherein the original animation data comprises bone data of each bone;
generating full-frame number animation corresponding to the original animation data according to the original animation data;
generating a source matrix corresponding to the original animation data according to the full frame number animation and the bone data of each bone, wherein each element in the source matrix is used for representing the bone data of each bone corresponding to each frame animation in the full frame number animation;
and compressing the original animation data according to the source matrix to generate an animation file.
Optionally, the generating a source matrix corresponding to the original animation data according to the full frame number animation and the bone data of each bone includes:
acquiring the frame number of the full frame number animation and the number of bones;
comparing the frame number with the number of bones to obtain a comparison result;
and generating a source matrix corresponding to the original animation data according to the full frame number animation, the bone data of each bone and the comparison result.
Optionally, the generating a source matrix corresponding to the original animation data according to the full frame number animation, the bone data of each bone, and the comparison result includes:
and if the comparison result shows that the frame number is smaller than the number of bones, taking each frame of the full-frame-number animation as a column of the source matrix, and taking each bone as a row of the source matrix.
Optionally, the generating a source matrix corresponding to the original animation data according to the full frame number animation, the bone data of each bone, and the comparison result includes:
and if the comparison result shows that the frame number is greater than the number of bones, taking each frame of the full-frame-number animation as a row of the source matrix, and taking each bone as a column of the source matrix.
Optionally, compressing the original animation data according to the source matrix to generate an animation file, including:
determining a target matrix aiming at the source matrix according to the row data and the column data of the source matrix, wherein the target matrix is formed according to the characteristic vector corresponding to the source matrix;
obtaining target animation data after the original animation data are compressed according to the source matrix and the target matrix;
and generating the animation file matched with the original animation data by adopting the target matrix and the target animation data.
Optionally, the determining a target matrix for the source matrix according to the row data and the column data of the source matrix includes:
determining the mean value of the elements of each column or each row of the source matrix;
determining a covariance matrix matched with the source matrix by using the mean value;
determining an eigenvalue of the source matrix and an eigenvector corresponding to the eigenvalue according to the covariance matrix;
extracting a preset number of eigenvectors as target eigenvectors according to the magnitude of the eigenvalues;
and generating a target matrix aiming at the source matrix by adopting the target characteristic vector.
Optionally, the generating, according to the original animation data, a full frame number animation corresponding to the original animation data includes:
acquiring a frame number threshold value aiming at the original animation data;
and baking the original animation data according to the frame number threshold value to generate a full frame number animation corresponding to the original animation data.
The embodiment of the invention also discloses a device for processing the animation data, which comprises:
an animation data acquisition module, configured to acquire original animation data, the original animation data comprising bone data of each bone;
the animation generation module is used for generating full frame number animation corresponding to the original animation data according to the original animation data;
a source matrix generation module, configured to generate a source matrix corresponding to the original animation data according to the full-frame-number animation and the bone data of each bone, where each element in the source matrix is used to represent the bone data of each bone corresponding to each frame of animation in the full-frame-number animation;
and the animation file generation module is used for compressing the original animation data according to the source matrix to generate an animation file.
Optionally, the source matrix generating module includes:
a frame-count and bone-count acquisition submodule, configured to acquire the frame number of the full-frame-number animation and the number of bones;
a data comparison submodule, configured to compare the frame number with the number of bones to obtain a comparison result;
and the source matrix generation submodule is used for generating a source matrix corresponding to the original animation data according to the full frame number animation, the bone data of each bone and the comparison result.
Optionally, the source matrix generation submodule is specifically configured to:
and if the comparison result shows that the frame number is smaller than the number of bones, taking each frame of the full-frame-number animation as a column of the source matrix, and taking each bone as a row of the source matrix.
Optionally, the source matrix generation submodule is specifically configured to:
and if the comparison result shows that the frame number is greater than the number of bones, taking each frame of the full-frame-number animation as a row of the source matrix, and taking each bone as a column of the source matrix.
Optionally, the animation file generation module includes:
the target matrix determining submodule is used for determining a target matrix aiming at the source matrix according to the row data and the column data of the source matrix, and the target matrix is formed according to the characteristic vector corresponding to the source matrix;
the target animation data obtaining submodule is used for obtaining target animation data after the original animation data are compressed according to the source matrix and the target matrix;
and the animation file generation submodule is used for generating the animation file matched with the original animation data by adopting the target matrix and the target animation data.
Optionally, the target matrix determination submodule is specifically configured to:
determining the mean value of the elements of each column or each row of the source matrix;
determining a covariance matrix matched with the source matrix by using the mean value;
determining an eigenvalue of the source matrix and an eigenvector corresponding to the eigenvalue according to the covariance matrix;
extracting a preset number of eigenvectors as target eigenvectors according to the magnitude of the eigenvalues; and generating a target matrix for the source matrix using the target eigenvectors.
optionally, the animation generation module comprises:
a frame number threshold acquisition submodule for acquiring a frame number threshold for the original animation data;
and the animation generation submodule is used for baking the original animation data according to the frame number threshold value to generate full-frame-number animation corresponding to the original animation data.
The embodiment of the invention also discloses an electronic device, which comprises:
one or more processors; and
one or more machine-readable media having instructions stored thereon, which when executed by the one or more processors, cause the electronic device to perform the method as described above.
Embodiments of the invention also disclose one or more machine-readable media having instructions stored thereon, which when executed by one or more processors, cause the processors to perform the methods as described above.
The embodiment of the invention has the following advantages:
in the embodiment of the invention, original animation data can be obtained and a corresponding full-frame-number animation generated from them. A source matrix for the original animation data is then generated from the bone data of each bone and the full-frame-number animation, where each element of the source matrix represents the bone data of one bone in one frame of the full-frame-number animation. The original animation data are then compressed according to the source matrix to generate an animation file. By baking a full-frame-number animation and deriving the source matrix from it, features of the original animation data are extracted through dimensionality reduction, and the animation data are compressed according to the resulting source matrix. While compressing the animation data, this noticeably improves the compression ratio and greatly reduces the file size of the compressed file.
Drawings
FIG. 1 is a flow chart of the steps of one embodiment of a method for processing animation data according to the present invention;
FIG. 2 is a block diagram of an embodiment of an animation data processing apparatus according to the present invention.
Detailed Description
In order to make the aforementioned objects, features and advantages of the present invention comprehensible, embodiments accompanied with figures are described in further detail below.
Referring to fig. 1, a flowchart illustrating steps of an embodiment of a method for processing animation data according to the present invention is shown, which may specifically include the following steps:
Step 101, obtaining original animation data, wherein the original animation data comprises bone data of each bone;
in a game, animations may include skeletal animations, vertex animations, and frame animations. The frame animation is an animation formed by a plurality of continuous pictures; in skeletal animation, a game model has a skeletal structure composed of interconnected "bones", and animation can be generated for the model by changing the orientation and position of skeletal joint points; in the vertex animation, each key frame holds vertex position data at a specific time, and animation is formed by changing the vertex position data.
As an example, after artists complete an animation, the animation data need to be imported into the game for rendering. If the animation file is too large, rendering game scenes and characters on the game client not only occupies a large amount of memory but also adds significant system overhead, degrading the game experience. The animation data therefore need lossy compression: some detail data are discarded while the main data are retained, preserving the integrity of the animation while reducing its memory footprint.
In a specific implementation, the original animation data may include the bone data of each bone of a virtual character model over a period of motion, together with the key-frame animation corresponding to each bone. Specifically, a virtual character model may include skeletal joints such as the head, neck, left and right shoulders, elbows, wrists, knees, and ankles, each corresponding to a unique identifier. A given joint may change in every key frame, only in some key frames, or not at all, so the bone data are recorded according to how the joints of the model change over the period of motion.
For example, suppose the original animation data is a walking animation of the virtual character model comprising 10 key frames. For the legs and hands, which change in every key frame, the bone data (such as position coordinates) of the corresponding joints are recorded in each key frame; for the head, bone data are recorded in the corresponding key frames only if the head "nods" or "shakes" during walking. The invention is not limited to this example.
Step 102, generating a full-frame-number animation corresponding to the original animation data according to the original animation data;
in the embodiment of the present invention, a frame number threshold for the original animation data may be obtained, and the original animation data may be baked according to the frame number threshold, so as to generate a full frame number animation corresponding to the original animation data. The frame number threshold may be a threshold set according to the length of the animation playing time, and may be the same as or greater than the animation frame number of the original animation data.
In a specific implementation, not every bone of the virtual character model changes in every key frame; some bones do not change at all over the period of motion. To simplify data processing, the bone data of unchanged bones are either not recorded in the key frame, or set to 0 in the key frame to indicate no change, and so on. The corresponding full-frame-number animation can therefore be generated from the original animation data.
Specifically, for the bone data of each bone in the virtual character model, the original animation data are baked at the full frame count, yielding the full-frame-number animation. This avoids the missing values on key frames that occur with non-full-frame animation, ensures that effective feature data are available during the subsequent dimensionality-reduction processing, and prevents the missing values from affecting the compression of the animation data.
In one example, suppose the original animation data is a walking animation of a virtual character model comprising 10 key frames, each recording the bone data of the corresponding bones. Take the head, knee, and elbow joints as examples, where the head bone's data are recorded in only two of the key frames, while the knee and elbow data are recorded in every key frame. If the original animation data were baked without the full frame count, only 2 frames would be obtained for the head bone (since its data appear in only two key frames), while 10 frames should be baked for joints such as the knee and elbow. Because of such missing values, the baked animation frames would be incomplete, the relevant effective feature data would be missing during the subsequent dimensionality reduction, and the compression of the animation data would be affected.
Therefore, in the embodiment of the invention, the original animation data are baked frame by frame at the full frame count. For example, 10 frames each are obtained for the knee, elbow, and head bones after full-frame baking, ensuring the integrity of the animation data; at the same time, it can be verified that all key-frame counts are consistent, ensuring the validity of the subsequent compression. Note that for the head bone, whose data are recorded in only two key frames, 10 frames are still baked, and key frames with no recorded bone data are treated as having no motion change. The invention is not limited to this.
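The full-frame baking step above can be sketched as follows. This is a minimal illustration under stated assumptions: `bake_full_frames` is a hypothetical helper, and filling missing frames by linear interpolation between recorded keyframes stands in for whatever resampling the animation tool actually performs.

```python
import numpy as np

def bake_full_frames(keyframes, total_frames):
    """Bake a sparse keyframe track to one sample per frame.

    keyframes: dict mapping frame index -> recorded bone value.
    total_frames: the frame-count threshold for the clip.
    Frames without a recorded value are filled by linear interpolation
    between the surrounding keyframes; values are held constant past
    the first/last keyframe.
    """
    known = sorted(keyframes)
    values = [keyframes[f] for f in known]
    return np.interp(np.arange(total_frames), known, values)

# Hypothetical head-bone track: data recorded only on frames 0 and 9,
# baked out to the full 10-frame count used in the example above.
head = bake_full_frames({0: 0.0, 9: 1.8}, 10)
```

After baking, every bone track has the same frame count, so all tracks can be stacked into the source matrix of the next step.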
Step 103, generating a source matrix corresponding to the original animation data according to the full frame number animation and the bone data of each bone, wherein each element in the source matrix is used for representing the bone data of each bone corresponding to each frame animation in the full frame number animation;
in the embodiment of the invention, after the full-frame-number animation frames are baked, the frame count of the full-frame-number animation and the number of bones can be obtained and compared to obtain a comparison result, and the source matrix corresponding to the original animation data is then generated according to the full-frame-number animation, the bone data of each bone, and the comparison result.
In a specific implementation, each element of the source matrix represents the bone data of one bone in one frame of the full-frame-number animation, and the frame count is compared with the bone count to determine the rows and columns of the source matrix. Specifically, if the comparison result is that the frame count is less than the bone count, each frame of the full-frame-number animation is taken as a column of the source matrix and each bone as a row; if the frame count is greater than the bone count, each frame is taken as a row and each bone as a column.
In one example, a multi-dimensional matrix array may be created at the code level, and its rows and columns determined from the frame count of the full-frame-number animation and the bone count. Specifically, when the frame count is greater than the bone count, the frame count is used as the number of rows, each frame of the full-frame-number animation as row data, the total bone count as the number of columns, and each bone as column data. If the frame count is less than the bone count, the total bone count is used as the number of rows, each bone as row data, the frame count as the number of columns, and each frame as column data. In this way the larger count forms the rows (i.e. the data samples) of the matrix array and the other forms the columns (i.e. the dimensions), so that the animation data can be compressed through this matrix array.
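The comparison rule above (the larger count becomes the rows, the smaller the columns) can be sketched as below; `build_source_matrix` is a hypothetical helper, and the frames-by-bones input layout is an assumption of this illustration.

```python
import numpy as np

def build_source_matrix(baked, num_frames, num_bones):
    """Orient baked animation data so that the larger of the two counts
    indexes the rows (data samples) and the smaller indexes the
    columns (dimensions), per the comparison result.

    baked: per-frame, per-bone data with shape (num_frames, num_bones).
    """
    m = np.asarray(baked, dtype=float).reshape(num_frames, num_bones)
    if num_frames >= num_bones:
        return m      # frames as rows, bones as columns
    return m.T        # bones as rows, frames as columns
```

Either orientation holds the same bone data; only which axis plays the role of "samples" for the later dimensionality reduction changes.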
It should be noted that a matrix element of the source matrix may indicate the position coordinates of a particular bone in a particular frame. For example, take each bone as a row of the source matrix and each frame of the full-frame-number animation as a column, with the rows ordered head bone, neck bone, shoulder bone, coccyx bone, and so on. Writing X for the bone index and Y for the frame index, the source matrix may be as shown in Table 1:
                First frame   Second frame  Third frame   Fourth frame  ...  Frame N
Head bone       (X1, Y1)      (X1, Y2)      (X1, Y3)      (X1, Y4)      ...  (X1, YN)
Neck bone       (X2, Y1)      (X2, Y2)      (X2, Y3)      (X2, Y4)      ...  (X2, YN)
Shoulder bone   (X3, Y1)      (X3, Y2)      (X3, Y3)      (X3, Y4)      ...  (X3, YN)
Coccyx bone     (X4, Y1)      (X4, Y2)      (X4, Y3)      (X4, Y4)      ...  (X4, YN)
Bone M          (XM, Y1)      (XM, Y2)      (XM, Y3)      (XM, Y4)      ...  (XM, YN)
TABLE 1
Here each column represents the bone data of every bone in a single frame, and each row represents the bone data of one bone across all frames. Specifically, (X1, Y1) can represent the bone data (such as position coordinates or displacement information) of the head bone in the first frame; if the head bone has no motion change, (X1, Y1) may be (0, 0) to indicate that the head bone does not move.
It should be noted that the embodiment of the present invention includes but is not limited to the above examples, and it is understood that, under the guidance of the idea of the embodiment of the present invention, a person skilled in the art can set the method according to practical situations, and the present invention is not limited to this.
Step 104, compressing the original animation data according to the source matrix to generate an animation file.
In the embodiment of the present invention, a target matrix for the source matrix may be determined from the row and column data of the source matrix; the original animation data are compressed according to the source matrix and the target matrix to produce compressed target animation data; and an animation file matching the original animation data is generated using the target matrix and the target animation data.
In a specific implementation, the rows and columns of the matrix array are determined by the frame count of the full-frame-number animation and the bone count, and the original animation data are converted into the matrix array according to the row and column values, giving the source matrix for the animation data. A target matrix for the source matrix is then computed from its row and column data; the original animation data are compressed through the source and target matrices, realizing dimensionality reduction of the animation data and yielding the compressed target animation data; and the target matrix and target animation data are output together as the animation file. The target matrix may be formed from the eigenvectors corresponding to the source matrix.
When the animation needs to be restored, the animation file can be loaded and the inverse operation applied to the compressed animation data to recover the source matrix; the row or column values of the source matrix are then combined with the frames-per-second rate of the animation data to obtain the actual frame times, thereby restoring the animation data.
In an optional embodiment of the present invention, after the row and column data of the source matrix are determined, the mean of each column (or row) of elements is computed, and a covariance matrix matching the source matrix is determined from those means. The eigenvalues of the source matrix and their corresponding eigenvectors are then determined from the covariance matrix, a preset number of eigenvectors are extracted as target eigenvectors according to the magnitude of the eigenvalues, and the target matrix for the source matrix is generated from the target eigenvectors. Dimensionality reduction can then be performed on the animation data according to the source and target matrices, noticeably improving the compression ratio and greatly reducing the file size of the compressed file while compressing the animation data.
In one example, after the animation data is baked into full-frame animation frames, a multi-dimensional matrix array M is created at the code level, the frame count of the full-frame-number animation is compared with the bone count of the original animation data, the larger of the two is used as the row dimension m of the matrix M, and the other is used as the column dimension n of the matrix M.
Then, the mean of the m values in each of the n dimensions of the matrix M is calculated and subtracted from those values, the covariance matrix C is computed from the mean-centered data, and the eigenvalues and eigenvectors of C are calculated. The eigenvectors are then arranged into a matrix from top to bottom in descending order of eigenvalue, and the first k rows are taken to form the target matrix P. The preset number k is the dimension of the target matrix; for example, with n = 3 and k = 1, the data is reduced from 3 dimensions to 1 dimension.
The source matrix M is multiplied by the target matrix P to obtain animation data reduced from n dimensions to k dimensions, and the matrix P and the compressed animation data are output as data to generate an animation file, so that when the animation data is subsequently restored, the animation file can be loaded and an inverse operation performed on the matrix P and the compressed animation data to derive the source matrix M. After the source matrix M is obtained, the frame number of the animation frames is determined according to the size relationship between the frame number of the full-frame-number animation and the bone count of the original animation data: when the frame number is greater than the bone count, the row values of the source matrix are taken; when the frame number is less than the bone count, the column values of the source matrix are taken, so that the animation data is restored according to the row values or the column values. Meanwhile, the actual FPS (frames per second) of the animation data is obtained, the animation data corresponding to the row attribute values or the column attribute values is extracted from the source matrix M, and the timestamps of all correct animation frames at that FPS are determined using the animation data and the FPS, so that the original animation data is restored for animation rendering.
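The compression and restoration by matrix multiplication could be sketched as below. Storing the per-column means alongside the matrix P is an assumption added here so that the back-projection is well defined; the patent itself only mentions outputting P and the compressed data:

```python
import numpy as np

def compress(source, target):
    """Project the centred source matrix onto the k principal rows of `target`.

    `target` is a (k, n) matrix of eigenvector rows; the (m, n) source
    is reduced to an (m, k) compressed block. The column means are
    returned as well (an assumption, needed for exact back-projection).
    """
    mean = source.mean(axis=0)
    compressed = (source - mean) @ target.T
    return compressed, mean

def restore(compressed, target, mean):
    """Approximate inverse operation: back-project and re-add the means."""
    return compressed @ target + mean
```

With k < n the restoration is lossy (the discarded eigen-directions are gone); with k = n the round trip is exact, which is a convenient sanity check.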
For example, assuming the FPS is 30, 30 frames are played per second, that is, the duration of one frame is 0.033333 … seconds. After the source matrix M is obtained through the inverse operation, the relationship between the frame number of the animation frames and the bone count is first determined. When the frame number is greater than the bone count, the animation frames were taken as the row elements of the source matrix M during compression, so the ID of each row element is extracted from the source matrix M and multiplied by 0.033333 … to obtain the timestamps of all correct frames at 30 FPS; when the frame number is less than the bone count, the animation frames were taken as the column elements of the source matrix M during compression, so the ID of each column element is extracted from the source matrix M and multiplied by 0.033333 … to obtain the timestamps of all correct frames at 30 FPS. The original animation data is thus restored for animation rendering.
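The timestamp recovery in this example is a single multiplication per row or column ID. A hedged sketch, with the function name assumed:

```python
def frame_times(ids, fps=30):
    """Map frame IDs (row or column indices) back to timestamps.

    Each ID is multiplied by the per-frame duration (1 / FPS), as in
    the 30 FPS example where one frame lasts 0.033333... seconds.
    """
    frame_duration = 1.0 / fps
    return [i * frame_duration for i in ids]
```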
It should be noted that the embodiments of the present invention include but are not limited to the above examples, and it can be understood that, under the guidance of the ideas of the embodiments of the present invention, a person skilled in the art can make configurations according to practical situations, and the present invention is not limited thereto.
In the embodiments of the present invention, original animation data can be obtained and a corresponding full-frame-number animation generated from it. A source matrix corresponding to the original animation data can then be generated according to the full-frame-number animation and the bone data of each bone in the original animation data, where each element in the source matrix represents the bone data of a bone in a corresponding frame of the full-frame-number animation. The original animation data can then be compressed according to the source matrix to generate an animation file. By baking the full-frame-number animation and deriving the source matrix from the baked data, feature extraction through data dimension reduction is realized, so that when the animation data is compressed, the compression rate of the data is significantly improved and the file size of the compressed file is greatly reduced.
It should be noted that, for simplicity of description, the method embodiments are described as a series of acts or combination of acts, but those skilled in the art will recognize that the present invention is not limited by the illustrated order of acts, as some steps may occur in other orders or concurrently in accordance with the embodiments of the present invention. Further, those skilled in the art will appreciate that the embodiments described in the specification are presently preferred and that no particular act is required to implement the invention.
Referring to fig. 2, a block diagram of an embodiment of an animation data processing apparatus according to the present invention is shown, which may specifically include the following modules:
an animation data obtaining module 201, configured to obtain original animation data, where the original animation data includes bone data of each bone;
an animation generation module 202, configured to generate, according to the original animation data, a full-frame-number animation corresponding to the original animation data;
a source matrix generating module 203, configured to generate a source matrix corresponding to the original animation data according to the full frame number animation and the bone data of each bone, where each element in the source matrix is used to represent the bone data of each bone corresponding to each frame in the full frame number animation;
and the animation file generation module 204 is configured to compress the original animation data according to the source matrix to generate an animation file.
In an optional embodiment of the present invention, the source matrix generating module 203 comprises:
a frame number and bone count acquisition submodule, configured to acquire the frame number of the full-frame-number animation and the number of bones;
a data comparison submodule, configured to compare the frame number with the number of bones to obtain a comparison result;
and a source matrix generation submodule, configured to generate a source matrix corresponding to the original animation data according to the full-frame-number animation, the bone data of each bone, and the comparison result.
In an optional embodiment of the present invention, the source matrix generation submodule is specifically configured to:
and if the comparison result indicates that the frame number is smaller than the number of bones, taking each frame of the full-frame-number animation as a column of the source matrix, and taking each bone as a row of the source matrix.
In an optional embodiment of the present invention, the source matrix generation submodule is specifically configured to:
and if the comparison result indicates that the frame number is greater than the number of bones, taking each frame of the full-frame-number animation as a row of the source matrix, and taking each bone as a column of the source matrix.
In an optional embodiment of the present invention, the animation file generating module 204 includes:
the target matrix determining submodule is used for determining a target matrix aiming at the source matrix according to the row data and the column data of the source matrix, and the target matrix is formed according to the characteristic vector corresponding to the source matrix;
the target animation data obtaining submodule is used for obtaining target animation data after the original animation data are compressed according to the source matrix and the target matrix;
and the animation file generation submodule is used for generating the animation file matched with the original animation data by adopting the target matrix and the target animation data.
In an optional embodiment of the present invention, the target matrix determination submodule is specifically configured to:
determining the mean value of each column element or each row element by adopting the row data and the column data;
determining a covariance matrix matched with the source matrix by using the mean value;
determining an eigenvalue of the source matrix and an eigenvector corresponding to the eigenvalue according to the covariance matrix;
extracting a preset number of eigenvectors from the eigenvectors as target eigenvectors according to the magnitude of the eigenvalues; and generating a target matrix for the source matrix by using the target eigenvectors.
In an optional embodiment of the present invention, the animation generation module 202 comprises:
a frame number threshold acquisition submodule for acquiring a frame number threshold for the original animation data;
and the animation generation submodule is used for baking the original animation data according to the frame number threshold value to generate full-frame-number animation corresponding to the original animation data.
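The baking these modules describe — sampling the original animation into one value per frame up to the frame number threshold — might look like the following sketch for a single bone channel. Linear interpolation via `numpy.interp` stands in for whatever curve evaluation the engine actually performs, and the function name is an assumption:

```python
import numpy as np

def bake_full_frames(key_times, key_values, frame_count, fps=30):
    """Bake sparse keyframes into one value per frame (full-frame animation).

    `key_times`/`key_values` are hypothetical sparse keyframe samples for
    one bone channel; each output frame is sampled at i / fps seconds.
    """
    frame_times = np.arange(frame_count) / fps
    return np.interp(frame_times, key_times, key_values)
```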
For the device embodiment, since it is basically similar to the method embodiment, the description is simple, and for the relevant points, refer to the partial description of the method embodiment.
An embodiment of the present invention further provides an electronic device, including:
one or more processors; and
one or more machine-readable media having instructions stored thereon, which when executed by the one or more processors, cause the electronic device to perform methods as described in embodiments of the invention.
Embodiments of the invention also provide one or more machine-readable media having instructions stored thereon, which when executed by one or more processors, cause the processors to perform the methods described in embodiments of the invention.
The embodiments in the present specification are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments are referred to each other.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, apparatus, or computer program product. Accordingly, embodiments of the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, embodiments of the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
Embodiments of the present invention are described with reference to flowchart illustrations and/or block diagrams of methods, terminal devices (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing terminal to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing terminal, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing terminal to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing terminal to cause a series of operational steps to be performed on the computer or other programmable terminal to produce a computer implemented process such that the instructions which execute on the computer or other programmable terminal provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While preferred embodiments of the present invention have been described, additional variations and modifications of these embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including preferred embodiments and all such alterations and modifications as fall within the scope of the embodiments of the invention.
Finally, it should also be noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or terminal that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or terminal. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other like elements in a process, method, article, or terminal that comprises the element.
The animation data processing method and device provided by the present invention have been described in detail above. Specific examples are used herein to explain the principles and implementations of the present invention, and the above description of the embodiments is only intended to help understand the method and core ideas of the present invention. Meanwhile, for a person skilled in the art, there may be variations in the specific embodiments and the application scope according to the ideas of the present invention. In summary, the content of this specification should not be construed as limiting the present invention.

Claims (10)

1. A method for processing animation data, comprising:
acquiring original animation data, wherein the original animation data comprises bone data of each bone;
generating full-frame number animation corresponding to the original animation data according to the original animation data;
generating a source matrix corresponding to the original animation data according to the full frame number animation and the bone data of each bone, wherein each element in the source matrix is used for representing the bone data of each bone corresponding to each frame animation in the full frame number animation;
and compressing the original animation data according to the source matrix to generate an animation file.
2. The method of claim 1, wherein generating a source matrix corresponding to the original animation data from the full frame number animation and the bone data of each bone comprises:
acquiring the frame number of the full-frame-number animation and the number of bones;
comparing the frame number with the number of bones to obtain a comparison result;
and generating a source matrix corresponding to the original animation data according to the full frame number animation, the bone data of each bone and the comparison result.
3. The method of claim 2, wherein generating a source matrix corresponding to the original animation data according to the full frame number animation, the bone data of each bone, and the comparison result comprises:
and if the comparison result indicates that the frame number is smaller than the number of bones, taking each frame of the full-frame-number animation as a column of the source matrix, and taking each bone as a row of the source matrix.
4. The method of claim 2, wherein generating a source matrix corresponding to the original animation data according to the full frame number animation, the bone data of each bone, and the comparison result comprises:
and if the comparison result indicates that the frame number is greater than the number of bones, taking each frame of the full-frame-number animation as a row of the source matrix, and taking each bone as a column of the source matrix.
5. The method of claim 1, wherein compressing the raw animation data according to the source matrix to generate an animation file comprises:
determining a target matrix for the source matrix according to the row data and the column data of the source matrix, wherein the target matrix is formed according to the eigenvectors corresponding to the source matrix;
obtaining target animation data after the original animation data are compressed according to the source matrix and the target matrix;
and generating the animation file matched with the original animation data by adopting the target matrix and the target animation data.
6. The method of claim 5, wherein determining the target matrix for the source matrix according to the row and column data of the source matrix comprises:
determining the mean value of each column element or each row element by adopting the row data and the column data;
determining a covariance matrix matched with the source matrix by using the mean value;
determining an eigenvalue of the source matrix and an eigenvector corresponding to the eigenvalue according to the covariance matrix;
extracting a preset number of eigenvectors from the eigenvectors as target eigenvectors according to the magnitude of the eigenvalues;
and generating a target matrix for the source matrix by using the target eigenvectors.
7. The method of claim 1, wherein generating a full frame number animation corresponding to the original animation data from the original animation data comprises:
acquiring a frame number threshold value aiming at the original animation data;
and baking the original animation data according to the frame number threshold value to generate a full frame number animation corresponding to the original animation data.
8. An animation data processing apparatus, comprising:
an animation data acquisition module, configured to acquire original animation data, wherein the original animation data comprises bone data of each bone;
the animation generation module is used for generating full frame number animation corresponding to the original animation data according to the original animation data;
a source matrix generation module, configured to generate a source matrix corresponding to the original animation data according to the full-frame-number animation and the bone data of each bone, where each element in the source matrix is used to represent the bone data of each bone corresponding to each frame of animation in the full-frame-number animation;
and the animation file generation module is used for compressing the original animation data according to the source matrix to generate an animation file.
9. An electronic device, comprising:
one or more processors; and
one or more machine-readable media having instructions stored thereon that, when executed by the one or more processors, cause the electronic device to perform the method of any of claims 1-7.
10. One or more machine-readable media having instructions stored thereon, which when executed by one or more processors, cause the processors to perform the method of any of claims 1-7.
CN202010725151.9A 2020-07-24 2020-07-24 Animation data processing method and device Active CN111773700B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010725151.9A CN111773700B (en) 2020-07-24 2020-07-24 Animation data processing method and device


Publications (2)

Publication Number Publication Date
CN111773700A true CN111773700A (en) 2020-10-16
CN111773700B CN111773700B (en) 2024-05-10



Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112354186A (en) * 2020-11-10 2021-02-12 网易(杭州)网络有限公司 Game animation model control method, device, electronic equipment and storage medium

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH1040418A (en) * 1996-04-25 1998-02-13 Matsushita Electric Ind Co Ltd Transmitter/receiver for movement of three-dimensional skeletal structure and its method
CN106492460A (en) * 2016-12-08 2017-03-15 搜游网络科技(北京)有限公司 A kind of compression method of data and equipment
CN107886560A (en) * 2017-11-09 2018-04-06 网易(杭州)网络有限公司 The processing method and processing device of animation resource
US10096133B1 (en) * 2017-03-31 2018-10-09 Electronic Arts Inc. Blendshape compression system
CN108635849A (en) * 2018-05-15 2018-10-12 深圳市腾讯网络信息技术有限公司 A kind of compression of animation data, decompression method and device
JP6526775B1 (en) * 2017-12-08 2019-06-05 株式会社スクウェア・エニックス Animation data compression program, animation data recovery program, animation data compression device, and animation data compression method
CN110263720A (en) * 2019-06-21 2019-09-20 中国民航大学 Action identification method based on depth image and bone information


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
WANG Pengjie; PAN Zhigeng; LI Wei: "Research progress on human motion capture data compression", Journal of Computer-Aided Design & Computer Graphics, no. 07, pages 1037 - 1046 *




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant