CN117788650B - Data processing method, device, electronic equipment, storage medium and program product - Google Patents

Data processing method, device, electronic equipment, storage medium and program product

Info

Publication number
CN117788650B
Authority
CN
China
Prior art keywords
object model
animation frame
target
animation
feature
Prior art date
Legal status
Active
Application number
CN202410216349.2A
Other languages
Chinese (zh)
Other versions
CN117788650A
Inventor
孙宝
Current Assignee
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd
Priority to CN202410216349.2A
Publication of CN117788650A
Application granted
Publication of CN117788650B
Legal status: Active

Landscapes

  • Processing Or Creating Images (AREA)

Abstract

The application provides a data processing method, a data processing device, an electronic device, a computer-readable storage medium and a computer program product; the embodiments of the application relate to image processing technology. The method comprises the following steps: acquiring a source transition animation and a target transition animation; acquiring a first spatial gesture feature of the object model of the source animation frame and a second spatial gesture feature of the object model of the target animation frame corresponding to each time stamp; adjusting the gesture of the object model of the target animation frame based on the first spatial gesture feature and the second spatial gesture feature corresponding to the time stamp, to obtain the gesture to be mixed of the target animation frame; mixing the gesture of the object model of the source animation frame corresponding to the time stamp with the gesture to be mixed of the object model of the target animation frame, to obtain a mixed gesture corresponding to the time stamp; and rendering the mixed gestures of the plurality of time stamps to obtain the mixed animation. The application can improve the mixing effect of transition animations.

Description

Data processing method, device, electronic equipment, storage medium and program product
Technical Field
The present application relates to the field of computers, and in particular, to a data processing method, apparatus, electronic device, computer readable storage medium, and computer program product.
Background
An animation transition refers to the behavior that occurs when one animation state transitions to another. Mainstream engine animation mixing methods adopt a simple linear interpolation technique to realize fade-in/fade-out (cross-fade) mixing.
When fade-in/fade-out mixing is implemented with simple linear interpolation, the source animation and the target animation are required to overlap over a certain time interval and are then mixed to obtain the transition animation. However, this method only considers the mixing of the animations in the time dimension and ignores other information carried by the animations, so that the motions in the transition animation become distorted and the animation mixing effect is poor.
Disclosure of Invention
Embodiments of the present application provide a data processing method, apparatus, electronic device, computer readable storage medium, and computer program product, capable of improving animation mixing effects.
The technical scheme of the embodiment of the application is realized as follows:
The embodiment of the application provides a data processing method, which comprises the following steps:
acquiring a source transition animation and a target transition animation, wherein the source transition animation comprises a plurality of source animation frames which are in one-to-one correspondence with a plurality of time stamps, and the target transition animation comprises a plurality of target animation frames which are in one-to-one correspondence with the plurality of time stamps;
acquiring a first spatial attitude characteristic of an object model of a source animation frame corresponding to each time stamp, and acquiring a second spatial attitude characteristic of an object model of a target animation frame corresponding to each time stamp;
Aiming at each time stamp, based on a first space gesture feature and a second space gesture feature corresponding to the time stamp, adjusting the gesture of an object model of a target animation frame corresponding to the time stamp to obtain a gesture to be mixed of the target animation frame corresponding to the time stamp;
for each time stamp, mixing the gesture of the object model of the source animation frame corresponding to the time stamp with the gesture to be mixed of the object model of the target animation frame corresponding to the time stamp to obtain a mixed gesture corresponding to the time stamp;
And rendering the mixed postures of the timestamps to obtain the mixed animation of the source transition animation and the target transition animation.
An embodiment of the present application provides a data processing apparatus, including:
The acquisition module is used for acquiring a source transition animation and a target transition animation, wherein the source transition animation comprises a plurality of source animation frames in one-to-one correspondence with a plurality of time stamps, and the target transition animation comprises a plurality of target animation frames in one-to-one correspondence with the plurality of time stamps;
The feature acquisition module is used for acquiring a first spatial attitude feature of the object model of the source animation frame corresponding to each time stamp and acquiring a second spatial attitude feature of the object model of the target animation frame corresponding to each time stamp;
The adjustment processing module is used for adjusting the gesture of the object model of the target animation frame corresponding to the time stamp based on the first space gesture feature and the second space gesture feature corresponding to the time stamp to obtain the gesture to be mixed of the target animation frame corresponding to the time stamp;
the mixing processing module is used for carrying out mixing processing on the gesture of the object model of the source animation frame corresponding to each time stamp and the gesture to be mixed of the object model of the target animation frame corresponding to the time stamp to obtain a mixing gesture corresponding to the time stamp;
And the animation processing module is used for performing rendering processing based on the mixed postures of the plurality of time stamps to obtain the mixed animation of the source transition animation and the target transition animation.
In the above scheme, the obtaining module is further configured to obtain a source animation and a target animation; acquiring an animation mixing interval where the source animation and the target animation overlap on a time sequence; and taking the part corresponding to the animation mixing interval in the source animation as the source transition animation, and taking the part corresponding to the animation mixing interval in the target animation as the target transition animation.
In the above aspect, the feature obtaining module is further configured to perform at least one of the following processing on an object model of each of the source animation frames: performing spatial position feature extraction processing on the object model of the source animation frame to obtain a first spatial position feature of the object model of the source animation frame; performing space orientation feature extraction processing on the object model of the source animation frame to obtain a first space orientation feature of the object model of the source animation frame; performing gravity center feature extraction processing on the object model of the source animation frame to obtain a first gravity center feature of the object model of the source animation frame; the first spatial pose feature is determined based on at least one of the first spatial position feature, the first spatial orientation feature, and the first center of gravity feature.
In the above aspect, the feature obtaining module is further configured to perform the following processing on an object model of each of the target animation frames: performing spatial position feature extraction processing on the object model of the target animation frame to obtain a second spatial position feature of the object model corresponding to the target animation frame; performing space orientation feature extraction processing on the object model of the target animation frame to obtain a second space orientation feature of the object model of the corresponding target animation frame; performing gravity center feature extraction processing on the object model of the target animation frame to obtain a second gravity center feature of the object model corresponding to the target animation frame; the second spatial pose feature is determined based on at least one of the second spatial position feature, the second spatial orientation feature, and the second centroid feature.
In the above solution, the feature obtaining module is further configured to obtain first spatial data of a root skeleton of an object model of the target animation frame; and carrying out spatial position feature extraction processing on the first spatial data of the root bones to obtain second spatial position features corresponding to the object model.
In the above aspect, the feature obtaining module is further configured to obtain second spatial data of each main skeleton of the object model of the target animation frame; carrying out space orientation feature extraction processing on the second space data of each main skeleton to obtain a horizontal rotation angle of the main skeleton; and carrying out mixing treatment on the horizontal rotation angles of the main bones to obtain a second space orientation characteristic of the object model corresponding to the target animation frame.
In the above-mentioned scheme, the feature obtaining module is further configured to use a grounded limb as the barycentric limb when the object model of the target animation frame is in a single-limb touchdown state; when the object model of the target animation frame is in a multi-limb touchdown state, carrying out gravity center movement analysis processing on the object model of the target animation frame to obtain a gravity center movement direction, and determining the gravity center limb based on the gravity center movement direction.
In the above scheme, the feature acquisition module is further configured to acquire an adjacent target animation frame adjacent to the target animation frame; acquiring a first ground projection position of each limb of the object model of the target animation frame, and acquiring a second ground projection position of each limb of the object model of the adjacent target animation frame; mixing the first ground projection positions of a plurality of limbs of the object model of the target animation frame to obtain first barycenter position data of the object model of the target animation frame; mixing the second ground projection positions of a plurality of limbs of the object model of the adjacent target animation frame to obtain second center position data of the object model of the adjacent target animation frame; the center of gravity movement direction is determined based on the first center of gravity position data and the second center of gravity position data.
In the above aspect, the adjustment processing module is further configured to perform, based on the pose of the object model of the target animation frame, a position adjustment process corresponding to the first spatial position feature on the second spatial position feature, to obtain the pose of the object model of the target animation frame after the position adjustment; based on the gesture of the object model of the target animation frame after the position adjustment, performing orientation alignment processing based on the second spatial orientation feature and the first spatial orientation feature to obtain the gesture of the object model of the target animation frame after orientation alignment; and carrying out center of gravity alignment processing based on the first center of gravity characteristic on the basis of the gesture of the object model of the target animation frame with the aligned orientation, so as to obtain the gesture to be mixed of the target animation frame.
In the above aspect, the adjustment processing module is further configured to execute, based on the pose of the object model of the target animation frame after the position adjustment, a process of aligning the second spatial orientation feature with respect to the first spatial orientation feature to obtain the pose of the object model of the target animation frame after the alignment, in response to the second spatial orientation feature not being specified to be reserved; and responding to the second space orientation feature specification to be reserved, and taking the gesture of the object model of the target animation frame after the position adjustment as the gesture of the object model of the target animation frame after the orientation alignment.
In the above scheme, the adjustment processing module is further configured to perform, when the first barycenter feature characterizes that the object model of the source animation frame has a barycenter limb and the second barycenter feature characterizes that the object model of the target animation frame has the barycenter limb, barycenter alignment processing based on the second barycenter feature and the first barycenter feature, to obtain a pose to be mixed of the target animation frame; when the first barycenter characteristic represents that the object model of the source animation frame has the barycenter limb and the second barycenter characteristic represents that the object model of the target animation frame does not have the barycenter limb, barycenter alignment processing is performed based on the first barycenter characteristic, and a to-be-mixed gesture of the target animation frame is obtained; and when the first barycenter characteristic represents that the object model of the source animation frame does not have the barycenter limb, taking the gesture of the object model of the target animation frame after the alignment of the orientation as the gesture to be mixed of the target animation frame.
In the above-mentioned scheme, the adjustment processing module is further configured to, when a first barycentric limb corresponding to the first barycentric feature is the same as a second barycentric limb corresponding to the second barycentric feature, perform barycentric alignment processing on an object model of the target animation frame based on the first barycentric limb or the second barycentric limb, to obtain a pose to be mixed of the target animation frame; and when the first barycenter limb corresponding to the first barycenter feature is different from the second barycenter limb corresponding to the second barycenter feature, performing barycenter alignment processing on the object model of the target animation frame based on the second barycenter limb to obtain the gesture to be mixed of the target animation frame.
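To make the three alignment stages performed by the adjustment processing module easier to follow (position adjustment, orientation alignment, then centre-of-gravity alignment), here is a highly simplified sketch; the pose representation, the field names and the way each alignment is recorded are assumptions introduced purely for illustration and do not reproduce the patented implementation:

```python
def adjust_target_pose(target_pose, first_feat, second_feat, keep_target_orientation=False):
    """first_feat / second_feat: {"position": (x, y, z), "yaw_deg": float,
    "cog_limb": str or None} for the source frame and the target frame respectively.
    Returns the target pose annotated with the offsets that align it to the source."""
    pose = dict(target_pose)
    # 1) Position adjustment: translate the target pose onto the source spatial position.
    pose["root_offset"] = tuple(s - t for s, t in zip(first_feat["position"],
                                                      second_feat["position"]))
    # 2) Orientation alignment: rotate about the vertical axis so the target faces the
    #    same way as the source, unless the target orientation is specified to be kept.
    pose["yaw_offset_deg"] = (0.0 if keep_target_orientation
                              else first_feat["yaw_deg"] - second_feat["yaw_deg"])
    # 3) Centre-of-gravity alignment: only applies when the source frame has a CoG limb;
    #    prefer the target's own CoG limb if it has one, otherwise use the source's.
    if first_feat["cog_limb"] is not None:
        pose["align_cog_to"] = second_feat["cog_limb"] or first_feat["cog_limb"]
    return pose
```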
In the above scheme, the mixing processing module is further configured to obtain source data of a plurality of bones of a pose of an object model of the source animation frame corresponding to the timestamp, and obtain target data of a plurality of bones of a pose to be mixed of the object model of the target animation frame corresponding to the timestamp; for each bone, performing linear interpolation processing on source data of the bone and target data of the bone to obtain mixed data corresponding to the bone; rendering processing is carried out based on the mixed data of the bones, and the mixed gesture of the object model of the source animation frame corresponding to the time stamp and the to-be-mixed gesture of the object model of the target animation frame corresponding to the time stamp is obtained.
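The per-bone mixing described above is essentially a linear interpolation between source bone data and the adjusted target bone data. A minimal sketch, assuming a simple per-bone dictionary representation (real engines typically interpolate rotations with quaternion slerp rather than the plain angle lerp used here):

```python
def blend_bone_data(source_bones, target_bones, alpha):
    """source_bones / target_bones: {bone: {"pos": (x, y, z), "yaw_deg": float}};
    alpha is the interpolation factor for this timestamp (0 = source, 1 = target)."""
    blended = {}
    for bone, src in source_bones.items():
        tgt = target_bones[bone]
        blended[bone] = {
            "pos": tuple(s + (t - s) * alpha for s, t in zip(src["pos"], tgt["pos"])),
            # Plain angle lerp keeps the sketch short; engines normally slerp quaternions.
            "yaw_deg": src["yaw_deg"] + (tgt["yaw_deg"] - src["yaw_deg"]) * alpha,
        }
    return blended
```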
An embodiment of the present application provides an electronic device, including:
a memory for storing computer executable instructions;
And the processor is used for realizing the data processing method provided by the embodiment of the application when executing the computer executable instructions stored in the memory.
The embodiment of the application provides a computer readable storage medium which stores computer executable instructions for realizing the data processing method provided by the embodiment of the application when being executed by a processor.
The embodiment of the application provides a computer program product, which comprises computer executable instructions, wherein the computer executable instructions are executed by a processor to realize the data processing method provided by the embodiment of the application.
The embodiment of the application has the following beneficial effects:
By acquiring the source transition animation and the target transition animation, the basic animations required for realizing the animation transition are obtained. By acquiring the first spatial gesture feature of the object model of the source animation frame corresponding to each time stamp and the second spatial gesture feature of the object model of the target animation frame corresponding to each time stamp, the spatial gesture information of the two object models at each time stamp is characterized. For each time stamp, the gesture of the object model of the target animation frame corresponding to the time stamp is adjusted based on the first spatial gesture feature and the second spatial gesture feature corresponding to the time stamp, so that the gesture to be mixed of the target animation frame corresponding to the time stamp is obtained and matches the gesture of the object model of the source animation frame. For each time stamp, the gesture of the object model of the source animation frame corresponding to the time stamp is mixed with the gesture to be mixed of the object model of the target animation frame corresponding to the time stamp to obtain the mixed gesture corresponding to the time stamp, so that the mixed gesture combines the spatial gesture characteristics of both. The mixed gestures of the plurality of time stamps are rendered to obtain the mixed animation of the source transition animation and the target transition animation. Because the gesture of the object model of the source animation frame and the gesture to be mixed of the object model of the target animation frame no longer differ unreasonably at any time stamp, the mixed gesture obtained by mixing them conforms to the expected effect of the gesture transition, and problems such as motion aliasing and sliding of the object model in the mixed animation rendered from the mixed gestures are avoided, thereby improving the animation mixing effect.
Drawings
FIG. 1 is a schematic diagram of a data processing system according to an embodiment of the present application;
FIG. 2 is a schematic structural diagram of a server according to an embodiment of the present application;
FIG. 3A is a schematic diagram of a first flow chart of a data processing method according to an embodiment of the present application;
FIG. 3B is a schematic diagram of a second flow chart of a data processing method according to an embodiment of the present application;
FIG. 3C is a third flow chart of a data processing method according to an embodiment of the present application;
FIG. 3D is a fourth flowchart of a data processing method according to an embodiment of the present application;
FIG. 3E is a fifth flowchart of a data processing method according to an embodiment of the present application;
FIG. 4 is a schematic diagram of an animation mixing interval according to an embodiment of the present application;
FIG. 5 is a schematic diagram of an animation mixing process according to an embodiment of the present application;
FIG. 6 is a schematic diagram of a gravity center analysis provided by an embodiment of the present application;
FIG. 7 is a diagram showing an initial pose comparison of poses to be mixed according to an embodiment of the present application;
FIG. 8 is a schematic diagram of a process for adjusting the posture to be mixed according to the embodiment of the present application;
FIG. 9A is a first schematic diagram of an adjustment result of the posture to be mixed according to an embodiment of the present application;
FIG. 9B is a second schematic diagram of an adjustment result of the posture to be mixed according to an embodiment of the present application.
Detailed Description
The present application will be further described in detail with reference to the accompanying drawings, for the purpose of making the objects, technical solutions and advantages of the present application more apparent, and the described embodiments should not be construed as limiting the present application, and all other embodiments obtained by those skilled in the art without making any inventive effort are within the scope of the present application.
In the following description, reference is made to "some embodiments" which describe a subset of all possible embodiments, but it is to be understood that "some embodiments" can be the same subset or different subsets of all possible embodiments and can be combined with one another without conflict.
In the following description, the terms "first", "second", "third" and the like are merely used to distinguish similar objects and do not represent a specific ordering of the objects, it being understood that the "first", "second", "third" may be interchanged with a specific order or sequence, as permitted, to enable embodiments of the application described herein to be practiced otherwise than as illustrated or described herein.
In the present embodiment, the term "module" or "unit" refers to a computer program or a part of a computer program having a predetermined function and working together with other relevant parts to achieve a predetermined object, and may be implemented in whole or in part by using software, hardware (such as a processing circuit or a memory), or a combination thereof. Also, a processor (or multiple processors or memories) may be used to implement one or more modules or units. Furthermore, each module or unit may be part of an overall module or unit that incorporates the functionality of the module or unit.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. The terminology used in the embodiments of the application is for the purpose of describing embodiments of the application only and is not intended to be limiting of the application.
In the embodiments of the present application, when the examples are applied, the collection and processing of relevant data should strictly comply with the requirements of the relevant national laws and regulations, the informed consent or separate consent of the personal information subject should be obtained, and subsequent data use and processing should be carried out within the scope authorized by the laws and regulations and by the personal information subject.
Before describing the embodiments of the present application in further detail, the terms and terminology involved in the embodiments of the present application are described; these explanations apply throughout the following description.
1) Cross-fade mixing: a conventional fade-in/fade-out animation mixing technique.
2) Gesture spatial position: typically the spatial location of the root bone.
3) Root Bone: the base bone of a skeleton. Unlike other bones, the root bone is not intended to represent a particular part of the skeleton, such as a leg or an arm, but serves as a reference point for the entire skeletal structure.
4) Gesture spatial orientation: an estimated direction that represents the overall orientation of the character, obtained by performing orientation analysis on bones capable of representing the orientation of the character's gesture in space, which can be any one or more of the pelvis bone, the spine bones or the sternum bone.
5) Source animation: the animation being played when the transition starts.
6) Target animation: the animation to be transitioned to.
7) Source animation transition start frame: the animation frame the source animation is playing when the transition starts.
8) Target animation transition start frame: the animation frame at which the target animation starts playing when the transition starts.
9) Model space: also called the world coordinate system; the spatial coordinate system in which the skeletal model is located. All bone nodes have corresponding spatial positions in the coordinate system of model space.
10) Local space: also called the joint coordinate system; the spatial coordinate system whose origin is a bone's own node and in which the joint nodes taking that bone as their parent node are located.
The simple cross-fade scheme works well in some cases, such as transitions between actions whose ending gestures are close, for example a blend from walking to running in a Blend Space of the Unreal Engine (UE). It performs less well when the difference between the two motions is relatively large. So far, mainstream engine animation mixing technologies such as Unity and Unreal Engine adopt only the mature fade-in/fade-out mixing (cross-fade), that is, linear interpolation is applied in the animation transition, and this linear interpolation can cause some strange animation aliasing problems, such as the sliding (foot-slide) problem. When fade-in/fade-out mixing is performed, the source animation and the target animation are required to overlap over a certain time interval and are then mixed; this time interval is called the mixing time interval, and the animation interval corresponding to the mixing time interval is called the animation mixing interval. There are two common ways to perform fade-in/fade-out mixing (a simplified sketch of both follows):
1. Smooth transition: the source animation and the target animation are played simultaneously, and the interpolation factor changes smoothly from 0 to 1; in this case a certain degree of matching is required between the motion of the source animation and the motion of the target animation.
2. Frozen transition: when the target animation starts to play, the source animation stops at its last frame and the target animation takes over; each frame is mixed with the last frame of the source animation, with the interpolation factor smoothed from 0 to 1; in this case the motion of the source animation can be completely unrelated to the motion of the target animation.
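The conventional cross-fade described above can be illustrated with a minimal, self-contained sketch (poses are simplified to per-bone positions; the function and data names are illustrative and not taken from any engine):

```python
def lerp(a, b, t):
    """Linear interpolation of two equal-length tuples."""
    return tuple(x + (y - x) * t for x, y in zip(a, b))

def cross_fade(source_frames, target_frames, frozen=False):
    """source_frames / target_frames: lists of {bone: (x, y, z)} poses sampled at the
    same timestamps of the mixing time interval. frozen=True holds the source on its
    last frame (frozen transition); otherwise both animations keep playing (smooth)."""
    n = len(target_frames)
    blended = []
    for i in range(n):
        alpha = i / (n - 1) if n > 1 else 1.0            # interpolation factor 0 -> 1
        src = source_frames[-1] if frozen else source_frames[i]
        tgt = target_frames[i]
        blended.append({bone: lerp(src[bone], tgt[bone], alpha) for bone in tgt})
    return blended

# Tiny usage example with a single "root" bone over three frames.
walk = [{"root": (0.0, 0.0, 0.0)}, {"root": (0.0, 0.0, 10.0)}, {"root": (0.0, 0.0, 20.0)}]
run  = [{"root": (0.0, 0.0, 0.0)}, {"root": (0.0, 0.0, 20.0)}, {"root": (0.0, 0.0, 40.0)}]
print(cross_fade(walk, run))  # gradually shifts from the walk pose to the run pose
```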
In carrying out the embodiments of the present application, the present inventors have found that the following problems exist in the prior art:
1. Only the mixing of the skeletal animation in the time dimension is considered, and the spatial gesture information carried by the skeletal animation is ignored, so that the motions in the transition animation become aliased and the animation mixing effect is poor.
2. The object model of the source animation frame and the object model of the target animation frame are made to correspond simply by spatial position, while the spatial gesture change trends corresponding to different actions are ignored, causing the motions in the transition animation to be aliased.
The embodiments of the present application provide a data processing method, apparatus, electronic device, computer readable storage medium, and computer program product, which can improve animation mixing effect, and the following describes an exemplary application of the electronic device provided by the embodiments of the present application, where the device provided by the embodiments of the present application may be implemented as a notebook computer, a tablet computer, a desktop computer, a set-top box, a mobile device (for example, a mobile phone, a portable music player, a personal digital assistant, a dedicated messaging device, a portable game device), a smart phone, a smart speaker, a smart watch, a smart television, a vehicle-mounted terminal, and other various types of user terminals, and may also be implemented as a server. In the following, an exemplary application when the device is implemented as a server will be described.
With reference to fig. 1, fig. 1 is a schematic diagram of an architecture of a data processing system 100 according to an embodiment of the present application, in order to support a data processing application, a terminal 400 is connected to a server 200 through a network 300, where the network 300 may be a wide area network or a local area network, or a combination of both.
The terminal 400 is configured to generate an animation mixing request. For example, a user selects a source transition animation and a target transition animation through a graphical interface 410 of the terminal 400 and issues an animation mixing processing instruction; in response to the animation mixing processing instruction, the terminal 400 generates an animation mixing request based on the source transition animation and the target transition animation. The server 200 is configured to, based on the animation mixing request, obtain a first spatial gesture feature of the object model of the source animation frame corresponding to each time stamp and a second spatial gesture feature of the object model of the target animation frame corresponding to each time stamp; for each time stamp, adjust the gesture of the object model of the target animation frame based on the first spatial gesture feature and the second spatial gesture feature corresponding to the time stamp to obtain the gesture to be mixed of the target animation frame; mix the gesture of the object model of the source animation frame corresponding to the time stamp with the gesture to be mixed of the object model of the target animation frame to obtain the mixed gesture corresponding to the time stamp; and render the mixed gestures of the plurality of time stamps to obtain the mixed animation of the source transition animation and the target transition animation.
In some embodiments, the data processing method provided by the embodiment of the application can be used in the field of game production. For example, in the game production stage, the object model needs to perform different actions according to different game operation instructions, and a corresponding animation needs to be produced for each action. Since game operation instructions usually arrive continuously, different actions need to be connected: at the game animation level, the source animation corresponding to the current action needs to be connected with the target animation corresponding to the next action. At this time, the source animation and the target animation are mixed: the data processing method provided by the embodiment of the application is performed on the source transition animation and the target transition animation corresponding to the animation mixing interval to obtain the corresponding mixed animation, the portions of the source animation and the target animation corresponding to the animation mixing interval are replaced by the mixed animation, and the mixed animation is connected with the portions of the source animation and the target animation outside the animation mixing interval. A game animation is thus obtained in which the current action is performed, the current action transitions to the next action, and the next action is performed, so that game animations with smooth action transitions can be obtained, thereby improving the animation mixing effect.
The data processing method provided by the embodiment of the application can also be used in the field of animation production. For example, in the animation production process, in order to save time, animation segments corresponding to different time periods may be produced in parallel. After the different animation segments are completed, they need to be connected. When the actions of the object models of animation segments that are adjacent in time are different, the source animation corresponding to the current action needs to be connected with the target animation corresponding to the next action. At this time, the source animation and the target animation are mixed: the data processing method provided by the embodiment of the application is performed on the source transition animation and the target transition animation corresponding to the animation mixing interval to obtain the corresponding mixed animation, the portions corresponding to the animation mixing interval are replaced by the mixed animation, and the mixed animation is connected with the source animation and the target animation outside the animation mixing interval, so that an animation segment is obtained in which the current action is performed, the current action transitions to the next action, and the next action is performed. Repeating these operations yields a complete animation with natural transition effects, thereby improving the animation mixing effect.
The data processing method provided by the embodiment of the application can also be applied to the field of virtual live streaming. For example, the virtual character of a virtual live stream needs to perform different actions according to different content, and when linking different actions, the animations corresponding to the different actions need to be transitioned. The animation corresponding to the action the virtual character has already presented is taken as the source animation, and the animation corresponding to the action the virtual character is about to present is taken as the target animation. At this time, the source animation and the target animation are mixed: the data processing method provided by the embodiment of the application is performed on the source transition animation and the target transition animation corresponding to the animation mixing interval to obtain the corresponding mixed animation, the source transition animation and the target transition animation corresponding to the animation mixing interval are replaced by the mixed animation, and the mixed animation is connected with the source animation and the target animation outside the animation mixing interval, so that a mixed animation transitioning from the action the virtual character has already presented to the action to be presented is obtained. The action transitions of the virtual character are thus natural, improving the animation mixing effect.
The electronic device for executing the data processing method provided by the embodiment of the present application may be various types of terminal devices or servers, in some embodiments, the server 200 may be an independent physical server, or may be a server cluster or a distributed system formed by a plurality of physical servers, or may be a cloud server that provides cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, content distribution networks (Content Delivery Network, CDN), and basic cloud computing services such as big data and artificial intelligence platforms. The terminal 400 may be, but is not limited to, a smart phone, a tablet computer, a notebook computer, a desktop computer, a smart speaker, a smart watch, a smart voice interaction device, a smart home appliance, a vehicle-mounted terminal, an aircraft, etc. The terminal and the server may be directly or indirectly connected through wired or wireless communication, which is not limited in the embodiment of the present application.
Referring to fig. 2, fig. 2 is a schematic structural diagram of a server 200 according to an embodiment of the present application, and the server 200 shown in fig. 2 includes: at least one processor 210, a memory 250, at least one network interface 220, and a user interface 230. The various components in server 200 are coupled together by bus system 240. It is understood that the bus system 240 is used to enable connected communications between these components. The bus system 240 includes a power bus, a control bus, and a status signal bus in addition to the data bus. But for clarity of illustration the various buses are labeled as bus system 240 in fig. 2.
The processor 210 may be an integrated circuit chip having signal processing capabilities, such as a general-purpose processor, a Digital Signal Processor (DSP), or another programmable logic device, discrete gate or transistor logic device, or discrete hardware component, etc.; the general-purpose processor may be a microprocessor or any conventional processor, etc.
The user interface 230 includes one or more output devices 231, including one or more speakers and/or one or more visual displays, that enable presentation of media content. The user interface 230 also includes one or more input devices 232, including user interface components that facilitate user input, such as a keyboard, mouse, microphone, touch screen display, camera, other input buttons and controls.
The memory 250 may be removable, non-removable, or a combination thereof. Exemplary hardware devices include solid state memory, hard drives, optical drives, and the like. Memory 250 optionally includes one or more storage devices physically located remote from processor 210.
Memory 250 includes volatile memory or non-volatile memory, and may also include both volatile and non-volatile memory. The non-volatile memory may be a Read-Only Memory (ROM), and the volatile memory may be a Random Access Memory (RAM). The memory 250 described in the embodiments of the present application is intended to comprise any suitable type of memory.
In some embodiments, memory 250 is capable of storing data to support various operations, examples of which include programs, modules and data structures, or subsets or supersets thereof, as exemplified below.
An operating system 251 including system programs for handling various basic system services and performing hardware-related tasks, such as a framework layer, a core library layer, a driver layer, etc., for implementing various basic services and handling hardware-based tasks;
A network communication module 252 for reaching other electronic devices via one or more (wired or wireless) network interfaces 220; exemplary network interfaces 220 include: Bluetooth, Wireless Fidelity (Wi-Fi), Universal Serial Bus (USB), etc.;
In some embodiments, the data processing apparatus provided in the embodiments of the present application may be implemented in software, and fig. 2 shows the data processing apparatus 253 stored in the memory 250, which may be software in the form of a program, a plug-in, or the like, including the following software modules: the acquisition module 2531, feature acquisition module 2532, adjustment processing module 2533, blending processing module 2534, and animation processing module 2535 are logical, and thus may be arbitrarily combined or further split depending on the functions implemented. The functions of the respective modules will be described hereinafter.
In other embodiments, the data processing apparatus provided in the embodiments of the present application may be implemented in hardware. By way of example, the data processing apparatus provided in the embodiments of the present application may be a processor in the form of a hardware decoding processor that is programmed to perform the data processing method provided in the embodiments of the present application; for example, the processor in the form of a hardware decoding processor may be one or more Application-Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Programmable Logic Devices (PLDs), Complex Programmable Logic Devices (CPLDs), Field-Programmable Gate Arrays (FPGAs), or other electronic components.
In some embodiments, a terminal or server may implement the data processing method provided by the embodiments of the present application by running various computer-executable instructions or computer programs. For example, the computer-executable instructions may be micro-program-level commands, machine instructions, or software instructions. The computer program may be a native program or a software module in an operating system; it may be a native application (APP), i.e., a program that needs to be installed in the operating system to run, such as an animation APP or a game APP; or an applet, i.e., a program that only needs to be downloaded into a browser environment to run. In general, the computer-executable instructions may be any form of instructions and the computer program may be any form of application, module, or plug-in.
In the following, the data processing method provided by the embodiment of the present application will be described in connection with exemplary applications and implementations of the server provided by the embodiment of the present application.
It should be noted that, in the following description of the data processing, the object model is described by taking a human model as an example; based on the following, those skilled in the art can apply the data processing method provided in the embodiments of the present application to mixed animation processing involving other types of object models.
Referring to fig. 3A, fig. 3A is a schematic flow chart of a data processing method according to an embodiment of the present application, and will be described with reference to steps 101 to 105 shown in fig. 3A.
In step 101, a source transition animation and a target transition animation are acquired.
In some embodiments, step 101 may be implemented by: acquiring a source animation and a target animation; acquiring an animation mixing interval in which a source animation and a target animation overlap on a time sequence; and taking the part corresponding to the animation mixing interval in the source animation as a source transition animation, and taking the part corresponding to the animation mixing interval in the target animation as a target transition animation.
For example, when the animation mixing process is performed, in order to link the animation content of the source animation with the animation content of the target animation, the animation mixing time period is set, the source animation and the target animation corresponding to the animation mixing time period are overlapped in time sequence, the overlapped portion is used as an animation mixing section, the portion corresponding to the animation mixing section in the source animation is used as a source transition animation, and the portion corresponding to the animation mixing section in the target animation is used as a target transition animation. The source transition animation includes a plurality of source animation frames in one-to-one correspondence with the plurality of time stamps, and the target transition animation includes a plurality of target animation frames in one-to-one correspondence with the plurality of time stamps.
The source animation corresponding to the animation mixing interval is used as the source transition animation, and the target animation corresponding to the animation mixing interval is used as the target transition animation, so that subsequent animation mixing processing is performed, the obtained mixed animation can be tightly connected with the source animation and the target animation, and the animation mixing effect is improved.
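A minimal sketch of deriving the two transition animations from the overlapping interval, under assumed data structures (timestamped frame lists; the helper names are illustrative):

```python
def mixing_interval(source_span, target_span):
    """Each span is (start_time, end_time) on the shared timeline; the animation
    mixing interval is where the two animations overlap."""
    start = max(source_span[0], target_span[0])
    end = min(source_span[1], target_span[1])
    if start >= end:
        raise ValueError("source and target animations do not overlap in time")
    return start, end

def clip_to_interval(frames, interval):
    """frames: list of (timestamp, pose) pairs; keep only frames inside the interval."""
    start, end = interval
    return [(t, pose) for t, pose in frames if start <= t <= end]

# Example: the source plays on [0, 2] s and the target on [1.5, 4] s,
# so the animation mixing interval is [1.5, 2] s.
print(mixing_interval((0.0, 2.0), (1.5, 4.0)))  # (1.5, 2.0)
```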
In step 102, a first spatial pose feature of an object model of a source animation frame corresponding to each timestamp is obtained, and a second spatial pose feature of an object model of a target animation frame corresponding to each timestamp is obtained.
As an example, the first spatial pose feature of the object model of the source animation frame comprises at least one of a first spatial position feature, a first spatial orientation feature, and a first center of gravity feature, and the second spatial pose feature of the object model in the target animation frame comprises at least one of a second spatial position feature, a second spatial orientation feature, and a second center of gravity feature.
Referring to fig. 3B, fig. 3B is a schematic diagram of a second flow of the data processing method according to the embodiment of the present application. In some embodiments, the obtaining the first spatial pose characteristics of the object model of the source animation frame corresponding to each timestamp in step 102 of fig. 3A may be implemented through steps 1021 through 1024 shown in fig. 3B, which is described in detail below.
Performing at least one of the following processes on the object model of each source animation frame:
In step 1021, spatial position feature extraction processing is performed on the object model of the source animation frame, so as to obtain a first spatial position feature of the object model of the corresponding source animation frame.
As an example, the object model of the source animation frame has a first spatial position in model space, which may be represented by the spatial data of the root bone of the object model in model space. Spatial position feature extraction processing is performed on the first spatial position to obtain the first spatial position feature of the corresponding object model. For example, the first spatial position of the root bone representing the object model is 0 on the X axis, 105 on the Y axis and 42 on the Z axis, and the first spatial position feature of the corresponding object model is (X_P, Y_P, Z_P) = (0, 105, 42), where X_P denotes the position on the X axis, Y_P the position on the Y axis, and Z_P the position on the Z axis.
In step 1022, the object model of the source animation frame is subjected to a spatial orientation feature extraction process, so as to obtain a first spatial orientation feature of the object model of the corresponding source animation frame.
As an example, the object model of the source animation frame has a first spatial orientation in model space, which may be represented by spatial data of the root bone of the object model in model space, or by spatial data of user-defined orientation-representative bones in model space. Spatial orientation feature extraction processing is performed on this data to obtain the first spatial orientation feature of the corresponding object model. For example, the first spatial orientation of the object model is represented by the feet bones, the sternum bone and the pelvis bone of the object model: the rotation angle of the feet bones is 30 degrees about the X axis, 0 degrees about the Y axis and 0 degrees about the Z axis; the rotation angle of the sternum bone is 50 degrees about the X axis, 5 degrees about the Y axis and 0 degrees about the Z axis; and the rotation angle of the pelvis bone is 35 degrees about the X axis, 0 degrees about the Y axis and 0 degrees about the Z axis, where the X axis is the vertical coordinate axis. Spatial orientation feature extraction processing is performed on the feet bones, the sternum bone and the pelvis bone of the object model respectively, so that the horizontal rotation angle of the feet bones is 30 degrees, that of the sternum bone is 50 degrees, and that of the pelvis bone is 35 degrees. According to the angle weights of the different bones, for example feet weight = 0.2, sternum weight = 0.4 and pelvis weight = 0.4, the first spatial orientation feature of the object model is obtained as R1 = 30 degrees × 0.2 + 50 degrees × 0.4 + 35 degrees × 0.4 = 40 degrees.
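The position and orientation feature extraction of steps 1021 and 1022 can be sketched as follows, assuming a simple per-bone dictionary pose representation (the bone names, the yaw_deg field and the weights are illustrative assumptions; the numbers reproduce the 40-degree example above):

```python
def spatial_position_feature(pose):
    """pose: {bone_name: {"position": (x, y, z), "yaw_deg": float}}."""
    # Gesture spatial position: the spatial location of the root bone.
    return pose["root"]["position"]

def spatial_orientation_feature(pose, weights=None):
    # Gesture spatial orientation: horizontal rotation (about the vertical axis)
    # of each orientation-representative bone, blended with per-bone weights.
    weights = weights or {"feet": 0.2, "sternum": 0.4, "pelvis": 0.4}
    return sum(pose[bone]["yaw_deg"] * w for bone, w in weights.items())

example_pose = {
    "root":    {"position": (0, 105, 42), "yaw_deg": 0.0},
    "feet":    {"position": (0, 0, 40),   "yaw_deg": 30.0},
    "sternum": {"position": (0, 120, 45), "yaw_deg": 50.0},
    "pelvis":  {"position": (0, 95, 42),  "yaw_deg": 35.0},
}
print(spatial_position_feature(example_pose))               # (0, 105, 42)
print(round(spatial_orientation_feature(example_pose), 3))  # 40.0
```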
In step 1023, a barycenter feature extraction process is performed on the object model of the source animation frame, so as to obtain a first barycenter feature of the object model of the corresponding source animation frame.
As an example, when the object model of the source animation frame is in a single-limb touchdown state, for example only the left foot of the object model touches the ground, the grounded left foot is taken as the first barycentric limb corresponding to the first barycentric feature. When the object model of the source animation frame is in a multi-limb touchdown state, for example both the left foot and the right foot of the object model touch the ground, an adjacent source animation frame adjacent to the source animation frame is acquired; the third ground projection positions of the left foot and the right foot of the object model of the source animation frame are acquired, and the fourth ground projection positions of the left foot and the right foot of the object model of the adjacent source animation frame are acquired; the third ground projection positions are mixed to obtain third barycenter position data of the object model of the source animation frame, and the fourth ground projection positions are mixed to obtain fourth barycenter position data of the object model of the adjacent source animation frame; and the barycenter movement direction is determined based on the third barycenter position data and the fourth barycenter position data. For example, if the barycenter movement direction determined from the third barycenter position data and the fourth barycenter position data is from left to right, the right foot is determined to be the first barycentric limb corresponding to the first barycentric feature of the object model of the source animation frame.
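The centre-of-gravity analysis described above can be sketched as follows; the ground-projection representation and the rule of picking the limb the centre of gravity moves towards are assumptions for illustration only:

```python
def ground_centroid(grounded):
    """grounded: {limb_name: (x, z) ground projection}; blend into one barycenter point."""
    xs = [p[0] for p in grounded.values()]
    zs = [p[1] for p in grounded.values()]
    return (sum(xs) / len(xs), sum(zs) / len(zs))

def center_of_gravity_limb(grounded_now, grounded_next):
    if len(grounded_now) == 1:                      # single-limb touchdown state
        return next(iter(grounded_now))
    c_now = ground_centroid(grounded_now)           # barycenter position of this frame
    c_next = ground_centroid(grounded_next)         # barycenter position of the adjacent frame
    move = (c_next[0] - c_now[0], c_next[1] - c_now[1])  # barycenter movement direction
    # Assumed rule: the barycentric limb is the grounded limb the barycenter moves towards.
    def alignment(limb):
        dx, dz = grounded_now[limb][0] - c_now[0], grounded_now[limb][1] - c_now[1]
        return dx * move[0] + dz * move[1]
    return max(grounded_now, key=alignment)

# Example: barycenter moving from left to right -> the right foot is the barycentric limb.
now = {"left_foot": (-10.0, 0.0), "right_foot": (10.0, 0.0)}
nxt = {"left_foot": (-6.0, 0.0),  "right_foot": (10.0, 0.0)}
print(center_of_gravity_limb(now, nxt))  # right_foot
```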
In step 1024, a first spatial pose feature is determined based on at least one of the first spatial position feature, the first spatial orientation feature, and the first center of gravity feature.
As an example, the first spatial position feature of the object model of the source animation frame is (X_P, Y_P, Z_P) = (0, 105, 42), the first spatial orientation feature of the object model is R1 = 40 degrees, and the first barycentric limb of the first barycentric feature is the right foot of the object model; the first spatial pose feature of the object model of the source animation frame is then determined by the first spatial position feature (0, 105, 42), the first spatial orientation feature of 40 degrees and the first barycentric feature (right foot).
The spatial position feature extraction processing performed on the object model of the source animation frame yields the first spatial position feature, which characterizes the spatial position of the object model of the source animation frame; the spatial orientation feature extraction processing yields the first spatial orientation feature, which characterizes the spatial orientation of the object model of the source animation frame; and the barycenter feature extraction processing yields the first barycenter feature. At least one of the first spatial position feature, the first spatial orientation feature and the first barycenter feature is taken as the first spatial gesture feature of the object model of the source animation frame and serves as the basis for adjusting the gesture of the object model of the target animation frame, ensuring that the adjusted gesture matches the gesture of the object model of the source animation frame so that the mixed gesture is natural, thereby improving the animation mixing effect.
Referring to fig. 3C, fig. 3C is a schematic third flow chart of a data processing method according to an embodiment of the application. In some embodiments, the second spatial pose feature of the object model of the target animation frame corresponding to each timestamp in step 102 of fig. 3A may be implemented through steps 1025 to 1028 shown in fig. 3C, which is described in detail below.
The following processing is performed on the object model of each target animation frame:
in step 1025, spatial location feature extraction processing is performed on the object model of the target animation frame, so as to obtain a second spatial location feature of the object model of the corresponding target animation frame.
In some embodiments, step 1025 may be implemented by: acquiring first space data of a root skeleton of an object model of a target animation frame; and carrying out spatial position feature extraction processing on the first spatial data of the root bones to obtain second spatial position features of the corresponding object model.
As an example, the object model of the target animation frame has a second spatial position in model space, which may be represented by the first spatial data of the root bone of the object model in model space. Spatial position feature extraction processing is performed on the first spatial data of the root bone of the object model of the target animation frame to obtain the second spatial position feature of the corresponding object model. For example, the first spatial data of the root bone representing the object model is 0 on the X axis, 95 on the Y axis and 32 on the Z axis, and the second spatial position feature of the corresponding object model is (X_P, Y_P, Z_P) = (0, 95, 32), where X_P denotes the position on the X axis, Y_P the position on the Y axis, and Z_P the position on the Z axis.
Performing spatial position feature extraction processing on the first spatial data of the root bone of the object model of the target animation frame yields a second spatial position of the object model of the target animation frame that has reference significance. In the adjustment processing, the gesture of the object model of the target animation frame can therefore be effectively aligned in spatial position with the gesture of the object model of the source animation frame, so that the motion of the object model presented in the finally formed mixed animation does not undergo abnormal changes in spatial position, thereby improving the animation mixing effect.
In step 1026, the object model of the target animation frame is subjected to a spatial orientation feature extraction process, so as to obtain a second spatial orientation feature of the object model of the corresponding target animation frame.
In some embodiments, step 1026 may be implemented by: acquiring second spatial data of each main skeleton of an object model of a target animation frame; carrying out space orientation feature extraction processing on the second space data of each main skeleton to obtain the horizontal rotation angle of the main skeleton; and carrying out mixed processing on the horizontal rotation angles of the main bones to obtain a second space orientation characteristic of the object model corresponding to the target animation frame.
As an example, the bipedal bones, the sternal bone and the pelvic bone of the object model of the target animation frame are taken as the main bones characterizing its spatial orientation, where the X axis is the coordinate axis in the vertical direction. The second spatial data of the bipedal bones is a rotation angle of 25 degrees on the X axis, 0 degrees on the Y axis and 0 degrees on the Z axis; the second spatial data of the sternal bone is a rotation angle of 40 degrees on the X axis, -5 degrees on the Y axis and 0 degrees on the Z axis; and the second spatial data of the pelvic bone is a rotation angle of 35 degrees on the X axis, 0 degrees on the Y axis and 0 degrees on the Z axis. Spatial orientation feature extraction processing is performed on the second spatial data of the bipedal bones, the sternal bone and the pelvic bone respectively, giving a horizontal rotation angle of 25 degrees for the bipedal bones, 40 degrees for the sternal bone and 35 degrees for the pelvic bone. According to the angle weights corresponding to the different bones, for example a bipedal weight of 0.2, a sternal weight of 0.4 and a pelvic weight of 0.4, the second spatial orientation feature of the object model of the target animation frame is R2 = 25 degrees × 0.2 + 40 degrees × 0.4 + 35 degrees × 0.4 = 35 degrees.
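A minimal sketch of this weighted blend follows; the bone names and weight values are just the example figures above and are assumptions, not values fixed by the method.

```python
def spatial_orientation_feature(horizontal_angles: dict, weights: dict) -> float:
    """Blend the horizontal rotation angles (rotation about the vertical axis)
    of the main bones into a single orientation feature, in degrees."""
    return sum(horizontal_angles[bone] * weights[bone] for bone in horizontal_angles)

angles = {"feet": 25.0, "sternum": 40.0, "pelvis": 35.0}   # degrees about the vertical axis
weights = {"feet": 0.2, "sternum": 0.4, "pelvis": 0.4}      # preset per-bone weights
print(spatial_orientation_feature(angles, weights))          # 35.0 degrees
```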
Performing spatial orientation feature extraction processing on the second spatial data of the plurality of main bones that characterize the spatial orientation of the object model of the target animation frame yields a spatial orientation of the object model of the target animation frame that has reference significance, so that in the subsequent adjustment processing the pose of the object model of the target animation frame can be effectively aligned in spatial orientation with the pose of the object model of the source animation frame according to actual needs, the spatial orientation of the object model presented in the finally formed mixed animation does not undergo abnormal distortion, and the animation mixing effect is improved.
In step 1027, the object model of the target animation frame is subjected to gravity center feature extraction processing, so as to obtain a second gravity center feature of the object model of the corresponding target animation frame.
In some embodiments, step 1027 may be implemented by: when the object model of the target animation frame is in a single-limb touchdown state, the touchdown limb is taken as a gravity limb; when the object model of the target animation frame is in a multi-limb touchdown state, the object model of the target animation frame is subjected to gravity center movement analysis processing to obtain a gravity center movement direction, and a gravity center limb is determined based on the gravity center movement direction.
As an example, when the object model of the target animation frame is in a single-limb touchdown state, for example only the left foot of the object model is in contact with the horizontal plane, it can be determined according to physical rules that the only touching limb, i.e. the left foot, necessarily serves as the barycentric limb of the object model of the target animation frame. When the object model of the target animation frame is in a multi-limb touchdown state, for example both the left foot and the right foot are in contact with the horizontal plane, the barycentric limb cannot be determined from the touchdown state alone. According to physical rules, the center of gravity movement direction of the object model is related to the barycentric limb; for example, if the center of gravity of the object model moves from left to right, the barycentric limb of the object model is the right foot. Therefore, in the multi-limb touchdown state, center of gravity movement analysis processing is performed on the object model of the target animation frame to obtain the center of gravity movement direction, and the barycentric limb of the object model of the target animation frame is determined based on that direction.
In some embodiments, the foregoing center of gravity movement analysis processing on the object model of the target animation frame to obtain the center of gravity movement direction may be implemented in the following manner: acquiring an adjacent target animation frame adjacent to the target animation frame; acquiring a first ground projection position of each limb of the object model of the target animation frame, and acquiring a second ground projection position of each limb of the object model of the adjacent target animation frame; mixing the first ground projection positions of the plurality of limbs of the object model of the target animation frame to obtain first center of gravity position data of the object model of the target animation frame; mixing the second ground projection positions of the plurality of limbs of the object model of the adjacent target animation frame to obtain second center of gravity position data of the object model of the adjacent target animation frame; and determining the center of gravity movement direction based on the first center of gravity position data and the second center of gravity position data.
As an example, the adjacent target animation frame corresponds to a timestamp adjacent to the timestamp corresponding to the target animation frame. To determine the center of gravity movement direction of the object model of the target animation frame, the first center of gravity position of the object model of the target animation frame and the second center of gravity position of the object model of the adjacent target animation frame are determined, and the direction in which the first center of gravity position has moved relative to the second center of gravity position gives the center of gravity movement direction. For example, when both the left foot and the right foot of the object model of the target animation frame are in the touchdown state, the first ground projection position of the left foot of the object model of the target animation frame is acquired as (Y_P1, Z_P1) = (10, 5), and that of the right foot as (10, 25); the first ground projection positions of the left foot and the right foot are mixed with a left foot weight of 0.5 and a right foot weight of 0.5, giving first center of gravity position data G1 = (10 × 0.5 + 10 × 0.5, 5 × 0.5 + 25 × 0.5) = (10, 15) for the object model of the target animation frame. The second ground projection position of the left foot of the object model of the adjacent target animation frame is acquired as (Y_P2, Z_P2) = (10, 15), and that of the right foot as (10, 25); the second ground projection positions of the left foot and the right foot are mixed with the same weights of 0.5, giving second center of gravity position data G2 = (10 × 0.5 + 10 × 0.5, 15 × 0.5 + 25 × 0.5) = (10, 20) for the object model of the adjacent target animation frame. From the first center of gravity position data G1 = (10, 15) and the second center of gravity position data G2 = (10, 20), it can be seen that the first center of gravity position has shifted from left to right relative to the second center of gravity position, that is, the center of gravity movement direction is from left to right.
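A minimal sketch of this two-dimensional analysis follows; the equal 0.5/0.5 limb weights mirror the example above, and the mapping of the sign of the displacement to "left to right" depends on the coordinate convention and is an assumption for illustration only.

```python
def blended_ground_position(projections: dict, weights: dict) -> tuple:
    """Mix the ground (Y, Z) projections of the touching limbs into one center position."""
    y = sum(projections[limb][0] * weights[limb] for limb in projections)
    z = sum(projections[limb][1] * weights[limb] for limb in projections)
    return (y, z)

def center_of_gravity_direction(g_current: tuple, g_adjacent: tuple) -> str:
    """Compare the blended positions of the target frame and its adjacent frame."""
    delta = g_current[1] - g_adjacent[1]  # movement along the Z axis on the ground plane
    if delta == 0:
        return "no horizontal movement"
    return "left to right" if delta < 0 else "right to left"  # assumed sign convention

g1 = blended_ground_position({"left": (10, 5), "right": (10, 25)}, {"left": 0.5, "right": 0.5})
g2 = blended_ground_position({"left": (10, 15), "right": (10, 25)}, {"left": 0.5, "right": 0.5})
print(g1, g2, center_of_gravity_direction(g1, g2))  # (10, 15) (10, 20) left to right
```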
The first center of gravity position data of the object model of the target animation frame is determined from the first ground projection positions of its plurality of limbs, the second center of gravity position data of the object model of the adjacent target animation frame is determined from the second ground projection positions of its plurality of limbs, and the center of gravity movement direction of the object model of the target animation frame is determined based on the first and second center of gravity position data. In this way, when multiple limbs of the object model of the target animation frame touch the ground, the barycentric limb is determined according to the center of gravity movement direction, and the pose of the object model of the target animation frame is adjusted according to that barycentric limb, so that the barycentric limb of the object model in the finally obtained mixed animation does not undergo abnormal position changes, thereby improving the animation mixing effect.
The barycentric limb of the object model of the target animation frame is determined in a manner corresponding to the different touchdown states of its limbs: in the single-limb touchdown state the grounded limb is taken as the barycentric limb, and in the multi-limb touchdown state the barycentric limb is determined according to the center of gravity movement direction of the object model of the target animation frame. In the subsequent adjustment processing, barycentric limb alignment processing is carried out on the pose of the object model of the target animation frame according to actual needs, so that the change in motion pose of the object model presented in the finally obtained mixed animation conforms to physical rules and to visual expectations, thereby improving the animation mixing effect.
In step 1028, a second spatial pose feature is determined based on at least one of the second spatial position feature, the second spatial orientation feature, and the second centroid feature.
As an example, the second spatial position feature of the object model of the target animation frame is (X_P, Y_P, Z_P) = (0, 95, 32), the second spatial orientation feature of the object model is R2 = 35 degrees, and the second center of gravity feature indicates that the barycentric limb is the right foot of the object model; the second spatial pose feature of the object model of the target animation frame is then determined from the second spatial position feature (0, 95, 32), the second spatial orientation feature of 35 degrees and the second center of gravity feature (the right foot).
Spatial position feature extraction processing is performed on the object model of the target animation frame to obtain a second spatial position feature that characterizes the spatial position of the object model of the target animation frame; spatial orientation feature extraction processing is performed on the object model of the target animation frame to obtain a second spatial orientation feature that characterizes its spatial orientation; and center of gravity feature extraction processing is performed on the object model of the target animation frame to obtain a second center of gravity feature. At least one of the second spatial position feature, the second spatial orientation feature and the second center of gravity feature is taken as the second spatial pose feature of the object model of the target animation frame, and serves as the basic spatial pose feature for adjusting the pose of the object model of the target animation frame, so that at least one of the spatial position, spatial orientation and center of gravity of the to-be-mixed pose of the target animation frame obtained through subsequent adjustment matches the pose of the object model of the source animation frame, and the change in motion pose of the object model presented in the finally obtained mixed animation conforms to physical rules and to visual expectations, thereby improving the animation mixing effect.
With continued reference to fig. 3A, in step 103, for each timestamp, based on the first spatial pose feature and the second spatial pose feature of the corresponding timestamp, the pose of the object model of the target animation frame of the corresponding timestamp is adjusted, so as to obtain the pose to be mixed of the target animation frame of the corresponding timestamp.
As an example, the first spatial pose feature includes a first spatial position feature, a first spatial orientation feature, and a first center of gravity feature, and the second spatial pose feature includes a second spatial position feature, a second spatial orientation feature, and a second center of gravity feature. And for the source animation frame and the target animation frame corresponding to each time stamp, according to the first spatial posture characteristics of the object model of the source animation frame, adjusting the second spatial posture characteristics of the object model of the target animation frame, namely adjusting the posture of the object model of the target animation frame, and obtaining the to-be-mixed posture of the object model of the target animation frame corresponding to the time stamp.
Referring to fig. 3D, fig. 3D is a fourth flowchart of a data processing method according to an embodiment of the present application. In some embodiments, step 103 shown in fig. 3A may be implemented by steps 1031 to 1033 shown in fig. 3D, which are described in detail below.
In step 1031, a position adjustment process corresponding to the first spatial position feature is performed on the second spatial position feature based on the posture of the object model of the target animation frame, to obtain the posture of the object model of the target animation frame after the position adjustment.
As an example, the object model of the target animation frame and the object model of the source animation frame are represented in the same model space; the pose of the object model of the source animation frame corresponds to the first spatial pose feature, and the pose of the object model of the target animation frame corresponds to the second spatial pose feature. In this step, the object model of the target animation frame is first aligned in spatial position with the object model of the source animation frame: with the first spatial position feature (0, 105, 42) as the standard, the second spatial position feature (0, 95, 32) is adjusted to (0, 105, 42), that is, position adjustment processing corresponding to the first spatial position feature is performed on the second spatial position feature, so as to obtain the pose of the object model of the target animation frame after the position adjustment.
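One way to realize this position adjustment is to translate the whole target pose by the offset between the two position features; the following is a minimal sketch under that assumption, with illustrative bone names.

```python
def align_position(bone_positions: dict, second_pos: tuple, first_pos: tuple) -> dict:
    """Shift every bone position of the target pose by the offset between the
    first (source) and second (target) spatial position features."""
    offset = tuple(f - s for f, s in zip(first_pos, second_pos))  # e.g. (0, 10, 10)
    return {bone: tuple(p + o for p, o in zip(pos, offset))
            for bone, pos in bone_positions.items()}

# Example values from the text: (0, 95, 32) is adjusted to (0, 105, 42)
adjusted = align_position({"root": (0, 95, 32), "left_hand": (5, 120, 30)},
                          second_pos=(0, 95, 32), first_pos=(0, 105, 42))
print(adjusted["root"])  # (0, 105, 42)
```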
In step 1032, based on the posture of the object model of the target animation frame after the position adjustment, the orientation alignment process is performed based on the second spatial orientation feature and the first spatial orientation feature, and the posture of the object model of the target animation frame after the orientation alignment is obtained.
In some embodiments, step 1032 may be implemented by: in response to the second spatial orientation feature not being designated to be reserved, performing an orientation alignment process corresponding to the first spatial orientation feature on the second spatial orientation feature based on the pose of the object model of the target animation frame after the position adjustment, to obtain the pose of the object model of the target animation frame after the orientation alignment; responsive to the second spatial orientation feature designation being preserved, a pose of the object model of the position-adjusted target animation frame is taken as a pose of the object model of the orientation-aligned target animation frame.
As an example, after the position adjustment, the spatial orientation of the object model of the target animation frame may also need to be adjusted, depending on actual needs. If the spatial orientation of the object model of the target animation frame after the position adjustment does not match the spatial orientation of the object model of the source animation frame, the spatial orientation of the object model of the target animation frame needs to be adjusted. In that case, in response to an instruction that the second spatial orientation feature is not designated to be preserved, the object model of the target animation frame is rotated horizontally, based on its pose after the position adjustment, with the first spatial orientation feature of 40 degrees as the standard, so that its orientation changes from the second spatial orientation feature of 35 degrees to 40 degrees; that is, orientation alignment processing corresponding to the first spatial orientation feature is performed on the second spatial orientation feature, so as to obtain the pose of the object model of the target animation frame after the orientation alignment. If the spatial orientation of the object model of the target animation frame after the position adjustment already matches the spatial orientation of the object model of the source animation frame, the spatial orientation of the object model of the target animation frame does not need to be adjusted, and the current pose of the object model of the target animation frame is taken as the pose of the object model of the target animation frame after the orientation alignment. Matching here does not necessarily mean being identical; any matching relationship that meets the visual requirements is acceptable.
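A minimal sketch of the conditional orientation alignment is given below. It assumes, as in the example above, that the X axis is vertical so the horizontal rotation happens in the Y-Z plane, and that rotating every bone position about a pivot is an acceptable simplification; both are assumptions for illustration.

```python
import math

def rotate_about_vertical(point: tuple, degrees: float, pivot=(0.0, 0.0, 0.0)) -> tuple:
    """Rotate an (x, y, z) point about the vertical (X) axis through `pivot`."""
    rad = math.radians(degrees)
    x, y, z = point[0], point[1] - pivot[1], point[2] - pivot[2]
    return (x,
            y * math.cos(rad) - z * math.sin(rad) + pivot[1],
            y * math.sin(rad) + z * math.cos(rad) + pivot[2])

def align_orientation(bone_positions: dict, second_orientation: float,
                      first_orientation: float, keep_target_orientation: bool,
                      pivot=(0.0, 0.0, 0.0)) -> dict:
    if keep_target_orientation:
        return bone_positions            # the second spatial orientation feature is preserved
    delta = first_orientation - second_orientation   # e.g. 40 - 35 = 5 degrees
    return {bone: rotate_about_vertical(pos, delta, pivot)
            for bone, pos in bone_positions.items()}

print(align_orientation({"left_hand": (0.0, 1.0, 0.0)}, 35.0, 40.0, False))
```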
According to the actual situation, when the object model of the target animation frame needs to be subjected to orientation alignment processing, the first spatial orientation characteristic of the source animation frame is taken as a standard, and the spatial orientation of the object model of the target animation frame is subjected to orientation alignment processing based on the second spatial orientation characteristic of the object model of the target animation frame, so that on one hand, the adjustment processing efficiency is improved, on the other hand, the spatial orientation of the object model of the target animation frame is ensured to be matched with the spatial orientation of the object model of the source animation frame, further, the subsequently obtained mixed gesture meets the visual requirement, and the action gesture presented by the finally obtained mixed animation is smooth and natural, so that the animation mixing effect is improved.
In step 1033, based on the gesture of the object model of the aligned target animation frame, center of gravity alignment processing is performed based on the first center of gravity feature, and a to-be-mixed gesture of the target animation frame is obtained.
As an example, the first center of gravity feature may characterize one of the following: the object model of the source animation frame does not have a barycentric limb, or the object model of the source animation frame has a barycentric limb. The second center of gravity feature may characterize one of the following: the object model of the target animation frame does not have a barycentric limb, or the object model of the target animation frame has a barycentric limb.
In some embodiments, step 1033 may be implemented by: when the first barycenter characteristic represents that the object model of the source animation frame has barycenter limbs and the second barycenter characteristic represents that the object model of the target animation frame has barycenter limbs, carrying out barycenter alignment processing based on the second barycenter characteristic and the first barycenter characteristic to obtain a to-be-mixed gesture of the target animation frame; when the first barycenter characteristic represents that the object model of the source animation frame has barycenter limbs and the second barycenter characteristic represents that the object model of the target animation frame does not have barycenter limbs, carrying out barycenter alignment processing based on the first barycenter characteristic to obtain a to-be-mixed gesture of the target animation frame; when the first barycenter characteristic represents that the object model of the source animation frame does not have barycenter limbs, taking the gesture of the object model of the aligned target animation frame as the gesture to be mixed of the target animation frame.
As an example, since the first center of gravity feature and the second center of gravity feature each characterize two cases, their combination characterizes four cases. In the case where the object model of the source animation frame does not have a barycentric limb and the object model of the target animation frame has a barycentric limb, the barycentric limb, which serves as the body fulcrum, should not be displaced when the motion pose of the object model transitions from the source animation to the target animation; therefore, in order for the motion pose of the object model of the source animation frame to transition naturally to that of the target animation frame, the object model of the target animation frame and the object model of the source animation frame need to be aligned based on the barycentric limb of the object model of the target animation frame. For example, if the barycentric limb of the object model of the target animation frame is the left foot, the left foot of the object model of the target animation frame is aligned with the left foot of the object model of the source animation frame. In the case where the object model of the source animation frame has a barycentric limb and the object model of the target animation frame does not, by the same principle the barycentric limb of the object model of the source animation frame should not be displaced; therefore, when the frame-by-frame adjustment processing is performed, the object model of the target animation frame and the object model of the source animation frame need to be aligned based on the barycentric limb of the object model of the source animation frame. For example, if the barycentric limb of the object model of the source animation frame is the right foot, the right foot of the object model of the target animation frame is aligned with the right foot of the object model of the source animation frame. In the case where neither the object model of the source animation frame nor the object model of the target animation frame has a barycentric limb, center of gravity alignment based on a barycentric limb is mainly intended to avoid motion aliasing and foot sliding; when no barycentric limb exists, even if some sliding occurs it is visually reasonable, so no center of gravity alignment processing is required, and the pose of the object model of the target animation frame after alignment is simply taken as the to-be-mixed pose of the target animation frame.
For the case where the object model of the source animation frame has a barycentric limb and the object model of the target animation frame also has a barycentric limb, center of gravity alignment processing needs to be carried out on the object model of the source animation frame and the object model of the target animation frame based on the second center of gravity feature and the first center of gravity feature, so as to obtain the to-be-mixed pose of the target animation frame.
In some embodiments, the above-mentioned processing of aligning the center of gravity based on the second center of gravity feature and the first center of gravity feature to obtain the gesture to be mixed of the target animation frame may be implemented by the following ways: when the first barycentric limb corresponding to the first barycentric feature is the same as the second barycentric limb corresponding to the second barycentric feature, performing barycentric alignment processing on the object model of the target animation frame based on the first barycentric limb or the second barycentric limb to obtain the gesture to be mixed of the target animation frame; and when the first barycenter limb corresponding to the first barycenter feature is different from the second barycenter limb corresponding to the second barycenter feature, performing barycenter alignment processing on the object model of the target animation frame based on the second barycenter limb to obtain the to-be-mixed gesture of the target animation frame.
As an example, when the first barycentric limb of the object model of the source animation frame is the same as the second barycentric limb of the object model of the target animation frame, for example both are the left foot, the left foot of the object model of the source animation frame and the left foot of the object model of the target animation frame are aligned, and the pose of the object model of the target animation frame after alignment is the to-be-mixed pose of the target animation frame. When the first barycentric limb of the object model of the source animation frame differs from the second barycentric limb of the object model of the target animation frame, for example the first barycentric limb is the right foot and the second barycentric limb is the left foot, the motion pose presented by the source animation is already completed, and the main task of the animation transition is to link the motion pose presented by the target animation to it so that the target motion pose can be presented well. Therefore, the second barycentric limb of the object model of the target animation frame is used as the standard: the left foot of the object model of the source animation frame and the left foot of the object model of the target animation frame are aligned, so that when the motion pose transitions from the source animation frame to the target animation frame, the left foot serving as the barycentric limb is not displaced, that is, no foot sliding occurs, and the pose of the object model of the target animation frame after alignment is the to-be-mixed pose of the target animation frame.
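The case analysis above, together with the cases discussed earlier and the worked flow in step 503 below, can be summarised in a minimal sketch; the handling of the case where only the target frame has a barycentric limb follows that worked flow, and the limb names are illustrative.

```python
def choose_alignment_limb(first_limb, second_limb):
    """first_limb / second_limb are the barycentric limbs of the source and target
    object models (e.g. "left_foot"), or None when no barycentric limb exists."""
    if first_limb is None and second_limb is None:
        return None            # no center of gravity alignment is needed
    if second_limb is not None:
        return second_limb     # same limb, differing limbs, or only the target has one
    return first_limb          # only the source has a barycentric limb

print(choose_alignment_limb("left_foot", "left_foot"))   # left_foot
print(choose_alignment_limb("right_foot", "left_foot"))  # left_foot (target takes priority)
print(choose_alignment_limb("right_foot", None))         # right_foot
```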
When the first barycentric limb of the object model of the source animation frame is the same as the second barycentric limb of the object model of the target animation frame, that common barycentric limb is used to align the object model of the source animation frame with the object model of the target animation frame; when they differ, the second barycentric limb of the object model of the target animation frame is used as the standard for the alignment. In this way, in the finally obtained mixed animation, the barycentric limb of the object model does not shift when the motion pose of the source animation transitions to the motion pose of the target animation, that is, no foot sliding occurs, thereby improving the animation mixing effect.
According to different conditions corresponding to the first gravity center characteristic of the object model of the source animation frame and the second gravity center characteristic of the object model of the target animation frame, different alignment processing methods are determined, so that the to-be-mixed gesture of the target animation frame obtained after alignment processing is matched with the gesture of the object model of the source animation frame according to each condition, the mixed gesture obtained by subsequent mixing processing according to the to-be-mixed gesture of the target animation frame and the gesture of the object model of the source animation frame is ensured to meet visual requirements, and the action gesture presented by the finally obtained mixed animation is further enabled to be transited naturally, so that the animation mixing effect is improved.
The spatial position of the object model of the target animation frame is adjusted according to the first spatial position feature of the object model of the source animation frame, so that the two spatial positions match. According to the actual situation, the spatial orientation of the object model of the target animation frame is either adjusted according to the first spatial orientation feature of the object model of the source animation frame or preserved as the second spatial orientation feature, so that the two spatial orientations match. Likewise, according to the actual situation, the object model of the source animation frame and the object model of the target animation frame are either aligned based on their centers of gravity or left without center of gravity alignment processing. As a result, when the motion pose of the finally obtained mixed animation transitions from the source animation to the target animation, the pose of the object model does not exhibit visual defects such as motion distortion or foot sliding, thereby improving the animation mixing effect.
With continued reference to fig. 3A, in step 104, for each timestamp, a pose of the object model of the source animation frame corresponding to the timestamp and a pose to be mixed of the object model of the target animation frame corresponding to the timestamp are mixed, so as to obtain a mixed pose corresponding to the timestamp.
Referring to fig. 3E, fig. 3E is a fifth flowchart of a data processing method according to an embodiment of the present application. In some embodiments, step 104 shown in fig. 3A may be implemented by steps 1041 to 1043 shown in fig. 3E, which are described in detail below.
In step 1041, source data of a plurality of bones of a pose of an object model of a source animation frame corresponding to a time stamp is acquired, and target data of a plurality of bones of a pose to be mixed of an object model of a target animation frame corresponding to a time stamp is acquired.
As an example, the plurality of bones includes a left-hand bone and a right-hand bone. The source data of the left-hand bone in the pose of the object model of the source animation frame corresponding to the timestamp is a description matrix A1, and the source data of the right-hand bone is a description matrix A2; the target data of the left-hand bone in the to-be-mixed pose of the object model of the target animation frame corresponding to the timestamp is a description matrix A3, and the target data of the right-hand bone is a description matrix A4. Each description matrix includes the bone's point positions on the X, Y and Z coordinate axes, its rotation angles around the X, Y and Z coordinate axes, and its scaling along the X, Y and Z coordinate axes.
In step 1042, for each bone, linear interpolation processing is performed on the source data of the bone and the target data of the bone to obtain mixed data of the corresponding bone.
As an example, for the left-hand bone, linear interpolation processing is performed on the source data of the left-hand bone and the target data of the left-hand bone, that is, on the description matrix A1 and the description matrix A3, to obtain an interpolated description matrix B, which is the mixed data corresponding to the left-hand bone. The value of the interpolation factor used for the linear interpolation processing may be determined according to actual needs; in the animation mixing application scenario, the interpolation factor gradually transitions from 0 to 1 as the timestamp advances.
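A minimal sketch of the per-bone linear interpolation follows; for brevity each description matrix is reduced to a flat tuple of position, rotation and scale components, which is a simplification of the matrix form described above.

```python
def lerp(a: float, b: float, t: float) -> float:
    return a + (b - a) * t

def blend_bone(source: tuple, target: tuple, t: float) -> tuple:
    """t is the interpolation factor, moving from 0 to 1 across the mixing interval."""
    return tuple(lerp(s, d, t) for s, d in zip(source, target))

def blend_pose(source_pose: dict, target_pose: dict, t: float) -> dict:
    return {bone: blend_bone(source_pose[bone], target_pose[bone], t)
            for bone in source_pose}

def interpolation_factor(timestamp: float, start: float, end: float) -> float:
    """Interpolation factor derived from the timestamp's position in the mixing interval."""
    return min(max((timestamp - start) / (end - start), 0.0), 1.0)

# halfway through the interval the blend sits midway between the two poses
print(blend_bone((0, 95, 32), (0, 105, 42), interpolation_factor(0.5, 0.0, 1.0)))
```

In practice, engines often interpolate rotations spherically (for example with quaternions) rather than componentwise; the componentwise form above simply follows the linear interpolation described in this step.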
In step 1043, rendering processing is performed based on the mixed data of the bones, so as to obtain a mixed pose of the object model of the source animation frame corresponding to the timestamp and the pose to be mixed of the object model of the target animation frame corresponding to the timestamp.
As an example, rendering is performed on the mixed data of the plurality of bones under the time stamp, so that a pose presented by an object model formed according to the mixed data of the plurality of bones can be obtained, wherein the pose is a mixed pose combining a pose of an object model of a source animation frame under the time stamp and a pose to be mixed of an object model of a target animation frame corresponding to the time stamp, and is a transition pose between the pose of the object model of the source animation frame under the time stamp and the pose to be mixed of the object model of the target animation frame corresponding to the time stamp.
The method comprises the steps of carrying out linear interpolation processing on the gesture of an object model of a source animation frame corresponding to each timestamp and the gesture of an object model of a target animation frame to obtain a transition gesture between the gesture of the object model of the source animation frame under the timestamp and the gesture to be mixed of the object model of the target animation frame corresponding to the timestamp, wherein the transition gesture is used as a mixed gesture between the gesture of the object model of the source animation frame under the corresponding timestamp and the gesture to be mixed of the object model of the target animation frame corresponding to the timestamp, so that the action presented by the mixed gesture meets the visual requirement, and the action gesture presented by the mixed animation generated according to the mixed gesture is ensured to be transited naturally, thereby improving the animation mixing effect.
With continued reference to fig. 3A, in step 105, a rendering process is performed on the mixed poses of the plurality of timestamps, resulting in a mixed animation of the source transitional animation and the target transitional animation.
As an example, after obtaining a plurality of mixed poses corresponding to a plurality of timestamps, generating a mixed animation frame corresponding to each timestamp according to the mixed pose corresponding to the timestamp, where the mixed animation frame corresponds to the mixed pose, so as to obtain a plurality of mixed animation frames corresponding to the mixed poses of the plurality of timestamps, and performing animation synthesis processing on the plurality of mixed animation frames corresponding to the mixed pose of the plurality of timestamps according to the time sequence of the plurality of timestamps, so as to obtain the mixed animation corresponding to the source transition animation and the target transition animation.
An exemplary application of embodiments of the present application in an actual basketball game animation hybrid application scenario will now be described.
Referring to fig. 4, fig. 4 is a schematic diagram of an animation mixing section according to an embodiment of the present application. As shown in fig. 4, when the source animation and the target animation are joined, an animation mixing time period is set, the source animation and the target animation corresponding to the animation mixing time period are overlapped in time sequence, the overlapped part is used as an animation mixing section, the part corresponding to the animation mixing section in the source animation is used as a source transition animation, and the part corresponding to the animation mixing section in the target animation is used as a target transition animation.
Referring to fig. 5, fig. 5 is a schematic diagram of an animation mixing process according to an embodiment of the present application. As shown in fig. 5, for the source transition animation and the target transition animation of the animation mixing section corresponding to the source animation and the target animation, respectively, the animation mixing process is performed, and may be mainly divided into 4 steps: step 501 is spatial position and spatial orientation analysis, namely, analyzing the spatial position and spatial orientation characteristics of the object model in the animation mixing interval; step 502 is a barycentric limb analysis; step 503 is to adjust the gesture to be mixed of the target animation frames according to the spatial position, spatial orientation, and the characteristics of the gesture of the object model of all the source animation frames in the animation mixing interval; step 504 is a mixed output, which mixes the adjusted pose to be mixed of the target animation frame with the pose of the object model of the source animation frame.
Step 501, spatial position and spatial orientation analysis: identify the second spatial position features and second spatial orientation features of all target animation frames in the animation mixing interval. These two features are somewhat subjective, so there are two ways to identify them. The first is the manually specified mode: in animation production, the spatial position and spatial orientation features of a character bone model's motion are generally described by the Root Bone. The Root Bone is the basic bone of the skeleton; unlike other bones it does not represent a particular body part such as a leg or an arm, but serves as a reference point for the whole skeleton structure, and in some cases, for example in basketball motion, its orientation does not match the subjectively perceived orientation of the bone model. For this reason a second way, multi-bone joint analysis, is introduced: the spatial position feature of the motion is still described by the Root Bone, while several main bones that can subjectively be considered to represent the orientation (including the Root Bone), such as the bipedal bones, the pelvic bone and the sternal bone, are selected, their coordinate spaces are unified, and their orientations are analyzed. The specific analysis process is as follows: the spatial pose of each main bone has a description matrix containing three pieces of information, namely its Position on the X, Y and Z coordinate axes, its Rotation around the X, Y and Z coordinate axes, and its Scale along the X, Y and Z coordinate axes. The X axis is set as the coordinate axis perpendicular to the horizontal plane, the rotation angle of each main bone in the X axis direction in the world coordinate system is taken as its spatial orientation information, and the orientation of the whole bone model in the world coordinate system is calculated by mixing according to preset weights. In this way the user can freely control the details of each animation transition.
Step 502, barycentric limb analysis: barycentric limb analysis identifies the center of gravity movement direction and the change of barycentric limb of the two animations in the animation mixing interval. Determining the barycentric limb requires analyzing the position change information of multiple limbs in all animation frames of the whole animation mixing interval, judging the touchdown state of the limbs, for example whether the left foot and the right foot touch the ground, and combining this with the center of gravity movement direction. The specific implementation is as follows: if only one foot of the object model touches the ground, that foot is the barycentric limb; if both feet touch the ground, the center of gravity movement direction of the object model is analyzed, for example from left to right or from right to left. An exact position is not needed and the analysis is performed only in a two-dimensional space, which saves a large amount of computation. Analysis of various motions shows that the position obtained by weight-mixing the positions of the pelvic bone and the sternal bone is suitable for characterizing the center of gravity in this algorithm; if the center of gravity moves from left to right, the right foot is taken as the barycentric limb, see fig. 6, which is a center of gravity analysis schematic diagram provided by an embodiment of the application. As shown in fig. 6, weight mixing is performed according to the positions of the pelvic bone and the sternal bone, specifically the projection position of the pelvic bone on the horizontal plane and the projection position of the sternal bone on the horizontal plane are mixed according to a first weight corresponding to the pelvic bone and a second weight corresponding to the sternal bone, to determine the center of gravity position of the object model on the horizontal plane. If neither foot touches the ground, there is no barycentric limb to determine, because barycentric limb determination is intended to solve the problem of the barycentric limb sliding during animation mixing when limbs are grounded and to reduce sliding of the barycentric limb on the ground. Once the barycentric limb has been analyzed, the animation mixing steps can be precisely controlled so that the barycentric limb slides minimally and the skeletal motion is smoother, improving the animation mixing quality.
With continued reference to fig. 5, step 503, to-be-mixed pose adjustment: according to the analyzed spatial position features, spatial orientation features and barycentric limbs, the pose of the object model of the target animation frame of the target transition animation, for example the initial frame of the mix, is adjusted according to a certain strategy. Referring to fig. 7, fig. 7 is an initial pose comparison diagram of a pose to be mixed according to an embodiment of the present application. As shown in fig. 7, in the basketball game application scenario the object model needs to complete different actions according to different basketball motion commands, and in the basketball game production stage the different actions corresponding to the different commands, such as a dribbling action, a shooting action and a passing action, need to be produced. After the object model of the source animation completes the current action corresponding to the current command, it transitions through the mixed animation to the target animation, so that the object model of the target animation completes the subsequent action corresponding to the next command. Before the spatial pose adjustment processing is performed, the pose of the object model of the target animation frame is in its initial state; at this time the object model of the source animation frame and the object model of the target animation frame differ in spatial position, spatial orientation and barycentric limb position, and the first center of gravity of the object model of the source animation frame also differs from the second center of gravity of the object model of the target animation frame, so the transition animation obtained by mixing under these conditions can exhibit problems such as foot sliding, abrupt movement and an unsmooth motion trajectory. Referring to fig. 8, fig. 8 is a schematic diagram of a process for adjusting a pose to be mixed according to an embodiment of the present application. As shown in fig. 8: ① First, the pose of the object model of the target animation frame and the pose of the object model of the source animation frame are aligned in spatial position, which reduces the pose deformation caused by the source animation and the target animation having been produced at different spatial positions.
② Next, the spatial orientation alignment of the pose is handled. Whether the target orientation is to be preserved, that is, whether the second spatial orientation feature of the object model of the target animation frame (its spatial orientation) is kept, is manually specified. If it is preserved, the pose is not aligned using the first spatial orientation feature of the object model of the source animation frame, and the processing proceeds directly to the barycentric limb analysis of the source animation frame. If it is not preserved, the pose is aligned in spatial orientation, that is, with the first spatial orientation feature of the object model of the source animation frame as the standard, the second spatial orientation feature of the object model of the target animation frame is adjusted to correspond to the first spatial orientation feature; this aligns the spatial orientation of the object model of the target animation frame with the spatial orientation of the object model of the source animation frame corresponding to the same timestamp, which makes the animation transition more natural and smooth, although it causes the spatial orientation of the object model of the target animation to deviate from the design intent. This is generally not a problem: animations with a strong design intent usually need to be handled with Motion Warping technology during the game anyway, and Motion Warping can compensate for the deviation caused by the orientation alignment. ③ After the orientation alignment, the barycentric limb of the object model of the source animation frame and the barycentric limb of the object model of the target animation frame need to be aligned. First, barycentric limb analysis of the source animation frame is performed, that is, it is determined whether the object model of the source animation frame has a first barycentric limb. If the first barycentric limb exists, it is determined whether the first barycentric limb is specified. When the first barycentric limb is specified, barycentric limb analysis of the target animation frame is performed, that is, the first barycentric limb is compared with the second barycentric limb of the object model of the target animation frame acquired in step 502: if they differ, the alignment transitions from the first barycentric limb to the second barycentric limb; if they are the same, the pose is aligned on the second barycentric limb. When the first barycentric limb is not specified, barycentric limb analysis of the target animation frame is performed, that is, whether a second barycentric limb exists is judged from the barycentric foot analysis result of the object model of the target animation frame obtained in step 502: when the second barycentric limb exists, the first and second barycentric limbs are compared, and the alignment transitions from the first barycentric limb to the second barycentric limb if they differ, or is performed on the second barycentric limb if they are the same; when the second barycentric limb does not exist, the pose is aligned on the first barycentric limb. If the object model of the source animation frame has no first barycentric limb, it is likewise determined whether the first barycentric limb is specified. When it is specified, barycentric limb analysis of the target animation frame is performed, and the first barycentric limb is specified according to the second barycentric limb of the object model of the target animation frame acquired in step 502, in which case the first barycentric limb is considered the same as the second barycentric limb. When it is not specified, barycentric limb analysis of the target animation frame is performed, that is, whether a second barycentric limb exists is determined from the barycentric foot analysis result obtained in step 502: if the second barycentric limb does not exist, neither the object model of the source animation frame nor the object model of the target animation frame has a barycentric limb, no center of gravity alignment processing is involved, and the pose is aligned only by spatial position; if the second barycentric limb exists, the pose is aligned on the second barycentric limb. Taking the case where the first barycentric limb and the second barycentric limb are both the left foot as an example, aligning the barycentric limbs means that, over the whole animation mixing interval, the left foot of the object model of the target animation frame corresponding to every timestamp is at the same position as the left foot of the object model of the source animation frame, which greatly alleviates the foot sliding problem; even if the right foot, as a non-barycentric limb, slides somewhat, this is visually reasonable. For the case where the barycentric limbs of the source animation and the target animation differ, transitioning between barycentric limbs in this way is reasonable.
Referring to fig. 9A, fig. 9A is a first schematic diagram of an adjustment result of a pose to be mixed according to an embodiment of the present application. As shown in fig. 9A, in the basketball game application scenario, when the motion pose of the object model transitions from the source action presented by the source animation to the target action presented by the target animation, the motion trajectory of the object model in the basketball motion is continuous, so the pose of the object model of the source animation frame and the to-be-mixed pose of the object model of the target animation frame corresponding to the same timestamp should always be the same or similar in spatial position and spatial orientation; moreover, in the basketball motion the barycentric limb serving as the body fulcrum does not move, so the two poses corresponding to the same timestamp also need to be aligned on the barycentric limb. After the spatial pose adjustment processing according to spatial position, spatial orientation and barycentric limb, the pose of the object model of the target animation frame is in the to-be-mixed state; at this time the spatial position and spatial orientation of the object model of the source animation frame match those of the object model of the target animation frame, and the object model of the source animation frame is aligned with the object model of the target animation frame according to the barycentric foot of the object model, namely the left foot, so that during the transition between the two actions the left foot serving as the barycentric limb is not displaced and the transition is completed naturally.
Referring to fig. 9B, fig. 9B is a second schematic diagram of a pose adjustment result to be mixed according to an embodiment of the present application. As shown in fig. 9B, in the basketball game application scenario, because the motion trajectory of the object model in the basketball motion is continuous and the barycentric limb of the object model serving as the body fulcrum does not move, the pose of the object model of the source animation frame and the to-be-mixed pose of the object model of the target animation frame corresponding to the same timestamp need to be the same or similar in spatial position and spatial orientation and need to be aligned on the barycentric limb. After the spatial pose adjustment processing according to spatial position, spatial orientation and barycentric limb, the pose of the object model of the target animation frame is in the to-be-mixed state; at this time the object model of the source animation frame matches the object model of the target animation frame in spatial position and spatial orientation, and the object model of the source animation frame is aligned with the object model of the target animation frame according to the second barycentric limb, namely the right foot, so that in the transition animation the right foot serving as the barycentric limb remains in place and the first barycentric limb of the object model of the source animation frame does not need to be used for the alignment.
With continued reference to fig. 5, step 504, mixed output: the adjusted to-be-mixed pose of the target animation frame corresponding to each timestamp is mixed with the pose of the object model of the source animation frame, and the mixed pose is output. That is, linear interpolation is performed between the matrix description of the pose of the source animation frame and the matrix description of the adjusted pose for the same bone, to obtain the mixed pose of the two poses, where the interpolation factor of the linear interpolation gradually transitions from 0 to 1. The purpose of this step is to mix the pose adjusted in the previous step with the pose of the source animation frame and then output the result to the engine for rendering, finally obtaining the mixed animation corresponding to the source transition animation and the target transition animation. The position and orientation are adjusted in the local coordinate space of the target animation, the result is converted to model space for aligning the barycentric feet, the poses of the source animation and the target animation are mixed, and the mixed pose is converted back to local space and output to the engine.
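The per-frame ordering described in this step (adjust in local space, convert to model space for barycentric alignment, blend, convert back) can be summarised in a minimal sketch; every helper is a pass-through placeholder standing in for an engine-specific operation, not an actual API.

```python
def to_model_space(pose: dict) -> dict: return pose          # placeholder space conversion
def to_local_space(pose: dict) -> dict: return pose          # placeholder space conversion
def align_barycentric_limb(pose: dict, limb) -> dict: return pose  # placeholder alignment

def blend_poses(source_pose: dict, target_pose: dict, t: float) -> dict:
    """Per-bone linear interpolation with interpolation factor t in [0, 1]."""
    return {bone: tuple(s + (d - s) * t for s, d in zip(source_pose[bone], target_pose[bone]))
            for bone in source_pose}

def mix_frame(source_pose: dict, adjusted_target_pose: dict, limb, t: float) -> dict:
    pose_ms = align_barycentric_limb(to_model_space(adjusted_target_pose), limb)
    blended = blend_poses(to_model_space(source_pose), pose_ms, t)
    return to_local_space(blended)   # handed back to the engine for rendering

print(mix_frame({"root": (0, 105, 42)}, {"root": (0, 105, 42)}, "left_foot", 0.5))
```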
Continuing with the description of an exemplary architecture of the data processing apparatus 253 implemented as software modules provided by an embodiment of the present application, in some embodiments, as shown in fig. 2, the software modules of the data processing apparatus 253 stored in the memory 250 may include:
An acquisition module 2531, configured to acquire a source transition animation and a target transition animation, where the source transition animation includes a plurality of source animation frames in one-to-one correspondence with a plurality of time stamps, and the target transition animation includes a plurality of target animation frames in one-to-one correspondence with the plurality of time stamps; a feature acquisition module 2532, configured to acquire a first spatial pose feature of an object model of the source animation frame corresponding to each timestamp, and acquire a second spatial pose feature of an object model of the target animation frame corresponding to each timestamp; an adjustment processing module 2533, configured to adjust, for each timestamp, the pose of the object model of the target animation frame corresponding to the timestamp based on the first spatial pose feature and the second spatial pose feature of the corresponding timestamp, to obtain a pose to be mixed of the target animation frame corresponding to the timestamp; a mixing processing module 2534, configured to perform, for each timestamp, mixing processing on the pose of the object model of the source animation frame corresponding to the timestamp and the pose to be mixed of the object model of the target animation frame corresponding to the timestamp, to obtain a mixed pose corresponding to the timestamp; and an animation processing module 2535, configured to perform rendering processing based on the mixed poses of the plurality of timestamps, to obtain a mixed animation of the source transition animation and the target transition animation.
In some embodiments, the acquisition module 2531 is further configured to obtain a source animation and a target animation; acquire an animation mixing interval in which the source animation and the target animation overlap on the time sequence; and take the part of the source animation corresponding to the animation mixing interval as the source transition animation, and take the part of the target animation corresponding to the animation mixing interval as the target transition animation.
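As an illustrative aid, the extraction of the animation mixing interval can be sketched as follows; representing an animation by its start and end timestamps and the function name are assumptions made for this sketch.

```python
# Illustrative sketch only: an animation is assumed to be described by its
# start and end timestamps; the overlapping range of the source and target
# animations becomes the transition (mixing) interval.
def animation_mixing_interval(src_start, src_end, tgt_start, tgt_end):
    """Return the (start, end) interval where the two animations overlap."""
    start = max(src_start, tgt_start)
    end = min(src_end, tgt_end)
    if start >= end:
        return None  # no overlap, so no transition can be blended
    return start, end
```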
In some embodiments, the feature acquisition module 2532 is further configured to perform at least one of the following processing on the object model of each source animation frame: performing spatial position feature extraction processing on the object model of the source animation frame to obtain a first spatial position feature of the object model of the corresponding source animation frame; performing space orientation feature extraction processing on the object model of the source animation frame to obtain a first space orientation feature of the object model of the corresponding source animation frame; performing gravity center feature extraction processing on the object model of the source animation frame to obtain a first gravity center feature of the object model of the corresponding source animation frame; a first spatial pose feature is determined based on at least one of the first spatial position feature, the first spatial orientation feature, and the first center of gravity feature.
In some embodiments, the feature acquisition module 2532 is further configured to perform the following processing on the object model of each target animation frame: performing spatial position feature extraction processing on the object model of the target animation frame to obtain a second spatial position feature of the object model of the corresponding target animation frame; performing space orientation feature extraction processing on the object model of the target animation frame to obtain a second space orientation feature of the object model of the corresponding target animation frame; performing gravity center feature extraction processing on the object model of the target animation frame to obtain a second gravity center feature of the object model of the corresponding target animation frame; a second spatial pose feature is determined based on at least one of the second spatial position feature, the second spatial orientation feature, and the second centroid feature.
In some embodiments, the feature acquisition module 2532 is further configured to acquire first spatial data of a root skeleton of the object model of the target animation frame; and carrying out spatial position feature extraction processing on the first spatial data of the root bones to obtain second spatial position features of the corresponding object model.
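A minimal sketch of this root-bone based position feature extraction is given below; the data layout (a per-frame dictionary of world-space bone transforms stored as numpy arrays) and the bone name "root" are assumptions made for the example.

```python
# Illustrative sketch only: frame_bones is assumed to map bone names to
# 4x4 world-space transform matrices stored as numpy arrays.
import numpy as np

def extract_spatial_position_feature(frame_bones: dict, root_name: str = "root") -> np.ndarray:
    """Use the translation of the root bone as the spatial position feature."""
    root_transform = frame_bones[root_name]
    # The last column of a homogeneous 4x4 transform holds the translation.
    return root_transform[:3, 3].copy()
```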
In some embodiments, the feature acquisition module 2532 is further configured to acquire second spatial data of each main skeleton of the object model of the target animation frame; carrying out space orientation feature extraction processing on the second space data of each main skeleton to obtain the horizontal rotation angle of the main skeleton; and carrying out mixed processing on the horizontal rotation angles of the main bones to obtain a second space orientation characteristic of the object model corresponding to the target animation frame.
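The orientation feature extraction described above can be illustrated with the following sketch, assuming each main bone exposes a forward direction vector in world space; the use of a circular mean to mix the horizontal rotation angles is an assumption of this sketch rather than the prescribed mixing method.

```python
# Illustrative sketch only: each main bone is assumed to provide a forward
# direction vector (x, y, z) in world space; the horizontal rotation angle is
# its yaw in the ground (x-z) plane.
import math

def horizontal_rotation_angle(forward_xyz) -> float:
    """Yaw of a bone's forward vector projected onto the ground plane."""
    x, _, z = forward_xyz
    return math.atan2(x, z)

def extract_spatial_orientation_feature(main_bone_forwards) -> float:
    """Mix the per-bone horizontal rotation angles into a single facing angle."""
    angles = [horizontal_rotation_angle(f) for f in main_bone_forwards]
    # A circular mean avoids artifacts when angles straddle the -pi/pi boundary.
    sin_sum = sum(math.sin(a) for a in angles)
    cos_sum = sum(math.cos(a) for a in angles)
    return math.atan2(sin_sum, cos_sum)
```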
In some embodiments, the feature obtaining module 2532 is further configured to, when the object model of the target animation frame is in a single-limb touchdown state, take the touchdown limb as a barycentric limb; when the object model of the target animation frame is in a multi-limb touchdown state, the object model of the target animation frame is subjected to gravity center movement analysis processing to obtain a gravity center movement direction, and a gravity center limb is determined based on the gravity center movement direction.
In some embodiments, the feature acquisition module 2532 is further configured to acquire an adjacent target animation frame adjacent to the target animation frame; acquire a first ground projection position of each limb of the object model of the target animation frame, and acquire a second ground projection position of each limb of the object model of the adjacent target animation frame; mix the first ground projection positions of the plurality of limbs of the object model of the target animation frame to obtain first center of gravity position data of the object model of the target animation frame; mix the second ground projection positions of the plurality of limbs of the object model of the adjacent target animation frame to obtain second center of gravity position data of the object model of the adjacent target animation frame; and determine a center of gravity movement direction based on the first center of gravity position data and the second center of gravity position data.
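As an illustration of the center of gravity movement analysis, the following sketch projects the limb positions onto the ground plane, averages them to approximate the center of gravity, and takes the direction between adjacent frames; the averaging of projections and the function names are assumptions made for this example.

```python
# Illustrative sketch only: limb positions are assumed to be 3D numpy arrays
# (x, y, z) with y as the vertical axis.
import numpy as np

def ground_projection(position: np.ndarray) -> np.ndarray:
    """Project a 3D position onto the ground plane by dropping the height."""
    return np.array([position[0], 0.0, position[2]])

def center_of_gravity(limb_positions) -> np.ndarray:
    """Mix the ground projections of all limbs into one center position."""
    return np.mean([ground_projection(p) for p in limb_positions], axis=0)

def center_of_gravity_movement_direction(frame_limbs, adjacent_frame_limbs) -> np.ndarray:
    """Direction of center-of-gravity movement between two adjacent frames."""
    delta = center_of_gravity(adjacent_frame_limbs) - center_of_gravity(frame_limbs)
    norm = np.linalg.norm(delta)
    return delta / norm if norm > 0.0 else delta
```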
In some embodiments, the adjustment processing module 2533 is further configured to perform, on the basis of the pose of the object model of the target animation frame, position adjustment processing corresponding to the first spatial position feature on the second spatial position feature, to obtain the pose of the object model of the target animation frame after the position adjustment; perform, based on the pose of the object model of the target animation frame after the position adjustment, orientation alignment processing based on the second spatial orientation feature and the first spatial orientation feature, to obtain the pose of the object model of the target animation frame after the orientation alignment; and perform, on the basis of the pose of the object model of the target animation frame after the orientation alignment, center of gravity alignment processing based on the first center of gravity feature, to obtain the pose to be mixed of the target animation frame.
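The three-stage adjustment can be illustrated with the sketch below, which applies the position adjustment, the orientation alignment (skipped when the orientation is designated to be preserved) and the center of gravity alignment to a pose given as per-bone world transforms; the SpatialPoseFeature container, the helper functions and the choice of the vertical axis are assumptions introduced for the example, not the patent's concrete data structures.

```python
# Illustrative sketch only: a pose is a dict of bone name -> 4x4 world-space
# transform (numpy array); SpatialPoseFeature bundles the features described
# above and is an assumption of this sketch.
from dataclasses import dataclass, field
from typing import Dict, Optional
import numpy as np

@dataclass
class SpatialPoseFeature:
    position: np.ndarray                      # spatial position feature (root bone)
    yaw: float                                # spatial orientation feature
    barycentric_limb: Optional[str] = None    # center of gravity feature
    limb_positions: Dict[str, np.ndarray] = field(default_factory=dict)
    preserve_orientation: bool = False

def translate_pose(pose: dict, offset: np.ndarray) -> dict:
    """Shift every bone transform by the given world-space offset."""
    out = {}
    for bone, m in pose.items():
        m2 = m.copy()
        m2[:3, 3] += offset
        out[bone] = m2
    return out

def yaw_rotation(angle: float) -> np.ndarray:
    """4x4 rotation about the vertical (y) axis."""
    c, s = np.cos(angle), np.sin(angle)
    r = np.eye(4)
    r[0, 0], r[0, 2], r[2, 0], r[2, 2] = c, s, -s, c
    return r

def adjust_target_pose(target_pose: dict, src: SpatialPoseFeature, tgt: SpatialPoseFeature) -> dict:
    # 1) Position adjustment: match the source frame's spatial position feature.
    pose = translate_pose(target_pose, src.position - tgt.position)
    # 2) Orientation alignment, unless the target orientation is preserved:
    #    rotate about the (already matched) position so the facing angles agree.
    if not tgt.preserve_orientation:
        rot = yaw_rotation(src.yaw - tgt.yaw)
        pivot = src.position
        pose = translate_pose(pose, -pivot)
        pose = {bone: rot @ m for bone, m in pose.items()}
        pose = translate_pose(pose, pivot)
    # 3) Center of gravity alignment: snap the barycentric limb of the target
    #    pose onto the source frame's barycentric limb position.
    if src.barycentric_limb is not None and src.barycentric_limb in pose:
        limb = src.barycentric_limb
        pose = translate_pose(pose, src.limb_positions[limb] - pose[limb][:3, 3])
    return pose  # the pose to be mixed of the target animation frame
```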
In some embodiments, the adjustment processing module 2533 is further configured to, in response to the second spatial orientation feature not being designated to be preserved, perform orientation alignment processing corresponding to the first spatial orientation feature on the second spatial orientation feature based on the pose of the object model of the target animation frame after the position adjustment, to obtain the pose of the object model of the target animation frame after the orientation alignment; and in response to the second spatial orientation feature being designated to be preserved, take the pose of the object model of the position-adjusted target animation frame as the pose of the object model of the orientation-aligned target animation frame.
In some embodiments, the adjustment processing module 2533 is further configured to, when the first barycenter feature characterizes the object model of the source animation frame as having a barycenter limb and the second barycenter feature characterizes the object model of the target animation frame as having a barycenter limb, perform barycenter alignment processing based on the second barycenter feature and the first barycenter feature, and obtain a pose to be mixed of the target animation frame; when the first barycenter characteristic represents that the object model of the source animation frame has barycenter limbs and the second barycenter characteristic represents that the object model of the target animation frame does not have barycenter limbs, carrying out barycenter alignment processing based on the first barycenter characteristic to obtain a to-be-mixed gesture of the target animation frame; when the first barycenter characteristic represents that the object model of the source animation frame does not have barycenter limbs, taking the gesture of the object model of the aligned target animation frame as the gesture to be mixed of the target animation frame.
In some embodiments, the adjustment processing module 2533 is further configured to, when a first barycentric limb corresponding to the first barycentric feature is the same as a second barycentric limb corresponding to the second barycentric feature, perform barycentric alignment processing on the object model of the target animation frame based on the first barycentric limb or the second barycentric limb, to obtain a pose to be mixed of the target animation frame; and when the first barycenter limb corresponding to the first barycenter feature is different from the second barycenter limb corresponding to the second barycenter feature, performing barycenter alignment processing on the object model of the target animation frame based on the second barycenter limb to obtain the to-be-mixed gesture of the target animation frame.
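The selection of the limb used for center of gravity alignment, as described above, can be summarized in a small sketch; the function name and the use of strings to identify limbs are assumptions of the example.

```python
# Illustrative sketch only: limbs are identified by strings; None means the
# corresponding frame has no barycentric limb.
from typing import Optional

def select_alignment_limb(first_barycentric_limb: Optional[str],
                          second_barycentric_limb: Optional[str]) -> Optional[str]:
    """Return the limb used for center of gravity alignment, or None to skip it."""
    if first_barycentric_limb is None:
        # The source frame has no barycentric limb: no alignment is performed.
        return None
    if second_barycentric_limb is None:
        # Only the source frame has one: align using the first barycentric limb.
        return first_barycentric_limb
    if first_barycentric_limb == second_barycentric_limb:
        # Both limbs are the same: either can be used for alignment.
        return first_barycentric_limb
    # The limbs differ: align using the second barycentric limb.
    return second_barycentric_limb
```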
In some embodiments, the blending processing module 2534 is further configured to obtain source data of a plurality of bones of a pose of the object model of the source animation frame corresponding to the timestamp, and obtain target data of a plurality of bones of a pose to be blended of the object model of the target animation frame corresponding to the timestamp; for each bone, performing linear interpolation processing on the source data of the bone and the target data of the bone to obtain mixed data of the corresponding bone; rendering processing is carried out based on the mixed data of a plurality of bones, and the mixed gesture of the object model of the source animation frame corresponding to the time stamp and the gesture to be mixed of the object model of the target animation frame corresponding to the time stamp is obtained.
Embodiments of the present application provide a computer program product comprising computer-executable instructions stored in a computer-readable storage medium. The processor of the electronic device reads the computer-executable instructions from the computer-readable storage medium, and executes the computer-executable instructions, so that the electronic device performs the data processing method according to the embodiment of the present application.
Embodiments of the present application provide a computer-readable storage medium having stored therein computer-executable instructions which, when executed by a processor, cause the processor to perform a data processing method provided by embodiments of the present application, for example, a data processing method as illustrated in fig. 3A.
In some embodiments, the computer readable storage medium may be RAM, ROM, flash memory, magnetic surface memory, an optical disk, or a CD-ROM; or it may be any of various devices including one of, or any combination of, the above memories.
In some embodiments, computer-executable instructions may be written in any form of programming language, including compiled or interpreted languages, or declarative or procedural languages, in the form of programs, software modules, scripts, or code, and they may be deployed in any form, including as stand-alone programs or as modules, components, subroutines, or other units suitable for use in a computing environment.
As an example, computer-executable instructions may, but need not, correspond to files in a file system, may be stored in a portion of a file that holds other programs or data, such as in one or more scripts in a hypertext markup language (Hyper Text Markup Language, HTML) document, in a single file dedicated to the program in question, or in multiple coordinated files (e.g., files that store one or more modules, sub-programs, or portions of code).
As an example, computer-executable instructions may be deployed to be executed on one electronic device or on multiple electronic devices located at one site or distributed across multiple sites and interconnected by a communication network.
In summary, in the embodiment of the present application, the source transition animation and the target transition animation are acquired to obtain the basic animations required for realizing the animation transition. The first spatial pose feature of the object model of the source animation frame corresponding to each timestamp and the second spatial pose feature of the object model of the target animation frame corresponding to each timestamp are acquired, so as to characterize the spatial pose information of the two object models at each timestamp. For each timestamp, the spatial position, spatial orientation and barycentric limb of the object model of the target animation frame are adjusted based on the spatial position, spatial orientation and barycentric limb of the object model of the source animation frame, yielding the pose to be mixed of the target animation frame, which matches the pose of the object model of the source animation frame. For each timestamp, the pose of the object model of the source animation frame is then mixed with the pose to be mixed of the target animation frame to obtain the mixed pose of the corresponding timestamp, so that the object model of the source animation frame and the object model of the target animation frame at the same timestamp no longer exhibit unreasonable differences in spatial pose. Finally, the plurality of mixed poses corresponding one-to-one to the plurality of timestamps are rendered to obtain the mixed animation of the source transition animation and the target transition animation. The motion transition of the object model in the resulting mixed animation is therefore natural, and problems such as motion aliasing and sliding of the object model are avoided, thereby improving the animation mixing effect.
The foregoing is merely exemplary embodiments of the present application and is not intended to limit the scope of the present application. Any modification, equivalent replacement, improvement, etc. made within the spirit and scope of the present application are included in the protection scope of the present application.

Claims (16)

1. A method of data processing, the method comprising:
acquiring a source transition animation and a target transition animation, wherein the source transition animation comprises a plurality of source animation frames which are in one-to-one correspondence with a plurality of time stamps, and the target transition animation comprises a plurality of target animation frames which are in one-to-one correspondence with the plurality of time stamps;
Acquiring a first spatial attitude characteristic of an object model of a source animation frame corresponding to each timestamp, and acquiring a second spatial attitude characteristic of the object model of the target animation frame corresponding to each timestamp, wherein the first spatial attitude characteristic comprises a first spatial position characteristic, a first spatial orientation characteristic and a first gravity center characteristic, and the second spatial attitude characteristic comprises a second spatial position characteristic, a second spatial orientation characteristic and a second gravity center characteristic;
For each of the time stamps, the following processing is performed:
Based on the gesture of the object model of the target animation frame, performing position adjustment processing corresponding to the first spatial position feature on the second spatial position feature to obtain the gesture of the object model of the target animation frame after position adjustment;
Based on the gesture of the object model of the target animation frame after the position adjustment, in response to the second space orientation feature not being designated to be preserved, performing orientation alignment processing corresponding to the first space orientation feature on the second space orientation feature to obtain the gesture of the object model of the target animation frame after the orientation alignment;
Based on the gesture of the object model of the target animation frame after the orientation alignment, carrying out center-of-gravity alignment processing based on the first center-of-gravity characteristics to obtain a gesture to be mixed of the target animation frame;
for each time stamp, mixing the gesture of the object model of the source animation frame corresponding to the time stamp with the gesture to be mixed of the object model of the target animation frame corresponding to the time stamp to obtain a mixed gesture corresponding to the time stamp;
And rendering the mixed postures of the timestamps to obtain the mixed animation of the source transition animation and the target transition animation.
2. The method of claim 1, wherein the obtaining a source transition animation and a target transition animation comprises:
acquiring a source animation and a target animation;
Acquiring an animation mixing interval where the source animation and the target animation overlap on a time sequence;
and taking the part corresponding to the animation mixing interval in the source animation as the source transition animation, and taking the part corresponding to the animation mixing interval in the target animation as the target transition animation.
3. The method of claim 1, wherein
The obtaining the first spatial pose feature of the object model of the source animation frame corresponding to each timestamp includes:
the following processing is performed on the object model of each source animation frame:
Performing spatial position feature extraction processing on the object model of the source animation frame to obtain a first spatial position feature of the object model of the source animation frame;
performing space orientation feature extraction processing on the object model of the source animation frame to obtain a first space orientation feature of the object model of the source animation frame;
and carrying out gravity center feature extraction processing on the object model of the source animation frame to obtain a first gravity center feature of the object model corresponding to the source animation frame.
4. The method of claim 1, wherein
The obtaining the second spatial pose characteristics of the object model of the target animation frame corresponding to each timestamp includes:
The following processing is performed on the object model of each target animation frame:
Performing spatial position feature extraction processing on the object model of the target animation frame to obtain a second spatial position feature of the object model corresponding to the target animation frame;
performing space orientation feature extraction processing on the object model of the target animation frame to obtain a second space orientation feature of the object model of the corresponding target animation frame;
and carrying out gravity center feature extraction processing on the object model of the target animation frame to obtain a second gravity center feature of the object model corresponding to the target animation frame.
5. The method according to claim 4, wherein the performing spatial location feature extraction processing on the object model of the target animation frame to obtain the second spatial location feature of the object model corresponding to the target animation frame includes:
acquiring first space data of a root skeleton of an object model of the target animation frame;
And carrying out spatial position feature extraction processing on the first spatial data of the root bones to obtain second spatial position features corresponding to the object model.
6. The method according to claim 4, wherein the performing a spatial orientation feature extraction process on the object model of the target animation frame to obtain the second spatial orientation feature of the object model of the target animation frame includes:
Acquiring second spatial data of each main skeleton of an object model of the target animation frame;
carrying out space orientation feature extraction processing on the second space data of each main skeleton to obtain a horizontal rotation angle of the main skeleton;
And carrying out mixing treatment on the horizontal rotation angles of the main bones to obtain a second space orientation characteristic of the object model corresponding to the target animation frame.
7. The method of claim 4, wherein the second gravity center feature comprises a barycentric limb; and the performing gravity center feature extraction processing on the object model of the target animation frame to obtain a second gravity center feature of the object model of the target animation frame includes:
when the object model of the target animation frame is in a single-limb touchdown state, taking the touchdown limb as the barycentric limb;
when the object model of the target animation frame is in a multi-limb touchdown state, carrying out gravity center movement analysis processing on the object model of the target animation frame to obtain a gravity center movement direction, and determining the barycentric limb based on the gravity center movement direction.
8. The method according to claim 7, wherein the performing a center of gravity movement analysis process on the object model of the target animation frame to obtain a center of gravity movement direction includes:
Acquiring an adjacent target animation frame adjacent to the target animation frame;
acquiring a first ground projection position of each limb of the object model of the target animation frame, and acquiring a second ground projection position of each limb of the object model of the adjacent target animation frame;
Mixing the first ground projection positions of a plurality of limbs of the object model of the target animation frame to obtain first barycenter position data of the object model of the target animation frame;
Mixing the second ground projection positions of a plurality of limbs of the object model of the adjacent target animation frame to obtain second center of gravity position data of the object model of the adjacent target animation frame;
The center of gravity movement direction is determined based on the first center of gravity position data and the second center of gravity position data.
9. The method of claim 1, wherein
The method further comprises the steps of:
in response to the second space orientation feature being designated to be preserved, taking the gesture of the object model of the target animation frame after the position adjustment as the gesture of the object model of the target animation frame after the orientation alignment.
10. The method according to claim 1, wherein the performing the center of gravity alignment processing based on the first center of gravity feature based on the posture of the object model of the target animation frame after the orientation alignment, to obtain the to-be-mixed posture of the target animation frame, includes:
When the first barycenter characteristic represents that the object model of the source animation frame has a barycenter limb and the second barycenter characteristic represents that the object model of the target animation frame has the barycenter limb, carrying out barycenter alignment processing based on the second barycenter characteristic and the first barycenter characteristic to obtain a to-be-mixed gesture of the target animation frame;
When the first barycenter characteristic represents that the object model of the source animation frame has the barycenter limb and the second barycenter characteristic represents that the object model of the target animation frame does not have the barycenter limb, barycenter alignment processing is performed based on the first barycenter characteristic, and a to-be-mixed gesture of the target animation frame is obtained;
the method further comprises the steps of:
And when the first barycenter characteristic represents that the object model of the source animation frame does not have the barycenter limb, taking the gesture of the object model of the target animation frame after the alignment of the orientation as the gesture to be mixed of the target animation frame.
11. The method according to claim 10, wherein the performing the center of gravity alignment process based on the second center of gravity feature and the first center of gravity feature to obtain the pose to be mixed of the target animation frame includes:
When a first barycentric limb corresponding to the first barycentric feature is identical to a second barycentric limb corresponding to the second barycentric feature, performing barycentric alignment processing on an object model of the target animation frame based on the first barycentric limb or the second barycentric limb to obtain a to-be-mixed gesture of the target animation frame;
And when the first barycenter limb corresponding to the first barycenter feature is different from the second barycenter limb corresponding to the second barycenter feature, performing barycenter alignment processing on the object model of the target animation frame based on the second barycenter limb to obtain the gesture to be mixed of the target animation frame.
12. The method according to claim 1, wherein the mixing the pose of the object model of the source animation frame corresponding to the timestamp with the pose to be mixed of the object model of the target animation frame corresponding to the timestamp to obtain the mixed pose corresponding to the timestamp includes:
acquiring source data of a plurality of bones of the gesture of the object model of the source animation frame corresponding to the timestamp, and acquiring target data of a plurality of bones of the gesture to be mixed of the object model of the target animation frame corresponding to the timestamp;
For each bone, performing linear interpolation processing on source data of the bone and target data of the bone to obtain mixed data corresponding to the bone;
rendering processing is carried out based on the mixed data of the bones, and the mixed gesture of the object model of the source animation frame corresponding to the time stamp and the to-be-mixed gesture of the object model of the target animation frame corresponding to the time stamp is obtained.
13. A data processing apparatus, characterized in that the data processing apparatus comprises:
an acquisition module, configured to acquire a source transition animation and a target transition animation, wherein the source transition animation comprises a plurality of source animation frames which are in one-to-one correspondence with a plurality of time stamps, and the target transition animation comprises a plurality of target animation frames which are in one-to-one correspondence with the plurality of time stamps;
The feature acquisition module is used for acquiring a first spatial gesture feature of the object model of the source animation frame corresponding to each time stamp and acquiring a second spatial gesture feature of the object model of the target animation frame corresponding to each time stamp, wherein the first spatial gesture feature comprises a first spatial position feature, a first spatial orientation feature and a first gravity center feature, and the second spatial gesture feature comprises a second spatial position feature, a second spatial orientation feature and a second gravity center feature;
an adjustment processing module, configured to execute, for each of the timestamps, the following processing: based on the gesture of the object model of the target animation frame, performing position adjustment processing corresponding to the first spatial position feature on the second spatial position feature to obtain the gesture of the object model of the target animation frame after position adjustment; based on the gesture of the object model of the target animation frame after the position adjustment, in response to the second space orientation feature not being designated to be preserved, performing orientation alignment processing corresponding to the first space orientation feature on the second space orientation feature to obtain the gesture of the object model of the target animation frame after the orientation alignment; and based on the gesture of the object model of the target animation frame after the orientation alignment, carrying out center-of-gravity alignment processing based on the first center-of-gravity feature to obtain a gesture to be mixed of the target animation frame;
the mixing processing module is used for carrying out mixing processing on the gesture of the object model of the source animation frame corresponding to each time stamp and the gesture to be mixed of the object model of the target animation frame corresponding to the time stamp to obtain a mixing gesture corresponding to the time stamp;
And the animation processing module is used for performing rendering processing based on the mixed postures of the plurality of time stamps to obtain the mixed animation of the source transition animation and the target transition animation.
14. An electronic device, the electronic device comprising:
a memory for storing computer executable instructions;
a processor for implementing the data processing method of any one of claims 1 to 12 when executing computer executable instructions stored in said memory.
15. A computer-readable storage medium storing computer-executable instructions or a computer program, which when executed by a processor implement the data processing method of any one of claims 1 to 12.
16. A computer program product comprising computer executable instructions which, when executed by a processor, implement the data processing method of any one of claims 1 to 12.

