CN112215930A - Data processing method and device - Google Patents

Data processing method and device

Info

Publication number
CN112215930A
Authority
CN
China
Prior art keywords
action
character
target
virtual character
animation
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011120133.4A
Other languages
Chinese (zh)
Inventor
金晶
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhuhai Kingsoft Online Game Technology Co Ltd
Original Assignee
Zhuhai Kingsoft Online Game Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhuhai Kingsoft Online Game Technology Co Ltd filed Critical Zhuhai Kingsoft Online Game Technology Co Ltd
Priority to CN202011120133.4A priority Critical patent/CN112215930A/en
Publication of CN112215930A publication Critical patent/CN112215930A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 13/00 Animation
    • G06T 13/20 3D [Three Dimensional] animation
    • G06T 13/40 3D [Three Dimensional] animation of characters, e.g. humans, animals or virtual beings
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F 13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F 13/60 Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F 2300/00 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F 2300/60 Methods for processing data by generating or executing the game program
    • A63F 2300/6009 Methods for processing data by generating or executing the game program for importing or creating game content, e.g. authoring tools during game development, adapting content to different platforms, use of a scripting language to create content

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention provides a data processing method and apparatus, wherein the method comprises the following steps: receiving an avatar of a target character to be generated and target character attribute data; determining, in an action database, a skeletal animation corresponding to the target character attribute data; and combining the avatar of the target character with the skeletal animation to construct an action sequence for the target character. According to the data processing method provided by the embodiment, an action sequence is constructed for the target character by combining the target character's avatar with a skeletal animation from the action database. This remedies the prior art's lack of a way to build an action database from acquired skeletal animations, extract stored skeletal animations from that database, and combine them with the avatar of the target character to generate the target character's action sequence, and it reduces the workload of manual action design.

Description

Data processing method and device
Technical Field
The present invention relates to the field of internet technologies, and in particular, to a data processing method and apparatus.
Background
In film and television works, virtual characters have different attributes, such as personality and age, and virtual characters with different attributes perform different actions. With the development of the film and television industry, the demand for virtual character action design keeps growing.
In the prior art, when designing the actions of a virtual character, a small set of basic actions is designed in advance, and the final actions are obtained by splicing, fine-tuning, and transforming them. This way of designing and generating actions yields few actions, and the limited number of designed actions ultimately results in poor expressiveness of the virtual character. Improving that expressiveness requires a large amount of manual work to design and compute the virtual character's actions.
The prior art lacks a method that simplifies the design workflow by building an action database from acquired skeletal animations.
Disclosure of Invention
In view of this, embodiments of the present invention provide a data processing method and apparatus, a computing device, and a computer-readable storage medium, so as to address the technical defects of the prior art.
The embodiment discloses a data processing method, which comprises the following steps:
receiving an avatar of a target character to be generated and target character attribute data;
determining, in an action database, a skeletal animation corresponding to the target character attribute data;
and combining the avatar of the target character with the skeletal animation to construct an action sequence for the target character.
Optionally, determining, in an action database, a skeletal animation corresponding to the target character attribute data includes:
determining a target skeletal animation in the action database, wherein the tag content carried by the target skeletal animation corresponds to the target character attributes.
Optionally, combining the avatar of the target character with the skeletal animation to construct an action sequence for the target character includes:
generating at least one target image for the avatar;
extracting action key frames from the skeletal animation;
and replacing the skeletal image in the action key frames with the target image to obtain the action sequence of the target character.
The embodiment also discloses a method for generating an action database, which comprises the following steps:
obtaining a sample virtual character, and determining virtual character attribute data of the sample virtual character;
determining key nodes of the sample virtual character's body parts, and acquiring action parameters of the key nodes;
constructing a skeletal animation of the sample virtual character according to the action parameters of the key nodes, and adding a corresponding tag to the skeletal animation in combination with the virtual character attribute data;
and storing the skeletal animation and the corresponding tag in a preset action database.
Optionally, the sample virtual character has facial features and posture features;
determining virtual character attribute data of the sample virtual character includes:
determining the virtual character attribute data according to the facial features and posture features of the sample virtual character, wherein the virtual character attribute data includes the sample virtual character's age, personality, and gender, as well as its current emotion and action.
Optionally, the action parameters of a key node include a rotation rate, a rotation angle, a movement direction, and a movement trajectory;
determining key nodes of the sample virtual character's body parts and acquiring action parameters of the key nodes includes:
detecting the body parts of the sample virtual character, locating the key nodes according to the body parts, and acquiring the rotation rate, rotation angle, movement direction, and movement trajectory of each key node.
Optionally, locating the key nodes according to the body parts of the sample virtual character and acquiring the rotation rate, rotation angle, movement direction, and movement trajectory of each key node includes:
judging whether source data of the key nodes can be acquired; if so, acquiring the source data of the key nodes, extracting action source data related to actions from it, and extracting the rotation rate, rotation angle, movement direction, and movement trajectory of each key node from the action source data, wherein the action source data contains these parameters;
if not, acquiring video data of the sample virtual character within a preset time length, determining the body parts of the sample virtual character from the video data, locating the key nodes according to the body parts, and acquiring the rotation rate, rotation angle, movement direction, and movement trajectory of each key node.
Optionally, adding a corresponding tag to the skeletal animation in combination with the virtual character attribute data includes:
adding the virtual character attribute data to the skeletal animation as a tag.
Optionally, before the skeletal animation and the corresponding tag are stored in the preset action database, the method further includes:
checking the tag, and discarding the skeletal animation and its tag if the tag duplicates a tag of a skeletal animation already stored in the action database.
The embodiment further discloses a data processing apparatus, comprising:
a receiving module configured to receive an avatar of a target character to be generated and target character attribute data;
a determining module configured to determine, in an action database, a skeletal animation corresponding to the target character attribute data;
a generating module configured to combine the avatar of the target character with the skeletal animation to construct an action sequence for the target character.
The embodiment further discloses an action database generation apparatus, comprising:
a sample acquisition module configured to obtain a sample virtual character and determine virtual character attribute data of the sample virtual character;
a node module configured to determine key nodes of the sample virtual character's body parts and acquire action parameters of the key nodes;
a tag adding module configured to construct a skeletal animation of the sample virtual character according to the action parameters of the key nodes, and to add a corresponding tag to the skeletal animation in combination with the virtual character attribute data;
a storage module configured to store the skeletal animation and the corresponding tag in a preset action database.
According to the data processing method and apparatus provided by the invention, an action sequence is constructed for the target character by combining the target character's avatar with a skeletal animation from the action database. This remedies the prior art's lack of a way to build an action database from acquired skeletal animations, extract stored skeletal animations from that database, and combine them with the avatar of a target character to generate the target character's action sequence, and it reduces the workload of manual action design.
Second, by receiving the avatar of the target character to be generated together with the target character attribute data, a skeletal animation consistent with the target character's attributes can be extracted accurately from the action database, avoiding a mismatch between the skeletal animation and the target character's avatar.
Third, combining the avatar of the target character with the skeletal animation to construct an action sequence means the target character's actions need not be designed manually from scratch, reducing the workload of manual action design.
In addition, acquiring the key nodes of sample virtual characters, constructing their skeletal animations from those key nodes, and generating the action database from these skeletal animations ensures the diversity of the sample skeletal animations and further reduces the workload of manual action design.
Drawings
FIG. 1 is a schematic diagram of a computing device of an example of the invention;
FIG. 2 is a flow chart of a data processing method according to an embodiment of the present invention;
FIG. 3 is a flow chart of a data processing method according to an embodiment of the present invention;
FIG. 4 is a flow chart illustrating a method for generating an action database according to an embodiment of the invention;
FIG. 5 is a schematic view of a joint node of a human body according to an embodiment of the present invention;
FIG. 6 is a flow chart illustrating a method for generating an action database according to an embodiment of the invention;
FIG. 7 is a block diagram of a data processing apparatus according to an embodiment of the present invention;
fig. 8 is a block diagram of an action database generation apparatus according to an embodiment of the present invention.
Detailed Description
In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present invention. The invention may, however, be embodied in many different forms and should not be construed as limited to the embodiments set forth herein.
The terminology used in the description of the one or more embodiments is for the purpose of describing the particular embodiments only and is not intended to be limiting of the description of the one or more embodiments. As used in one or more embodiments of the present specification and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used in one or more embodiments of the present specification refers to and encompasses any and all possible combinations of one or more of the associated listed items.
It will be understood that, although the terms first, second, etc. may be used herein in one or more embodiments to describe various information, this information should not be limited by these terms. These terms are only used to distinguish one type of information from another. For example, a first can also be referred to as a second and, similarly, a second can also be referred to as a first, without departing from the scope of one or more embodiments of the present specification. The word "if" as used herein may be interpreted as "when", "upon", or "in response to determining", depending on the context.
First, the terms involved in one or more embodiments of the present invention are explained.
Unity: a comprehensive, integrated professional game engine for creating multi-platform interactive content such as three-dimensional video games, architectural visualizations, and real-time three-dimensional animations.
LineRenderer component: the line component in Unity, used mainly to handle nodes and the lines connecting them.
Update function: the per-frame update function in Unity, used mainly to derive and update parameters.
Skeletal animation: the model has a skeletal structure of interconnected "bones" (each bone consists of key nodes and the line segments connecting them), and the model is animated by changing the orientation and position of the bones.
Fig. 1 is a block diagram illustrating a configuration of a computing device 100 according to an embodiment of the present specification. The components of the computing device 100 include, but are not limited to, a memory 110 and a processor 120. The processor 120 is coupled to the memory 110 via a bus 130, and a database 150 is used to store data.
Computing device 100 also includes access device 140, access device 140 enabling computing device 100 to communicate via one or more networks 160. Examples of such networks include the Public Switched Telephone Network (PSTN), a Local Area Network (LAN), a Wide Area Network (WAN), a Personal Area Network (PAN), or a combination of communication networks such as the internet. Access device 140 may include one or more of any type of network interface (e.g., a Network Interface Card (NIC)) whether wired or wireless, such as an IEEE802.11 Wireless Local Area Network (WLAN) wireless interface, a worldwide interoperability for microwave access (Wi-MAX) interface, an ethernet interface, a Universal Serial Bus (USB) interface, a cellular network interface, a bluetooth interface, a Near Field Communication (NFC) interface, and so forth.
In one embodiment of the present description, the above-described components of computing device 100 and other components not shown in FIG. 1 may also be connected to each other, such as by a bus. It should be understood that the block diagram of the computing device architecture shown in FIG. 1 is for purposes of example only and is not limiting as to the scope of the description. Those skilled in the art may add or replace other components as desired.
Computing device 100 may be any type of stationary or mobile computing device, including a mobile computer or mobile computing device (e.g., tablet, personal digital assistant, laptop, notebook, netbook, etc.), a mobile phone (e.g., smartphone), a wearable computing device (e.g., smartwatch, smartglasses, etc.), or other type of mobile device, or a stationary computing device such as a desktop computer or PC. Computing device 100 may also be a mobile or stationary server.
The processor 120 may perform the steps of the method shown in Fig. 2. Fig. 2 is a schematic flowchart of a data processing method according to an embodiment of the present invention, including steps S201 to S203.
Step S201: receiving the avatar of the target character to be generated and the target character attribute data.
The target character to be generated is an artificially created figure, the avatar is that figure's on-screen image, and the target character attribute data includes the target character's age, personality, and gender, as well as the character's current emotion.
Specifically, suppose an artificially created animated character named A. A's character image in the animation is the avatar, and A's target character attribute data is: age: 18; gender: male; personality: cheerful; current emotion: calm.
For the target character and avatar to be generated, the target character's name and attribute data are received as character strings, and the avatar is received as a picture for confirmation.
Specifically, taking the animated character A above as an example, the character image of the animated character is confirmed by receiving the character's name A, receiving a picture containing the animated character's avatar, and receiving A's age: 18; gender: male; personality: cheerful; current emotion: calm.
Receiving the avatar of the target character to be generated together with the target character attribute data allows a skeletal animation consistent with the target character's attributes to be extracted accurately from the action database, avoiding a mismatch between the skeletal animation and the target character's avatar.
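As an illustrative sketch of the inputs received in step S201 (a Python sketch under the assumption that attributes arrive as character strings and the avatar as a picture; the field and type names are hypothetical):

```python
from dataclasses import dataclass

@dataclass
class TargetCharacterRequest:
    name: str             # received as a character string, e.g. "A"
    age: int              # target character attribute data
    gender: str
    personality: str
    current_emotion: str
    avatar_png: bytes     # the avatar, received as a picture for confirmation

request = TargetCharacterRequest(
    name="A", age=18, gender="male", personality="cheerful",
    current_emotion="calm", avatar_png=b"<png bytes of A's avatar>",
)
```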
Step S202: determining, in the action database, the skeletal animation corresponding to the target character attribute data.
Further, a target skeletal animation is determined in the action database such that the tag content carried by the target skeletal animation corresponds to the target character attributes.
Specifically, to construct an action sequence for the animated character A, the action database receives A's target character attribute data, a tag corresponding to that attribute data is located in the action database, and the skeletal animation corresponding to that tag is then extracted.
For example, A's target character attribute data (age: 18; gender: male; personality: cheerful; current emotion: calm) is compared against the tag contents in the action database, and the tag whose content is closest, together with its corresponding skeletal animation, is extracted. A small deviation in tag content is tolerated: if no tag identical to A's target character attribute data exists in the action database, the tag closest to it is extracted instead.
Specifically, if no tag with the content "age: 18; gender: male; personality: cheerful; current emotion: calm" exists in the action database, then the tag with the content "age: 19; gender: male; personality: cheerful; current emotion: calm" and its corresponding skeletal animation are extracted, and the skeletal animation is numbered A1.
Determining the skeletal animation corresponding to the target character attribute data in the action database ensures that a suitable skeletal animation can be extracted accurately, avoiding a mismatch between the skeletal animation and the avatar of the target character to be generated.
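The closest-tag lookup described above can be sketched as follows (Python for illustration; the distance measure and field weights are hypothetical assumptions, since the embodiment does not specify how "closest" is computed):

```python
def tag_distance(tag: dict, attrs: dict) -> float:
    """Hypothetical closeness measure between a stored tag and the requested
    attribute data: 0 means identical, small age deviations are tolerated."""
    d = abs(tag["age"] - attrs["age"])
    for key in ("gender", "personality", "current_emotion"):
        d += 0.0 if tag[key] == attrs[key] else 10.0  # categorical mismatch penalty
    return d

def find_skeletal_animation(action_db: list, attrs: dict) -> dict:
    """Return the stored entry whose tag content is closest to attrs."""
    return min(action_db, key=lambda entry: tag_distance(entry["tag"], attrs))

action_db = [
    {"tag": {"age": 19, "gender": "male", "personality": "cheerful",
             "current_emotion": "calm"}, "animation": "A1"},
    {"tag": {"age": 22, "gender": "female", "personality": "introverted",
             "current_emotion": "sad"}, "animation": "B1"},
]
attrs = {"age": 18, "gender": "male", "personality": "cheerful",
         "current_emotion": "calm"}
print(find_skeletal_animation(action_db, attrs)["animation"])  # -> "A1"
```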
Step S203: combining the avatar of the target character with the skeletal animation to construct an action sequence for the target character.
Further, at least one target image is generated for the avatar;
action key frames are extracted from the skeletal animation;
and the skeletal image in the action key frames is replaced with the target image to obtain the action sequence of the target character.
Specifically, taking the animated character A above as an example, suppose A's target image is: running on a sports field. The action key frames in A1 are extracted, and the skeletal image in those key frames is replaced with A's target image, yielding A's action sequence: a sequence of A running on the sports field.
Combining the avatar of the target character with the skeletal animation to construct an action sequence means the target character's actions need not be designed manually, reducing the workload of manual action design.
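A minimal sketch of step S203 (Python for illustration; the frame structure and helper names are hypothetical): extract the action key frames of the matched skeletal animation, then replace the skeletal image in each key frame with the target image.

```python
def extract_key_frames(skeletal_animation: dict) -> list:
    """Keep only the frames marked as action key frames."""
    return [f for f in skeletal_animation["frames"] if f.get("is_key_frame")]

def build_action_sequence(target_image: bytes, skeletal_animation: dict) -> list:
    """Replace the skeletal image in each action key frame with the target
    image, yielding the target character's action sequence."""
    return [{"pose": frame["pose"],    # key-node positions from the skeleton
             "image": target_image}    # the target image rendered in that pose
            for frame in extract_key_frames(skeletal_animation)]

a1 = {"frames": [{"pose": {"neck": 0.0}, "is_key_frame": True},
                 {"pose": {"neck": 0.5}, "is_key_frame": False},
                 {"pose": {"neck": 1.0}, "is_key_frame": True}]}
sequence = build_action_sequence(b"<target image of A running>", a1)
print(len(sequence))  # 2 key frames become the action sequence
```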
The present embodiment discloses a data processing method, as shown in Fig. 3, including steps S301 to S305.
Step S301: receiving the avatar of the target character to be generated and the target character attribute data.
Specifically, suppose an artificially created animated character named B, whose character image in the animation is the avatar. B's target character attribute data is: age: 22; gender: female; personality: introverted; current emotion: sad.
Receiving the avatar of the target character to be generated together with the target character attribute data allows a skeletal animation consistent with the target character's attributes to be extracted accurately from the action database, avoiding a mismatch between the skeletal animation and the target character's avatar.
Step S302: determining a target skeletal animation in the action database, wherein the tag content carried by the target skeletal animation corresponds to the target character attributes.
Taking the animated character B as an example, B's target character attribute data (age: 22; gender: female; personality: introverted; current emotion: sad) is compared against the tag contents in the action database; the tag whose content is closest and its corresponding skeletal animation are extracted, and the skeletal animation is numbered B1.
Determining the skeletal animation corresponding to the target character attribute data in the action database ensures that a suitable skeletal animation can be extracted accurately, avoiding a mismatch between the skeletal animation and the avatar of the target character to be generated.
Step S303: generating at least one target image for the avatar.
Step S304: extracting action key frames from the skeletal animation.
Step S305: replacing the skeletal image in the action key frames with the target image to obtain the action sequence of the target character.
Specifically, taking the animated character B above as an example, suppose B's target image is: sitting on a chair. The action key frames in B1 are extracted, and the skeletal image in those key frames is replaced with B's target image, yielding B's action sequence: a sequence of B sitting on the chair.
Combining the avatar of the target character with the skeletal animation to construct an action sequence means the target character's actions need not be designed manually, reducing the workload of manual action design.
The present embodiment discloses a method for generating an action database, as shown in Fig. 4, including steps S401 to S404.
Step S401: obtaining a sample virtual character, and determining virtual character attribute data of the sample virtual character.
Further, the sample virtual character has facial features and posture features.
Further, the virtual character attribute data of the sample virtual character is determined according to its facial features and posture features, wherein the virtual character attribute data includes the sample virtual character's age, personality, and gender, as well as its current emotion and action.
Specifically, suppose the sample virtual character is a human, denoted C. C's facial features are a fair, full face with a smiling expression; C's posture features are a slim figure wearing a long skirt, standing. From these facial and posture features, C is estimated to be 25 years old, cheerful in personality, and female, with a current emotion of happy.
Obtaining sample virtual characters and determining their virtual character attribute data ensures the sample virtual characters have definite categories and an accurate classification scheme.
Step S402: determining key nodes of the sample virtual character's body parts, and acquiring action parameters of the key nodes.
Further, the body parts of the sample virtual character are detected, the key nodes are located according to the body parts, and the rotation rate, rotation angle, movement direction, and movement trajectory of each key node are acquired.
Further, it is judged whether source data of the key nodes can be acquired. If so, the source data of the key nodes is acquired, action source data related to actions is extracted from it, and the rotation rate, rotation angle, movement direction, and movement trajectory of each key node are extracted from the action source data, which contains these parameters.
If not, video data of the sample virtual character within a preset time length is acquired, the body parts of the sample virtual character are determined from the video data, the key nodes are located according to the body parts, and the rotation rate, rotation angle, movement direction, and movement trajectory of each key node are acquired.
Specifically, for the sample virtual character C, it is first determined whether C's source data can be acquired. If it can, the action source data related to actions is extracted from the source data; then the rotation rate, rotation angle, movement direction, and movement trajectory of each key node contained in the action source data are extracted and taken as the action parameters.
If the source data cannot be acquired directly, the body parts of sample virtual character C, including the head, facial features, neck, arms, torso, and legs, are determined from the video data.
Further, as shown in Fig. 5, the key nodes contained in each body part are located according to the body parts of sample virtual character C. There are 21 key nodes, at the following positions: crown 501, left ear 502, right ear 503, left eye 504, right eye 505, nose 506, left mouth corner 507, right mouth corner 508, neck 509, left shoulder 510, right shoulder 511, left elbow 512, right elbow 513, left wrist 514, right wrist 515, left hip 516, right hip 517, left knee 518, right knee 519, left ankle 520, and right ankle 521.
Further, the video data and the key nodes are combined to obtain the rotation rate, rotation angle, movement direction, and movement trajectory of each key node.
Determining the key nodes of the sample virtual character's body parts and acquiring their action parameters means the body parts of the sample virtual character in the video can be identified accurately, and the action parameters describe the sample virtual character's actions accurately.
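The two-branch acquisition of step S402 can be sketched as follows (Python for illustration; the video-estimation step is stubbed out, since the embodiment does not specify a pose-tracking algorithm, and all function names are hypothetical):

```python
from dataclasses import dataclass

# The 21 human key nodes of Fig. 5 (crown 501 ... right ankle 521).
HUMAN_KEY_NODES = [
    "crown", "left_ear", "right_ear", "left_eye", "right_eye", "nose",
    "left_mouth_corner", "right_mouth_corner", "neck",
    "left_shoulder", "right_shoulder", "left_elbow", "right_elbow",
    "left_wrist", "right_wrist", "left_hip", "right_hip",
    "left_knee", "right_knee", "left_ankle", "right_ankle",
]

@dataclass
class NodeMotion:
    rotation_rate: float        # the four action parameters named in S402
    rotation_angle: float
    movement_direction: tuple
    movement_trajectory: list

def estimate_node_motion(video_data, node) -> NodeMotion:
    # Placeholder for video-based estimation of the four parameters.
    return NodeMotion(0.0, 0.0, (0.0, 0.0, 0.0), [])

def acquire_action_parameters(source_data=None, video_data=None) -> dict:
    """Two-branch acquisition: prefer the character's source data; if it
    cannot be obtained, fall back to video captured for a preset duration."""
    if source_data is not None:
        # Branch 1: the action source data already contains the parameters.
        return {n: source_data[n] for n in HUMAN_KEY_NODES}
    # Branch 2: locate the key nodes per body part in the video, then
    # estimate each node's motion (stubbed above).
    return {n: estimate_node_motion(video_data, n) for n in HUMAN_KEY_NODES}
```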
Step S403: constructing the skeletal animation of the sample virtual character according to the action parameters of the key nodes, and adding a corresponding tag to the skeletal animation in combination with the virtual character attribute data.
Further, the sample virtual character's age, personality, gender, and current emotion and action contained in the virtual character attribute data are added to the skeletal animation as a tag.
Specifically, after the key nodes and action parameters of sample virtual character C are obtained, they are imported into Unity. Adjacent key nodes are connected with the LineRenderer component to generate C's skeleton; on the basis of that skeleton, the rotation rate, rotation angle, movement direction, and movement trajectory of each key node are advanced in the Update function to generate C's action sequence. That action sequence is taken as the skeletal animation and recorded as C1, and the attribute information of the sample virtual character acquired in step S401 (age 25, personality cheerful, gender female, current emotion happy, action standing) is added to the skeletal animation as a tag.
Constructing the skeletal animation of the sample virtual character from the key nodes' action parameters ensures that the constructed skeletal animation faithfully reproduces the sample virtual character's actions, reducing the workload of manual action design; adding a corresponding tag in combination with the virtual character attribute data gives every constructed skeletal animation its own identifying information, ensuring accuracy when the skeletal animation is used.
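A rough sketch of the construction in step S403 (Python for illustration of the logic only; the embodiment performs this in Unity with the LineRenderer component and the Update function, and the toy per-frame kinematics below are an assumption, not the patent's method):

```python
# Adjacent key-node pairs connected into bones (a subset for brevity;
# the full skeleton would connect all 21 key nodes of Fig. 5).
BONE_PAIRS = [("neck", "left_shoulder"), ("left_shoulder", "left_elbow"),
              ("left_elbow", "left_wrist")]

def build_skeletal_animation(params: dict, n_frames: int = 60,
                             fps: float = 30.0) -> dict:
    """Advance each key node by its rotation rate per frame, capped at its
    rotation angle (the role the Update function plays per frame in Unity),
    and record the resulting pose sequence as the skeletal animation."""
    frames = []
    for i in range(n_frames):
        t = i / fps
        pose = {node: min(p["rotation_rate"] * t, p["rotation_angle"])
                for node, p in params.items()}
        frames.append(pose)
    return {"bones": BONE_PAIRS, "frames": frames}

# Hypothetical action parameters for the nodes used above.
params = {node: {"rotation_rate": 30.0, "rotation_angle": 90.0}
          for pair in BONE_PAIRS for node in pair}
animation_c1 = build_skeletal_animation(params)
# The tag added in step S403 from the attribute data acquired in step S401.
animation_c1["tag"] = {"age": 25, "personality": "cheerful", "gender": "female",
                       "current_emotion": "happy", "action": "standing"}
```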
Step S404: storing the skeletal animation and the corresponding tag in the preset action database.
Further, before the skeletal animation and its corresponding tag are stored in the preset action database, the tag is checked; if it duplicates the tag of a skeletal animation already stored in the action database, the skeletal animation and its tag are discarded.
Specifically, taking sample virtual character C as an example: when C's skeletal animation is to be stored in the action database, its tag must be checked first. If a tag with the content "age 25, personality cheerful, gender female, current emotion happy, action standing" and its corresponding skeletal animation already exist in the action database, the skeletal animation C1 to be stored and its tag are discarded.
Checking the tags and discarding any skeletal animation whose tag duplicates one already stored ensures that no duplicate skeletal animations exist in the action database, guarantees the uniqueness of each skeletal animation in the database, avoids extracting identical skeletal animations from it, and therefore ensures that generated target character action sequences do not repeat.
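The tag-uniqueness check of step S404 can be sketched as follows (Python for illustration; the in-memory list stands in for the preset action database, whose storage layout the embodiment does not specify):

```python
def store_if_unique(action_db: list, animation: dict, tag: dict) -> bool:
    """Store the skeletal animation and its tag only if no entry with an
    identical tag already exists; otherwise discard both (per step S404)."""
    if any(entry["tag"] == tag for entry in action_db):
        return False  # duplicate tag: drop the animation and its tag
    action_db.append({"tag": tag, "animation": animation})
    return True

db = []
tag_c1 = {"age": 25, "personality": "cheerful", "gender": "female",
          "current_emotion": "happy", "action": "standing"}
print(store_if_unique(db, {"frames": []}, tag_c1))  # True: stored
print(store_if_unique(db, {"frames": []}, tag_c1))  # False: duplicate discarded
```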
The present embodiment discloses a method for generating an action database, as shown in Fig. 6, including steps S601 to S606.
Step S601: obtaining a sample virtual character, and determining virtual character attribute data of the sample virtual character.
Specifically, suppose the sample virtual character is a cat, denoted D. D's facial features are a yellow, thin face with a calm expression; D's posture features are a thin, frail body, running. From these facial and posture features, D is estimated to be a 5-year-old cat, introverted in personality and male, with a current emotion of sad.
Obtaining sample virtual characters and determining their virtual character attribute data ensures the sample virtual characters have definite categories and an accurate classification scheme.
Step S602: determining key nodes of the sample virtual character's body parts, and acquiring action parameters of the key nodes.
Specifically, taking sample virtual character D as an example, it is first judged whether the source data of the key nodes can be acquired; if not, video data of the sample virtual character within a preset time length is acquired.
After the video data of sample virtual character D is obtained, D's body parts, including the head, facial features, neck, limbs, and torso, are determined from the video data.
Further, the key nodes contained in each body part are located according to the body parts of sample virtual character D. There are 21 key nodes, at the following positions: crown, left ear, right ear, left eye, right eye, nose, left mouth corner, right mouth corner, neck, left shoulder, right shoulder, left front joint, right front joint, left rear joint, right rear joint, left hip, right hip, left paw, right paw, left paw joint, and right paw joint.
Further, the video data and the key nodes are combined to obtain the rotation rate, rotation angle, movement direction, and movement trajectory of each key node.
Determining the key nodes of the sample virtual character's body parts and acquiring their action parameters means the body parts of the sample virtual character in the video can be identified accurately, and the action parameters describe the sample virtual character's actions accurately.
Step S603: constructing the skeletal animation of the sample virtual character according to the action parameters of the key nodes.
Specifically, taking sample virtual character D as an example, after D's key nodes and action parameters are obtained, they are imported into Unity. Adjacent key nodes are connected with the LineRenderer component to generate D's skeleton; on the basis of that skeleton, the rotation rate, rotation angle, movement direction, and movement trajectory of each key node are advanced in the Update function to generate D's action sequence, which is taken as the skeletal animation and recorded as D1.
Constructing the skeletal animation of the sample virtual character from the key nodes' action parameters ensures that the constructed skeletal animation faithfully reproduces the sample virtual character's actions and reduces the workload of manual action design; moreover, the constructed skeletal animations are not limited to human actions but also apply to animals.
Step S604: adding a corresponding tag to the skeletal animation in combination with the virtual character attribute data.
Step S605: storing the skeletal animation and the corresponding tag in the preset action database.
Step S606: checking the tag, and discarding the skeletal animation and its tag if the tag duplicates a tag of a skeletal animation already stored in the action database.
Specifically, taking sample virtual character D as an example, the sample virtual character attribute information acquired in step S601 (a cat of age 5, introverted personality, male, current emotion sad) is added to the skeletal animation D1 as a tag.
Before the skeletal animation D1 is stored in the action database, the database is checked against D1's tag content (a cat of age 5, introverted personality, male, current emotion sad); since the check finds no identical tag, the skeletal animation D1 and its tag are stored in the action database.
Adding a corresponding tag to the skeletal animation in combination with the virtual character attribute data gives every constructed skeletal animation its own identifying information, ensuring accuracy when the skeletal animation is extracted and used.
Checking the tags and discarding any skeletal animation whose tag duplicates one already stored ensures that no duplicate skeletal animations exist in the action database, guarantees the uniqueness of each skeletal animation in the database, avoids extracting identical skeletal animations from it, and ensures that generated target character action sequences do not repeat.
The present embodiment discloses a data processing apparatus, as shown in Fig. 7. The apparatus includes:
a receiving module 701 configured to receive an avatar of a target character to be generated and target character attribute data;
a determining module 702 configured to determine, in an action database, a skeletal animation corresponding to the target character attribute data;
a generating module 703 configured to combine the avatar of the target character with the skeletal animation to construct an action sequence for the target character.
Further, the determining module 702 is specifically configured to:
determine a target skeletal animation in the action database, wherein the tag content carried by the target skeletal animation corresponds to the target character attributes.
Determining the skeletal animation corresponding to the target character attribute data in the action database ensures that a suitable skeletal animation can be extracted accurately, avoiding a mismatch between the skeletal animation and the avatar of the target character to be generated.
Further, the generating module 703 is specifically configured to:
generate at least one target image for the avatar;
extract action key frames from the skeletal animation;
and replace the skeletal image in the action key frames with the target image to obtain the action sequence of the target character.
Combining the avatar of the target character with the skeletal animation to construct an action sequence means the target character's actions need not be designed manually, reducing the workload of manual action design.
The present embodiment discloses an action database generation apparatus, as shown in Fig. 8. The apparatus includes:
a sample acquisition module 801 configured to obtain a sample virtual character and determine virtual character attribute data of the sample virtual character;
a node module 802 configured to determine key nodes of the sample virtual character's body parts and acquire action parameters of the key nodes;
a tag adding module 803 configured to construct a skeletal animation of the sample virtual character according to the action parameters of the key nodes, and to add a corresponding tag to the skeletal animation in combination with the virtual character attribute data;
a storage module 804 configured to store the skeletal animation and the corresponding tag in a preset action database.
Further, the sample acquisition module 801 is specifically configured to:
determine the virtual character attribute data of the sample virtual character according to its facial features and posture features, wherein the virtual character attribute data includes the sample virtual character's age, personality, and gender, as well as its current emotion and action, and the sample virtual character has facial features and posture features.
Further, the node module 802 is specifically configured to:
detect the body parts of the sample virtual character, locate the key nodes according to the body parts, and acquire the rotation rate, rotation angle, movement direction, and movement trajectory of each key node, wherein the action parameters of a key node include the rotation rate, rotation angle, movement direction, and movement trajectory.
The node module 802 is further configured to:
judge whether source data of the key nodes can be acquired; if so, acquire the source data of the key nodes, extract action source data related to actions from it, and extract the rotation rate, rotation angle, movement direction, and movement trajectory of each key node from the action source data, wherein the action source data contains these parameters;
if not, acquire video data of the sample virtual character within a preset time length, determine the body parts of the sample virtual character from the video data, locate the key nodes according to the body parts, and acquire the rotation rate, rotation angle, movement direction, and movement trajectory of each key node.
Further, the tag adding module 803 is specifically configured to:
add the virtual character attribute data to the skeletal animation as a tag.
Further, the apparatus also includes a detection module 805, specifically configured to:
check the tag, and discard the skeletal animation and its tag if the tag duplicates a tag of a skeletal animation already stored in the action database.
According to the action database generation apparatus provided by this embodiment, acquiring the key nodes of sample virtual characters, constructing their skeletal animations from those key nodes, and generating the action database from these skeletal animations ensures the diversity of the sample skeletal animations and reduces the workload of manual action design.
The present embodiment also provides a computer-readable storage medium storing computer instructions which, when executed by a processor, implement the steps of the data processing method described above.
The above is an illustrative scheme of the computer-readable storage medium of this embodiment. It should be noted that the technical solution of the storage medium belongs to the same concept as the technical solution of the data processing method described above; for details not described in the storage medium's technical solution, refer to the description of the data processing method.
The computer instructions comprise computer program code, which may be in source code form, object code form, an executable file, some intermediate form, or the like. The computer-readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, computer memory, read-only memory (ROM), random access memory (RAM), electrical carrier signals, telecommunications signals, a software distribution medium, and the like. It should be noted that the content contained in the computer-readable medium may be increased or decreased as appropriate according to the requirements of legislation and patent practice in a jurisdiction; for example, in some jurisdictions, computer-readable media exclude electrical carrier signals and telecommunications signals in accordance with legislation and patent practice.
It should be noted that, for simplicity of description, the above method embodiments are described as a series of action combinations, but those skilled in the art should understand that the present invention is not limited by the described order of actions, since some steps may be performed in other orders or simultaneously according to the present invention. Further, those skilled in the art will appreciate that the embodiments described in the specification are all preferred embodiments, and the actions and modules involved are not necessarily all required by the present invention.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
The preferred embodiments of the invention disclosed above are intended only to illustrate the invention; they neither describe every detail exhaustively nor limit the invention to the specific implementations described. Obviously, many modifications and variations are possible in light of the above teaching. The embodiments were chosen and described in order to best explain the principles of the invention and its practical application, so that those skilled in the art can best understand and utilize the invention. The invention is limited only by the claims and their full scope and equivalents.

Claims (13)

1. A data processing method, comprising:
receiving an avatar of a target character to be generated and target character attribute data;
determining, in an action database, a skeletal animation corresponding to the target character attribute data;
and combining the avatar of the target character with the skeletal animation to construct an action sequence for the target character.
2. The method of claim 1, wherein determining, in an action database, a skeletal animation corresponding to the target character attribute data comprises:
determining a target skeletal animation in the action database, wherein the tag content carried by the target skeletal animation corresponds to the target character attributes.
3. The method of claim 1, wherein combining the avatar of the target character with the skeletal animation to construct an action sequence for the target character comprises:
generating at least one target image for the avatar;
extracting action key frames from the skeletal animation;
and replacing the skeletal image in the action key frames with the target image to obtain the action sequence of the target character.
4. A method of action database generation, comprising:
obtaining a sample virtual character, and determining virtual character attribute data of the sample virtual character;
determining key nodes of the sample virtual character's body parts, and acquiring action parameters of the key nodes;
constructing a skeletal animation of the sample virtual character according to the action parameters of the key nodes, and adding a corresponding tag to the skeletal animation in combination with the virtual character attribute data;
and storing the skeletal animation and the corresponding tag in a preset action database.
5. The method of claim 4, wherein the sample virtual character has facial features and posture features; and
determining virtual character attribute data of the sample virtual character comprises:
determining the virtual character attribute data according to the facial features and posture features of the sample virtual character, wherein the virtual character attribute data comprises the sample virtual character's age, personality, and gender, as well as its current emotion and action.
6. The method of claim 4, wherein the action parameters of a key node comprise a rotation rate, a rotation angle, a movement direction, and a movement trajectory; and
determining key nodes of the sample virtual character's body parts and acquiring action parameters of the key nodes comprises:
detecting the body parts of the sample virtual character, locating the key nodes according to the body parts, and acquiring the rotation rate, rotation angle, movement direction, and movement trajectory of each key node.
7. The method of claim 6, wherein locating the key nodes according to the body parts of the sample virtual character and acquiring the rotation rate, rotation angle, movement direction, and movement trajectory of each key node comprises:
judging whether source data of the key nodes can be acquired; if so, acquiring the source data of the key nodes, extracting action source data related to actions from it, and extracting the rotation rate, rotation angle, movement direction, and movement trajectory of each key node from the action source data, wherein the action source data comprises these parameters;
if not, acquiring video data of the sample virtual character within a preset time length, determining the body parts of the sample virtual character from the video data, locating the key nodes according to the body parts, and acquiring the rotation rate, rotation angle, movement direction, and movement trajectory of each key node.
8. The method of claim 4, wherein adding a corresponding tag to the skeletal animation in combination with the virtual character attribute data comprises:
adding the sample virtual character's age, personality, gender, and current emotion and action contained in the virtual character attribute data to the skeletal animation as a tag.
9. The method of claim 4, wherein before storing the skeletal animation and the corresponding tag in a preset action database, the method further comprises:
checking the tag, and discarding the skeletal animation and its tag if the tag duplicates a tag of a skeletal animation already stored in the action database.
10. A data processing apparatus, comprising:
a receiving module configured to receive an avatar of a target character to be generated and target character attribute data;
a determining module configured to determine, in an action database, a skeletal animation corresponding to the target character attribute data;
a generating module configured to combine the avatar of the target character with the skeletal animation to construct an action sequence for the target character.
11. An action database generation apparatus, comprising:
a sample acquisition module configured to obtain a sample virtual character and determine virtual character attribute data of the sample virtual character;
a node module configured to determine key nodes of the sample virtual character's body parts and acquire action parameters of the key nodes;
a tag adding module configured to construct a skeletal animation of the sample virtual character according to the action parameters of the key nodes, and to add a corresponding tag to the skeletal animation in combination with the virtual character attribute data;
a storage module configured to store the skeletal animation and the corresponding tag in a preset action database.
12. A computing device comprising a memory, a processor, and computer instructions stored on the memory and executable on the processor, wherein the processor implements the steps of the method of any one of claims 1 to 3 or claims 4 to 9 when executing the instructions.
13. A computer-readable storage medium storing computer instructions which, when executed by a processor, implement the steps of the method of any one of claims 1 to 3 or claims 4 to 9.
CN202011120133.4A 2020-10-19 2020-10-19 Data processing method and device Pending CN112215930A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011120133.4A CN112215930A (en) 2020-10-19 2020-10-19 Data processing method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011120133.4A CN112215930A (en) 2020-10-19 2020-10-19 Data processing method and device

Publications (1)

Publication Number Publication Date
CN112215930A true CN112215930A (en) 2021-01-12

Family

ID=74055845

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011120133.4A Pending CN112215930A (en) 2020-10-19 2020-10-19 Data processing method and device

Country Status (1)

Country Link
CN (1) CN112215930A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113546415A (en) * 2021-08-11 2021-10-26 北京字跳网络技术有限公司 Plot animation playing method, plot animation generating method, terminal, plot animation device and plot animation equipment
CN113923462A (en) * 2021-09-10 2022-01-11 阿里巴巴达摩院(杭州)科技有限公司 Video generation method, live broadcast processing method, video generation device, live broadcast processing device and readable medium
WO2023087753A1 (en) * 2021-11-19 2023-05-25 达闼科技(北京)有限公司 Action data obtaining method, system, apparatus, and device, and storage medium and computer program product
CN116228942A (en) * 2023-03-17 2023-06-06 北京优酷科技有限公司 Character action extraction method, device and storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107274466A (en) * 2017-05-31 2017-10-20 珠海金山网络游戏科技有限公司 Method, apparatus and system for real-time two-person motion capture
CN109508625A (en) * 2018-09-07 2019-03-22 咪咕文化科技有限公司 Emotional data analysis method and device
CN111968207A (en) * 2020-09-25 2020-11-20 魔珐(上海)信息科技有限公司 Animation generation method, device, system and storage medium
CN114241099A (en) * 2021-12-17 2022-03-25 网易(杭州)网络有限公司 Method and device for batch zeroing of animation data and computer equipment
CN116433808A (en) * 2021-12-30 2023-07-14 上海米哈游璃月科技有限公司 Character animation generation method, animation generation model training method and device

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107274466A (en) * 2017-05-31 2017-10-20 珠海金山网络游戏科技有限公司 Method, apparatus and system for real-time two-person motion capture
CN109508625A (en) * 2018-09-07 2019-03-22 咪咕文化科技有限公司 Emotional data analysis method and device
CN111968207A (en) * 2020-09-25 2020-11-20 魔珐(上海)信息科技有限公司 Animation generation method, device, system and storage medium
CN114241099A (en) * 2021-12-17 2022-03-25 网易(杭州)网络有限公司 Method and device for batch zeroing of animation data and computer equipment
CN116433808A (en) * 2021-12-30 2023-07-14 上海米哈游璃月科技有限公司 Character animation generation method, animation generation model training method and device

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113546415A (en) * 2021-08-11 2021-10-26 北京字跳网络技术有限公司 Plot animation playing method, plot animation generating method, terminal, plot animation device and plot animation equipment
WO2023016176A1 (en) * 2021-08-11 2023-02-16 北京字跳网络技术有限公司 Plot animation playing method, plot animation generation method and apparatus, and terminal and device
CN113546415B (en) * 2021-08-11 2024-03-29 北京字跳网络技术有限公司 Scenario animation playing method, scenario animation generating method, terminal, device and equipment
CN113923462A (en) * 2021-09-10 2022-01-11 阿里巴巴达摩院(杭州)科技有限公司 Video generation method, live broadcast processing method, video generation device, live broadcast processing device and readable medium
WO2023087753A1 (en) * 2021-11-19 2023-05-25 达闼科技(北京)有限公司 Action data obtaining method, system, apparatus, and device, and storage medium and computer program product
CN116228942A (en) * 2023-03-17 2023-06-06 北京优酷科技有限公司 Character action extraction method, device and storage medium
CN116228942B (en) * 2023-03-17 2024-02-06 北京优酷科技有限公司 Character action extraction method, device and storage medium

Similar Documents

Publication Publication Date Title
CN112215930A (en) Data processing method and device
US11600033B2 (en) System and method for creating avatars or animated sequences using human body features extracted from a still image
US11010896B2 (en) Methods and systems for generating 3D datasets to train deep learning networks for measurements estimation
US20210295020A1 (en) Image face manipulation
WO2016177290A1 (en) Method and system for generating and using expression for virtual image created through free combination
WO2021094537A1 (en) 3d body model generation
KR20220066366A (en) Predictive individual 3D body model
CN110675475B (en) Face model generation method, device, equipment and storage medium
CN110570499B (en) Expression generating method, device, computing equipment and storage medium
CN113011505B (en) Thermodynamic diagram conversion model training method and device
US11798299B2 (en) Methods and systems for generating 3D datasets to train deep learning networks for measurements estimation
CN111680550B (en) Emotion information identification method and device, storage medium and computer equipment
KR102373606B1 (en) Electronic apparatus and method for image formation, and program stored in computer readable medium performing the same
Kang et al. Interactive animation generation of virtual characters using single RGB-D camera
CN112190921A (en) Game interaction method and device
US20210049802A1 (en) Method and apparatus for rigging 3d scanned human models
WO2023035725A1 (en) Virtual prop display method and apparatus
CN114167993B (en) Information processing method and device
KR20210019182A (en) Device and method for generating job image having face to which age transformation is applied
EP4123588A1 (en) Image processing device and moving-image data generation method
US11887252B1 (en) Body model composition update from two-dimensional face images
WO2023185241A1 (en) Data processing method and apparatus, device and medium
CN117808940A (en) Image generation method and device, storage medium and terminal
Basset Morphologically Plausible Deformation Transfer
CN115861536A (en) Method for optimizing face driving parameters and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 519000 Room 102, 202, 302 and 402, No. 325, Qiandao Ring Road, Tangjiawan Town, high tech Zone, Zhuhai City, Guangdong Province, Room 102 and 202, No. 327 and Room 302, No. 329

Applicant after: Zhuhai Jinshan Digital Network Technology Co.,Ltd.

Address before: 519000 Room 102, 202, 302 and 402, No. 325, Qiandao Ring Road, Tangjiawan Town, high tech Zone, Zhuhai City, Guangdong Province, Room 102 and 202, No. 327 and Room 302, No. 329

Applicant before: ZHUHAI KINGSOFT ONLINE GAME TECHNOLOGY Co.,Ltd.
