CN112107865A - Facial animation model processing method and device, electronic equipment and storage medium - Google Patents


Info

Publication number
CN112107865A
CN112107865A
Authority
CN
China
Prior art keywords
facial
bone
animation model
skeleton
face
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011036167.5A
Other languages
Chinese (zh)
Inventor
马浩然 (Ma Haoran)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Perfect World Beijing Software Technology Development Co Ltd
Original Assignee
Perfect World Beijing Software Technology Development Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Perfect World Beijing Software Technology Development Co Ltd
Priority to CN202011036167.5A
Publication of CN112107865A
Legal status: Pending

Classifications

    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/60 Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T13/00 Animation
    • G06T13/20 3D [Three Dimensional] animation
    • G06T13/40 3D [Three Dimensional] animation of characters, e.g. humans, animals or virtual beings
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00 Manipulating 3D models or images for computer graphics
    • G06T19/20 Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Architecture (AREA)
  • Computer Graphics (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The application relates to a facial animation model processing method and apparatus, an electronic device, and a storage medium, wherein the method comprises the following steps: acquiring adjustment parameters corresponding to a facial bone; determining a facial ornament bone associated with the facial bone; adjusting the facial ornament bone according to the adjustment parameters; and performing skinning treatment on the adjusted facial ornament bone to obtain an adjusted facial animation model. In this technical scheme, the facial ornament bone shares the face-pinching adjustment parameters with the facial bone it is associated with; that is, the facial ornament bone follows the modifications made to the facial bone. This avoids the problem of the facial ornament failing to follow when the facial bone changes greatly during face pinching, prevents mesh penetration (clipping), lets the user pinch the face freely, and improves the accuracy and naturalness of the face-pinching effect.

Description

Facial animation model processing method and device, electronic equipment and storage medium
Technical Field
The present application relates to the field of computer technologies, and in particular, to a facial animation model processing method and apparatus, an electronic device, and a storage medium.
Background
At present, in a common game face-pinching function, a user can freely adjust the size, position, angle, style and the like of each part of the face, which improves the player's sense of immersion and the interactivity of the game.
In the face pinching process, in order to support a wide range of face-pinching effects, the adjustment threshold of each parameter is large. Facial ornaments such as masks, veils, bangs, and hair accessories are present during face pinching; if the pinched face is too exaggerated and deviates too far from the original face, the facial ornaments easily fail to follow the face, and mesh penetration (clipping) occurs.
Disclosure of Invention
In order to solve the technical problem or at least partially solve the technical problem, embodiments of the present application provide a facial animation model processing method, apparatus, electronic device and storage medium.
According to an aspect of an embodiment of the present application, there is provided a facial animation model processing method including:
acquiring adjustment parameters corresponding to a facial bone;
determining a facial ornament bone associated with the facial bone;
adjusting the facial ornament bone according to the adjustment parameters;
and performing skinning treatment on the adjusted facial ornament bone to obtain an adjusted facial animation model.
Optionally, the facial ornament bone and the facial bone use the same wiring mesh.
Optionally, the skinning the adjusted facial ornament bone includes:
obtaining a first skinning weight corresponding to the facial skeleton;
using the first skinning weight as a second skinning weight for the adjusted facial ornament skeleton;
and performing skinning treatment on the adjusted face ornament bone according to the second skinning weight.
Optionally, the method further includes:
performing collision detection on the facial ornament bones during the movement of the facial ornament;
when it is detected that the facial ornament bone collides with a colliding body, the facial ornament bone is moved away from the current position.
Optionally, the method further includes:
when a first scaling operation on the facial skeleton is acquired, determining a collision body corresponding to the facial skeleton;
performing a second zoom operation on the collision volume according to the first zoom operation.
Optionally, the determining a facial ornament bone associated with the facial bone comprises:
obtaining an adjusting skeleton corresponding to the facial skeleton, wherein the adjusting skeleton is bound with the facial animation model and inherits the coordinates of the facial skeleton;
determining the facial ornament bones associated with the adjusted bones.
Optionally, the method further includes:
executing the collapse operation of the face animation model according to a first collapse strategy to obtain a first collapsed animation model;
rendering the first collapsed animation model;
storing the facial animation model in a server;
when an adjustment operation on the first collapsed animation model is received, acquiring the facial animation model from the server;
executing the adjustment operation on the facial animation model to obtain an updated facial animation model;
executing collapse operation on the updated animation model according to a second collapse strategy to obtain a second collapsed animation model;
rendering the second collapsed animation model;
storing the updated facial animation model in the server.
According to another aspect of embodiments of the present application, there is provided a facial animation model processing apparatus including:
the acquisition module is used for acquiring adjustment parameters corresponding to facial bones;
a determination module to determine facial ornament bones associated with the facial bones;
an adjustment module for adjusting the facial ornament bone according to the adjustment parameter;
and the skinning processing module is used for skinning the adjusted face ornament skeleton to obtain the adjusted face animation model.
According to another aspect of the embodiments of the present application, there is also provided a storage medium including a stored program that executes the above steps when the program is executed.
According to another aspect of an embodiment of the present application, there is provided an electronic device including: the system comprises a processor, a communication interface, a memory and a communication bus, wherein the processor, the communication interface and the memory complete mutual communication through the communication bus;
the memory is used for storing a computer program;
the processor is configured to implement the above method steps when executing the computer program.
According to another aspect of embodiments of the present application, there is provided a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, performs the above-mentioned method steps.
Compared with the prior art, the technical scheme provided by the embodiment of the application has the following advantages:
face embellishment skeleton and the facial skeleton sharing of relevance are held between the fingers the adjustment parameter of face, and face embellishment skeleton is followed face skeleton promptly and is revised the change to avoid when holding between the fingers the face because face skeleton changes too greatly, the problem that face embellishment did not follow prevents to wear group's phenomenon to take place, makes the user can freely hold between the fingers the face, improves the degree of accuracy and the naturalness of holding between the fingers the face effect.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the invention and together with the description, serve to explain the principles of the invention.
In order to more clearly illustrate the embodiments or technical solutions in the prior art of the present invention, the drawings used in the description of the embodiments or prior art will be briefly described below, and it is obvious for those skilled in the art that other drawings can be obtained based on these drawings without creative efforts.
FIG. 1 is a flowchart of a facial animation model processing method according to an embodiment of the present disclosure;
fig. 2a is a schematic view of a face decoration according to an embodiment of the present application;
FIG. 2b is a schematic view of a face decoration after pinching a face according to an embodiment of the present application;
FIG. 2c is a schematic view of a post-pinching facial ornament according to another embodiment of the present application;
FIG. 3 is a flowchart of a facial animation model processing method according to another embodiment of the present application;
FIG. 4 is a flowchart of a facial animation model processing method according to another embodiment of the present application;
fig. 5a is a schematic diagram of a bang according to an embodiment of the present disclosure;
FIG. 5b is a schematic diagram of a bang according to another embodiment of the present disclosure;
FIG. 5c is a schematic diagram of a bang according to another embodiment of the present disclosure;
FIG. 6 is a schematic diagram of a facial skeleton hierarchy provided in an embodiment of the present application;
FIG. 7 is a flowchart of a facial animation model processing method according to another embodiment of the present application;
FIG. 8 is a flowchart of a facial animation model processing method according to another embodiment of the present application;
FIG. 9 is a block diagram of a facial animation model processing apparatus according to an embodiment of the present application;
fig. 10 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some embodiments of the present application, but not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
In the embodiments of the present application, after the facial skeleton is adjusted during face pinching, the facial ornament skeleton is adjusted synchronously based on the adjustment parameters of the facial skeleton before the facial ornament skin is applied, so that the facial ornament follows the facial skeleton and mesh penetration is avoided.
First, a method for processing a facial animation model according to an embodiment of the present application will be described.
Fig. 1 is a flowchart of a facial animation model processing method according to an embodiment of the present application. As shown in fig. 1, the method comprises the steps of:
step S11, obtaining adjustment parameters corresponding to the facial skeleton;
a step S12 of determining a facial ornament bone associated with the facial bone;
step S13, adjusting the bones of the facial ornament according to the adjusting parameters;
and step S14, performing skinning treatment on the adjusted face ornament bone to obtain an adjusted face animation model.
Through the above steps S11 to S14, the facial ornament skeleton shares the face-pinching adjustment parameters with the facial skeleton it is associated with; that is, the facial ornament skeleton follows the modifications made to the facial skeleton. This avoids the problem of the facial ornament failing to follow when the facial skeleton changes greatly during face pinching, prevents mesh penetration, lets the user pinch the face freely, and improves the accuracy and naturalness of the face-pinching effect.
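Steps S11 to S14 can be sketched as a small routine. The data layout (scalar transforms) and all names here (process_face_pinch, the bone dictionaries) are illustrative assumptions, not part of the disclosed method.

```python
# Minimal sketch of steps S11-S14 with scalar "transforms" for simplicity.
# All names and data layouts are illustrative assumptions.

def process_face_pinch(face_bones, ornament_assoc, params):
    """face_bones: {bone name: transform value},
    ornament_assoc: {face bone name: [associated ornament bone names]},
    params: {face bone name: adjustment offset}.
    Returns the adjusted ornament-bone transforms."""
    adjusted = {}
    for face_bone, delta in params.items():            # S11: acquire adjustment parameters
        for orn in ornament_assoc.get(face_bone, []):  # S12: find associated ornament bones
            # S13: the ornament bone shares the face bone's adjustment parameter
            adjusted[orn] = face_bones[face_bone] + delta
    return adjusted                                    # S14: skinning would then use these

demo = process_face_pinch(
    {"brow_l": 1.0}, {"brow_l": ["veil_brow_l"]}, {"brow_l": 0.5})
```

The ornament bone never receives its own independent parameter; it reuses the face bone's, which is the core of the scheme.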
Optionally, the facial ornament skeleton and the facial skeleton use the same wiring mesh.
Fig. 2a is a schematic view of a face decoration according to an embodiment of the present application. As shown in fig. 2a, the wiring mesh of the face ornament near the face region is adjusted to coincide with the wiring mesh of the face.
Fig. 2b is a schematic view of a face decoration after pinching a face according to an embodiment of the present application. As shown in fig. 2b, when the eyebrow interval is adjusted, the face decorations positioned at the eyebrow portion follow the adjustment.
Fig. 2c is a schematic view of a face decoration after pinching a face according to another embodiment of the present application. As shown in fig. 2c, when the eyes and the brow bone are rotated, the bones of the face ornaments located at the eyes and the brow follow the adjustment.
Fig. 3 is a flowchart of a facial animation model processing method according to another embodiment of the present application. As shown in fig. 3, the step S14 includes the following steps:
step S21, obtaining a first skinning weight corresponding to the facial skeleton;
step S22, the first skinning weight is used as the second skinning weight of the adjusted face decoration skeleton;
and step S23, performing skinning treatment on the adjusted face decoration skeleton according to the second skinning weight.
In this embodiment, when the facial ornament skin is applied, the skinning weights of the facial ornament skeleton are kept consistent with the weights of the facial skeleton associated with it, so that even if the facial skeleton changes greatly, the facial ornament and its skeleton deform along with the facial skeleton. This improves the smoothness of the skin and avoids mesh penetration.
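The weight sharing of steps S21 to S23 can be sketched as follows. The dict-based weight layout and the names (copy_skin_weights, bone_map) are assumptions for illustration.

```python
# Sketch of steps S21-S23: the ornament skeleton reuses the facial skeleton's
# skinning weights so both meshes deform identically.
# Weight layout and names are illustrative assumptions.

def copy_skin_weights(face_weights, bone_map):
    """face_weights: {vertex_id: {face bone: weight}} (first skinning weight, S21).
    bone_map: {face bone: associated ornament bone}.
    Returns the second skinning weight for the ornament mesh (S22)."""
    ornament_weights = {}
    for vid, weights in face_weights.items():
        ornament_weights[vid] = {
            bone_map[b]: w for b, w in weights.items() if b in bone_map}
    return ornament_weights

# One vertex influenced by two brow bones; the veil bones inherit the weights.
w = copy_skin_weights(
    {0: {"brow_l": 0.7, "brow_r": 0.3}},
    {"brow_l": "veil_l", "brow_r": "veil_r"})
```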
In an alternative embodiment, since facial ornaments also include bangs, if the cheek is enlarged too much during face pinching, the bangs may penetrate the face. To solve this problem, in the present embodiment, collision detection against facial bones is added to hairstyles made with dynamic bones.
Fig. 4 is a flowchart of a facial animation model processing method according to another embodiment of the present application. As shown in fig. 4, the method further includes:
step S31, in the process of the movement of the face decoration, the collision detection is carried out on the bones of the face decoration;
in step S32, when it is detected that the face ornament bone collides with a colliding body, the face ornament bone is moved away from the current position.
In the present embodiment, a bounding box may be added to the facial bone as a collision body; when the facial ornament bone is detected to move into the coordinate range of the bounding box, a collision is determined to have occurred.
Alternatively, an additional collision skeleton may be created for the facial skeleton, and the collision skeleton may be used as a collision body, and when the movement of the facial ornament skeleton to the coordinate position of the collision skeleton is detected, it is determined that the collision has occurred.
Alternatively, a collision detector (Collider) may be directly provided for the facial skeleton, and the positional relationship between the facial ornament skeleton and the facial skeleton may be directly detected to determine whether a collision occurs.
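The bounding-box variant above can be sketched as follows, simplified to an axis-aligned box and a fixed, nonzero push direction. All names and the simplification are illustrative assumptions.

```python
# Sketch of steps S31/S32 with an axis-aligned bounding box as the collision
# body: if the ornament bone enters the box, push it out along push_dir.
# Names and the AABB simplification are illustrative assumptions.

def inside_aabb(point, box_min, box_max):
    """True if point lies inside (or on) the axis-aligned box."""
    return all(lo <= p <= hi for p, lo, hi in zip(point, box_min, box_max))

def resolve(orn_pos, box_min, box_max, push_dir):
    """Move the ornament bone along push_dir (must be nonzero) until it has
    left the collision body, then return the new position."""
    p = list(orn_pos)
    while inside_aabb(p, box_min, box_max):
        p = [c + d for c, d in zip(p, push_dir)]
    return tuple(p)

# A bang bone at the center of a unit box is pushed out along +z.
pushed = resolve((0.0, 0.0, 0.0), (-1.0, -1.0, -1.0), (1.0, 1.0, 1.0),
                 (0.0, 0.0, 0.5))
```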
Optionally, when the facial bone is zoomed, the collision volume is also zoomed accordingly. The method further comprises the following steps: when a first scaling operation on a facial skeleton is acquired, determining a collision body corresponding to the facial skeleton; and executing a second zooming operation on the collision body according to the first zooming operation.
Thus, when the facial bone is enlarged, the collision body is enlarged, and the collision of the facial ornament bone with the facial bone can be detected more accurately.
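The collider-scaling rule (first scaling operation on the bone, second scaling operation on its collision body) can be sketched with a collider represented as a center plus half-extents; the representation and names are assumptions.

```python
# Sketch: when the facial bone is scaled (first scaling operation), apply the
# same factor to its collision body (second scaling operation).
# The (center, half_extents) representation is an illustrative assumption.

def scale_collider(collider, bone_scale):
    center, half = collider
    return (center, tuple(h * bone_scale for h in half))

face_collider = ((0.0, 0.0, 0.0), (1.0, 1.5, 1.0))
enlarged = scale_collider(face_collider, 1.4)   # cheek enlarged -> collider enlarged
shrunk = scale_collider(face_collider, 0.5)     # cheek shrunk -> collider shrunk
```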
Fig. 5a is a schematic diagram of a bang according to an embodiment of the present disclosure. As shown in fig. 5a, when the cheek bone Face is of normal size, the bang 51 does not collide with the collision body 52 corresponding to the cheek bone, and the hair hangs down naturally.
Fig. 5b is a schematic diagram of a bang according to another embodiment of the present disclosure. As shown in fig. 5b, when the cheek bone Face is enlarged, the corresponding collision body 52 is enlarged accordingly, and the bang 51 is pushed away after colliding with the collision body 52.
Fig. 5c is a schematic diagram of a bang according to another embodiment of the present disclosure. As shown in fig. 5c, when the cheek bone Face shrinks, the corresponding collision body 52 also shrinks, and at this time, the collision detection may be performed on the collision body corresponding to the bang and the Head bone Head, so as to prevent the bang from passing through the Head during the movement of the bang.
In the embodiment of the application, the face-pinching bones belong to a subset of the head bone. Fig. 6 is a schematic diagram of a facial skeleton hierarchy provided in an embodiment of the present application. As shown in fig. 6, the sub-bones of the head bone Head include a head adjustment bone Head_Adjust, and each face-pinching bone A, B, C, ..., N is a sub-bone of Head_Adjust.
Facial bones are related to one another. For example, when the eyes are to be rotated, in addition to rotating the right orbital bone Eye_Socket_R, the combined effect of the displacement and rotation of the right inner and outer eye-corner bones Eye_Corner_R_02 and Eye_Corner_R_01 needs to be adjusted; meanwhile, the left orbital bone Eye_Socket_L and the left inner and outer eye-corner bones Eye_Corner_L_02 and Eye_Corner_L_01 are adjusted correspondingly to ensure that the left and right eyes are adjusted symmetrically.
Fig. 7 is a flowchart of a facial animation model processing method according to another embodiment of the present application. As shown in fig. 7, the method further comprises the steps of:
step S51, determining an associated skeleton corresponding to the facial skeleton;
step S52, acquiring a first weight corresponding to the facial skeleton and a second weight corresponding to the associated skeleton;
step S53, determining an adjustment parameter corresponding to the associated skeleton according to the adjustment parameter corresponding to the facial skeleton, the first weight and the second weight;
and step S54, adjusting the associated skeleton according to the adjustment parameter corresponding to the associated skeleton.
In this embodiment, when a user adjusts one of the facial bones, the associated bone corresponding to the facial bone is also adjusted correspondingly, so that the facial bones and the associated bones thereof are synchronously adjusted, and the animation model is quickly and accurately adjusted according to the user's needs. Meanwhile, the user does not need to manually adjust each skeleton one by one, and other related skeletons can be synchronously adjusted only by adjusting one skeleton, so that the complexity of skeleton adjustment operation is reduced.
For mutually associated bones, each bone has a corresponding weight, which represents the bone's adjustment relationship during the adjustment process. For example, each skeleton is divided into 31 gears from -15 to 15 according to its original skeleton data. If the weight of the current bone A is 0.6 and the weight of the associated bone B is 1.2, then each time the current bone A is adjusted by 0.6 gear, the associated bone B is adjusted by 1.2 gears. If the current bone A is adjusted 3 gears from the initial position, the associated bone B is adjusted 3 × 1.2 ÷ 0.6 = 6 gears.
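The gear arithmetic above can be written out directly; the function name is an assumption.

```python
# The weight rule from the text: gear changes scale by the ratio of the
# associated bone's weight to the current bone's weight.
# The function name is an illustrative assumption.

def follow_gears(delta_current, weight_current, weight_assoc):
    """Gears the associated bone moves when the current bone moves delta_current."""
    return delta_current * weight_assoc / weight_current

# The worked example from the text: A (weight 0.6) moves 3 gears,
# so B (weight 1.2) moves 3 * 1.2 / 0.6 = 6 gears.
moved = follow_gears(3, 0.6, 1.2)
```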
The weight may be an adjustment relationship between actual bone data, including a relationship between adjustment parameters such as a rotation angle and a displacement distance, that is, the adjustment parameter is directly calculated according to the bone data of the current bone, and the adjustment parameter corresponding to the associated bone is calculated according to the weight of each bone, which is not described herein again.
In alternative embodiments, the associated skeletal range may be determined based on the turning on or off of the associated adjustment control. The above step S51 includes the following steps: acquiring a control state of the associated adjusting control; when the control state is an open state, determining that the associated skeleton comprises the facial skeleton and other skeletons associated with the facial skeleton; when the control state is an off state, it is determined that the associated skeleton includes only the facial skeleton itself.
For example, for the bones corresponding to an eye, when the association adjustment control is closed and the user adjusts the "right eye socket" bone, the other bones ("right inner canthus," "right outer canthus," "left eye socket," "left inner canthus," and "left outer canthus") are not adjusted along with it. Only when the association adjustment control is on will the associated bones of the "right eye socket" be adjusted synchronously.
Optionally, when the control state is the open state, the control state further includes an association range level; the step S51 further includes: and determining the related skeleton of the facial skeleton according to the related range level.
For example, for the bones corresponding to the eye, when the association range level is one level, the user adjusts the bones of the "right eye socket", only the "right inner corner of the eye" and the "right outer corner of the eye" bones follow the adjustment, while the bones of the "left eye socket", "left inner corner of the eye" and "left outer corner of the eye" remain unchanged.
Optionally, a plurality of gears may be set on the association adjustment control, with different gears corresponding to different associated skeleton ranges. For example, for the bones of the eye, the associated bones corresponding to gear 1 are all the bones of the eye; the associated bones corresponding to gear 2 include the nose bones in addition to all the bones of the eye; and the associated bones corresponding to gear 3 include the mouth bones in addition to all the bones of the eye and the nose bones.
In the above embodiment, each gear of the association adjustment control is associated with a tree-shaped bone data selection range. When the associated adjusting control is started, reading all sub-skeletons of the current facial skeleton; when the associated adjustment control is closed, only the current bone is read; when the associated regulating control is in a certain gear, reading the sub-skeleton corresponding to the range of the gear. In the bone adjusting process, the father bone of the current facial bone is not affected generally, but the father bone can be controlled to make corresponding following adjustment when the specific bone is adjusted according to the requirement.
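The gear-to-range selection described above can be sketched as a lookup table; the particular bone names and table contents are illustrative assumptions.

```python
# Sketch of the tree-shaped selection range: each gear of the association
# control maps to the set of sub-skeletons that will be read for adjustment.
# Bone names and the gear table are illustrative assumptions.

EYE = ["eye_socket_r", "eye_corner_r"]
NOSE = ["nose"]
MOUTH = ["mouth"]
GEAR_RANGES = {
    0: [],                  # control closed: only the current bone is read
    1: EYE,                 # gear 1: all eye bones
    2: EYE + NOSE,          # gear 2: eye bones plus nose bones
    3: EYE + NOSE + MOUTH,  # gear 3: eye, nose, and mouth bones
}

def bones_to_read(current_bone, gear):
    """Current bone first, then the rest of the gear's range."""
    extra = [b for b in GEAR_RANGES[gear] if b != current_bone]
    return [current_bone] + extra

sel = bones_to_read("eye_socket_r", 2)
```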
In an alternative embodiment, when adjusting multiple associated bones, each bone needs a uniform initial state, i.e. multiple bones are adjusted from the same initial state. The step S53 includes: acquiring an intermediate gear and a parameter adjusting range corresponding to the associated skeleton, wherein the intermediate gear of the current skeleton is the same as that of the associated skeleton; and determining the corresponding adjustment parameters of the associated bones according to the intermediate gears and the parameter adjustment range.
When a plurality of associated bones are adjusted, the initial state of each bone, namely the intermediate state of each bone, needs to be synchronized, so that each associated bone can be uniformly adjusted subsequently. Therefore, the method further comprises the step of determining the intermediate gear of each bone, as follows:
step A1, acquiring a first parameter adjusting range of a current skeleton, a second parameter adjusting range of a related skeleton and the gear number of a first adjusting assembly and a second adjusting assembly;
step A2, determining a first original gear corresponding to the current bone original bone data according to the first parameter adjusting range and the gear number, and determining a second original gear corresponding to the associated bone original bone data according to the second parameter adjusting range and the gear number;
step A3, calculating an intermediate gear according to the first original gear, the second original gear, the first weight and the second weight.
The process of determining the intermediate gear will be described in detail below by way of a specific example.
Each skeleton is divided into 31 gears from -15 to 15 according to its original skeleton data.
Consider three associated bones A, B and C, whose original gears are calculated from their original bone data to be -10, 7 and 1, respectively.
If the weights corresponding to bones A, B and C are the same, the intermediate gear is the average of the original gears:
(-10 + 7 + 1) ÷ 3 ≈ -0.67
If the weights corresponding to bones A, B and C are 0.6, 1.2 and 0.8 respectively, the intermediate gear is the weighted average:
(0.6 × (-10) + 1.2 × 7 + 0.8 × 1) ÷ (0.6 + 1.2 + 0.8) = 3.2 ÷ 2.6 ≈ 1.23
In this way, bones A, B and C are all adjusted from the same intermediate gear as a starting point; although this initial state differs slightly from the original effect, the error is relatively small, which facilitates subsequent multi-bone associated adjustment.
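Under the assumption that the intermediate gear is computed as the (weight-weighted) mean of the associated bones' original gears, steps A1 to A3 reduce to a few lines; the exact formulas in the source were lost, so this is a hedged reconstruction.

```python
# Reconstruction of steps A1-A3, ASSUMING the intermediate gear is the
# weight-weighted mean of the original gears (the source's formula images
# were not recoverable). Function name is an assumption.

def intermediate_gear(original_gears, weights=None):
    """Common starting gear for a set of associated bones (step A3)."""
    if weights is None:
        weights = [1.0] * len(original_gears)   # equal weights
    total_w = sum(weights)
    return sum(g * w for g, w in zip(original_gears, weights)) / total_w

# The example from the text: original gears -10, 7, 1.
equal = intermediate_gear([-10, 7, 1])
weighted = intermediate_gear([-10, 7, 1], [0.6, 1.2, 0.8])
```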
Optionally, because the pinched-face animation model contains a large number of bones and modification parameters, the facial part of the animation model can be collapsed: the vertex information corresponding to the bone data is baked into the vertices of the original animation model, and the face-pinching/character-customization bones, together with their modification parameters, are deleted. The rendered animation model is then the post-pinching model, which reduces the bone count of the animation model, reduces the amount of computation during rendering, increases computation speed, and improves game fluency.
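The baking step described above can be sketched in one dimension: each vertex's bone-driven offset, blended by skin weight, is written into the vertex position, and the pinch bones are then dropped. The scalar simplification and all names are assumptions.

```python
# Sketch of the collapse: bake skin-weighted bone offsets into the vertices,
# then delete the pinch bones and their parameters.
# 1-D positions and all names are illustrative assumptions.

def collapse(vertices, skin_weights, bone_offsets):
    """vertices: {vid: position}; skin_weights: {vid: {bone: weight}};
    bone_offsets: {bone: offset}. Returns (baked vertices, remaining bones)."""
    baked = {}
    for vid, pos in vertices.items():
        shift = sum(w * bone_offsets.get(b, 0.0)
                    for b, w in skin_weights.get(vid, {}).items())
        baked[vid] = pos + shift
    return baked, {}   # pinch bones and their modification parameters are deleted

verts, bones = collapse({0: 1.0}, {0: {"jaw": 0.5}}, {"jaw": 2.0})
```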
Fig. 8 is a flowchart of a facial animation model processing method according to another embodiment of the present application. As shown in fig. 8, the method further comprises the steps of:
step S61, executing the collapse operation of the face animation model according to the first collapse strategy to obtain a first collapse animation model;
step S62, rendering the first collapse animation model;
step S63, storing the facial animation model in the server;
step S64, when receiving the adjustment operation of the first collapse animation model, obtaining the face animation model from the server;
step S65, adjusting the face animation model to obtain an updated face animation model;
step S66, executing the collapse operation of the updated animation model according to the second collapse strategy to obtain a second collapse animation model;
step S67, rendering the second collapse animation model;
in step S68, the updated facial animation model is stored in the server.
Wherein the first collapse strategy and the second collapse strategy may be the same or different.
Through the above steps S61 to S68, after face pinching/character customization is completed, a collapse operation is performed on the animation model, which reduces the amount of bone data in the model, reduces the amount of computation during rendering, increases computation speed, and improves game fluency. In addition, the original animation model is not deleted after the collapse operation: the collapsed animation model is rendered, while the original animation model is stored on the server so that it can be modified again later.
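Steps S61 to S68 can be sketched as a small client/server round trip: render a collapsed copy, keep the full model on the server for later re-adjustment. The Server stub, the scalar "model", and the function names are illustrative assumptions.

```python
# Sketch of steps S61-S68; all classes and names are illustrative assumptions.

class Server:
    """Stand-in for the remote store that keeps the full (uncollapsed) model."""
    def __init__(self):
        self.store = {}
    def save(self, key, model):
        self.store[key] = model
    def load(self, key):
        return self.store[key]

def collapse_model(model, strategy):
    return {"collapsed": True, "strategy": strategy, "base": model}

def pinch_and_render(server, model, strategy):
    rendered = collapse_model(model, strategy)  # S61/S66: collapse per strategy
    server.save("face", model)                  # S63/S68: keep the full model
    return rendered                             # S62/S67: render the collapsed copy

def adjust(server, delta, strategy):
    model = server.load("face")                 # S64: fetch the full model
    model = model + delta                       # S65: apply the adjustment
    return pinch_and_render(server, model, strategy)

srv = Server()
first = pinch_and_render(srv, 10, "strategy-1")
second = adjust(srv, 5, "strategy-2")
```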
In the above embodiment, whether the collapse operation is performed may be controlled by opening or closing a collapse control, which may take the form of a slider, a knob, or the like. The collapse level may also be adjusted through the collapse control. For example, the collapse control may be provided with a plurality of gears: gear 1 corresponds to collapse level 1, deleting 30% of the bone data; gear 2 corresponds to collapse level 2, deleting 50% of the bone data; and gear 3 corresponds to collapse level 3, deleting 70% of the bone data.
In an alternative embodiment, the collapse strategy is adjusted automatically according to the device. The method further comprises: acquiring performance parameters of the device performing the rendering; and determining the collapse strategy according to the performance parameters.
In an alternative embodiment, whether the collapse control needs to be enabled, and the collapse level applied once it is enabled, may be determined according to the number of animation models to be rendered in the same screen or the number of bones (or the amount of bone data) to be rendered. For example, when 1 to 3 animation models need to be rendered in the same screen, the collapse control is not enabled; when there are 4 or more, it is enabled. When there are 4 to 6 animation models in the same screen, a collapse strategy of collapse level 1 is adopted and 30% of each animation model's bone data is deleted; when there are 7 to 9, collapse level 2 is adopted and 50% of each animation model's bone data is deleted; when there are 10 or more, collapse level 3 is adopted and 70% of each animation model's bone data is deleted.
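The example thresholds above can be expressed as a small selection function (the function name is illustrative):

```python
def collapse_level_for_screen(model_count):
    # Collapse level from the number of same-screen animation models,
    # following the example thresholds in the text above.
    if model_count <= 3:
        return None  # collapse control stays off
    if model_count <= 6:
        return 1     # delete 30% of each model's bone data
    if model_count <= 9:
        return 2     # delete 50%
    return 3         # delete 70%
```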
In another alternative embodiment, the user may control whether the collapse strategy is enabled and may select the collapse level as desired.
The following are embodiments of the apparatus of the present application that may be used to perform embodiments of the method of the present application.
Fig. 9 is a block diagram of a facial animation model processing apparatus provided in an embodiment of the present application, which may be implemented as part or all of an electronic device through software, hardware, or a combination of the two. As shown in fig. 9, the facial animation model processing apparatus includes:
the acquisition module 1 is used for acquiring adjustment parameters corresponding to facial bones;
a determination module 2 for determining facial ornament bones associated with said facial bones;
an adjusting module 3, for adjusting the facial ornament bone according to the adjusting parameter;
and a skinning processing module 4, configured to perform skinning on the adjusted facial ornament bone to obtain an adjusted facial animation model.
Optionally, the facial ornament bone and the facial bone share the same mesh topology.
Optionally, the skinning processing module 4 is configured to: obtain a first skinning weight corresponding to the facial bone; use the first skinning weight as a second skinning weight of the adjusted facial ornament bone; and perform skinning on the adjusted facial ornament bone according to the second skinning weight.
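Because the ornament bone shares the facial bone's mesh topology, the first skinning weight can be reused directly as the second. A minimal sketch, with a single linear-blend step added for illustration (the helper names are hypothetical):

```python
def transfer_skin_weights(first_weights):
    # Second skinning weight = first skinning weight: the per-vertex weights
    # carry over unchanged (dict maps vertex index -> weight).
    return dict(first_weights)

def skinned_position(rest_position, bone_offset, weight):
    # Minimal linear-blend step: the vertex follows the adjusted bone's
    # offset, scaled by its skinning weight.
    return tuple(p + weight * d for p, d in zip(rest_position, bone_offset))

second = transfer_skin_weights({0: 1.0, 1: 0.25})
moved = skinned_position((0.0, 0.0, 0.0), (2.0, 0.0, 0.0), second[1])
```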
Optionally, the apparatus further comprises:
a collision detection module 5, configured to perform collision detection on the facial ornament bone during movement of the facial ornament;
and a displacement module 6, configured to move the facial ornament bone away from its current position when a collision between the facial ornament bone and a collision body is detected.
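The detect-and-displace behavior of modules 5 and 6 can be sketched as follows; modeling the collision body as a sphere and pushing the bone out along the center-to-bone direction are simplifying assumptions, not details from the patent:

```python
import math

def resolve_ornament_bone(bone_pos, collider_center, collider_radius):
    # Detect a collision against a spherical collision body and, if the
    # ornament bone is inside it, move the bone away from its current
    # position to the collider surface.
    offset = [b - c for b, c in zip(bone_pos, collider_center)]
    dist = math.sqrt(sum(x * x for x in offset))
    if dist >= collider_radius:  # no collision: keep the current position
        return tuple(bone_pos)
    dist = dist or 1.0           # avoid dividing by zero at the exact center
    return tuple(c + x / dist * collider_radius
                 for c, x in zip(collider_center, offset))
```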
Optionally, the apparatus further comprises:
a collision body determination module 7, configured to determine a collision body corresponding to the facial bone when a first scaling operation on the facial bone is acquired;
and a scaling module 8, configured to perform a second scaling operation on the collision body according to the first scaling operation.
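The intent of modules 7 and 8 is to keep the collision body in step with the scaled bone. A one-line sketch, assuming a uniform scale factor and a spherical collision body (both assumptions for illustration):

```python
def sync_collider_scale(bone_scale, collider_radius, scale_factor):
    # First scaling operation: the facial bone is scaled by scale_factor.
    # Second scaling operation: its collision body is scaled by the same
    # factor so collision detection stays consistent with the bone's size.
    return bone_scale * scale_factor, collider_radius * scale_factor
```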
Optionally, the determining module 2 is configured to: obtain an adjustment bone corresponding to the facial bone, where the adjustment bone is bound to the facial animation model and inherits the coordinates of the facial bone; and determine the facial ornament bone associated with the adjustment bone.
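A minimal sketch of this lookup follows; the `Bone` class, the `adj_` naming scheme, and the association index are hypothetical illustrations of "inherits the coordinates" and "associated with":

```python
class Bone:
    def __init__(self, name, coords):
        self.name = name
        self.coords = coords

def adjustment_bone_for(facial_bone, ornament_index):
    # Build an adjustment bone that inherits the facial bone's coordinates,
    # then look up the ornament bones associated with it.
    adj = Bone("adj_" + facial_bone.name, facial_bone.coords)  # inherited coords
    return adj, ornament_index.get(adj.name, [])

adj, ornaments = adjustment_bone_for(
    Bone("jaw", (0.0, 1.0, 2.0)),
    {"adj_jaw": ["earring_l", "earring_r"]},
)
```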
Optionally, the apparatus further comprises:
a collapse module 9, configured to perform a collapse operation on the face animation model according to a first collapse strategy, so as to obtain a first collapsed animation model;
a rendering module 10 for rendering the first collapsed animation model;
a storage module 11, configured to store the facial animation model in a server;
a model obtaining module 12 for obtaining the facial animation model from the server when an adjustment operation for the first collapsed animation model is received;
the adjusting module 13 is configured to perform the adjusting operation on the facial animation model to obtain an updated facial animation model;
a collapse module 9, configured to perform a collapse operation on the updated animation model according to a second collapse strategy to obtain a second collapsed animation model;
a rendering module 10, configured to render the second collapsed animation model;
a storage module 11, configured to store the updated facial animation model in the server.
An embodiment of the present application further provides an electronic device. As shown in fig. 10, the electronic device may include a processor 1501, a communication interface 1502, a memory 1503 and a communication bus 1504, where the processor 1501, the communication interface 1502 and the memory 1503 communicate with each other through the communication bus 1504.
A memory 1503 for storing a computer program;
The processor 1501, when executing the computer program stored in the memory 1503, implements the steps of the method embodiments described above.
The communication bus mentioned in connection with the electronic device may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The communication bus may be divided into an address bus, a data bus, a control bus, and so on. For ease of illustration, only one thick line is shown, but this does not mean that there is only one bus or one type of bus.
The communication interface is used for communication between the electronic equipment and other equipment.
The Memory may include a Random Access Memory (RAM) or a Non-Volatile Memory (NVM), such as at least one disk Memory. Optionally, the memory may also be at least one memory device located remotely from the processor.
The Processor may be a general-purpose Processor, including a Central Processing Unit (CPU), a Network Processor (NP), and the like; but may also be a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other Programmable logic device, discrete Gate or transistor logic device, discrete hardware component.
The present application also provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, carries out the steps of the method embodiments described above.
It should be noted that, for the above-mentioned apparatus, electronic device and computer-readable storage medium embodiments, since they are basically similar to the method embodiments, the description is relatively simple, and for the relevant points, reference may be made to the partial description of the method embodiments.
It is further noted that, herein, relational terms such as "first" and "second" may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a(n) ..." does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
The foregoing are merely exemplary embodiments of the present invention, enabling those skilled in the art to understand or implement the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (10)

1. A facial animation model processing method is characterized by comprising the following steps:
acquiring adjustment parameters corresponding to facial bones;
determining a facial ornament bone associated with the facial bone;
adjusting the facial ornament bone according to the adjustment parameter;
and performing skinning on the adjusted facial ornament bone to obtain an adjusted facial animation model.
2. The method of claim 1, wherein the facial ornament bone and the facial bone share the same mesh topology.
3. The method of claim 1, wherein performing skinning on the adjusted facial ornament bone comprises:
obtaining a first skinning weight corresponding to the facial bone;
using the first skinning weight as a second skinning weight of the adjusted facial ornament bone;
and performing skinning on the adjusted facial ornament bone according to the second skinning weight.
4. The method of claim 1, further comprising:
performing collision detection on the facial ornament bone during movement of the facial ornament;
and moving the facial ornament bone away from its current position when it is detected that the facial ornament bone collides with a collision body.
5. The method of claim 4, further comprising:
when a first scaling operation on the facial bone is acquired, determining a collision body corresponding to the facial bone;
and performing a second scaling operation on the collision body according to the first scaling operation.
6. The method of claim 1, wherein said determining a facial ornament bone associated with said facial bone comprises:
obtaining an adjustment bone corresponding to the facial bone, wherein the adjustment bone is bound to the facial animation model and inherits the coordinates of the facial bone;
and determining the facial ornament bone associated with the adjustment bone.
7. The method of claim 1, further comprising:
executing the collapse operation of the face animation model according to a first collapse strategy to obtain a first collapsed animation model;
rendering the first collapsed animation model;
storing the facial animation model in a server;
when an adjustment operation on the first collapsed animation model is received, acquiring the facial animation model from the server;
executing the adjustment operation on the facial animation model to obtain an updated facial animation model;
executing collapse operation on the updated animation model according to a second collapse strategy to obtain a second collapsed animation model;
rendering the second collapsed animation model;
storing the updated facial animation model in the server.
8. A facial animation model processing apparatus, comprising:
the acquisition module is used for acquiring adjustment parameters corresponding to facial bones;
a determination module to determine facial ornament bones associated with the facial bones;
an adjustment module for adjusting the facial embellishment bone according to the adjustment parameter;
and the skinning processing module is used for skinning the adjusted face ornament skeleton to obtain the adjusted face animation model.
9. An electronic device, comprising: the system comprises a processor, a communication interface, a memory and a communication bus, wherein the processor, the communication interface and the memory complete mutual communication through the communication bus;
the memory is used for storing a computer program;
the processor, when executing the computer program, implementing the method steps of any of claims 1-7.
10. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the method steps of any one of claims 1 to 7.
CN202011036167.5A 2020-09-27 2020-09-27 Facial animation model processing method and device, electronic equipment and storage medium Pending CN112107865A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011036167.5A CN112107865A (en) 2020-09-27 2020-09-27 Facial animation model processing method and device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN112107865A true CN112107865A (en) 2020-12-22

Family

ID=73797188

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011036167.5A Pending CN112107865A (en) 2020-09-27 2020-09-27 Facial animation model processing method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN112107865A (en)

Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080024487A1 (en) * 2006-07-31 2008-01-31 Michael Isner Converting deformation data for a mesh to animation data for a skeleton, skinning and shading in a runtime computer graphics animation engine
CN105976417A (en) * 2016-05-27 2016-09-28 腾讯科技(深圳)有限公司 Animation generating method and apparatus
CN105976418A (en) * 2016-06-28 2016-09-28 珠海金山网络游戏科技有限公司 Design system and method for human dynamic bone
CN106228592A (en) * 2016-09-12 2016-12-14 武汉布偶猫科技有限公司 A kind of method of clothing threedimensional model automatic Bind Skin information
CN106355629A (en) * 2016-08-19 2017-01-25 腾讯科技(深圳)有限公司 Virtual image configuration method and device
CN106504309A (en) * 2016-11-24 2017-03-15 腾讯科技(深圳)有限公司 A kind of method of image synthesis and image synthesizer
CN108734758A (en) * 2017-04-25 2018-11-02 腾讯科技(深圳)有限公司 A kind of model of image configuration method, device and computer storage media
CN108961386A (en) * 2017-05-26 2018-12-07 腾讯科技(深圳)有限公司 The display methods and device of virtual image
CN108961365A (en) * 2017-05-19 2018-12-07 腾讯科技(深圳)有限公司 Three-dimensional object swinging method, device, storage medium and computer equipment
CN110111417A (en) * 2019-05-15 2019-08-09 浙江商汤科技开发有限公司 Generation method, device and the equipment of three-dimensional partial body's model
CN110111247A (en) * 2019-05-15 2019-08-09 浙江商汤科技开发有限公司 Facial metamorphosis processing method, device and equipment
CN110189413A (en) * 2019-05-31 2019-08-30 广东元一科技实业有限公司 A kind of method and system generating clothes distorted pattern
CN111062864A (en) * 2019-12-20 2020-04-24 网易(杭州)网络有限公司 Animation model scaling method and device, electronic equipment and storage medium



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination