CN110705094A - Flexible body simulation method and device, electronic equipment and computer readable storage medium - Google Patents


Info

Publication number
CN110705094A
Authority
CN
China
Prior art keywords
flexible body
frame
image
information
body assembly
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910935599.0A
Other languages
Chinese (zh)
Inventor
韩蕊
黄展鹏
戴立根
朱袁煊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Sensetime Technology Co Ltd
Original Assignee
Shenzhen Sensetime Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Sensetime Technology Co Ltd filed Critical Shenzhen Sensetime Technology Co Ltd
Priority to CN201910935599.0A priority Critical patent/CN110705094A/en
Publication of CN110705094A publication Critical patent/CN110705094A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/20Finite element generation, e.g. wire-frame surface description, tesselation

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The embodiment of the application discloses a flexible body simulation method and apparatus, an electronic device, and a computer-readable storage medium. The method includes: acquiring pose information and/or key point information of a target object contained in at least one frame of image in a video stream; determining deformation parameters of a flexible body assembly corresponding to each frame of image in the at least one frame of image based on the pose information and/or the key point information; and driving the flexible body assembly corresponding to the video stream based on the deformation parameters of the flexible body assembly corresponding to each frame of image. By obtaining the deformation parameters of the flexible body assembly and driving the assembly with them, simulation of bendable flexible bodies such as necks, ears, and hair is achieved, so that the simulated actions of the flexible body assembly better match real conditions.

Description

Flexible body simulation method and device, electronic equipment and computer readable storage medium
Technical Field
The present application relates to computer vision technologies, and in particular, to a flexible body simulation method and apparatus, an electronic device, and a computer-readable storage medium.
Background
At present, there are many products on the market that use real-time facial expressions and actions to drive an avatar to make the corresponding expressions. These products can run on a personal computer (PC) or a mobile device. By detecting key points of a human face in a three-dimensional (3D) or two-dimensional (2D) image captured by a camera, the real-time positions of the various parts of the face can be accurately located and the face can be tracked in real time, so that it is located accurately even when it rotates through a large angle. After the key point information is obtained, expression coefficients corresponding to the key points are computed, and each group of expression coefficients is then applied in real time, frame by frame, to the expression bases of the virtual character, so that the virtual character makes the same expression as the detected face.
Disclosure of Invention
The embodiment of the application provides a flexible body simulation technology.
According to an aspect of an embodiment of the present application, there is provided a flexible body simulation method, including:
acquiring pose information and/or key point information of a target object contained in at least one frame of image in a video stream;
determining deformation parameters of a flexible body assembly corresponding to each frame of image in the at least one frame of image based on the pose information and/or the key point information;
and driving the flexible body component corresponding to the video stream based on the deformation parameter of the flexible body component corresponding to each frame of image.
Optionally, in any of the method embodiments described above in the present application, the target object includes a human face;
the determining, based on the pose information and the key point information, a deformation parameter of a flexible body assembly corresponding to each frame of image in the at least one frame of image includes:
acquiring associated key point information from the key point information based on the flexible body component corresponding to each frame of image;
determining an action coefficient corresponding to the flexible body assembly based on the pose information and the associated key point information;
and determining a deformation parameter of the flexible body assembly based on the action coefficient and the expression base model corresponding to the flexible body assembly.
Optionally, in any of the method embodiments described above, the action coefficient represents the degree of completion of a corresponding action;
the determining of the deformation parameter of the flexible body component based on the action coefficient and the expression base model corresponding to the flexible body component comprises:
searching a deformation parameter corresponding to the action coefficient from an expression base model corresponding to the flexible body component, wherein the deformation parameter comprises a deformation amplitude corresponding to the action;
and taking the deformation parameter corresponding to the action coefficient as the deformation parameter of the flexible body component.
Optionally, in any of the method embodiments described above, before the acquiring pose information and/or keypoint information of the target object included in at least one frame of image in the video stream, the method further includes:
and establishing the expression base models for all the flexible body components to be subjected to flexible deformation, wherein the expression base models quantize deformation amplitudes of the flexible body components, different deformation amplitudes correspond to different action coefficients, and each flexible body component corresponds to one expression base model.
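By way of illustration, an expression base model of this kind can be thought of as a table mapping quantized action coefficients to deformation amplitudes, one table per flexible body component. The following Python sketch is only a minimal reading of that idea; the class name, the linear interpolation between quantized entries, and all numeric values are assumptions, not part of the application.

```python
import numpy as np

class ExpressionBase:
    """One expression base model for one flexible body component.

    `coefficients` are quantized action coefficients in [0, 1]; `amplitudes`
    are the deformation amplitudes recorded for them. Values are illustrative.
    """
    def __init__(self, coefficients, amplitudes):
        self.coefficients = np.asarray(coefficients, dtype=float)
        self.amplitudes = np.asarray(amplitudes, dtype=float)

    def deformation_for(self, action_coefficient):
        # Search the deformation amplitude corresponding to the action
        # coefficient, interpolating between the quantized entries.
        return float(np.interp(action_coefficient,
                               self.coefficients, self.amplitudes))

# A hypothetical "neck bend" component: 0 = no bend, 1 = full bend (30 units).
neck_base = ExpressionBase(coefficients=[0.0, 0.5, 1.0],
                           amplitudes=[0.0, 15.0, 30.0])
deformation = neck_base.deformation_for(0.25)  # -> 7.5
```

An action coefficient of 0.25 for this hypothetical component interpolates to a deformation amplitude of 7.5, which would then be used as the deformation parameter of that component.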
Optionally, in any of the above method embodiments of the present application, the flexible body assembly comprises at least one grid point;
the determining, based on the pose information, a deformation parameter of a flexible body assembly corresponding to each frame of image in the at least one frame of image includes:
determining a position of each grid point in the flexible body assembly in each frame of image based on the pose information;
determining a deformation parameter of the flexible body assembly based on a position of each grid point in each frame of image in the flexible body assembly.
Optionally, in any one of the above method embodiments of the present application, the determining a position of each grid point in the flexible body assembly in each frame of image based on the pose information includes:
for each grid point in the flexible body assembly, obtaining the acting force of the grid point based on the pose information of the current frame image and preset acting force information received by the grid point;
determining the speed and the offset direction of the grid point corresponding to the current frame image based on the acting force of the grid point and the speed information of the grid point corresponding to the previous frame image;
and determining the position of the grid point in the current frame image based on the speed and the offset direction of the grid point corresponding to the current frame image and the position information of the grid point corresponding to the previous frame image.
Optionally, in any of the method embodiments described above, before determining, based on the pose information, a deformation parameter of a flexible body assembly corresponding to each frame of image in the at least one frame of image, the method further includes:
establishing a spring mass point model for the flexible body assembly, and taking each grid point in the flexible body assembly as a mass point in the spring mass point model.
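The three steps above (acting force, then speed and offset direction, then position) can be sketched as a per-frame update of one mass point in a spring mass point model. The integration scheme (semi-implicit Euler), the time step, and the example forces below are assumptions made for illustration; the application does not prescribe them.

```python
import numpy as np

class GridPoint:
    """A grid point treated as a mass point of the spring mass point model."""
    def __init__(self, position, mass=1.0):
        self.position = np.asarray(position, dtype=float)
        self.velocity = np.zeros(3)   # speed and offset direction, previous frame
        self.mass = mass

def step(point, pose_force, preset_force, dt=1.0 / 30.0):
    """One frame of the per-grid-point update described above."""
    # 1) acting force on the grid point: a force derived from the pose of the
    #    current frame plus the preset force (e.g. gravity or spring tension)
    force = np.asarray(pose_force, dtype=float) + np.asarray(preset_force, dtype=float)
    # 2) speed and offset direction for the current frame, from the force and
    #    the previous frame's velocity (semi-implicit Euler, assumed scheme)
    point.velocity = point.velocity + (force / point.mass) * dt
    # 3) position in the current frame, from the new velocity and the
    #    previous frame's position
    point.position = point.position + point.velocity * dt
    return point.position

p = GridPoint([0.0, 1.0, 0.0])
step(p, pose_force=[0.1, 0.0, 0.0], preset_force=[0.0, -9.8, 0.0])
```

After one frame the point has drifted slightly along the pose-derived force and sagged under the preset gravity, which is exactly the deformation the flexible body assembly would exhibit for that frame.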
Optionally, in any one of the method embodiments described above, the determining, based on the pose information, a deformation parameter of a flexible body assembly corresponding to each frame of image in the at least one frame of image includes:
obtaining deformation parameters of a flexible deformation area in the flexible body assembly based on a simulation calculation matrix and a set rotation center;
and taking the deformation parameter of the flexible deformation area as the deformation parameter of the flexible body assembly.
Optionally, in any of the method embodiments described above, before determining, based on the pose information, a deformation parameter of a flexible body assembly corresponding to each frame of image in the at least one frame of image, the method further includes:
dividing the flexible body assembly into a fixed area and the flexible deformation area, and obtaining a connecting grid point between the fixed area and the flexible deformation area;
and determining a simulation calculation matrix and a set rotation center of the flexible deformation area based on the connection grid points, the grid points in the fixed area and the grid points of the flexible deformation area.
Optionally, in any of the method embodiments described above, before determining, based on the pose information, a deformation parameter of a flexible body assembly corresponding to each frame of image in the at least one frame of image, the method further includes:
dividing the flexible body assembly into a fixed region, a rigid body deformation region and the flexible deformation region, and obtaining a first connecting grid point between the fixed region and the flexible deformation region and a second connecting grid point between the rigid body deformation region and the flexible deformation region;
and determining a simulation calculation matrix and a set rotation center of the flexible deformation region based on the first connection grid point, the second connection grid point and the grid point of the flexible deformation region.
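As a rough illustration of deforming only the flexible deformation region about a set rotation center, the sketch below stands a plain 2D rotation matrix in for the simulation calculation matrix, whose actual form the application does not specify; region labels, coordinates, and the angle are all hypothetical.

```python
import numpy as np

def rotate_flexible_region(points, region_labels, center, angle_rad):
    """Rotate only the flexible deformation region about the set rotation
    center; grid points of the fixed region keep their positions."""
    c, s = np.cos(angle_rad), np.sin(angle_rad)
    R = np.array([[c, -s], [s, c]])   # stands in for the simulation calculation matrix
    pts = np.asarray(points, dtype=float)
    out = pts.copy()
    flexible = np.asarray(region_labels) == "flexible"
    out[flexible] = (pts[flexible] - center) @ R.T + center
    return out

# Three collinear grid points; index 1 acts as the connection grid point.
points = [[0.0, 0.0], [1.0, 0.0], [2.0, 0.0]]
labels = ["fixed", "fixed", "flexible"]
moved = rotate_flexible_region(points, labels,
                               center=np.array([1.0, 0.0]),
                               angle_rad=np.pi / 2)
# The flexible point swings from (2, 0) to (1, 1); the fixed points stay put.
```

The connection grid point sits on the boundary of both regions, which is why the set rotation center is placed there in this toy example.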
Optionally, in any one of the method embodiments described above, the determining, based on the pose information and the keypoint information, a deformation parameter of a flexible body assembly corresponding to each frame of image in the at least one frame of image includes:
determining a movement variation and a rotation variation corresponding to each of the at least two bones based on the pose information and the keypoint information;
determining a deformation parameter of the flexible body assembly based on the amount of movement change and the amount of rotation change corresponding to each bone.
Optionally, in any one of the method embodiments described above, before determining, based on the pose information and the keypoint information, a deformation parameter of a flexible body assembly corresponding to each frame of image in the at least one frame of image, the method further includes:
defining at least two bones for the flexible body assembly, obtaining a positional relationship between a grid point in the flexible body assembly and the bones;
the determining of the deformation parameter of the flexible body assembly based on the amount of movement change and the amount of rotation change corresponding to each bone comprises:
and determining a deformation parameter of the flexible body assembly based on the position relation between the grid points in the flexible body assembly and the bones and the movement variation and the rotation variation corresponding to each bone.
Optionally, in any of the above method embodiments of the present application, the obtaining a positional relationship between the grid point in the flexible body assembly and the bone comprises:
performing a skinning operation on at least two bones in the flexible body assembly to obtain a positional relationship between a grid point in the flexible body assembly and the bones.
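A common concrete form of the positional relationship produced by a skinning operation is a per-point weight matrix, as in linear blend skinning. The application does not name a specific skinning algorithm, so the following sketch is only one plausible reading, with made-up weights and transforms.

```python
import numpy as np

def linear_blend_skinning(rest_points, weights, bone_transforms):
    """Deform grid points as a weighted blend of bone transforms.

    rest_points:     (P, 3) grid point positions in the rest pose
    weights:         (P, B) skinning weights; each row sums to 1
    bone_transforms: B homogeneous 4x4 matrices, one per bone
    """
    pts = np.asarray(rest_points, dtype=float)
    w = np.asarray(weights, dtype=float)
    homo = np.hstack([pts, np.ones((len(pts), 1))])      # homogeneous coords
    out = np.zeros_like(pts)
    for b, t in enumerate(bone_transforms):
        out += w[:, b:b + 1] * (homo @ np.asarray(t, dtype=float).T)[:, :3]
    return out

identity = np.eye(4)
shifted = np.eye(4)
shifted[0, 3] = 1.0                      # second bone moved +1 along x
# One grid point bound half-and-half to the two bones ends up halfway.
deformed = linear_blend_skinning([[0.0, 0.0, 0.0]], [[0.5, 0.5]],
                                 [identity, shifted])   # -> [[0.5, 0.0, 0.0]]
```

Once the weights are known, the movement and rotation variation of each bone determine the deformation of every grid point, which is the deformation parameter of the flexible body assembly.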
Optionally, in any one of the method embodiments described above, before determining, based on the pose information and the keypoint information, a deformation parameter of a flexible body assembly corresponding to each frame of image in the at least one frame of image, the method further includes:
determining articulation points between each two bones based on at least two bones included in the flexible body assembly and obtaining associated force information for each of the articulation points;
the determining, based on the pose information and the keypoint information, a movement variation and a rotation variation corresponding to each of the at least two bones comprises:
determining the magnitude and direction of the acting force received by each joint point based on the pose information, the key point information and the relevant acting force information of each joint point;
and determining the movement change quantity and the rotation change quantity corresponding to each bone of the at least two bones based on the magnitude and the direction of the acting force received by each joint point.
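One simple way to picture the mapping from the force received at a joint point to a bone's movement and rotation variation is to translate the bone along the force and rotate it by the torque of that force about the bone's anchor. The gains, time step, and the mapping itself are assumptions for illustration; the application leaves them unspecified.

```python
import numpy as np

def bone_deltas(joint_pos, joint_force, bone_anchor,
                dt=1.0 / 30.0, linear_gain=1.0, angular_gain=1.0):
    """Turn the force received at a joint point into a bone's movement
    variation (along the force) and rotation variation (axis-angle vector
    from the torque r x F about the bone's anchor)."""
    f = np.asarray(joint_force, dtype=float)
    r = np.asarray(joint_pos, dtype=float) - np.asarray(bone_anchor, dtype=float)
    movement = linear_gain * f * dt                 # movement variation
    rotation = angular_gain * np.cross(r, f) * dt   # rotation variation
    return movement, rotation

# A sideways force at the top of a vertical bone anchored at the origin.
movement, rotation = bone_deltas(joint_pos=[0.0, 1.0, 0.0],
                                 joint_force=[1.0, 0.0, 0.0],
                                 bone_anchor=[0.0, 0.0, 0.0])
```

Here the bone drifts along +x and picks up a rotation about the -z axis, i.e. it tips over in the direction of the force, which matches the intuition for a swinging ear or strand of hair.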
Optionally, in an embodiment of any one of the above methods of the present application, the acquiring pose information and/or keypoint information of a target object included in at least one frame of image in a video stream includes at least one of the following manners:
performing key point identification on the at least one frame of image by using a first neural network to obtain key point information;
performing key point identification on the at least one frame of image by using a first neural network to obtain key point information; determining pose information in the at least one frame of image based on the keypoint information;
and performing pose identification on the at least one frame of image based on a second neural network to obtain the pose information.
According to another aspect of the embodiments of the present application, there is provided a flexible body simulation apparatus, including:
the information acquisition module is used for acquiring pose information and/or key point information of a target object contained in at least one frame of image in the video stream;
the deformation parameter determining module is used for determining deformation parameters of the flexible body assembly corresponding to each frame of image in the at least one frame of image based on the pose information and/or the key point information;
and the component driving module is used for driving the flexible body component corresponding to the video stream based on the deformation parameter of the flexible body component corresponding to each frame of image.
Optionally, in any one of the apparatus embodiments described above in the present application, the target object includes a human face;
the deformation parameter determining module comprises:
the associated information acquisition unit is used for acquiring associated key point information from the key point information based on the flexible body component corresponding to each frame of image;
an action coefficient determining unit, configured to determine an action coefficient corresponding to the flexible body assembly based on the pose information and the associated key point information;
and the first parameter determining unit is used for determining the deformation parameter of the flexible body component based on the action coefficient and the expression base model corresponding to the flexible body component.
Optionally, in any one of the apparatus embodiments described above in the present application, the action coefficient represents the degree of completion of a corresponding action;
the first parameter determining unit is specifically configured to search a deformation parameter corresponding to the action coefficient from an expression base model corresponding to the flexible body assembly, where the deformation parameter includes a deformation amplitude corresponding to the action; and to take the deformation parameter corresponding to the action coefficient as the deformation parameter of the flexible body component.
Optionally, in any one of the apparatus embodiments described above in the present application, the apparatus further includes:
the first model establishing module is used for establishing the expression base models for all the flexible body components to be subjected to flexible deformation, wherein the expression base models quantize deformation amplitudes of the flexible body components, different deformation amplitudes correspond to different action coefficients, and each flexible body component corresponds to one expression base model.
Optionally, in any of the apparatus embodiments described herein above, the flexible body assembly comprises at least one mesh point;
the deformation parameter determining module comprises:
a position determination unit for determining a position of each grid point in the flexible body assembly in each frame image based on the pose information;
a second parameter determining unit, configured to determine a deformation parameter of the flexible body assembly based on a position of each grid point in each frame of image in the flexible body assembly.
Optionally, in any one of the apparatus embodiments described above, the position determining unit is specifically configured to, for each grid point in the flexible body assembly, obtain the acting force of the grid point based on the pose information of the current frame image and preset acting force information received by the grid point; determine the speed and the offset direction of the grid point corresponding to the current frame image based on the acting force of the grid point and the speed information of the grid point corresponding to the previous frame image; and determine the position of the grid point in the current frame image based on the speed and the offset direction of the grid point corresponding to the current frame image and the position information of the grid point corresponding to the previous frame image.
Optionally, in any one of the apparatus embodiments described above in the present application, the apparatus further includes:
and the second model establishing module is used for establishing a spring mass point model for the flexible body assembly and taking each grid point in the flexible body assembly as a mass point in the spring mass point model.
Optionally, in any apparatus embodiment of the present application, the deformation parameter determining module includes:
the simulation deformation unit is used for obtaining deformation parameters of a flexible deformation area in the flexible body assembly based on a simulation calculation matrix and a set rotation center;
and the third parameter determining unit is used for taking the deformation parameter of the flexible deformation area as the deformation parameter of the flexible body assembly.
Optionally, in any one of the apparatus embodiments described above in the present application, the apparatus further includes:
a first grid point processing module, configured to divide the flexible body assembly into a fixed region and the flexible deformation region, and obtain connection grid points between the fixed region and the flexible deformation region; and to determine a simulation calculation matrix and a set rotation center of the flexible deformation region based on the connection grid points, the grid points in the fixed region and the grid points of the flexible deformation region.
Optionally, in any one of the apparatus embodiments described above in the present application, the apparatus further includes:
a second grid point processing module, configured to divide the flexible body assembly into a fixed region, a rigid body deformation region and the flexible deformation region, and obtain a first connecting grid point between the fixed region and the flexible deformation region and a second connecting grid point between the rigid body deformation region and the flexible deformation region; and to determine a simulation calculation matrix and a set rotation center of the flexible deformation region based on the first connection grid point, the second connection grid point and the grid point of the flexible deformation region.
Optionally, in any apparatus embodiment of the present application, the deformation parameter determining module includes:
a bone transformation determining unit, configured to determine, based on the pose information and the keypoint information, a movement variation and a rotation variation corresponding to each of the at least two bones;
a fourth parameter determining unit, configured to determine a deformation parameter of the flexible body assembly based on the movement variation and the rotation variation corresponding to each bone.
Optionally, in any one of the apparatus embodiments described above in the present application, the apparatus further includes:
a bone position determination module for defining at least two bones for the flexible body assembly, obtaining a positional relationship between the grid points in the flexible body assembly and the bones;
the fourth parameter determining unit is specifically configured to determine a deformation parameter of the flexible body assembly based on a position relationship between the grid point in the flexible body assembly and the bones, and a movement variation and a rotation variation corresponding to each bone.
Optionally, in any of the above apparatus embodiments of the present application, the bone position determination module is specifically configured to perform a skinning operation on at least two bones in the flexible body assembly to obtain a positional relationship between the grid points in the flexible body assembly and the bones.
Optionally, in any one of the apparatus embodiments described above in the present application, the apparatus further includes:
a force determination module for determining articulation points between each two bones based on at least two bones included in the flexible body assembly and obtaining related force information for each of the articulation points;
the bone transformation determining unit is specifically configured to determine the magnitude and direction of the acting force received by each joint point based on the pose information, the key point information, and the relevant acting force information of each joint point; and determining the movement change quantity and the rotation change quantity corresponding to each bone of the at least two bones based on the magnitude and the direction of the acting force received by each joint point.
Optionally, in any apparatus embodiment of the present application, the information obtaining module is specifically configured to implement at least one of the following:
performing key point identification on the at least one frame of image by using a first neural network to obtain key point information;
performing key point identification on the at least one frame of image by using a first neural network to obtain key point information; determining pose information in the at least one frame of image based on the keypoint information;
and performing pose identification on the at least one frame of image based on a second neural network to obtain the pose information.
According to another aspect of the embodiments of the present disclosure, there is provided an electronic device, including a processor, where the processor includes the flexible body simulation apparatus according to any one of the embodiments.
According to still another aspect of the embodiments of the present disclosure, there is provided an electronic device including: a memory for storing executable instructions;
and a processor in communication with the memory for executing the executable instructions to perform the operations of the flexible body simulation method of any of the above embodiments.
According to still another aspect of the embodiments of the present disclosure, a computer-readable storage medium is provided for storing computer-readable instructions, which when executed perform the operations of the flexible body simulation method according to any one of the above embodiments.
According to yet another aspect of the embodiments of the present disclosure, there is provided a computer program product, which includes computer readable code, when the computer readable code is executed on a device, a processor in the device executes instructions for implementing the flexible body simulation method according to any one of the above embodiments.
Based on the flexible body simulation method and apparatus, the electronic device, and the computer-readable storage medium provided by the above embodiments of the present application, pose information and/or key point information of a target object contained in at least one frame of image in a video stream is acquired; deformation parameters of a flexible body assembly corresponding to each frame of image in the at least one frame of image are determined based on the pose information and/or the key point information; and the flexible body assembly corresponding to the video stream is driven based on the deformation parameters of the flexible body assembly corresponding to each frame of image. By obtaining the deformation parameters of the flexible body assembly and driving the assembly with them, simulation of bendable flexible bodies such as necks, ears, hair, and clothes is achieved, so that the simulated actions of the flexible body assembly better match real conditions.
The technical solution of the present application is further described in detail by the accompanying drawings and examples.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments of the application and together with the description, serve to explain the principles of the application.
The present application may be more clearly understood from the following detailed description with reference to the accompanying drawings, in which:
fig. 1 is a schematic flow chart of a flexible body simulation method according to an embodiment of the present application.
Fig. 2 is a schematic structural diagram of a flexible body simulation apparatus according to an embodiment of the present application.
Fig. 3 is a schematic structural diagram of an electronic device suitable for implementing a terminal device or a server according to an embodiment of the present disclosure.
Detailed Description
Various exemplary embodiments of the present application will now be described in detail with reference to the accompanying drawings. It should be noted that: the relative arrangement of the components and steps, the numerical expressions, and numerical values set forth in these embodiments do not limit the scope of the present application unless specifically stated otherwise.
Meanwhile, it should be understood that the sizes of the respective portions shown in the drawings are not drawn in an actual proportional relationship for the convenience of description.
The following description of at least one exemplary embodiment is merely illustrative in nature and is in no way intended to limit the application, its application, or uses.
Techniques, methods, and apparatus known to those of ordinary skill in the relevant art may not be discussed in detail but are intended to be part of the specification where appropriate.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, further discussion thereof is not required in subsequent figures.
Fig. 1 is a schematic flow chart of a flexible body simulation method according to an embodiment of the present application. As shown in fig. 1, the method of this embodiment includes:
step 110, acquiring pose information and/or key point information of a target object contained in at least one frame of image of the video stream.
The embodiment of the application can be applied to an object simulation method to realize dynamic simulation of a target object. For example, when a human body is taken as the target object, an avatar can be simulated in real time according to human body actions, such as the swinging of clothes and the waving of hair; when a tree is taken as the target object, the avatar can be simulated in real time according to the motion of its branches, such as the swaying of branches and leaves; or, when a face is taken as the target object, the avatar can be controlled in real time according to facial expressions. In this last case, the face contained in the images of the video stream needs to be analyzed.
Alternatively, the pose information may be obtained directly based on a neural network, or obtained through processing based on the key point information, or obtained through other methods.
For example, in some optional embodiments, the first neural network is used to perform keypoint identification on at least one frame of image, and keypoint information is obtained; or,
performing key point identification on at least one frame of image by using a first neural network to obtain key point information; pose information in at least one frame of image is determined based on the keypoint information.
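As a toy example of the second manner, one piece of pose information (the head roll angle) can be read off two eye key points; a real system would fit a full 3D pose from many key points, so this is only the simplest possible case, and the coordinates below are made up.

```python
import math

def roll_from_eye_keypoints(left_eye, right_eye):
    """Head roll angle, in degrees, from the line through the two eye key
    points (image coordinates, with y growing downward)."""
    dx = right_eye[0] - left_eye[0]
    dy = right_eye[1] - left_eye[1]
    return math.degrees(math.atan2(dy, dx))

level = roll_from_eye_keypoints((100.0, 120.0), (160.0, 120.0))   # -> 0.0
tilted = roll_from_eye_keypoints((100.0, 120.0), (160.0, 150.0))  # head rolled
```

Eyes on a horizontal line give a roll of zero; a lowered right eye gives a positive roll, and that angle is one component of the pose information derived from the key point information.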
Different methods for obtaining the deformation parameters of the flexible body assembly rely on different information: only the key point information, only the pose information, or both. Accordingly, the embodiment of the application may obtain only the key point information, or first obtain the key point information and then derive the pose information from it. Any common key point identification technique in the prior art may be used to obtain the key point information, and any realizable technique in the prior art may be used to derive the pose information from it; the embodiments of the application do not limit the specific technical means for either.
In other optional embodiments, the first neural network is used to perform face key point recognition on the at least one frame of image to obtain the key point information; and/or,
pose identification is performed on the at least one frame of image based on a second neural network to obtain the pose information.
In this embodiment, the pose information is obtained directly by the second neural network without key point identification, so it is possible to obtain only the key point information, only the pose information, or both. Any common key point identification technique in the prior art may be used to obtain the key point information, and any realizable technique in the prior art may be used to obtain the pose information; the embodiments of the present application do not limit the specific technical means for either.
Step 120: determining deformation parameters of the flexible body assembly corresponding to each frame of image in the at least one frame of image based on the pose information and/or the key point information.
The flexible body components referred to in the embodiments of the present application are the flexibly deformable parts of the avatar, for example, flexible bodies such as the neck, ears, and hair. Optionally, when the target object is a face, the obtained pose information and/or key point information is face pose information and/or face key point information. Face simulation methods in the prior art can track and migrate most facial expressions well, including eyebrows, eyes, nose, lips, facial muscles, facial contours, and the like. However, many avatars contain not only the head but also the neck and upper body; if only facial expressions are migrated, the entire model including the neck and body will rotate together when the head rotates, which is obviously unreasonable, and the resulting avatar looks unnatural because the bending of the flexible body is not considered. In the embodiments of the present application, the head of the avatar is still simulated by the original method, while each flexible body component is driven according to its deformation parameters to realize flexible deformation, for example, simulation of hair attached to the head. Since avatars differ, a human-figure model may have different hairstyles and an animal-figure model may have facial whiskers; it is therefore necessary to simulate the deformation of the hair so that it follows the user's head movements naturally.
In addition, although the deformation of the avatar migrates expressions from the human face, avatar models come in many categories: an avatar may have components that the human head does not contain (for example, the clockwork spring of a robot model or the leaves of a plant model) or components that the human head contains but that have little motion amplitude (for example, the flappable ears of an animal model). The present application drives and simulates such components of the avatar according to the face key point information and/or the face pose information.
Step 130: driving the flexible body assembly corresponding to the video stream based on the deformation parameters of the flexible body assembly corresponding to each frame of image.
According to the flexible body simulation method provided by the embodiments of the present application, pose information and/or key point information of the target object contained in at least one frame of image in the video stream is obtained; deformation parameters of the flexible body assembly corresponding to each frame of image are determined based on the pose information and/or the key point information; and the flexible body assembly corresponding to the video stream is driven based on those deformation parameters. By driving the flexible body assembly through its deformation parameters, simulation of bendable flexible bodies such as the neck, ears, hair, and clothes is achieved, so that the simulated motion of the flexible body assembly better matches real conditions.
The flexible body simulation method provided by the embodiments of the present application can meet various flexible deformation requirements: for example, a non-rigid registration driving method can be applied to the neck region, a dynamic bone driving method to hair, animal ears, plant leaves, and the like, a cloth simulation driving method to clothes, pendants, and the like, and an expression-base driving method to a robot spring and the like. Different driving methods can also be used for different flexible body assemblies in one avatar; for example, when the avatar includes a neck and hair, non-rigid registration can drive the neck while dynamic bones drive the hair.
In one or more alternative embodiments, step 120 includes:
acquiring associated key point information from the key point information based on the flexible body component corresponding to each frame of image;
determining an action coefficient corresponding to the flexible body assembly based on the pose information and the associated key point information;
and determining deformation parameters of the flexible body assembly based on the action coefficients and the expression base model corresponding to the flexible body assembly.
In this embodiment, the target object includes a face; correspondingly, the key point information is face key point information and the pose information is face pose information. In an avatar simulated from facial expressions, the changes of some components to be flexibly deformed may be related to particular expressions; for example, rabbit ears droop by default but an eyebrow-raising action makes them stand up, and whiskers droop by default but turn up when the corners of the mouth turn up. Therefore, in the embodiments of the present application, associated key point information is obtained from the face key point information, where the associated key point information represents the key points related to the change of the flexible body component. The action coefficient of the flexible body component, which can be expressed as a numerical value, is determined by combining the face pose information with the associated key point information. The deformation parameter of the flexible body component is then obtained by looking up this numerical value in the correspondence defined in the preset expression base model; the deformation parameter can represent the deformation amplitude of the flexible body component.
Optionally, the action coefficient represents the completion degree of the corresponding action, i.e. the proportion of the action that has been completed. For example, if the completion degree of a hand-raising action is defined as 100% when the arm is at 90 degrees to the body, then the completion degree is 50% when the arm is at 45 degrees to the body. Different completion degrees correspond to different deformation parameters in the expression base model.
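As an illustrative sketch of the action completion degree described above (the 90-degree threshold and the linear mapping are assumptions for illustration, not specified by the application):

```python
def action_completion(angle_deg: float, full_angle_deg: float = 90.0) -> float:
    """Map a joint angle to an action completion degree in [0, 1].

    Assumes a linear relation: the action counts as 100% complete once
    the angle reaches `full_angle_deg` (e.g. the arm at 90 degrees to
    the body), matching the hand-raising example in the text.
    """
    return max(0.0, min(1.0, angle_deg / full_angle_deg))
```

Under these assumptions, an arm at 45 degrees yields a completion degree of 0.5, which the expression base model would then map to a deformation parameter.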
Determining deformation parameters of the flexible body assembly based on the action coefficients and the expression base model corresponding to the flexible body assembly, wherein the deformation parameters comprise:
searching deformation parameters corresponding to the action coefficients from the expression base models corresponding to the flexible body components, wherein the deformation parameters comprise deformation amplitudes corresponding to the actions;
and taking the deformation parameter corresponding to the action coefficient as the deformation parameter of the flexible body component.
In the embodiments of the present application, the corresponding deformation parameter is obtained from the expression base model based on the action coefficient, and the obtained deformation amplitude is used as the deformation parameter of the flexible body component; that is, the deformation amplitude of the flexible body component is determined. The deformation amplitude can represent the change in the shape and/or position of the flexible body component, for example, how far the rabbit ears in the avatar droop. By obtaining the deformation parameters of the flexible body assembly corresponding to each frame of image, the dynamic simulation process of the avatar corresponding to the video stream can be determined.
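A minimal sketch of looking up a deformation amplitude for an action coefficient in an expression base model; the table representation and the linear interpolation between sampled coefficients are assumptions for illustration, since the application does not prescribe the lookup mechanism:

```python
def deformation_from_expression_base(coeff, base_table):
    """Look up (with interpolation) the deformation amplitude for an action coefficient.

    `base_table` maps sampled action coefficients to deformation amplitudes,
    e.g. {0.0: 0.0, 0.5: 0.3, 1.0: 1.0}. Coefficients between two sampled
    entries are linearly interpolated; out-of-range coefficients clamp to
    the nearest entry.
    """
    keys = sorted(base_table)
    if coeff <= keys[0]:
        return base_table[keys[0]]
    if coeff >= keys[-1]:
        return base_table[keys[-1]]
    for lo, hi in zip(keys, keys[1:]):
        if lo <= coeff <= hi:
            t = (coeff - lo) / (hi - lo)
            return base_table[lo] + t * (base_table[hi] - base_table[lo])
```

For instance, with a table mapping coefficient 0.0 to amplitude 0.0 and 1.0 to 2.0, a coefficient of 0.5 yields an amplitude of 1.0.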
In order to obtain the deformation parameters based on the expression base model, before step 110, the method further includes:
and establishing an expression base model for all flexible body components to be subjected to flexible deformation.
The expression base model quantifies the deformation amplitude of the flexible body components, different deformation amplitudes correspond to different action coefficients, and each flexible body component corresponds to one expression base model.
Alternatively, a bilinear Principal Component Analysis (PCA) model is used to describe the three-dimensional shape of each flexible body assembly requiring flexible deformation. Constructing such a PCA model requires preset groups of deformation expression bases, which quantify how much the mesh of the flexible body assembly changes under different actions and different head angles. The expression base model can be built offline before the flexible body simulation is performed, so that the relation between the action coefficient and the deformation amplitude is determined through the expression base model.
In the embodiments of the present application, the regions of the avatar to be flexibly deformed exist in the three-dimensional model in the form of flexible body assemblies (i.e. separated from the main head model). Based on the obtained semantic action coefficients of the flexible body assembly, the deformation produced under the set of action coefficients is output to the flexible body assembly of the avatar requiring flexible deformation, so that the assembly can perform the corresponding flexible deformation simulation in real time according to the face image information input by the user.
Alternatively, in obtaining the deformation parameters of the flexible body assembly, a cloth simulation (i.e. spring-mass point system) method may be adopted to record the mesh point information of the flexible body assembly of the avatar to be processed. The mesh point information may include, but is not limited to, the distribution of points in the area to be simulated, the range of displaceable distances between two mesh points, the mutual acting force (e.g. spring force) between two mesh points, and so on. By establishing a physical model for each grid point and simulating the forces on it (which may include but are not limited to at least one of gravity, elastic force, and other forces) under different conditions and environments, the motion trajectory and position of each grid point are calculated, so that the flexible deformation effect required by the assembly can be simulated realistically.
In one or more alternative embodiments, step 120 includes:
determining the position of each grid point in each frame of image in the flexible body assembly based on the pose information;
a deformation parameter of the flexible body assembly is determined based on a position of each mesh point in the flexible body assembly at each time instant.
Each frame of image in the at least one frame of image corresponds to the flexible body assembly at one moment, so as to realize real-time simulation of the avatar. Alternatively, for the flexible body component of the avatar to be processed, a spring mass point model is defined based on the mesh; the spring mass point model simulates the deformation of an object using Newton's laws of motion. In the spring mass point model, each grid point of the flexible body component is defined as a mass point, and each grid line in the mesh is a spring between two grid points (mass points). The information that must be defined in advance for each mass point includes damping, elasticity, rest length (the length of the grid line when no external force is applied), and so on. The force on each mass point, including the elastic force and various possible external forces (for example, gravity and air resistance), is analyzed by combining the position and velocity of the same mass point in the previous frame (possibly together with the head pose information of the current frame image). Optionally, determining the position of each grid point in the flexible body assembly in each frame of image based on the pose information includes:
for each grid point in the flexible body assembly, acquiring the acting force of the grid point based on the pose information of the current frame image and the preset acting force information received by the grid point;
determining the speed and the offset direction of the grid point corresponding to the current frame image based on the acting force of the grid point and the speed information of the previous frame image corresponding to the grid point;
and determining the positions of the grid points in the current frame image based on the speed and the offset direction of the grid points corresponding to the current frame image and the position information of the grid points corresponding to the previous frame image.
In the embodiments of the present application, the preset force information for each grid point (such as the force types, for example gravity, elastic force, and air resistance) is known, and the forces on each grid point of the flexible body assembly in the current frame image can be obtained by combining this with the pose information of the current frame image. From the forces on a grid point, its acceleration is obtained by Newton's second law (F = m × a, where F is the force, m is the mass of the grid point, and a is the acceleration; each mass point can be treated as having unit mass). The velocity of the grid point in the current frame image is the velocity in the previous frame image plus the velocity change produced by the acceleration; the distance moved between the two frames is determined from this velocity and the time difference between the frames; and the position of the grid point in the current frame image is determined from its position in the previous frame image together with the moved distance and the offset direction.
As for the velocity of the previous frame image: the acceleration of each frame can be calculated starting from the first frame in which the avatar begins simulation, and the velocity of a grid point in each frame image is the velocity of the previous frame plus the velocity change between the two frames; by induction, the velocity of the grid point in every frame image can be obtained.
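The per-grid-point update described above can be sketched as follows in two dimensions. This is a minimal illustration, assuming Hooke's-law spring forces, unit mass, constant gravity as the only external force, and explicit Euler integration; the application itself does not fix these choices:

```python
def step_mass_point(pos, vel, neighbors, rest_len, k, gravity, dt):
    """Advance one grid point (unit mass) of a spring mass point model by one frame.

    The spring force toward each neighbouring grid point follows Hooke's law
    relative to the rest length; gravity is a constant external force on the
    y axis. Velocity and position are updated with explicit Euler
    integration, using a = F since m = 1 (Newton's second law).
    """
    fx, fy = 0.0, gravity
    for nx, ny in neighbors:
        dx, dy = nx - pos[0], ny - pos[1]
        dist = (dx * dx + dy * dy) ** 0.5
        if dist > 0.0:
            f = k * (dist - rest_len)  # stretched springs pull, compressed springs push
            fx += f * dx / dist
            fy += f * dy / dist
    vx = vel[0] + fx * dt  # v_current = v_previous + a * dt
    vy = vel[1] + fy * dt
    return (pos[0] + vx * dt, pos[1] + vy * dt), (vx, vy)
```

A point whose neighbours all sit at the rest length and with zero gravity stays put, while an unconstrained point under gravity accelerates downward, matching the frame-by-frame velocity accumulation described above.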
Optionally, before determining the deformation parameter of the flexible body assembly corresponding to each frame of image in at least one frame of image based on the pose information, the method further includes:
a spring mass point model is established for the flexible body assembly, and each grid point in the flexible body assembly is taken as a mass point in the spring mass point model.
In this embodiment, the force between every two adjacent grid points is modeled as a spring. Before the avatar simulation is performed, a spring mass point model is established for the flexible body assembly, and each mass point is defined with information including, but not limited to, at least one of damping, elasticity, and rest length. The type of force applied to each grid point (mass point) is determined by the spring mass point model; the external forces on each grid point may include, but are not limited to, elastic force, gravity, air resistance, and so on.
In one or more alternative embodiments, step 120 includes:
obtaining deformation parameters of a flexible deformation area in the flexible body assembly based on the simulation calculation matrix and the set rotation center;
and taking the deformation parameters of the flexible deformation area as the deformation parameters of the flexible body assembly.
The flexible body assembly includes at least a flexible deformation region; optionally, the flexible body assembly may further include a fixed region, or both a fixed region and a rigid body deformation region.
In the embodiments of the present application, the deformation parameters of the flexible deformation region are obtained through the preset simulation calculation matrix and the set rotation center. During avatar simulation, the region to be processed is optionally divided into three preset segments for processing: the rigid body deformation region is deformed rigidly using the transmitted pose information (any rigid body deformation technique in the prior art can be adopted); the fixed region undergoes no deformation; and in the region to be flexibly deformed, the simulation calculation matrix obtained offline and the set rotation center replace the rotation information of the rigid body deformation, yielding the deformation parameters of the flexible deformation. A new position for each grid point in the mesh of the whole component to be processed can thus be obtained.
The vertex data is updated based on the new position of each grid point and, combined with the overall transformation information of the avatar, flexible deformation simulation that follows the face information in the input image is realized in real time.
Optionally, before determining the deformation parameter of the flexible body assembly corresponding to each frame of image in at least one frame of image based on the pose information, the method further includes:
dividing the flexible body assembly into a fixed area and a flexible deformation area, and obtaining a connecting grid point between the fixed area and the flexible deformation area;
and determining the simulation calculation matrix and the set rotation center of the flexible deformation region based on the connecting grid points, the grid points in the fixed region, and the grid points in the flexible deformation region.
Alternatively, the flexible body assembly is divided into a fixed region, a rigid body deformation region, and a flexible deformation region, and first connecting grid points between the fixed region and the flexible deformation region and second connecting grid points between the rigid body deformation region and the flexible deformation region are obtained;
and the simulation calculation matrix and the set rotation center of the flexible deformation region are determined based on the first connecting grid points, the second connecting grid points, and the grid points of the flexible deformation region.
Optionally, when obtaining the deformation parameters of the flexible body assembly, a non-rigid registration method may be adopted, dividing the component to be processed into two segments (a fixed region and a flexible deformation region) or three segments (a fixed region, a flexible deformation region, and a rigid body deformation region). Taking three segments as an example: knowing the position of the fixed region, the position of the rigid deformation region, and the rotation center defined for the flexible deformation region, constraints are added to the middle region to be flexibly deformed based on this information (for example, the positions of the fixed region and the rigid deformation region may limit the maximum deformation amplitude of the flexible deformation region), and the calculation matrix required by the region to be flexibly deformed is computed from them. When the avatar is simulated, the fixed region never deforms, the rigid deformation region deforms rigidly as normal, and the region to be flexibly deformed obtains its position at the current moment through the new deformation matrix calculation, achieving real-time driving of the flexible deformation simulation of the component.
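The three-segment update described above can be sketched as follows in two dimensions. This is an illustration only: the 2×2 matrices, the shared rotation center, and the per-point region labels are assumptions, and the real offline computation of the flexible region's simulation matrix is not reproduced here:

```python
def update_regions(points, regions, rigid_rot, flex_mat, center):
    """Update grid-point positions for a three-segment component (2-D sketch).

    `regions[i]` is 'fixed', 'rigid', or 'flex'. Fixed points are left
    untouched; rigid points are transformed by the 2x2 matrix `rigid_rot`
    about `center` (the transmitted pose); flexible points use the
    offline-computed simulation matrix `flex_mat` about the same set
    rotation center.
    """
    def apply(mat, p):
        dx, dy = p[0] - center[0], p[1] - center[1]
        return (center[0] + mat[0][0] * dx + mat[0][1] * dy,
                center[1] + mat[1][0] * dx + mat[1][1] * dy)

    out = []
    for p, r in zip(points, regions):
        if r == 'fixed':
            out.append(p)
        elif r == 'rigid':
            out.append(apply(rigid_rot, p))
        else:
            out.append(apply(flex_mat, p))
    return out
```

For example, with a 90-degree rotation as the rigid transform and the identity as the flexible simulation matrix, the fixed point stays in place, the rigid point rotates about the center, and the flexible point follows its own matrix.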
In one or more alternative embodiments, step 120 includes:
determining a movement variation and a rotation variation corresponding to each of at least two bones based on the pose information and the key point information;
Optionally, the elastic actions of at least two bones included in the simulated flexible body assembly are determined based on the pose information and the key point information, and the movement variation and rotation variation corresponding to each of the at least two bones are determined from those elastic actions. Determining the movement and rotation variations from the elastic action is a physical calculation: the initial force provided by the face pose information sets the bone nodes in motion. When no pose information can be obtained, a bone that has not yet come fully to rest continues to move (and deform) under the other forces acting on it (such as gravity) until its nodes stop.
A deformation parameter of the flexible body assembly is determined based on the amount of change in movement and the amount of change in rotation for each bone.
In the embodiments of the present application, several bones are defined for the flexible body assembly, and the simulated motion is realized through the elastic motion of those bones; this is well suited to components such as hair, animal ears, and plant leaves. After the bones are defined, the joint point information of the bones of each region to be flexibly deformed must be defined, including damping, elasticity, stiffness, inertia, gravity, and so on. Based on the key point information and the face pose information, starting from the first root bone (among the defined bones, the one connected to and closest to the face is the root bone, and a bone connected to the root bone but farther from the face is a leaf bone of the root bone), the elastic motion of each successive bone (the leaf bones of the root bone) is simulated downward along the chain, finally yielding the movement variation and rotation variation of each bone.
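The root-to-leaf propagation described above can be sketched as a one-dimensional chain of bone tips. This is a minimal illustration under assumed constants (a single stiffness pulling each bone toward its parent and a multiplicative damping factor); the application's actual joint parameters (inertia, gravity, etc.) are omitted:

```python
def simulate_bone_chain(root_target, positions, velocities, stiffness, damping, dt):
    """Propagate elastic motion from the root bone down a chain of bone tips (1-D sketch).

    Each bone tip is pulled toward its parent's new position (the root bone
    toward the pose-driven target) by a spring-like stiffness term, with the
    velocity scaled by a damping factor; motion thus flows from the root
    bone to its leaf bones, as described in the text.
    """
    new_pos, new_vel = [], []
    parent = root_target
    for p, v in zip(positions, velocities):
        a = stiffness * (parent - p)   # pulled toward the parent bone tip
        v = (v + a * dt) * damping     # damped velocity update
        p = p + v * dt
        new_pos.append(p)
        new_vel.append(v)
        parent = p                     # the next (leaf) bone follows this one
    return new_pos, new_vel
```

With the pose-driven target at the chain's rest position, the chain stays still; when the target moves, the root bone reacts first and the leaf bones follow with a lag, giving the whip-like motion expected of hair or ears.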
Optionally, before determining the deformation parameter of the flexible body assembly corresponding to each frame of image in at least one frame of image based on the pose information and the key point information, the method further includes:
defining at least two bones for the flexible body assembly, and obtaining the position relation between the grid points in the flexible body assembly and the bones;
determining a deformation parameter of the flexible body assembly based on the amount of change in movement and the amount of change in rotation corresponding to each bone, comprising:
and determining the deformation parameters of the flexible body assembly based on the position relation between the grid points in the flexible body assembly and the bones and the movement variation and the rotation variation corresponding to each bone.
Optionally, a flexible body assembly in the avatar may move in segments. For example, the lower half of a rabbit ear (the portion connected to the face) is generally stationary while the upper half changes with expression: an eyebrow-raising action may make the upper half stand up, whereas it droops by default. This embodiment therefore defines at least two bones for this type of flexible body assembly for segmented control, defining the bones differently for different assemblies (for example, two bones for a rabbit ear). After the bones are defined, a skinning operation is performed on them so that the flexible body assembly can be flexibly deformed; the skinning operation establishes the relation between each grid point in the flexible body assembly and each bone, so that the deformation parameters of the flexible body assembly can later be determined from the movement and rotation variations of the bones.
Optionally, obtaining a positional relationship between the grid points and the bone in the flexible body assembly comprises:
skinning at least two bones in the flexible body assembly to obtain a positional relationship between the grid points and the bones in the flexible body assembly.
Skinning binds the skeleton to the model through form nodes so as to achieve a reasonable binding effect; the form nodes constitute the external contour. In the embodiments of the present application, flexible skinning is realized by taking the grid points in the flexible body assembly as the form nodes.
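One common way to realize the grid-point-to-bone relation established by skinning is linear blend skinning; naming it here is an assumption, since the application does not specify the skinning algorithm. A two-dimensional sketch:

```python
def linear_blend_skin(rest_point, bone_transforms, weights):
    """Blend per-bone transforms for one grid point (2-D rotation + translation).

    `bone_transforms` is a list of (2x2 matrix, translation) pairs, one per
    bone, and `weights` the skinning weights tying the grid point to each
    bone (summing to 1). The skinned position is the weight-blended result
    of transforming the rest position by every bone's transform.
    """
    x = y = 0.0
    for (mat, trans), w in zip(bone_transforms, weights):
        px = mat[0][0] * rest_point[0] + mat[0][1] * rest_point[1] + trans[0]
        py = mat[1][0] * rest_point[0] + mat[1][1] * rest_point[1] + trans[1]
        x += w * px
        y += w * py
    return (x, y)
```

A grid point weighted entirely to one bone follows that bone rigidly; a point near the joint between two bones, with split weights, blends their motions, which is what lets a rabbit ear bend smoothly at the boundary between its two bones.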
Optionally, before determining the deformation parameter of the flexible body assembly corresponding to each frame of image in at least one frame of image based on the pose information and the key point information, the method further includes:
determining a joint point between each two bones based on at least two bones included in the flexible body assembly, and obtaining related force information of each joint point;
Here there is a parent-child relationship between the joint points of the skeleton, with parent joint points connected to child joint points in downward-extending order; for example, the shoulder joint point is the parent of the elbow joint point, and the elbow joint point is the child of the shoulder joint point.
Determining a movement variation and a rotation variation corresponding to each of at least two bones based on the pose information and the keypoint information, comprising:
determining the magnitude and direction of the acting force received by each joint point based on the pose information, the key point information and the relevant acting force information of each joint point;
based on the magnitude and direction of the force received by each joint point, a change in movement and a change in rotation corresponding to each of the at least two bones is determined.
When controlling the bones to perform elastic actions, the forces on each bone are obtained first, and the movement and rotation variations of the bone are determined from those forces. The force on a bone can be defined by taking the grid point connecting every two bones as a joint point, so that the motion of the bone is determined by the forces on the joint point. Therefore, after the bones and joint points are defined in the embodiments of the present application, the force information of each joint point is defined; this may include, but is not limited to, damping, elasticity, stiffness, inertia, gravity, and so on. Once the force information of the joint points is defined, the magnitude and direction of the various forces borne by each joint point can be determined based on the key point information and the pose information, and finally the overall magnitude and direction of the force received by each joint point is determined, realizing control of the elastic actions of the bones.
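Summing the predefined force terms on a joint point can be sketched as follows in one dimension. The specific terms (elastic restoring force, velocity damping, constant gravity) and their forms are assumptions for illustration; the application lists the force types but not their equations:

```python
def joint_net_force(vel, displacement, elasticity, damping_coeff, gravity):
    """Sum the predefined forces acting on one joint point (1-D sketch).

    The elastic force pulls the joint back toward its rest pose in
    proportion to its displacement, damping opposes the current velocity,
    and gravity is constant; the returned net force is what drives the
    bone's movement and rotation variations.
    """
    elastic = -elasticity * displacement
    damp = -damping_coeff * vel
    return elastic + damp + gravity
```

A joint at rest in its rest pose feels only gravity; a displaced, moving joint additionally feels the restoring and damping terms pushing it back toward equilibrium.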
Those of ordinary skill in the art will understand that: all or part of the steps for implementing the method embodiments may be implemented by hardware related to program instructions, and the program may be stored in a computer readable storage medium, and when executed, the program performs the steps including the method embodiments; and the aforementioned storage medium includes: various media that can store program codes, such as ROM, RAM, magnetic or optical disks.
Fig. 2 is a schematic structural diagram of a flexible body simulation apparatus according to an embodiment of the present application. The apparatus of this embodiment may be used to implement the method embodiments described above in this application. As shown in fig. 2, the apparatus of this embodiment includes:
the information acquiring module 21 is configured to acquire pose information and/or keypoint information of a target object included in at least one frame of image in the video stream.
And the deformation parameter determining module 22 is configured to determine a deformation parameter of the flexible body assembly corresponding to each frame of image in at least one frame of image based on the pose information and/or the key point information.
And the component driving module 23 is configured to drive the flexible body component corresponding to the video stream based on the deformation parameter of the flexible body component corresponding to each frame of image.
The flexible body simulation apparatus provided by the above embodiment of the present application obtains pose information and/or key point information of the target object contained in at least one frame of image in the video stream; determines deformation parameters of the flexible body assembly corresponding to each frame of image based on the pose information and/or the key point information; and drives the flexible body assembly corresponding to the video stream based on those deformation parameters. By driving the flexible body assembly through its deformation parameters, simulation of bendable flexible bodies such as the neck, ears, hair, and clothes is achieved, so that the simulated motion of the flexible body assembly better matches real conditions.
In some alternative embodiments, the target object comprises a human face;
the deformation parameter determining module 22 includes:
an associated information acquiring unit, configured to acquire associated key point information from the key point information based on the flexible body assembly corresponding to each frame of image;
an action coefficient determining unit, configured to determine an action coefficient corresponding to the flexible body assembly based on the pose information and the associated key point information;
and a first parameter determining unit, configured to determine a deformation parameter of the flexible body assembly based on the action coefficient and an expression base model corresponding to the flexible body assembly.
In this embodiment, the key point information is face key point information and the pose information is face pose information. In an avatar driven by facial expression simulation, the deformation of some flexible components is associated with particular expressions: for example, a rabbit ear droops by default but stands up when the eyebrows are raised, and whiskers droop by default but tilt upward when the corners of the mouth turn up. Therefore, in this embodiment of the application, associated key point information is obtained from the face key point information, where the associated key point information represents the key points related to the change of the flexible body assembly. The action coefficient of the flexible body assembly is determined by combining the face pose information with the associated key point information, and may be expressed as a numerical value. The deformation parameter of the flexible body assembly is then obtained by looking up this value in the correspondence defined by a preset expression base model; the deformation parameter may represent the deformation amplitude of the flexible body assembly.
Optionally, the action coefficient represents an action completion degree corresponding to an action;
The first parameter determining unit is specifically configured to look up, in the expression base model corresponding to the flexible body assembly, the deformation parameter corresponding to the action coefficient, where the deformation parameter includes the deformation amplitude corresponding to the action, and to use that deformation parameter as the deformation parameter of the flexible body assembly.
Optionally, the apparatus provided in this embodiment further includes:
and the first model establishing module is used for establishing an expression base model for all flexible body components to be subjected to flexible deformation.
The expression base model quantifies the deformation amplitude of the flexible body components, different deformation amplitudes correspond to different action coefficients, and each flexible body component corresponds to one expression base model.
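As an illustration of how an expression base model might quantify deformation amplitude, the sketch below maps an action coefficient to an amplitude by linear interpolation over a lookup table. The breakpoints and the rabbit-ear values are invented for the example; the patent does not specify the contents of the table.

```python
# Hypothetical expression-base lookup: each flexible component has a table of
# (action_coefficient, deformation_amplitude) breakpoints; intermediate
# coefficients are linearly interpolated. All values are illustrative.

def deformation_from_coefficient(expression_base, coeff):
    """expression_base: list of (action_coefficient, amplitude) pairs,
    sorted by action coefficient."""
    # Clamp the coefficient into the table's defined range.
    coeff = max(expression_base[0][0], min(expression_base[-1][0], coeff))
    for (c0, a0), (c1, a1) in zip(expression_base, expression_base[1:]):
        if c0 <= coeff <= c1:
            t = (coeff - c0) / (c1 - c0) if c1 != c0 else 0.0
            return a0 + t * (a1 - a0)
    return expression_base[-1][1]

# Rabbit-ear component: coefficient 0 = eyebrows at rest (ears drooping),
# coefficient 1 = eyebrows fully raised (ears fully erect, 90 degrees).
rabbit_ear_base = [(0.0, 0.0), (0.5, 30.0), (1.0, 90.0)]
```

A half-completed eyebrow raise (coefficient 0.25) then yields an intermediate ear amplitude rather than a binary droop/erect state, which is what makes the simulated action look continuous.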
In some optional embodiments, the flexible body assembly comprises at least one grid point;
the deformation parameter determining module 22 includes:
a position determining unit, configured to determine the position of each grid point in the flexible body assembly in each frame of image based on the pose information;
and a second parameter determining unit, configured to determine the deformation parameter of the flexible body assembly based on the position of each grid point of the flexible body assembly in each frame of image.
Each frame of image in the at least one frame of image corresponds to the flexible body assembly at one moment, which enables real-time simulation of the avatar. Optionally, for a flexible body assembly of the avatar to be processed, a spring mass point model is defined based on its mesh; the spring mass point model is a method that simulates the deformation of an object using Newton's laws of motion. In the spring mass point model, each grid point of the flexible body assembly is defined as a mass point, and each grid line in the mesh is a spring between two grid points (mass points). The information that needs to be defined in advance for each mass point includes damping, elasticity, rest length (the length of the grid line when no external force is applied), and so on. The force on each mass point, including spring forces and possible external forces (for example, gravity and air resistance), is analyzed by combining the position and velocity of the same mass point in the flexible body assembly of the previous frame, possibly together with the head pose information of the current frame image.
Optionally, the position determining unit is specifically configured to: for each grid point in the flexible body assembly, obtain the force on the grid point based on the pose information of the current frame image and preset force information received by the grid point; determine the velocity and offset direction of the grid point in the current frame image based on that force and the velocity of the grid point in the previous frame image; and determine the position of the grid point in the current frame image based on its velocity and offset direction in the current frame image and its position in the previous frame image.
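A minimal sketch of such a spring-mass-point update, under the assumption of Hooke-law springs, gravity, velocity damping, and a semi-implicit Euler step. The patent does not fix the integration scheme or constants; everything below is illustrative.

```python
# One per-frame update for a single mass point: sum spring forces, gravity,
# and damping, then advance velocity and position (semi-implicit Euler).
import math

GRAVITY = (0.0, -9.8)  # illustrative external force per unit mass

def spring_force(p_self, p_other, rest_length, stiffness):
    # Hooke's law along the grid line: positive magnitude pulls toward other.
    dx, dy = p_other[0] - p_self[0], p_other[1] - p_self[1]
    dist = math.hypot(dx, dy) or 1e-9  # avoid division by zero
    magnitude = stiffness * (dist - rest_length)
    return (magnitude * dx / dist, magnitude * dy / dist)

def step_particle(pos, vel, neighbors, rest_length, stiffness,
                  damping, mass, dt):
    fx, fy = mass * GRAVITY[0], mass * GRAVITY[1]
    for n in neighbors:                      # springs to adjacent grid points
        sfx, sfy = spring_force(pos, n, rest_length, stiffness)
        fx, fy = fx + sfx, fy + sfy
    fx -= damping * vel[0]                   # velocity damping
    fy -= damping * vel[1]
    # Semi-implicit Euler: update velocity first, then position.
    vel = (vel[0] + fx / mass * dt, vel[1] + fy / mass * dt)
    pos = (pos[0] + vel[0] * dt, pos[1] + vel[1] * dt)
    return pos, vel

# A point hanging one rest-length below its neighbor starts to fall.
pos, vel = step_particle((0.0, 0.0), (0.0, 0.0), [(0.0, 1.0)],
                         rest_length=1.0, stiffness=50.0, damping=0.5,
                         mass=1.0, dt=0.016)
```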
Optionally, the apparatus further comprises:
and the second model establishing module is used for establishing a spring mass point model for the flexible body assembly, and taking each grid point in the flexible body assembly as a mass point in the spring mass point model.
In some optional embodiments, the deformation parameter determining module 22 includes:
a simulation deformation unit, configured to obtain a deformation parameter of a flexible deformation region in the flexible body assembly based on a simulation calculation matrix and a set rotation center;
and a third parameter determining unit, configured to use the deformation parameter of the flexible deformation region as the deformation parameter of the flexible body assembly.
In this embodiment of the application, the deformation parameter of the flexible deformation region is obtained through a preset simulation calculation matrix and a set rotation center. In the avatar simulation, the region to be processed is optionally divided into three preset segments: the rigid body deformation region, to which the transmitted pose information is still applied for rigid body deformation (any existing rigid body deformation technique may be used); the fixed region, which is not deformed at all; and the flexible deformation region, in which the simulation calculation matrix and the set rotation center obtained from offline simulation replace the rotation information of the rigid body deformation, yielding the deformation parameter of the flexible deformation. In this way a new position can be obtained for every grid point in the mesh of the entire assembly to be processed.
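The offline simulation calculation matrix and set rotation center could be applied to the grid points of the flexible deformation region roughly as below. Using a plain 2D rotation as the matrix is only an example; the patent leaves the matrix contents to the offline simulation.

```python
# Hypothetical application of an offline-computed matrix about a set rotation
# center: translate each grid point into the center's frame, apply the 2x2
# matrix, translate back. The matrix values are invented for the example.
import math

def apply_simulation_matrix(points, matrix, center):
    (a, b), (c, d) = matrix
    cx, cy = center
    out = []
    for x, y in points:
        rx, ry = x - cx, y - cy              # into rotation-center frame
        out.append((a * rx + b * ry + cx,    # apply offline matrix
                    c * rx + d * ry + cy))   # back into world frame
    return out

# Example: a 90-degree rotation standing in for the offline matrix.
theta = math.radians(90)
rot90 = ((math.cos(theta), -math.sin(theta)),
         (math.sin(theta),  math.cos(theta)))
new_points = apply_simulation_matrix([(2.0, 1.0)], rot90, (1.0, 1.0))
```

Because the transform is taken about the set rotation center rather than the origin, the flexible region pivots around its junction with the fixed region, as the division into regions above intends.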
Optionally, the apparatus provided in this embodiment of the present application further includes:
a first grid point processing module, configured to divide the flexible body assembly into a fixed region and a flexible deformation region and obtain the connecting grid points between the fixed region and the flexible deformation region; and to determine the simulation calculation matrix and the set rotation center of the flexible deformation region based on the connecting grid points, the grid points of the fixed region, and the grid points of the flexible deformation region.
Optionally, the apparatus provided in this embodiment of the present application further includes:
the first grid point processing module is used for dividing the flexible body assembly into a fixed area, a rigid body deformation area and a flexible deformation area, and obtaining a first connecting grid point between the fixed area and the flexible deformation area and a second connecting grid point between the rigid body deformation area and the flexible deformation area; and determining a simulation calculation matrix of the flexible deformation area and setting a rotation center based on the first connecting grid point, the second connecting grid point and the grid point of the flexible deformation area.
In some optional embodiments, the deformation parameter determination module 22 includes:
a skeleton transformation determining unit, configured to determine a movement variation and a rotation variation corresponding to each of at least two bones based on the pose information and the key point information;
and a fourth parameter determining unit, configured to determine a deformation parameter of the flexible body assembly based on the movement variation and the rotation variation corresponding to each bone.
In this embodiment of the application, a plurality of bones is defined for the flexible body assembly, and the simulated motion is realized through the elastic motion of those bones; this approach is better suited to assemblies such as hair, animal ears, and plant leaves. After the bones are defined, joint point information must be defined for the bones of each region to be flexibly deformed, including damping, elasticity, stiffness, inertia, gravity, and the like. Based on the key point information and the face pose information, the elastic motion of each bone is simulated downward starting from the first root bone (among the defined bones, the bone connected to and closest to the face is the root bone, and a bone connected to the root bone but farther from the face is a leaf bone of the root bone), finally obtaining the movement variation and rotation variation of each bone.
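A toy sketch of the root-to-leaf propagation: each leaf bone follows its parent's driving rotation with an attenuation factor. The single `elasticity` constant is a stand-in for the per-joint information (damping, elasticity, stiffness, inertia, gravity) listed above and is invented for the example.

```python
# Illustrative root-to-leaf bone chain: the root bone is driven directly by
# the face pose; each successive leaf bone follows its parent more loosely.

def propagate_chain(root_rotation, num_bones, elasticity=0.6):
    """Return the rotation (in degrees) of each bone, root first."""
    rotations = []
    current = root_rotation
    for _ in range(num_bones):
        rotations.append(current)
        current *= elasticity  # attenuate toward the leaf end of the chain
    return rotations

# A 3-bone ear driven by a 30-degree head rotation:
# approximately [30.0, 18.0, 10.8] degrees from root to tip.
rotations = propagate_chain(30.0, 3)
```

The attenuation makes the tip of the ear lag and bend relative to its base, which is the visual hallmark of elastic motion compared with rigid rotation.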
Optionally, the apparatus provided in this embodiment of the present application further includes:
a bone position determining module, configured to define at least two bones for the flexible body assembly and obtain the positional relationship between the grid points in the flexible body assembly and the bones.
The fourth parameter determining unit is specifically configured to determine the deformation parameter of the flexible body assembly based on the positional relationship between the grid points in the flexible body assembly and the bones, and on the movement variation and rotation variation corresponding to each bone.
Optionally, the bone position determination module is specifically configured to perform a skinning operation on at least two bones in the flexible body assembly to obtain a positional relationship between the grid points in the flexible body assembly and the bones.
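The skinning operation can be illustrated with linear blend skinning, where each grid point carries weights over the bones and its deformed position is the weight-blended result of each bone's transform. The patent does not name a specific skinning algorithm; here plain 2D translations stand in for full bone transforms, and all weights are invented.

```python
# Minimal linear-blend-skinning sketch: a grid point's deformed position is
# the weighted sum of each bone's transform applied to it. Translations stand
# in for full rigid transforms; weights come from the skinning operation.

def skin_point(point, bone_translations, weights):
    assert abs(sum(weights) - 1.0) < 1e-6, "skinning weights must sum to 1"
    x = point[0] + sum(w * t[0] for w, t in zip(weights, bone_translations))
    y = point[1] + sum(w * t[1] for w, t in zip(weights, bone_translations))
    return (x, y)

# A grid point influenced 70/30 by two bones moving in different directions.
deformed = skin_point((1.0, 1.0), [(0.0, 2.0), (1.0, 0.0)], [0.7, 0.3])
```

Blending over several bones is what lets a single mesh bend smoothly at the joints instead of creasing at hard bone boundaries.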
Optionally, the apparatus provided in this embodiment of the present application further includes:
a force determining module, configured to determine a joint point between every two bones based on the at least two bones included in the flexible body assembly, and obtain the related force information of each joint point.
The skeleton transformation determining unit is specifically configured to determine the magnitude and direction of the force received by each joint point based on the pose information, the key point information, and the related force information of each joint point; and to determine, based on the magnitude and direction of the force received by each joint point, the movement variation and rotation variation corresponding to each of the at least two bones.
In some optional embodiments, the information acquiring module 21 is specifically configured to implement at least one of the following:
performing key point identification on the at least one frame of image by using a first neural network to obtain the key point information;
performing key point identification on the at least one frame of image by using the first neural network to obtain the key point information, and determining the pose information in the at least one frame of image based on the key point information; and
performing pose identification on the at least one frame of image by using a second neural network to obtain the pose information.
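For the second manner above (deriving pose from key points), a rough geometric illustration: head roll estimated from the line through the two eye key points. A real system would use the network's full keypoint set and a PnP-style solver for all three angles; the keypoint choice and coordinates here are invented.

```python
# Toy pose-from-keypoints example: approximate head roll (in degrees) from
# the angle of the line joining the two eye keypoints, in image coordinates.
import math

def roll_from_eye_keypoints(left_eye, right_eye):
    dx = right_eye[0] - left_eye[0]
    dy = right_eye[1] - left_eye[1]
    return math.degrees(math.atan2(dy, dx))

level = roll_from_eye_keypoints((100.0, 200.0), (160.0, 200.0))   # eyes level
tilted = roll_from_eye_keypoints((100.0, 200.0), (160.0, 210.0))  # head rolled
```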
According to another aspect of the embodiments of the present disclosure, there is provided an electronic device including a processor, the processor including the flexible body simulation apparatus provided in any one of the above embodiments.
According to still another aspect of the embodiments of the present disclosure, there is provided an electronic device including: a memory for storing executable instructions;
and a processor in communication with the memory to execute the executable instructions to perform operations of the flexible body simulation method provided by any of the above embodiments.
According to still another aspect of the embodiments of the present disclosure, a computer-readable storage medium is provided for storing computer-readable instructions, which when executed perform the operations of the flexible body simulation method provided in any one of the above embodiments.
According to yet another aspect of the embodiments of the present disclosure, there is provided a computer program product including computer readable code, which when run on a device, a processor in the device executes instructions for implementing the flexible body simulation method provided in any one of the above embodiments.
The embodiment of the present disclosure also provides an electronic device, which may be, for example, a mobile terminal, a personal computer (PC), a tablet computer, or a server. Referring now to fig. 3, there is shown a schematic structural diagram of an electronic device 300 suitable for implementing a terminal device or server of an embodiment of the present disclosure. As shown in fig. 3, the electronic device 300 includes one or more processors, a communication section, and the like, for example: one or more central processing units (CPUs) 301 and/or one or more image processors (acceleration units) 313, which may perform various appropriate actions and processes according to executable instructions stored in a read-only memory (ROM) 302 or loaded from a storage section 308 into a random access memory (RAM) 303. The communication section 312 may include, but is not limited to, a network card, which may include, but is not limited to, an IB (InfiniBand) network card.
The processor may communicate with the ROM 302 and/or the RAM 303 to execute executable instructions, connect to the communication section 312 through the bus 304, and communicate with other target devices through the communication section 312, so as to perform the operations corresponding to any method provided by the embodiments of the present disclosure, for example: acquiring pose information and/or key point information of a target object contained in at least one frame of image in a video stream; determining a deformation parameter of the flexible body assembly corresponding to each frame of image in the at least one frame of image based on the pose information and/or the key point information; and driving the flexible body assembly corresponding to the video stream based on the deformation parameter of the flexible body assembly corresponding to each frame of image.
Further, the RAM 303 may also store various programs and data necessary for the operation of the apparatus. The CPU 301, the ROM 302, and the RAM 303 are connected to one another via the bus 304. Where a RAM 303 is present, the ROM 302 is an optional module: the RAM 303 stores executable instructions, or writes executable instructions into the ROM 302 at runtime, and the executable instructions cause the central processing unit 301 to perform the operations corresponding to the method described above. An input/output (I/O) interface 305 is also connected to the bus 304. The communication section 312 may be integrated, or may be provided with a plurality of sub-modules (for example, a plurality of IB network cards) connected to the bus.
The following components are connected to the I/O interface 305: an input portion 306 including a keyboard, a mouse, and the like; an output section 307 including a display such as a Cathode Ray Tube (CRT), a Liquid Crystal Display (LCD), and the like, and a speaker; a storage section 308 including a hard disk and the like; and a communication section 309 including a network interface card such as a LAN card, a modem, or the like. The communication section 309 performs communication processing via a network such as the internet. A drive 310 is also connected to the I/O interface 305 as needed. A removable medium 311 such as a magnetic disk, an optical disk, a magneto-optical disk, a semiconductor memory, or the like is mounted on the drive 310 as necessary, so that a computer program read out therefrom is mounted into the storage section 308 as necessary.
It should be noted that the architecture shown in fig. 3 is only an optional implementation manner, and in a specific practical process, the number and types of the components in fig. 3 may be selected, deleted, added or replaced according to actual needs; in different functional component settings, separate settings or integrated settings may also be used, for example, the acceleration unit 313 and the CPU301 may be separately provided or the acceleration unit 313 may be integrated on the CPU301, the communication part may be separately provided or integrated on the CPU301 or the acceleration unit 313, and so on. These alternative embodiments are all within the scope of the present disclosure.
In particular, according to an embodiment of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program tangibly embodied on a machine-readable medium, the computer program comprising program code for performing the method illustrated in the flowchart, the program code may include instructions corresponding to performing the method steps provided by embodiments of the present disclosure, e.g., acquiring pose information and/or keypoint information of a target object contained in at least one image frame of a video stream; determining deformation parameters of a flexible body assembly corresponding to each frame of image in at least one frame of image based on the pose information and/or the key point information; and driving the flexible body component corresponding to the video stream based on the deformation parameter of the flexible body component corresponding to each frame of image. In such an embodiment, the computer program may be downloaded and installed from a network through the communication section 309, and/or installed from the removable medium 311. The operations of the above-described functions defined in the method of the present disclosure are performed when the computer program is executed by a Central Processing Unit (CPU) 301.
The methods and apparatus of the present application may be implemented in a number of ways. For example, the methods and apparatus of the present application may be implemented by software, hardware, firmware, or any combination of software, hardware, and firmware. The above-described order for the steps of the method is for illustration only, and the steps of the method of the present application are not limited to the order specifically described above unless specifically stated otherwise. Further, in some embodiments, the present application may also be embodied as a program recorded in a recording medium, the program including machine-readable instructions for implementing a method according to the present application. Thus, the present application also covers a recording medium storing a program for executing the method according to the present application.
The description of the present application has been presented for purposes of illustration and description, and is not intended to be exhaustive or to limit the application to the forms disclosed. Many modifications and variations will be apparent to practitioners skilled in this art. The embodiments were chosen and described in order to best explain the principles of the application and its practical use, and to enable others of ordinary skill in the art to understand the application in its various embodiments, with the various modifications suited to the particular use contemplated.

Claims (10)

1. A flexible body simulation method, comprising:
acquiring pose information and/or key point information of a target object contained in at least one frame of image in a video stream;
determining deformation parameters of a flexible body assembly corresponding to each frame of image in the at least one frame of image based on the pose information and/or the key point information;
and driving the flexible body component corresponding to the video stream based on the deformation parameter of the flexible body component corresponding to each frame of image.
2. The method of claim 1, wherein the target object comprises a human face;
the determining, based on the pose information and the key point information, a deformation parameter of a flexible body assembly corresponding to each frame of image in the at least one frame of image includes:
acquiring associated key point information from the key point information based on the flexible body component corresponding to each frame of image;
determining an action coefficient corresponding to the flexible body assembly based on the pose information and the associated key point information;
and determining a deformation parameter of the flexible body assembly based on the action coefficient and the expression base model corresponding to the flexible body assembly.
3. The method of claim 1, wherein the flexible body assembly comprises at least one grid point;
the determining, based on the pose information, a deformation parameter of a flexible body assembly corresponding to each frame of image in the at least one frame of image includes:
determining a position of each grid point in the flexible body assembly in each frame of image based on the pose information;
determining a deformation parameter of the flexible body assembly based on a position of each grid point in each frame of image in the flexible body assembly.
4. The method according to claim 1, wherein the determining deformation parameters of the flexible body assembly corresponding to each frame of the at least one frame of image based on the pose information comprises:
obtaining deformation parameters of a flexible deformation area in the flexible body assembly based on a simulation calculation matrix and a set rotation center;
and taking the deformation parameter of the flexible deformation area as the deformation parameter of the flexible body assembly.
5. The method of claim 1, wherein the determining deformation parameters of a flexible body assembly corresponding to each frame of the at least one frame of image based on the pose information and the key point information comprises:
determining a movement variation and a rotation variation corresponding to each of at least two bones based on the pose information and the key point information;
and determining a deformation parameter of the flexible body assembly based on the movement variation and the rotation variation corresponding to each bone.
6. The method according to any one of claims 1 to 5, wherein the acquiring pose information and/or key point information of the target object contained in at least one frame of image in the video stream comprises at least one of the following manners:
performing key point identification on the at least one frame of image by using a first neural network to obtain key point information;
performing key point identification on the at least one frame of image by using a first neural network to obtain key point information; determining pose information in the at least one frame of image based on the keypoint information;
and performing pose identification on the at least one frame of image based on a second neural network to obtain the pose information.
7. A flexible body simulation apparatus, comprising:
the information acquisition module is used for acquiring pose information and/or key point information of a target object contained in at least one frame of image in the video stream;
the deformation parameter determining module is used for determining deformation parameters of the flexible body assembly corresponding to each frame of image in the at least one frame of image based on the pose information and/or the key point information;
and the component driving module is used for driving the flexible body component corresponding to the video stream based on the deformation parameter of the flexible body component corresponding to each frame of image.
8. An electronic device, comprising: a memory for storing executable instructions;
and a processor in communication with the memory to execute the executable instructions to perform the operations of the flexible body simulation method of any one of claims 1 to 6.
9. A computer readable storage medium storing computer readable instructions which, when executed, perform the operations of the flexible body simulation method of any of claims 1 to 6.
10. A computer program product comprising computer readable code, characterized in that when the computer readable code is run on a device, a processor in the device executes instructions for implementing the flexible body simulation method of any of claims 1 to 6.
CN201910935599.0A 2019-09-29 2019-09-29 Flexible body simulation method and device, electronic equipment and computer readable storage medium Pending CN110705094A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910935599.0A CN110705094A (en) 2019-09-29 2019-09-29 Flexible body simulation method and device, electronic equipment and computer readable storage medium

Publications (1)

Publication Number Publication Date
CN110705094A true CN110705094A (en) 2020-01-17

Family

ID=69198040

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910935599.0A Pending CN110705094A (en) 2019-09-29 2019-09-29 Flexible body simulation method and device, electronic equipment and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN110705094A (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108399383A (en) * 2018-02-14 2018-08-14 深圳市商汤科技有限公司 Expression moving method, device storage medium and program
CN109147017A (en) * 2018-08-28 2019-01-04 百度在线网络技术(北京)有限公司 Dynamic image generation method, device, equipment and storage medium
CN109978975A (en) * 2019-03-12 2019-07-05 深圳市商汤科技有限公司 A kind of moving method and device, computer equipment of movement
CN110139115A (en) * 2019-04-30 2019-08-16 广州虎牙信息科技有限公司 Virtual image attitude control method, device and electronic equipment based on key point

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
CUI Tong et al., "A Spring-Mass Model for Flexible Body Deformation Simulation for Haptic Rendering", Journal of Southeast University (Natural Science Edition) *
DAI Zhenlong et al., "Research on Facial Expression Image Morphing Based on MPEG-4", Journal of Image and Graphics *

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112541963A (en) * 2020-11-09 2021-03-23 北京百度网讯科技有限公司 Three-dimensional virtual image generation method and device, electronic equipment and storage medium
CN112541963B (en) * 2020-11-09 2023-12-26 北京百度网讯科技有限公司 Three-dimensional avatar generation method, three-dimensional avatar generation device, electronic equipment and storage medium
CN114789470A (en) * 2022-01-25 2022-07-26 北京萌特博智能机器人科技有限公司 Method and device for adjusting simulation robot
WO2023179292A1 (en) * 2022-03-21 2023-09-28 北京字跳网络技术有限公司 Virtual prop driving method and apparatus, electronic device and readable storage medium
CN115524997A (en) * 2022-09-28 2022-12-27 山东大学 Robot dynamic cloth operation method and system based on reinforcement and simulated learning
CN115524997B (en) * 2022-09-28 2024-05-14 山东大学 Robot dynamic operation cloth method and system based on reinforcement and imitation learning
CN115935553A (en) * 2022-12-29 2023-04-07 深圳技术大学 Linear flexible body deformation state analysis method and related device
CN115935553B (en) * 2022-12-29 2024-02-09 深圳技术大学 Linear flexible body deformation state analysis method and related device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20200117