CN110378855B - Method for constructing character statistical model containing character shape, facial expression, gesture and gesture actions - Google Patents


Info

Publication number
CN110378855B
CN110378855B (application CN201910648258.5A)
Authority
CN
China
Prior art keywords
data set
model
character
file
algorithm
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910648258.5A
Other languages
Chinese (zh)
Other versions
CN110378855A (en)
Inventor
高飞
胡亮
张景华
虞义兰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
BOWA (WUHAN) TECHNOLOGY Co.,Ltd.
Guangdong Guoshi Wine Industry Co.,Ltd.
Guangdong Yuwantang Traditional Chinese Medicine Co.,Ltd.
Original Assignee
Bowa Whale Shenzhen Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Bowa Whale Shenzhen Technology Co ltd filed Critical Bowa Whale Shenzhen Technology Co ltd
Priority to CN201910648258.5A priority Critical patent/CN110378855B/en
Publication of CN110378855A publication Critical patent/CN110378855A/en
Application granted granted Critical
Publication of CN110378855B publication Critical patent/CN110378855B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/70Denoising; Smoothing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00Three dimensional [3D] modelling, e.g. data description of 3D objects

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention discloses a method for constructing a character statistical model comprising character shape, facial expression, gestures and pose actions, which comprises the following steps: S1: acquire data from an open-source character model dataset; S2: fit the open-source dataset with the MANO model via the ICP algorithm to obtain a new body dataset with the SMPL topology; S3: fit the head of the open-source data with the FLAME model via the ICP algorithm to obtain a new dataset with the FLAME topology; S4: merge the fitted body data and head data to obtain a new dataset sharing a single topology; S5: learn the body shape of the new dataset with the PCA machine-learning algorithm; S6: generate a corresponding pkl file from the new dataset; S7: convert the pkl file, by writing FBX SDK code, into an fbx file that Maya or a 3D engine can open for inspection; S8: smooth the head and body training weights with the Delta Mush algorithm.

Description

Method for constructing character statistical model containing character shape, facial expression, gesture and gesture actions
Technical Field
The invention relates to the technical field of computer image processing, and in particular to a method for constructing a character statistical model comprising character shape, facial expression, gestures and pose actions.
Background
Character pose capture and three-dimensional reconstruction have long been popular applications in computer vision and computer graphics. The basic character statistical model SMPL is widely used in computer graphics and deep learning. The more recent head statistical model FLAME has been applied to speech-driven animation, and the hand model MANO to gesture detection. But all three models are independent of one another. Later, Total Capture and subsequent work handled the body, head and hands separately, and modelling from video with them is costly. The SMPL-X model from CVPR 2019 integrates head, hands and body into a unified statistical model, as does our statistical model; the greatest advantage of our model over SMPL-X is that its training cost is low while its effect is substantially similar. To this end, we propose a method of constructing a character statistical model comprising character shape, facial expression, gestures and pose actions.
Disclosure of Invention
The invention provides a method for constructing a character statistical model comprising character shape, facial expression, gestures and pose actions, which aims to solve the problems described in the background art.
The invention provides a method for constructing a character statistical model comprising character shape, facial expression, gestures and pose actions, which comprises the following steps:
s1: acquire data from an open-source character model dataset;
s2: fit the open-source dataset with the MANO model via the ICP algorithm to obtain a new body dataset with the SMPL topology;
s3: fit the head of the open-source data with the FLAME model via the ICP algorithm to obtain a new dataset with the FLAME topology;
s4: merge the fitted body data and head data to obtain a new dataset sharing a single topology;
s5: learn the body shape of the new dataset with the PCA machine-learning algorithm;
s6: migrate the corresponding points of the new dataset onto the MANO and FLAME topologies and, by matching the blend shape, skinning weight and joint regressor parameters, generate the character statistical model in its initial state together with the corresponding pkl file;
s7: convert the pkl file, by writing FBX SDK code, into an fbx file that Maya or a 3D engine can open for inspection;
s8: smooth the head and body with the Delta Mush algorithm, and export the smoothed training weights to a mask file via a script;
s9: replace the learned weights of the initial model with the exported mask file to obtain the final model.
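The ICP fitting used in steps s2–s4 can be sketched as follows. This is a minimal rigid ICP with brute-force nearest-neighbour correspondences and the Kabsch algorithm — a simplified illustration under assumed names, not the patent's actual implementation (the real fitting against MANO/FLAME templates is non-rigid and optimises model parameters, but the correspond-then-align loop is the same core idea):

```python
import numpy as np

def icp_rigid(source, target, iters=50):
    """Minimal rigid ICP: iteratively aligns source (N,3) to target (M,3)
    via nearest-neighbour correspondences and the Kabsch algorithm."""
    src = source.copy()
    for _ in range(iters):
        # 1. nearest-neighbour correspondences (brute force; fine for small N)
        d2 = ((src[:, None, :] - target[None, :, :]) ** 2).sum(-1)
        corr = target[d2.argmin(axis=1)]
        # 2. optimal rigid transform via SVD (Kabsch)
        mu_s, mu_t = src.mean(0), corr.mean(0)
        H = (src - mu_s).T @ (corr - mu_t)
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:      # guard against reflections
            Vt[-1] *= -1
            R = Vt.T @ U.T
        t = mu_t - R @ mu_s
        src = src @ R.T + t           # apply the update to every point
    return src
```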
The method for constructing a character statistical model comprising character shape, facial expression, gestures and pose actions has the following beneficial effects: by combining the ICP algorithm and the PCA algorithm, the invention constructs a character statistical model comprising character shape, facial expression, gestures and pose actions, produces clearly distinguishable virtual characters, and is efficient and easy to apply. This technology for constructing three-dimensional virtual characters can be applied to 3D games: replacing the fully virtual characters of traditional games with virtual characters resembling the user's own face increases realism and interest, improves the user experience, and meets current development needs.
Detailed Description
The invention will be further illustrated with reference to specific examples.
The invention provides a method for constructing a character statistical model comprising character shape, facial expression, gestures and pose actions, which comprises the following steps:
s1: acquire data from an open-source character model dataset;
s2: fit the open-source dataset with the MANO model via the ICP algorithm to obtain a new body dataset with the SMPL topology;
s3: fit the head of the open-source data with the FLAME model via the ICP algorithm to obtain a new dataset with the FLAME topology;
s4: merge the fitted body data and head data to obtain a new dataset sharing a single topology;
s5: learn the body shape of the new dataset with the PCA machine-learning algorithm;
s6: migrate the corresponding points of the new dataset onto the MANO and FLAME topologies and, by matching the blend shape, skinning weight and joint regressor parameters, generate the character statistical model in its initial state together with the corresponding pkl file;
s7: convert the pkl file, by writing FBX SDK code, into an fbx file that Maya or a 3D engine can open for inspection;
s8: smooth the head and body with the Delta Mush algorithm, and export the smoothed training weights to a mask file via a script;
s9: replace the learned weights of the initial model with the exported mask file to obtain the final model.
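The weight smoothing of steps s8–s9 can be illustrated with simple neighbour averaging over the mesh. This is a simplified, illustrative stand-in for the Delta Mush smoothing the patent actually uses inside Maya; the neighbour lists, parameters and function name are assumptions:

```python
import numpy as np

def smooth_skin_weights(W, neighbors, iters=10, alpha=0.5):
    """Blends each vertex's skinning weights W (N,K) toward the mean of its
    mesh neighbours, then renormalises each row to sum to 1.
    neighbors[i] is the list of vertex indices adjacent to vertex i."""
    W = W.astype(float).copy()
    for _ in range(iters):
        avg = np.array([W[list(nbrs)].mean(axis=0) for nbrs in neighbors])
        W = (1 - alpha) * W + alpha * avg      # Laplacian-style relaxation
        W /= W.sum(axis=1, keepdims=True)      # keep weights a partition of 1
    return W
```

Such smoothing softens hard seams in the skinning — for example at the neck, where the fitted head and body weights meet.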
The conventional linear blend skinning deformation formula is:

$W(\bar{T}, J, \theta, \mathcal{W}) : \mathbb{R}^{3N} \times \mathbb{R}^{3K} \times \mathbb{R}^{|\theta|} \times \mathbb{R}^{N \times K} \mapsto \mathbb{R}^{3N}$

where $\bar{T}$ is the static template model, $J$ holds the spatial coordinates of the key points of the model skeleton, $\theta$ holds the rotation of each node, $\mathcal{W}$ is the matrix of bone skinning weights, $N$ is the number of model points, and $K$ is the number of bone points.
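A minimal dense implementation of the linear blend skinning just described, under the assumption that the per-joint rotations have already been composed into 4×4 world transforms (names are illustrative):

```python
import numpy as np

def linear_blend_skinning(T_bar, G, W):
    """Classic LBS. T_bar: (N,3) rest-pose vertices; G: (K,4,4) world
    transforms of the K joints; W: (N,K) skinning weights, rows sum to 1."""
    N = T_bar.shape[0]
    v_h = np.hstack([T_bar, np.ones((N, 1))])     # (N,4) homogeneous coords
    blended = np.einsum('nk,kij->nij', W, G)      # per-vertex blended 4x4
    v_out = np.einsum('nij,nj->ni', blended, v_h) # transform each vertex
    return v_out[:, :3]
```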
The model of the invention is:

$M(\beta, \theta, \psi) = W(T_P(\beta, \theta, \psi), J(\beta), \theta, \mathcal{W})$

$T_P(\beta, \theta, \psi) = \bar{T} + B_S(\beta; \mathcal{S}) + B_P(\theta; \mathcal{P}) + B_E(\psi; \mathcal{E})$

where $T_P(\beta, \theta, \psi)$ plays the role of the static template $\bar{T}$ of traditional linear blend skinning, except that it additionally contains a character shape compensation $B_S(\beta; \mathcal{S})$, a character pose compensation $B_P(\theta; \mathcal{P})$, and a facial expression compensation $B_E(\psi; \mathcal{E})$; the other variables correspond one-to-one with the traditional method.
$B_S(\beta; \mathcal{S}) = \sum_{n=1}^{|\beta|} \beta_n S_n$, where $\mathcal{S}$ is the PCA shape space of the person that we learn from the new dataset and $|\beta|$ is its dimension: to obtain the shape compensation one need only supply a coefficient vector $\beta$ of that length to the formula above. $B_E(\psi; \mathcal{E})$ is computed the same way, except that $\mathcal{E}$ is the expression space. In $B_P(\theta; \mathcal{P})$, $\mathcal{P}$ is the mapping from pose to corrective vertex offsets, obtained by linear regression over a large amount of data.
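Learning the PCA shape space and evaluating the shape compensation reduces to PCA over flattened, registered meshes; a minimal sketch (the data layout, function names and use of plain SVD are assumptions):

```python
import numpy as np

def learn_shape_space(meshes, n_components):
    """meshes: (M, 3N) flattened vertex arrays of M scans registered to one
    shared topology. Returns the mean template and the PCA shape basis."""
    T_bar = meshes.mean(axis=0)                    # static template
    X = meshes - T_bar                             # centre the data
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    S = Vt[:n_components]                          # principal shape directions
    return T_bar, S

def shape_compensation(S, beta):
    """B_S(beta; S) = sum_n beta_n * S_n — vertex offsets for the template."""
    return beta @ S
```

A posed, shaped mesh is then `T_bar + shape_compensation(S, beta)` plus the pose and expression compensations, fed into the skinning function.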
In summary, by combining the ICP and PCA algorithms, the invention constructs a character statistical model comprising character shape, facial expression, gestures and pose actions, produces clearly distinguishable virtual characters, and is efficient and easy to apply. This technology for constructing three-dimensional virtual characters can be applied to 3D games: replacing the fully virtual characters of traditional games with virtual characters resembling the user's own face increases realism and interest, improves the user experience, and meets current development needs.
The foregoing is only a preferred embodiment of the present invention, but the scope of the present invention is not limited thereto; any equivalent substitution or modification that a person skilled in the art could make within the scope of the invention, according to its technical scheme and inventive concept, shall be covered by the scope of protection of the present invention.

Claims (1)

1. A method of constructing a character statistical model comprising character shape, facial expression, gestures and pose actions, comprising the following steps:
s1: acquire data from an open-source character model dataset;
s2: fit the open-source dataset with the MANO model via the ICP algorithm to obtain a new body dataset with the SMPL topology;
s3: fit the head of the open-source data with the FLAME model via the ICP algorithm to obtain a new dataset with the FLAME topology;
s4: merge the fitted body data and head data to obtain a new dataset sharing a single topology;
s5: learn the body shape of the new dataset with the PCA machine-learning algorithm;
s6: migrate the corresponding points of the new dataset onto the MANO and FLAME topologies and, by matching the blend shape, skinning weight and joint regressor parameters, generate the character statistical model in its initial state together with the corresponding pkl file;
s7: convert the pkl file, by writing FBX SDK code, into an fbx file that Maya or a 3D engine can open for inspection;
s8: smooth the head and body with the Delta Mush algorithm, and export the smoothed training weights to a mask file via a script;
s9: replace the learned weights of the initial model with the exported mask file to obtain the final model.
CN201910648258.5A 2019-07-18 2019-07-18 Method for constructing character statistical model containing character shape, facial expression, gesture and gesture actions Active CN110378855B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910648258.5A CN110378855B (en) 2019-07-18 2019-07-18 Method for constructing character statistical model containing character shape, facial expression, gesture and gesture actions

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910648258.5A CN110378855B (en) 2019-07-18 2019-07-18 Method for constructing character statistical model containing character shape, facial expression, gesture and gesture actions

Publications (2)

Publication Number Publication Date
CN110378855A CN110378855A (en) 2019-10-25
CN110378855B true CN110378855B (en) 2023-04-25

Family

ID=68253718

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910648258.5A Active CN110378855B (en) 2019-07-18 2019-07-18 Method for constructing character statistical model containing character shape, facial expression, gesture and gesture actions

Country Status (1)

Country Link
CN (1) CN110378855B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113408433B (en) * 2021-06-22 2023-12-05 华侨大学 Intelligent monitoring gesture recognition method, device, equipment and storage medium
CN113408434B (en) * 2021-06-22 2023-12-05 华侨大学 Intelligent monitoring expression recognition method, device, equipment and storage medium
CN113408435B (en) * 2021-06-22 2023-12-05 华侨大学 Security monitoring method, device, equipment and storage medium

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2005141305A (en) * 2003-11-04 2005-06-02 Advanced Telecommunication Research Institute International Face expression estimating device and method and its program
CN106778628A (en) * 2016-12-21 2017-05-31 张维忠 A kind of facial expression method for catching based on TOF depth cameras

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8599206B2 (en) * 2010-05-25 2013-12-03 Disney Enterprises, Inc. Systems and methods for animating non-humanoid characters with human motion data

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2005141305A (en) * 2003-11-04 2005-06-02 Advanced Telecommunication Research Institute International Face expression estimating device and method and its program
CN106778628A (en) * 2016-12-21 2017-05-31 张维忠 A kind of facial expression method for catching based on TOF depth cameras

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Shape and expression space of realistic human faces; 裴玉茹 (Pei Yuru) et al.; Journal of Computer-Aided Design & Computer Graphics; 2006-05-20 (No. 05); full text *

Also Published As

Publication number Publication date
CN110378855A (en) 2019-10-25

Similar Documents

Publication Publication Date Title
CN110378855B (en) Method for constructing character statistical model containing character shape, facial expression, gesture and gesture actions
JP7227292B2 (en) Virtual avatar generation method and device, electronic device, storage medium and computer program
Lewis et al. Practice and theory of blendshape facial models.
EP4207080A1 (en) Avatar generation method, apparatus and device, and medium
Jörg et al. Data-driven finger motion synthesis for gesturing characters
US20080158224A1 (en) Method for generating an animatable three-dimensional character with a skin surface and an internal skeleton
Yang et al. Curve skeleton skinning for human and creature characters
Yan et al. Inspiration transfer for intelligent design: A generative adversarial network with fashion attributes disentanglement
Shidujaman et al. “roboquin”: A mannequin robot with natural humanoid movements
Kim et al. Drivenshape: a data-driven approach for shape deformation
Ouzounis et al. Kernel projection of latent structures regression for facial animation retargeting
WO2023160074A1 (en) Image generation method and apparatus, electronic device, and storage medium
Liu et al. Modeling realistic clothing from a single image under normal guide
CN108416835B (en) Method and terminal for realizing special face effect
CN103700129A (en) Random human head and random human body 3D (three-dimensional) combination method
CN113239527B (en) Garment modeling simulation system and working method
Li et al. SwinGar: Spectrum-Inspired Neural Dynamic Deformation for Free-Swinging Garments
TW200818056A (en) Drivable simulation model combining curvature profile and skeleton and method of producing the same
Xia et al. Recent advances on virtual human synthesis
Roth et al. Avatar Embodiment, Behavior Replication, and Kinematics in Virtual Reality.
CN108198234B (en) Virtual character generating system and method capable of realizing real-time interaction
Ni et al. 3D face dynamic expression synthesis system based on DFFD
Na et al. Local shape blending using coherent weighted regions
KR101605740B1 (en) Method for recognizing personalized gestures of smartphone users and Game thereof
WO2023169023A1 (en) Expression model generation method and apparatus, device, and medium

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
CP03 Change of name, title or address

Address after: 518000 Times Financial Center 28A7, No. 4001 Shennan Avenue, Shatou Street, Futian District, Shenzhen, Guangdong Province

Patentee after: Guangdong Yuwantang Traditional Chinese Medicine Co.,Ltd.

Address before: 518000 Times Financial Center 28A7, No. 4001 Shennan Avenue, Shatou Street, Futian District, Shenzhen, Guangdong Province

Patentee before: Guangdong Guoshi Wine Industry Co.,Ltd.

Address after: 518000 Times Financial Center 28A7, No. 4001 Shennan Avenue, Shatou Street, Futian District, Shenzhen, Guangdong Province

Patentee after: Guangdong Guoshi Wine Industry Co.,Ltd.

Address before: 518000 Times Financial Center 28A7, No. 4001 Shennan Avenue, Shatou Street, Futian District, Shenzhen, Guangdong Province

Patentee before: Bowa whale (Shenzhen) Technology Co.,Ltd.

CP03 Change of name, title or address
TR01 Transfer of patent right

Effective date of registration: 20240114

Address after: Unit 701-15, 7th Floor, Building 4, North District, A5 Future Science and Technology City, No. 999 Gaoxin Avenue, Donghu New Technology Development Zone, Wuhan City, Hubei Province, 430000

Patentee after: BOWA (WUHAN) TECHNOLOGY Co.,Ltd.

Address before: 518000 Times Financial Center 28A7, No. 4001 Shennan Avenue, Shatou Street, Futian District, Shenzhen, Guangdong Province

Patentee before: Guangdong Yuwantang Traditional Chinese Medicine Co.,Ltd.

TR01 Transfer of patent right