CN109903360A - 3D facial animation control system and control method thereof - Google Patents
3D facial animation control system and control method thereof
- Publication number: CN109903360A
- Application number: CN201711292466.3A
- Authority
- CN
- China
- Prior art keywords
- face
- tested
- expression
- module
- model
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- Processing Or Creating Images (AREA)
Abstract
The invention discloses a 3D facial animation control system and a control method thereof. The 3D facial animation control system drives the expression of an animated character based on measured facial expression data. The system includes an expression processing module and a drive module communicatively connected to the expression processing module. The expression processing module provides the measured facial expression data, and the drive module uses that data to drive the expression of the animated character in real time, mapping the measured expressions onto the corresponding expressions of the animated character.
Description
Technical field
The present invention relates to computer vision technology, and more particularly to a 3D facial animation control system and a control method thereof.
Background art
With the development of computer vision, 3D face detection and recognition have become one of the field's most active technologies. Beyond meeting the needs of high-level security applications such as identity verification, applications built on face detection and recognition continue to enrich everyday experience, for example expression recognition, face-swapping effects, and dynamic facial expression tracking. Among these applications, 3D facial animation systems stand out for their expressiveness, strong visual effects, and the breadth of technologies they combine.
Early 3D facial animation systems were used in film and television special-effects production. Typically, a performer is fitted with an array of sensors and motion-capture devices that record body motion and facial expression signals, which are then used to control the movement and expression of a virtual animated character. In practice, the performer must wear a large number of sensors and motion-capture devices, which is a considerable burden both to put on and to take off. Moreover, to ensure stable signal acquisition, every device must be firmly attached to the performer's body so that it does not fall off or loosen during movement or expression performance, which would cause acquisition failures or errors.
The collected motion and facial expression signals are then analyzed and processed to control the movements of the animated character. However, limited by the difficulty and computational complexity of the processing algorithms, traditional 3D facial animation systems have poor real-time performance. Consequently, during actual capture, a performer often has to repeat the same movement and expression many times to obtain the desired animation effect. In other words, traditional 3D facial animation places high demands on the performer, who must receive specialized training, while performers and animators alike must spend considerable effort on retakes and post-production polishing. Furthermore, the large number of sensors and the expensive motion-capture equipment incur substantial cost, so early 3D facial animation systems were confined to professional fields and saw little application in mass-market multimedia.
In recent years, advances in 3D face detection and recognition have opened new possibilities for 3D facial animation applications. Specifically, a facial expression model can be reconstructed through 3D face detection and recognition, and the recognized expression can then drive the expression changes of an animated character. Based on this approach, a high-precision 3D facial expression animation system becomes possible with nothing more than a face image acquisition device, so that 3D facial animation can reach the mass multimedia market, truly realizing the old verse about swallows that once nested in the halls of nobles now flying into the homes of ordinary people.
Summary of the invention
An object of the invention is to provide a 3D facial animation control system and control method that achieve high-precision 3D facial expression animation tracking.

Another object of the present invention is to provide a 3D facial animation control system and control method that achieve high-precision 3D facial expression tracking with only a monocular camera module.

A main object of the present invention is to provide a 3D facial animation control system and control method whose only required signal acquisition device is a conventional camera module, so that compared with existing 3D animation systems the cost is reduced and the system can be applied more broadly.

Another object of the present invention is to provide a 3D facial animation control system and control method that are simple to operate: the operator only needs to present facial expressions in front of a conventional camera module to obtain high-precision 3D facial expression animation, giving a good user experience.

Another object of the present invention is to provide a 3D facial animation control system and control method whose operation requires no special training, so that the system can be applied in mass-market multimedia.

Another object of the present invention is to provide a 3D facial animation control system and control method with high real-time performance, so that the operator can observe the expression changes of the animated character dynamically and in real time.

Another object of the invention is to provide a 3D facial animation control system and control method that can still track 3D facial expression animation even when the face exhibits large expressions.

Another object of the present invention is to provide a 3D facial animation control system and control method in which 3D face modeling is performed from the two-dimensional RGB image information collected by a conventional camera module, and the expression of the 3D face model is analyzed to generate high-precision 3D character expressions in real time, thereby achieving real-time driving control of the animated character. Compared with traditional 3D animation systems, the 3D facial animation control system provided by the present invention needs no elaborate hardware; its core technology is the design and optimization of algorithms, so its cost is reduced and the user experience is good.
Another object of the present invention is to provide a 3D facial animation control system and control method in which facial feature point detection and recognition are performed with a deep learning algorithm, significantly improving efficiency and accuracy and thus improving the precision and quality of the subsequent 3D face modeling.

Another object of the present invention is to provide a 3D facial animation control system and control method in which 3D face model reconstruction uses a simplified linear deformable (morphable) model, substantially accelerating reconstruction and thereby improving the real-time performance of the 3D facial animation control system.
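For illustration, a simplified linear deformable model of the kind referred to above can be sketched as a linear combination of a mean face and a set of shape basis vectors. The patent does not disclose the model's details, so the tiny meshes and random basis below are stand-ins; a real system would use a PCA basis learned from face scans with thousands of vertices:

```python
import numpy as np

rng = np.random.default_rng(0)

n_vertices = 5                                 # toy mesh size for clarity
mean_shape = rng.normal(size=(n_vertices, 3))  # mean face (x, y, z per vertex)
basis = rng.normal(size=(4, n_vertices, 3))    # 4 shape basis vectors

def reconstruct(coeffs):
    """Linear model: shape = mean + sum_i coeffs[i] * basis[i]."""
    return mean_shape + np.tensordot(coeffs, basis, axes=1)

coeffs = np.array([0.5, -0.2, 0.0, 0.1])       # fitted per-subject coefficients
shape = reconstruct(coeffs)
print(shape.shape)  # (5, 3)
```

Because reconstruction is a single linear combination, it is very fast, which is consistent with the speed claim above.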
Another object of the present invention is to provide a 3D facial animation control system and control method in which, at the stage of computing the weight ratio of each basic expression relative to the neutral face model, multiple constraints are added so that the synthesized expression appears more natural and smooth, improving the visual effect.
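The constraints are not enumerated in this section. A common choice in blendshape weight solving, assumed here purely for illustration, is non-negativity plus a bounded total activation; a minimal sketch is an unconstrained least-squares fit followed by projection onto that constraint set:

```python
import numpy as np

rng = np.random.default_rng(1)

neutral = rng.normal(size=(6, 3))                         # toy neutral mesh
blendshapes = neutral + 0.1 * rng.normal(size=(3, 6, 3))  # 3 basic expressions
deltas = (blendshapes - neutral).reshape(3, -1).T         # columns = offsets

# A target expression built from known weights, so the solve is verifiable.
target = neutral + 0.6 * (blendshapes[0] - neutral) + 0.3 * (blendshapes[2] - neutral)
b = (target - neutral).reshape(-1)

# Unconstrained least squares, then project onto the (assumed) constraint set.
w, *_ = np.linalg.lstsq(deltas, b, rcond=None)
w = np.clip(w, 0.0, 1.0)        # non-negativity and per-weight upper bound
if w.sum() > 1.0:               # bounded total activation
    w = w / w.sum()
print(np.round(w, 3))           # approximately [0.6, 0.0, 0.3]
```

Constraints of this kind keep individual weights from overshooting, which is one plausible reading of why the synthesized expression looks more natural and smooth.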
Another object of the present invention is to provide a 3D facial animation control system and control method in which the Compute Unified Device Architecture (CUDA), a general-purpose parallel computing architecture, is introduced during expression model synthesis. CUDA enables the graphics processing unit (GPU) to solve complex computational problems, accelerating the computation and further improving the real-time control performance of the 3D facial animation control system.
Other advantages and features of the invention will become apparent from the following description, and may be realized by the means and combinations particularly pointed out in the claims.
According to one aspect of the present invention, there is provided a 3D facial animation control system that drives the expression of an animated character based on measured facial expression data, the system comprising:

an expression processing module, which provides the measured facial expression data; and

a drive module communicatively connected to the expression processing module, which drives the expression of the animated character in real time by mapping the measured facial expression data provided by the expression processing module onto the corresponding expression of the animated character.
According to one embodiment of the present invention, the 3D facial animation control system further comprises a face detection module and a 3D face reconstruction module communicatively connected to the face detection module, with the expression processing module communicatively connected to the 3D face reconstruction module, wherein the face detection module detects the feature points of the face in a captured facial image, the 3D face reconstruction module builds a 3D model of the subject's face from those feature points, and the expression processing module provides the measured facial expression data based on that 3D face model.
According to one embodiment of the present invention, the 3D facial animation control system further comprises an image acquisition module communicatively connected to the face detection module, wherein the face detection module detects faces in the images captured by the image acquisition module.
According to one embodiment of the present invention, the image acquisition module is an RGB camera module that captures the facial image by photographing the face.
According to one embodiment of the present invention, the image acquisition module is communicatively connected to an RGB camera module, and the RGB camera module captures the facial image by photographing the face.
According to one embodiment of the present invention, the 3D facial animation control system further comprises a special effects module communicatively connected to the face detection module, wherein when the face detection module detects no face, the special effects module displays a preset effect.
According to one embodiment of the present invention, the face detection module detects 68 feature points of the face in the captured facial image.
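For reference, a widely used 68-point layout (the iBUG 300-W annotation scheme) groups the points as below. The patent does not state which 68-point convention it uses, so this grouping is an assumption for illustration:

```python
# 68-point facial landmark layout per the iBUG 300-W convention (assumed).
LANDMARK_GROUPS = {
    "jaw":        range(0, 17),   # face contour
    "right_brow": range(17, 22),
    "left_brow":  range(22, 27),
    "nose":       range(27, 36),
    "right_eye":  range(36, 42),
    "left_eye":   range(42, 48),
    "mouth":      range(48, 68),  # outer and inner lip contours
}

total = sum(len(r) for r in LANDMARK_GROUPS.values())
print(total)  # 68
```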
According to one embodiment of the present invention, the expression processing module further comprises an expression transfer module and an expression synthesis module communicatively connected to the expression transfer module, wherein the expression transfer module generates a set of basic expression models based on the subject-specific 3D model of the face, and the expression synthesis module computes the weight ratio of each basic expression model; these weight ratios constitute the measured facial expression data.

According to one embodiment of the present invention, the subject-specific 3D model on which the basic expression models are based is the 3D model of the subject's face in its neutral state.
According to another aspect of the present invention, there is further provided a 3D facial animation control method, comprising the steps of:

capturing a two-dimensional RGB image of a subject's face;

detecting and recognizing the two-dimensional RGB image of the subject's face to obtain the subject's facial feature point data;

generating a 3D model of the subject's face based on the facial feature point data;

transferring a set of basic expression models of a standard face model onto the 3D model of the subject's face under neutral expression, to generate a set of basic expression models based on the subject's 3D face model;

computing the weight ratio of each basic expression model in synthesizing the 3D model of the subject's face; and

assigning the weight ratios to the corresponding set of expression models of an animated character, to synthesize the expression model of the animated character.
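The steps above can be sketched as the following control loop. Every function body here is a toy stand-in (the real detector, reconstructor, and weight solver are specified elsewhere in the document, not here); only the sequence of steps mirrors the claimed method:

```python
import numpy as np

def detect_landmarks(image):
    # Stand-in for the deep-learning 68-point feature detector.
    return np.zeros((68, 2))

def reconstruct_3d(landmarks):
    # Stand-in for fitting a 3D face model to the 2D feature points.
    return np.zeros((68, 3))

def solve_weights(face3d, blendshape_deltas):
    # Stand-in for the constrained weight solve; returns uniform weights.
    return np.full(len(blendshape_deltas), 1.0 / len(blendshape_deltas))

def drive_character(weights, character_blendshapes, character_neutral):
    # Apply the same weights to the character's corresponding blendshapes.
    deltas = character_blendshapes - character_neutral
    return character_neutral + np.tensordot(weights, deltas, axes=1)

frame = np.zeros((480, 640, 3))     # one captured RGB frame (toy data)
char_neutral = np.zeros((10, 3))    # toy character mesh
char_shapes = np.ones((4, 10, 3))   # 4 toy character expression targets

w = solve_weights(reconstruct_3d(detect_landmarks(frame)), np.zeros((4, 68, 3)))
mesh = drive_character(w, char_shapes, char_neutral)
print(mesh.shape)  # (10, 3)
```

Run per frame, this loop is what lets the character's expression follow the performer's face in real time.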
According to one embodiment of the present invention, the step of detecting and recognizing the two-dimensional RGB image of the subject's face to obtain the facial feature point data is performed with a deep learning algorithm.
According to one embodiment of the present invention, the 3D facial animation control method based on a single camera module further comprises the steps of:

capturing a two-dimensional RGB image of the subject's face in its neutral state;

detecting and recognizing the two-dimensional RGB image of the subject's face to obtain the subject's facial feature point data;

generating a 3D model of the subject's face in its neutral state based on the facial feature point data; and

storing the 3D model of the subject's face in its neutral state.
According to one embodiment of the present invention, the 3D facial animation control method based on a single camera module further comprises the step of:

retrieving the pre-stored 3D model of the subject's face in its neutral state.
According to one embodiment of the present invention, the step of synthesizing the expression model of the animated character further comprises the steps of:

providing a set of basic expression models of the animated character; and

assigning the computed expression weight ratios to the basic expression models of the animated character to generate the synthesized expression of the animated character.
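The weight-assignment step can be sketched with the standard blendshape mixing formula (assumed here; this section does not spell out the synthesis formula): the character's synthesized expression is its neutral mesh plus the weighted offsets of its basic expression models, using the weights solved on the performer's face:

```python
import numpy as np

# Toy 2-vertex character meshes for clarity.
neutral = np.array([[0.0, 0.0, 0.0],
                    [1.0, 0.0, 0.0]])
smile   = np.array([[0.0, 0.2, 0.0],
                    [1.0, 0.2, 0.0]])
frown   = np.array([[0.0, -0.1, 0.0],
                    [1.0, -0.1, 0.0]])

weights = {"smile": 0.5, "frown": 0.0}   # weights measured on the performer
shapes  = {"smile": smile, "frown": frown}

synth = neutral.copy()
for name, w in weights.items():
    synth += w * (shapes[name] - neutral)   # standard blendshape mix (assumed)

print(synth[0])  # [0.  0.1 0. ]
```

Because each basic expression on the character corresponds one-to-one with a basic expression on the performer, the same weight vector transfers the expression directly.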
According to another aspect of the present invention, there is further provided a 3D facial animation control method, comprising the steps of:

(a) providing measured facial expression data; and

(b) driving the expression of an animated character in real time by mapping the measured facial expression data onto the corresponding expression of the animated character.
According to one embodiment of the present invention, step (a) further comprises the steps of:

(a.1) detecting the feature points of the face in a captured facial image;

(a.2) building a 3D model of the subject's face from the feature points; and

(a.3) providing the measured facial expression data based on the 3D face model.

According to one embodiment of the present invention, step (a.1) is preceded by the step of: capturing the facial image.
According to one embodiment of the present invention, in the above method, the facial image is captured by an RGB camera module photographing the face.
According to one embodiment of the present invention, the above method further comprises the steps of:

generating a set of basic expression models based on the subject-specific 3D model of the face; and

computing the weight ratio of each basic expression model, wherein these weight ratios constitute the measured facial expression data.

According to one embodiment of the present invention, the subject-specific 3D model on which the basic expression models are based is the 3D model of the subject's face in its neutral state.
Further objects and advantages of the present invention will become fully apparent from the following description and the accompanying drawings.

These and other objects, features, and advantages of the invention are fully demonstrated by the following detailed description, drawings, and claims.
Brief description of the drawings
Fig. 1 is a schematic block diagram of a 3D facial animation control system according to a preferred embodiment of the present invention.

Fig. 2 is a block diagram of 3D face model reconstruction by the 3D facial animation control system according to the above preferred embodiment of the present invention.

Fig. 3 is a block diagram of the generation of a series of expression models of the subject's face by the 3D facial animation control system according to the above preferred embodiment of the present invention.

Fig. 4 is a block diagram of the driving of the animated character by the 3D facial animation control system according to the above preferred embodiment of the present invention.

Fig. 5 is a block diagram of a 3D facial animation control method of the 3D facial animation control system according to the above preferred embodiment of the present invention.

Fig. 6 is a block diagram of the generation of the 3D model of the subject's face in its neutral state by the 3D facial animation control system according to the above preferred embodiment of the present invention.

Fig. 7 is a block diagram of the steps of driving the animated character by the 3D facial animation control system according to the above preferred embodiment of the present invention.

Fig. 8 is a schematic diagram of the operating effect of the 3D facial animation control system according to the above preferred embodiment of the present invention.

Figs. 9A to 9F are specific examples of the 3D facial animation control system according to the preferred embodiment of the present invention.
Detailed description of the embodiments
The following description is provided to disclose the invention so that those skilled in the art can practice it. The preferred embodiments described below are given only as illustrations, and other obvious modifications will occur to those skilled in the art. The basic principles of the invention defined in the following description can be applied to other embodiments, variations, improvements, equivalents, and other technical solutions that do not depart from the spirit and scope of the invention.
Those skilled in the art will understand that, in the disclosure of the invention, orientation or positional terms such as "longitudinal", "transverse", "upper", "lower", "front", "rear", "left", "right", "vertical", "horizontal", "top", "bottom", "inner", and "outer" are based on the orientations or positional relationships shown in the drawings. They are used only for convenience and simplicity of description, and do not indicate or imply that the referenced device or element must have a particular orientation or be constructed and operated in a particular orientation; therefore, these terms are not to be understood as limiting the invention.
It should be understood that the term "a" or "an" means "at least one" or "one or more"; that is, in one embodiment the quantity of an element may be one, while in other embodiments the quantity may be plural, and the term "a" is not to be understood as a limitation on quantity.
As shown in Figs. 1 to 8, a 3D facial animation control system 1 according to a first preferred embodiment of the present invention is illustrated. The 3D facial animation control system 1 drives, in real time, the expression animation of an animated character created in the system based on the operator's real-time expression signals. It offers high real-time performance, strong entertainment value, and good operability, so that the 3D facial animation system can be widely applied in mass-market multimedia.

More specifically, the 3D facial animation control system 1 reconstructs a 3D face model from images of the subject's face and, by analyzing the expressions of that 3D model, reconstructs a high-precision 3D animated expression model in real time, thereby achieving real-time driving control of the animated character. That is, during operation of the 3D facial animation control system 1, the only information to be acquired is the image of the subject's face; the only sensor device the system provided by the present invention requires is a single camera module. Thus, compared with traditional 3D facial animation systems, the system needs far less hardware. It should be appreciated that the core technology of the 3D facial animation control system 1 lies in algorithm design and optimization, so its cost is relatively low. Furthermore, the operator only needs to present the desired facial expressions in front of the camera module to control the animated character synchronously in real time, with low operating difficulty and a better operating experience.
As shown in Fig. 1, in the preferred embodiment of the present invention, the 3D facial animation control system 1 comprises an image acquisition module 10, a face detection module 20, a 3D face reconstruction module 30, an expression processing module 40, and a drive module 50, which are communicatively connected to one another. The image acquisition module 10 acquires image information of the subject's face, for example its two-dimensional RGB image information. The face detection module 20 is communicatively connected to the image acquisition module 10 and detects and recognizes the feature points of the subject's face; for example, the face detection module 20 obtains the two-dimensional RGB image information of the face from the image acquisition module 10 and determines the facial feature points from it. In this specific embodiment of the 3D facial animation control system 1 of the invention, the face detection module 20 can accurately detect n feature points of the face from the two-dimensional RGB image information, for example 68 feature points. The 3D face reconstruction module 30 reconstructs a 3D model of the subject's face based on the feature point data provided by the face detection module 20. The expression processing module 40 analyzes the expression data of the subject's face based on the 3D face model built by the 3D face reconstruction module 30, and the drive module 50 synchronizes the expression data to an animated character to create the character's expression in real time, thereby controlling the character's expression animation in real time.
In the preferred embodiment of the present invention, the image acquisition module 10 is implemented as a camera module, such as but not limited to an RGB camera module, to acquire the two-dimensional RGB image of the subject's face. It should be understood, however, that the type of camera module is not restricted, as long as it can acquire images of the subject's face. Preferably, the RGB camera module has a relatively high pixel count, so as to provide high-quality raw image data for the subsequent extraction of facial feature points. In other embodiments, the image acquisition module 10 may also acquire the two-dimensional RGB image of the subject's face by being communicatively connected to the RGB camera module. Fig. 9A, for example, shows a two-dimensional RGB image of a subject's face acquired by the image acquisition module 10.
Further, the facial image is received by the face detection module 20, which analyzes and processes it to obtain the subject's facial feature point data, as shown in Figs. 9B and 9C. As is known in the art, facial feature point detection automatically locates key facial feature points in an input face image, such as the eyes, nose, mouth corners, and the contour points of each part of the face. However, given the subject's own factors such as expression, pose, and age, and external factors such as shooting angle, occlusions (for example, glasses), illumination conditions, and background, traditional face recognition algorithms such as Locality Preserving Projections (LPP) and Principal Component Analysis (PCA) have become increasingly unable to meet design requirements. On the one hand, traditional face recognition algorithms are hand-designed; because the influencing factors are heterogeneous and face data is nonlinearly distributed, hand-designed algorithms can hardly adapt to this complexity, and their precision and accuracy are relatively low. On the other hand, hand-designed face recognition algorithms are often computationally heavy, with complicated formulas, so detecting facial feature points takes a long time, which in turn degrades the real-time performance of the entire 3D facial animation control system 1.
In the preferred embodiment of the present invention, the face detection module 20 performs face detection and feature point alignment based on deep learning, whose efficiency and effect are significantly improved compared with traditional methods. Those skilled in the art will appreciate that deep learning is a class of machine learning methods based on representation learning of data, which replaces manual feature engineering with unsupervised or semi-supervised feature learning and hierarchical feature extraction. That is, the algorithm employed by the deep learning based face detection module 20 is formed spontaneously by feeding data to the face detection module 20, without manual programming. Thus, on the one hand, manual design effort is saved; on the other hand, since the algorithm is formed spontaneously from large amounts of data, its accuracy is relatively higher and its efficiency is greater.
In particular, the face detection module 20 detects the face region of the tested face image data through a Multi-task Cascaded Convolutional Neural Network (MTCNN), and precisely locates 68 feature points, wherein the feature points include the eyes, nose, mouth corners, and the contour points of each facial part, with reference to Fig. 9C. Those skilled in the art will appreciate that MTCNN is a well-known technique that uses a multi-stage cascade of convolutional neural networks (CNNs) to detect, from coarse to fine, the face region in the tested face image and precisely locate the corresponding number of facial feature points. It is noted that, in the present invention, the number of facial feature points selected by the face detection module 20 is not a limitation of the present invention. That is, according to actual needs, the number of facial feature points detected by the face detection module 20 may be adjusted accordingly, for example to 72 or 80.
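For concreteness, the 68 detected feature points can be grouped by facial part. The index ranges below follow the widely used iBUG 68-point annotation convention; this convention is an assumption, since the patent specifies only the count and the parts covered, not the exact ordering.

```python
import numpy as np

# Index ranges of the common iBUG 68-point landmark layout
# (an assumption -- the patent does not state its ordering).
REGIONS = {
    "jaw":        range(0, 17),
    "right_brow": range(17, 22),
    "left_brow":  range(22, 27),
    "nose":       range(27, 36),
    "right_eye":  range(36, 42),
    "left_eye":   range(42, 48),
    "mouth":      range(48, 68),
}

def split_landmarks(pts):
    """Group a (68, 2) landmark array into named facial regions."""
    assert pts.shape == (68, 2)
    return {name: pts[list(idx)] for name, idx in REGIONS.items()}

pts = np.random.rand(68, 2)          # stand-in for detector output
regions = split_landmarks(pts)
print(regions["mouth"].shape)        # (20, 2)
```

Such a grouping is what allows later stages (reconstruction, expression synthesis) to reason about eyes, mouth, and contours separately.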
Further, the tested facial feature point data is used by the three-dimensional face reconstruction module 30 to generate a three-dimensional face model, with reference to Fig. 9D. More specifically, in the preferred embodiment of the present invention, the three-dimensional face reconstruction module 30 performs three-dimensional face model reconstruction based on the morphable model technique (Morphable Model Technology). Those skilled in the art will appreciate that the essence of the morphable model technique is to use the face models of a database as basis vectors and to fit them so as to generate a three-dimensional face model. During three-dimensional face model reconstruction, the existing morphable model technique first takes the face models in the database as basis vectors, and combines the shape vectors Si and textures Ti of the face database with unknown parameters α and β. In the specific fitting procedure, α and β are first randomly initialized to generate a random 3D model, and the 3D model is projected onto a two-dimensional plane to obtain a new two-dimensional face image; a loss function is then constructed from the newly generated two-dimensional face image and the input two-dimensional face image, and the procedure iterates in a loop until the final convergence meets a preset precision requirement. It will be appreciated that the existing morphable model technique is fitted with nonlinear equations, which is computationally intensive and time-consuming.
Correspondingly, in the present invention, the three-dimensional face reconstruction module 30 optimizes the traditional morphable model algorithm by substituting a simplified linear deformable model energy equation for the existing nonlinear deformation energy equation, so that the three-dimensional face reconstruction module 30 can construct the three-dimensional face model at a comparatively faster speed, thereby guaranteeing that the three-dimensional facial animation system has good real-time performance. More specifically, the optimized morphable model algorithm takes the face models of the database as basis vectors and selects only the shape vectors Si as fitting parameters. In the face reconstruction computation, initial values of the shape vectors Si are first randomly initialized to generate a random 3D face model, which is projected onto a two-dimensional plane to obtain a new two-dimensional face image; a contrast function is then constructed from the facial feature points of the new two-dimensional face image and the facial feature points of the input two-dimensional face image, and the procedure iterates in a loop until the final convergence meets a preset precision requirement. It will be appreciated that, in the present invention, the tested facial feature point data acquired by the face detection module 20 is substituted for the entire tested face image data used in the existing morphable model algorithm when constructing the contrast function, so that the contrast function is simplified and the fitting efficiency is further improved. In addition, those skilled in the art should readily appreciate that facial feature point data characterizes the expression information of the tested face more directly and distinctly; that is to say, the three-dimensional face model generated by the optimized morphable model algorithm restores the expression of the tested face relatively more faithfully.
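A much-simplified numerical sketch of this landmark-only linear fitting follows. The dimensions, the orthographic projection, and the closed-form least-squares solve are all assumptions not stated in the patent; the point is that once only the shape coefficients are fitted against 2D landmarks, the energy becomes linear and no iterative nonlinear optimization is required.

```python
import numpy as np

rng = np.random.default_rng(0)
L, K = 68, 10                       # landmarks, shape basis size
mean = rng.normal(size=(L, 3))      # mean face landmarks (toy data)
basis = rng.normal(size=(K, L, 3))  # shape basis vectors S_i (toy data)

def project(shape3d):
    """Orthographic projection onto the image plane (an assumption)."""
    return shape3d[:, :2]

def fit_shape(target2d):
    """Solve min_alpha || project(mean + sum_i a_i S_i) - target2d ||^2.

    The model is linear in alpha, so a single least-squares solve
    replaces the iterative nonlinear fitting of the classical
    morphable model.
    """
    A = np.stack([project(b).ravel() for b in basis], axis=1)  # (2L, K)
    b = (target2d - project(mean)).ravel()
    alpha, *_ = np.linalg.lstsq(A, b, rcond=None)
    return alpha

# Synthesize a target from known coefficients, then recover them.
alpha_true = rng.normal(size=K)
target = project(mean + np.tensordot(alpha_true, basis, axes=1))
alpha_hat = fit_shape(target)
print(np.allclose(alpha_hat, alpha_true))  # True
```

In practice the basis would come from a face database and the 2D targets from the 68 detected feature points, but the algebraic structure is the same.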
Further, the expression processing module 40 analyzes and processes the three-dimensional face model constructed by the three-dimensional face reconstruction module 30 to obtain tested facial expression data, wherein the tested facial expression data is, further, synchronized to a cartoon role, so as to generate the expression model of the cartoon role in real time, whereby the expression of the tested face is synchronized with the cartoon role in real time. That is, when the expression animation of the tested face is input to the three-dimensional facial animation control system 1, the cartoon role can, based on the expression animation of the tested face, dynamically present in real time an expression animation synchronized with that of the tested face.
More specifically, as shown in Fig. 1, the expression processing module 40 includes an expression transmission module 41 and an expression synthesis module 42, wherein the expression transmission module 41 generates a group of basic expression models of the neutral face model based on the tested face, and the expression synthesis module 42 calculates the weight ratios respectively occupied by the basic expression models. It will be appreciated that, in the present invention, the tested facial expression data is the weight ratio data respectively occupied by the basic expression models; correspondingly, in a subsequent stage, this group of data is applied to generate the expression model of the cartoon role in real time, so as to achieve the purpose of driving the animation of the cartoon role. In particular, in the preferred embodiment of the present invention, the expression transmission module 41 generates 72 basic expression models of the neutral face model based on the tested face; correspondingly, the expression synthesis module 42 calculates the weight ratio respectively occupied by each of the 72 basic expression models. Of course, those skilled in the art will know that the number of basic expression models is not a limitation of the present invention and may be adjusted accordingly according to actual needs.
Correspondingly, in the process in which the expression transmission module 41 produces the basic expression models of the neutral face model based on the tested face, the expression transmission module 41 transfers a group of pre-stored basic expression models of a standard three-dimensional face model, by means of a deformation transfer (Deformation Transfer) algorithm, respectively onto the three-dimensional face model of the tested face in its natural state, thereby generating a group of complete expression models based on the tested face. That is, in the present invention, the neutral face model based on the tested face is the three-dimensional face model of the tested face in its natural state. In particular, in the preferred embodiment of the present invention, the expression transmission module 41 transfers the 72 pre-stored basic expressions of the standard three-dimensional model, by the deformation transfer (Deformation Transfer) algorithm, respectively onto the three-dimensional face model of the tested face in its natural state, at which point a group of 72 complete expression models based on the tested face has been produced.
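The transfer step can be illustrated with a heavily simplified stand-in. Full deformation transfer (in the Sumner-Popović sense) maps per-triangle deformation gradients between meshes; the sketch below instead uses a naive additive delta transfer (standard expression minus standard neutral, added to the subject's neutral), which conveys the idea on toy vertex arrays but is not the algorithm the patent relies on.

```python
import numpy as np

def additive_transfer(std_neutral, std_expressions, subj_neutral):
    """Carry each standard expression onto the subject's neutral face.

    A naive stand-in for deformation transfer: the per-vertex offset
    of each standard expression from the standard neutral model is
    simply added to the subject's neutral model.
    """
    return [subj_neutral + (expr - std_neutral)
            for expr in std_expressions]

rng = np.random.default_rng(1)
V = 100                                  # toy vertex count
std_neutral = rng.normal(size=(V, 3))
std_exprs = [std_neutral + rng.normal(scale=0.05, size=(V, 3))
             for _ in range(72)]         # 72 standard basic expressions
subj_neutral = rng.normal(size=(V, 3))

subj_exprs = additive_transfer(std_neutral, std_exprs, subj_neutral)
print(len(subj_exprs), subj_exprs[0].shape)  # 72 (100, 3)
```

A real implementation would preserve local surface deformation rather than raw offsets, which is exactly what deformation transfer provides.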
It is noted that, before expression transfer is carried out to generate the group of complete expression models based on the tested face, the three-dimensional face model of the tested face in its natural state should first be prepared. It will be appreciated that, in the present invention, the three-dimensional face model of the tested face in its natural state can likewise be created through the image acquisition module 10, the face detection module 20 and the three-dimensional face reconstruction module 30, and be stored in advance in the three-dimensional facial animation control system 1 for generating the group of complete expression models based on the tested face. It is also possible that the three-dimensional face model of the tested face in its natural state is made on the spot; in this case, the three-dimensional facial animation control system 1 further includes an initialization stage, wherein, in the initialization stage, the operator's image in its natural state is acquired by the image acquisition module 10, and the three-dimensional face model of the tested face in its natural state is then generated by the face detection module 20 and the three-dimensional face reconstruction module 30, so that, in the subsequent expression model transfer and generation stage, the three-dimensional face model of the tested face in its natural state can be called, as shown in Fig. 3.
Further, the expression synthesis module 42, based on the three-dimensional face model of the tested face and the group of basic expression models of the tested face, analyzes and calculates the expression data of the tested face. More specifically, the expression synthesis module 42 establishes an energy equation based on the basic expression models of the tested face, so as to calculate the weight ratio respectively occupied by each of the 72 expression models. Those skilled in the art will appreciate that the energy equation includes a basic cost function, wherein, once both sides of the cost function equation are established, the theoretical weight ratio values of the 72 expression models are calculated; however, owing to the deviation that exists between theory and practice, and to the characteristics of human vision, the synthesized expression transitions are often not smooth and natural enough. Therefore, in the present invention, besides the basic cost function, the energy equation also incorporates numerous constraint conditions, so that the synthesized expression looks natural and smooth. That is to say, in the preferred embodiment of the present invention, the expression synthesis module 42 further includes an optimization module, wherein the optimization module incorporates the numerous constraint conditions and fine-tunes the synthesized expression model, so that the synthesized expression better conforms to the characteristics of human vision. It is noted that, in some embodiments of the present invention, the parameters of the optimization module can be adjusted by manual input, so as to meet the visual demands of specific groups of people. Fig. 9E shows a result obtained after synthesis by the expression synthesis module 42.
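A minimal numerical sketch of this weight-solving step follows, under assumptions the patent does not state: the tracked face is approximated as a linear blend of the 72 expression models, and the weights are solved as a bound-constrained least-squares problem, with the [0, 1] bounds standing in for the patent's additional constraint terms.

```python
import numpy as np
from scipy.optimize import lsq_linear

rng = np.random.default_rng(2)
V, E = 100, 72                          # vertices, basic expressions
neutral = rng.normal(size=V * 3)
deltas = rng.normal(size=(E, V * 3))    # expression i minus neutral

def solve_weights(observed):
    """Find w in [0,1]^E minimizing ||neutral + deltas.T @ w - observed||.

    The bounds play the role of the constraint conditions added to the
    basic cost function (a simplification of the patent's energy
    equation).
    """
    res = lsq_linear(deltas.T, observed - neutral, bounds=(0.0, 1.0))
    return res.x

# Synthesize an observation from known weights, then recover them.
w_true = rng.uniform(0.0, 1.0, size=E)
observed = neutral + deltas.T @ w_true
w_hat = solve_weights(observed)
print(np.allclose(w_hat, w_true, atol=1e-3))  # True
```

Temporal smoothness terms, which the patent's constraints also target, could be added as extra rows penalizing the difference from the previous frame's weights.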
In addition, during the synthesis of the expression models, the expression synthesis module 42 introduces the general-purpose parallel computing architecture CUDA (Compute Unified Device Architecture), wherein the general-purpose parallel computing architecture enables the graphics processing unit, GPU (Graphics Processing Unit), to solve complex computational problems and accelerates the computation speed, so that the real-time control performance of the three-dimensional facial animation control system 1 is further improved.
Those skilled in the art should easily understand that, in the present invention, the expression synthesis module 42 is the core of the three-dimensional facial animation control system 1, and its performance dramatically affects the overall effect of the three-dimensional facial animation control system 1. The tested facial expression data acquired by the expression synthesis module 42 is the core data for generating the expression model of the cartoon role; thus, guaranteeing the high precision of the tested facial expression data is the key to ensuring that the three-dimensional facial animation system has high fidelity and high reproducibility. It should be pointed out that each functional module of the three-dimensional face system provided by the present invention has been optimized, and the modules are interlinked, so as to guarantee that the tested facial expression data has relatively high precision. More specifically, firstly, the face detection module 20 carries out facial feature point detection and recognition using a deep learning algorithm, whose effect and precision are significantly improved. Secondly, the three-dimensional face reconstruction module 30 carries out three-dimensional model reconstruction based on the optimized morphable model technique and the optimized facial feature point data, so that the precision of the three-dimensional face model is correspondingly improved as well. Finally, the expression processing module 40 is correspondingly optimized, so that the precision of the tested facial expression data is effectively guaranteed.
Further, the drive module 50 of the three-dimensional facial animation control system 1 uses the tested facial expression data (i.e., the weight ratio data respectively occupied by the basic expression models) to generate the three-dimensional expression model of the cartoon role in real time, thereby driving the expression of the cartoon role in real time. More specifically, the drive module 50 produces, according to the corresponding cartoon role, a group of cartoon role expression models of corresponding number; in turn, the tested facial expression data is assigned to the cartoon role expression models so as to synthesize the three-dimensional expression model of the cartoon role. In particular, in the preferred embodiment of the present invention, the drive module 50 produces 72 basic expression models of the cartoon role according to the corresponding cartoon role, and then assigns the calculated 72 expression weight ratio data to the 72 basic expression models of the cartoon role, so as to generate the synthesized expression of the cartoon role. It will be appreciated that, when the expression of the tested face changes, the expression of the cartoon role also changes accordingly; in this way, the animation control effect of the cartoon role is realized.
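A minimal sketch of this retargeting step follows. The linear blendshape formula below is a standard technique and an assumption, since the patent does not spell out its blending formula: the weights solved for the tested face are applied directly to the cartoon role's own 72 expression models.

```python
import numpy as np

def drive_character(char_neutral, char_expressions, weights):
    """Blend the cartoon role's expression models with tracked weights.

    Standard linear blendshape formula (assumed):
        result = neutral + sum_i w_i * (expression_i - neutral)
    """
    out = char_neutral.copy()
    for w, expr in zip(weights, char_expressions):
        out += w * (expr - char_neutral)
    return out

rng = np.random.default_rng(3)
V = 100
char_neutral = rng.normal(size=(V, 3))
char_exprs = [char_neutral + rng.normal(scale=0.1, size=(V, 3))
              for _ in range(72)]       # the role's 72 basic expressions
weights = np.zeros(72)

# All-zero weights reproduce the neutral character face exactly.
frame = drive_character(char_neutral, char_exprs, weights)
print(np.allclose(frame, char_neutral))  # True
```

Because the tested face and the cartoon role share the same set of 72 semantic expressions, the weight vector is the only data that needs to cross from tracking to animation each frame.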
It is noted that in the present invention, the type of the cartoon role can be liked voluntarily selecting according to operator individual
It selects.Correspondingly, the 3 D human face animation control system 1 includes a cartoon role storage unit 51, wherein a series of animation angles
Color and corresponding basic facial expression model are stored in the cartoon role storage unit 51, and the cartoon role storage unit
51 can be edited, to increase new cartoon role or delete old cartoon role, thus the Selective type energy of the cartoon role
Meet the hobby demand of different people, and can grow with each passing hour.
In addition, the three-dimensional facial animation control system 1 provided by the present invention can be applied to various clients, such as computer desktops or mobile device terminals, and can even be integrated into modeling and rendering software such as Maya, 3DMax and Unity as an interactive animation extension. Of course, the three-dimensional facial animation control system 1 can also be applied to other purposes, which will not be elaborated here.
In addition, the three-dimensional facial animation control system 1 further includes an FX module 60, wherein the FX module 60 is communicatively connected to the face detection module 20, wherein, when the face detection module 20 does not detect a portrait, the FX module 60 is able to present various three-dimensional face special effects, such as, but not limited to, celebrity Morphing face changing, dynamic textures, and automatic expression demonstrations of celebrity models, with reference to Fig. 8 and Fig. 9F.
Correspondingly, as shown in Fig. 5 to Fig. 7, a three-dimensional facial animation control method based on a single camera module is elucidated, wherein the three-dimensional facial animation control method comprises the steps of:
acquiring a two-dimensional RGB image of a tested face;
detecting and recognizing the two-dimensional RGB image of the tested face to obtain tested facial feature point data;
generating a three-dimensional face model of the tested face based on the tested facial feature point data;
transferring a group of basic expression models of a standard face model onto the three-dimensional face model of the tested face under a natural expression, so as to generate a group of basic expression models of the three-dimensional face model based on the tested face;
calculating the weight ratio data respectively occupied by the basic expression models based on the tested face in synthesizing the three-dimensional face model of the tested face; and
assigning the weight ratio data respectively to a group of expression models of corresponding number of a cartoon role, so as to synthesize the expression model of the cartoon role.
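The steps above can be sketched as one driver function. Every function passed in below is a hypothetical placeholder for the corresponding module of the system; none of these names is defined by the patent, and the trivial stand-ins merely show how the data flows from camera frame to driven character.

```python
def animate_frame(rgb_image, standard_expressions, char_expressions,
                  detect_landmarks, reconstruct_3d, transfer,
                  solve_weights, blend):
    """One pass of the single-camera control method (placeholder names)."""
    landmarks = detect_landmarks(rgb_image)        # 68 feature points
    face3d = reconstruct_3d(landmarks)             # morphable-model fit
    subj_exprs = transfer(standard_expressions, face3d)
    weights = solve_weights(face3d, subj_exprs)    # 72 weight ratios
    return blend(char_expressions, weights)        # driven cartoon role

# Trivial stand-in stages so the pipeline runs end to end.
result = animate_frame(
    rgb_image="frame0",
    standard_expressions=["smile", "blink"],
    char_expressions={"smile": 1.0, "blink": 2.0},
    detect_landmarks=lambda img: [img],
    reconstruct_3d=lambda lm: lm,
    transfer=lambda std, face: std,
    solve_weights=lambda face, exprs: {"smile": 0.5, "blink": 0.0},
    blend=lambda chars, w: sum(chars[k] * v for k, v in w.items()),
)
print(result)  # 0.5
```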
In the three-dimensional facial animation control method provided by the present invention, the only external signal acquisition device needed is a conventional camera module, whose purpose is to acquire the two-dimensional RGB image of the tested face. Therefore, compared with traditional three-dimensional facial animation systems, the three-dimensional facial animation system provided by the present invention requires little hardware, and its core technical scheme lies in the design and optimization of the algorithms, so that its cost is lower. Meanwhile, the operator need only provide the necessary expression signals before the camera module to control the cartoon role in real-time synchronization, so that the operation difficulty is low and the operating experience is better.
In the step of obtaining the tested facial feature point data, a face detection module 20 executes the step based on a deep learning algorithm. More specifically, the face detection module 20 detects the face region of the tested face image data through a Multi-task Cascaded Convolutional Neural Network (MTCNN), and precisely locates 68 feature points, wherein the feature points include the eyes, nose, mouth corners, and the contour points of each facial part. Those skilled in the art will appreciate that MTCNN is a well-known technique that uses a multi-stage cascade of convolutional neural networks (CNNs) to detect, from coarse to fine, the face region in the tested face image and precisely locate the corresponding number of facial feature points.
The step of generating the three-dimensional face model of the tested face is executed by a three-dimensional face reconstruction module 30, which carries out three-dimensional face modeling based on the morphable model technique and the tested facial feature point data. It is to be pointed out that the three-dimensional face reconstruction module 30 optimizes the traditional morphable model algorithm by substituting a simplified linear deformable model energy equation for the existing nonlinear deformation energy equation, so that the three-dimensional face reconstruction module 30 can construct the three-dimensional face model at a comparatively faster speed, thereby guaranteeing the real-time performance of the three-dimensional facial animation system.
In the step of transferring the group of basic expression models of the standard face model onto the three-dimensional face model of the tested face under a natural expression, so as to generate the group of basic expression models of the three-dimensional face model based on the tested face, an expression transmission module 41 transfers a group of pre-stored basic expression models of the standard three-dimensional model, by the deformation transfer (Deformation Transfer) algorithm, respectively onto the three-dimensional face model of the tested face in its natural state, thereby generating a group of complete expression models based on the tested face.
It is noted that, in the present invention, the three-dimensional face model of the tested face in its natural state can likewise be created through the image acquisition module 10, the face detection module 20 and the three-dimensional face reconstruction module 30, and be stored in advance in the three-dimensional facial animation control system 1 for generating the group of complete expression models based on the tested face. It is also possible that the three-dimensional face model of the tested face in its natural state is made on the spot; that is to say, the three-dimensional facial animation control system 1 further includes an initialization stage, wherein, in the initialization stage, the operator's image information in its natural state is acquired by the image acquisition module 10, and the three-dimensional face model of the tested face in its natural state is then generated by the face detection module 20 and the three-dimensional face reconstruction module 30, so that, in the subsequent expression model generation stage, the three-dimensional face model of the tested face in its natural state can be called for expression transfer.
Correspondingly, the three-dimensional facial animation control method further comprises the steps of:
acquiring a two-dimensional RGB image of the tested face in its natural state;
detecting and recognizing the two-dimensional RGB image of the tested face to obtain tested facial feature point data;
generating a three-dimensional face model of the tested face in its natural state based on the tested facial feature point data; and
storing the three-dimensional face model of the tested face in its natural state.
Alternatively, the three-dimensional facial animation control method further comprises the step of:
calling the pre-stored three-dimensional face model of the tested face in its natural state.
Further, the step of calculating the weight ratio data respectively occupied by the basic expression models based on the tested face in synthesizing the three-dimensional face model of the tested face is executed by an expression synthesis module 42. More specifically, the expression synthesis module 42 establishes an energy equation based on the basic expression models of the tested face, so as to calculate the weight ratio respectively occupied by each of the 72 expression models. Those skilled in the art will appreciate that the energy equation includes a basic cost function, wherein, once both sides of the cost function equation are established, the theoretical weight ratio values of the 72 expression models are calculated; however, owing to the deviation that exists between theory and practice, and to the characteristics of human vision, the synthesized expression transitions are often not smooth and natural enough. Therefore, in the present invention, besides the basic cost function, the energy equation also incorporates numerous constraint conditions, so that the synthesized expression looks natural and smooth.
The step of assigning the weight ratio data respectively to the group of expression models of corresponding number of the cartoon role, so as to synthesize the expression model of the cartoon role, is executed by a drive module 50. More specifically, the drive module 50 produces, according to the corresponding cartoon role, a group of cartoon role expression models of corresponding number; in turn, the tested facial expression data is assigned to the cartoon role expression models so as to synthesize the three-dimensional expression model of the cartoon role. In particular, in the preferred embodiment of the present invention, the drive module 50 produces 72 basic expression models of the cartoon role according to the corresponding cartoon role, and then assigns the calculated 72 expression weight ratio data to the 72 basic expression models of the cartoon role, so as to generate the synthesized expression of the cartoon role. It will be appreciated that, when the expression of the tested face changes, the expression of the cartoon role also changes accordingly; in this way, the animation control effect of the cartoon role is realized.
Correspondingly, the step of synthesizing the expression model of the cartoon role further comprises the steps of:
providing a group of basic expression models of the cartoon role; and
assigning the calculated expression weight ratio data to the basic expression models of the cartoon role, so as to generate the synthesized expression of the cartoon role.
According to another aspect of the present invention, the present invention further provides a three-dimensional facial animation control method, wherein the control method includes the following steps:
(a) providing tested facial expression data; and
(b) driving the expression of a cartoon role in real time in such a way that the tested facial expression data is made to correspond to the expression of the cartoon role.
Further, the step (a) further comprises the steps of:
(a.1) detecting the feature points of the face in a tested face image;
(a.2) establishing a three-dimensional model of the tested face according to the feature points of the face in the tested face image; and
(a.3) providing the tested facial expression data based on the three-dimensional model of the tested face.
It can thus be seen that the objects of the present invention can be fully and effectively accomplished. The embodiments are used to fully demonstrate and explain the functional and structural principles of the present invention, and the present invention is not limited by changes made on the basis of these embodiments. Therefore, the present invention includes all modifications covered within the scope and spirit of the appended claims.
Claims (20)
1. A three-dimensional facial animation control system, wherein the three-dimensional facial animation control system drives the expression of a cartoon role based on tested facial expression data, characterized by comprising:
an expression processing module, wherein the expression processing module provides the tested facial expression data; and
a drive module, wherein the drive module is communicatively connected to the expression processing module, wherein the drive module drives the expression of the cartoon role in real time in such a way that the tested facial expression data provided by the expression processing module is made to correspond to the expression of the cartoon role.
2. The control system according to claim 1, further comprising a face detection module and a three-dimensional face reconstruction module communicatively connected to the face detection module, the expression processing module being communicatively connected to the three-dimensional face reconstruction module, wherein the face detection module detects the feature points of the face in a tested face image, the three-dimensional face reconstruction module establishes a three-dimensional model of the tested face according to the feature points of the face in the tested face image, and wherein the expression processing module provides the tested facial expression data based on the three-dimensional model of the tested face.
3. The control system according to claim 2, further comprising an image acquisition module, wherein the image acquisition module is communicatively connected to the face detection module, wherein the face detection module detects the tested face image acquired by the image acquisition module.
4. The control system according to claim 3, wherein the image acquisition module is an RGB camera module, wherein the RGB camera module acquires the tested face image by photographing the face.
5. The control system according to claim 3, wherein the image acquisition module is communicatively connected to an RGB camera module, wherein the RGB camera module acquires the tested face image by photographing the face.
6. The control system according to claim 2, further comprising an FX module, wherein the FX module is communicatively connected to the face detection module, wherein, when the face detection module does not detect a face, the FX module presents a preset special effect.
7. The control system according to claim 2, wherein the face detection module detects 68 feature points of the face in the tested face image.
8. The control system according to claim 1, wherein the expression processing module further comprises an expression transmission module and an expression synthesis module communicatively connected to the expression transmission module, wherein the expression transmission module is used for generating a group of basic expression models of a specific three-dimensional model based on the tested face, and the expression synthesis module is used for calculating the weight ratios respectively occupied by the basic expression models, wherein the weight ratios respectively occupied by the basic expression models constitute the tested facial expression data.
9. The control system according to claim 8, wherein the specific three-dimensional model based on the tested face, from which the basic expression models are generated, is the three-dimensional face model of the tested face in its natural state.
10. A three-dimensional facial animation control method, characterized by comprising the steps of:
acquiring a two-dimensional RGB image of a tested face;
detecting and recognizing the two-dimensional RGB image of the tested face to obtain tested-face feature point data;
generating a three-dimensional face model of the tested face based on the tested-face feature point data;
transferring a set of basic expression models of a standard face model onto the three-dimensional face model of the tested face under its natural expression, so as to generate a set of basic expression models based on the three-dimensional face model of the tested face;
calculating the weight ratio data of each basic expression model based on the tested face when the basic expression models synthesize the three-dimensional face model of the tested face; and
assigning the weight ratio data respectively to a corresponding set of expression models of an animated character, so as to synthesize an expression model of the animated character.
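The transfer step in the method above (standard face model → tested face) can be sketched, under the simplifying assumption that both meshes share vertex topology and alignment, as a per-vertex delta copy; production pipelines typically use full deformation transfer instead:

```python
import numpy as np

def transfer_basic_expressions(std_neutral, std_basis, user_neutral):
    """Copy each standard expression's per-vertex displacement
    (expression minus standard neutral) onto the tested face's
    neutral model, yielding the tested face's basic expressions."""
    return user_neutral + (std_basis - std_neutral)   # (K, N, 3)

# Toy check: one "open mouth" delta moved onto a different neutral shape.
std_neutral = np.zeros((5, 3))
std_basis = np.stack([std_neutral + [0.0, 0.2, 0.0]])
user_neutral = np.ones((5, 3))
user_basis = transfer_basic_expressions(std_neutral, std_basis, user_neutral)
```

The transferred expression keeps the standard model's displacement while inheriting the tested face's neutral geometry, which is exactly what the claim's "generate a set of basic expression models based on the three-dimensional face model of the tested face" requires.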
11. The three-dimensional facial animation control method according to claim 10, wherein the step of detecting and recognizing the two-dimensional RGB image of the tested face to obtain the tested-face feature point data is performed based on a deep learning algorithm.
12. The three-dimensional facial animation control method according to claim 10, wherein the three-dimensional facial animation control method based on a single camera module further comprises the steps of:
acquiring a two-dimensional RGB image of the tested face in its natural state;
detecting and recognizing the two-dimensional RGB image of the tested face to obtain tested-face feature point data;
generating a three-dimensional face model of the tested face in its natural state based on the tested-face feature point data; and
storing the three-dimensional face model of the tested face in its natural state.
13. The three-dimensional facial animation control method according to claim 10, wherein the three-dimensional facial animation control method based on a single camera module further comprises the step of:
retrieving a pre-stored three-dimensional face model of the tested face in its natural state.
14. The three-dimensional facial animation control method according to claim 10, wherein the step of synthesizing the expression model of the animated character further comprises the steps of:
providing a set of basic expression models of the animated character; and
assigning the calculated expression weight ratio data to the basic expression models of the animated character, so as to generate the synthesized expression of the animated character.
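The assignment step in claim 14 amounts to evaluating the character's linear blendshape model with the weights solved from the tested face. A minimal sketch under that assumption (names are illustrative):

```python
import numpy as np

def synthesize_character_expression(char_neutral, char_basis, weights):
    """Evaluate the character's linear blendshape model:
    neutral + sum_i weights[i] * (char_basis[i] - neutral)."""
    deltas = char_basis - char_neutral                 # (K, N, 3)
    return char_neutral + np.tensordot(weights, deltas, axes=1)

# Toy check: half of expression 0 plus all of expression 1.
char_neutral = np.zeros((3, 3))
char_basis = np.stack([char_neutral + [1.0, 0.0, 0.0],
                       char_neutral + [0.0, 1.0, 0.0]])
synthesized = synthesize_character_expression(
    char_neutral, char_basis, np.array([0.5, 1.0]))
```

Because the character's basic expressions correspond one-to-one with the tested face's, the same weight vector drives both models, which is what lets the tested face's expression retarget onto the animated character in real time.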
15. A three-dimensional facial animation control method, characterized in that the control method comprises the steps of:
(a) providing tested facial expression data; and
(b) driving the expression of an animated character in real time by mapping the tested facial expression data onto the expression of the animated character.
16. The three-dimensional facial animation control method according to claim 15, wherein the step (a) further comprises the steps of:
(a.1) detecting the feature points of the face in a tested face image;
(a.2) establishing a three-dimensional model of the tested face according to the feature points of the face in the tested face image; and
(a.3) providing the tested facial expression data based on the three-dimensional model of the tested face.
17. The three-dimensional facial animation control method according to claim 16, further comprising, before the step (a.1), the step of: acquiring an image of the tested face in its natural state.
18. The three-dimensional facial animation control method according to claim 17, wherein in the above method, the tested face image is acquired by an RGB camera module by photographing the face.
19. The three-dimensional facial animation control method according to any one of claims 15 to 18, wherein the method further comprises the steps of:
generating a set of basic expression models based on a specific three-dimensional model of the tested face; and
calculating the weight ratio of each basic expression model, wherein the weight ratios of the basic expression models constitute the tested facial expression data.
20. The three-dimensional facial animation control method according to claim 19, wherein the specific three-dimensional model of the tested face on which the basic expression models are based is the three-dimensional face model of the tested face in its natural state.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201711292466.3A CN109903360A (en) | 2017-12-08 | 2017-12-08 | 3 D human face animation control system and its control method |
Publications (1)
Publication Number | Publication Date |
---|---|
CN109903360A true CN109903360A (en) | 2019-06-18 |
Family
ID=66940187
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201711292466.3A Pending CN109903360A (en) | 2017-12-08 | 2017-12-08 | 3 D human face animation control system and its control method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109903360A (en) |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102479388A (en) * | 2010-11-22 | 2012-05-30 | 北京盛开互动科技有限公司 | Expression interaction method based on face tracking and analysis |
CN103942822A (en) * | 2014-04-11 | 2014-07-23 | 浙江大学 | Facial feature point tracking and facial animation method based on single video vidicon |
US20150035825A1 (en) * | 2013-02-02 | 2015-02-05 | Zhejiang University | Method for real-time face animation based on single video camera |
CN104780339A (en) * | 2015-04-16 | 2015-07-15 | 美国掌赢信息科技有限公司 | Method and electronic equipment for loading expression effect animation in instant video |
CN106228119A (en) * | 2016-07-13 | 2016-12-14 | 天远三维(天津)科技有限公司 | A kind of expression catches and Automatic Generation of Computer Animation system and method |
CN106600667A (en) * | 2016-12-12 | 2017-04-26 | 南京大学 | Method for driving face animation with video based on convolution neural network |
CN106709975A (en) * | 2017-01-11 | 2017-05-24 | 山东财经大学 | Interactive three-dimensional human face expression animation editing method and system and extension method |
CN107368778A (en) * | 2017-06-02 | 2017-11-21 | 深圳奥比中光科技有限公司 | Method for catching, device and the storage device of human face expression |
Non-Patent Citations (5)
Title |
---|
He Qinzheng et al., "Research on Kinect-based facial expression capture and animation simulation system", Journal of Graphics, vol. 37, no. 03, 15 June 2016 (2016-06-15), pages 290-295 * |
Tang Jingjing, "ULSee face tracking technology arrives in LINE camera dynamic stickers", Computer & Network, vol. 43, no. 06, 26 March 2017 (2017-03-26), page 79 * |
Ma Lizhuang et al., "MATLAB Computer Vision and Machine Cognition", Beihang University Press, page 266 * |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111768479A (en) * | 2020-07-29 | 2020-10-13 | 腾讯科技(深圳)有限公司 | Image processing method, image processing apparatus, computer device, and storage medium |
CN116664731A (en) * | 2023-06-21 | 2023-08-29 | 华院计算技术(上海)股份有限公司 | Face animation generation method and device, computer readable storage medium and terminal |
CN116664731B (en) * | 2023-06-21 | 2024-03-29 | 华院计算技术(上海)股份有限公司 | Face animation generation method and device, computer readable storage medium and terminal |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN106600667B | | Video-driven face animation method based on convolutional neural network |
Ersotelos et al. | | Building highly realistic facial modeling and animation: a survey |
CN101916454B | | Method for reconstructing high-resolution human face based on grid deformation and continuous optimization |
Vlasic et al. | | Articulated mesh animation from multi-view silhouettes |
US8624901B2 | | Apparatus and method for generating facial animation |
CN104915978B | | Realistic animation generation method based on body-sensing camera Kinect |
CN113421328B | | Three-dimensional human body virtual reconstruction method and device |
CN104658038A | | Method and system for producing three-dimensional digital contents based on motion capture |
CN101520902A | | System and method for low cost motion capture and demonstration |
WO2021063271A1 | | Human body model reconstruction method and reconstruction system, and storage medium |
CN110298916A | | A kind of 3 D human body method for reconstructing based on synthesis depth data |
Ping et al. | | Computer facial animation: A review |
Zhu et al. | | Facescape: 3d facial dataset and benchmark for single-view 3d face reconstruction |
CN112530005A | | Three-dimensional model linear structure recognition and automatic restoration method |
CN109903360A | | 3 D human face animation control system and its control method |
CN102693549A | | Three-dimensional visualization method of virtual crowd motion |
CN116612256B | | NeRF-based real-time remote three-dimensional live-action model browsing method |
Zhang et al. | | Anatomy-based face reconstruction for animation using multi-layer deformation |
Straka et al. | | Rapid skin: estimating the 3D human pose and shape in real-time |
CN106960467A | | A kind of face reconstructing method and system with bone information |
US20220076409A1 | | Systems and Methods for Building a Skin-to-Muscle Transformation in Computer Animation |
Lim et al. | | Rapid 3D avatar creation system using a single depth camera |
Mavzuna | | MODELING OF TEXT RECOGNITION IN IMAGES |
CN115797569B | | Dynamic generation method and system for high-precision degree twin facial expression action subdivision |
Ma et al. | | Value evaluation of human motion simulation based on speech recognition control |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| RJ01 | Rejection of invention patent application after publication | Application publication date: 20190618 |