CN113066155A - 3D expression processing method and device


Info

Publication number
CN113066155A
Authority
CN
China
Prior art keywords
expression
animation
animation frame
data
motion
Prior art date
Legal status
Pending
Application number
CN202110310236.5A
Other languages
Chinese (zh)
Inventor
陆晓飞
Current Assignee
Fantawild Animation Inc
Original Assignee
Fantawild Animation Inc
Priority date
Filing date
Publication date
Application filed by Fantawild Animation Inc
Priority to CN202110310236.5A
Publication of CN113066155A
Status: Pending (current)

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 13/00: Animation
    • G06T 13/20: 3D [Three Dimensional] animation
    • G06T 13/40: 3D [Three Dimensional] animation of characters, e.g. humans, animals or virtual beings
    • G06T 5/00: Image enhancement or restoration
    • G06T 5/77: Retouching; Inpainting; Scratch removal
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/10: Image acquisition modality
    • G06T 2207/10004: Still image; Photographic image
    • G06T 2207/10012: Stereo images

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention provides a 3D expression processing method and device. The method comprises: obtaining an animation frame sequence of a human face; extracting motion feature point groups of the face from the animation frame sequence and performing data standardization on each extracted group to obtain expression capture data; segmenting the expression capture data by rhythm, using the peaks of the animation frames as dividing lines, and extracting an animation frame segment for each expression; separating each animation frame of each expression into data channels according to the motion characteristics of facial expressions and the structure of facial motion; and reconstructing the animation frames missing from the animation frame segments of the various expressions. In this scheme, each animation frame of each expression is split from a single attribute into a multi-attribute channel that drives the final effect, and the data is then migration-optimized along its motion curve. This not only achieves a lower reconstruction error but also reduces the time spent on repair and optimization, improving repair efficiency and quality.

Description

3D expression processing method and device
Technical Field
The invention relates to the technical field of facial expression production, and in particular to a 3D expression processing method and device.
Background
With the development of computer technology and the deepening of facial expression research, facial expression detection and analysis are widely applied in fields such as visual communication, robotics, film and television animation, and games. In game animation, as audiences pursue ever better visual effects, facial expression capture plays an increasingly important role in creating the motion of virtual characters. Applying facial expression capture is key to achieving lifelike visuals in the film, animation, and game industries, and how to generate more vivid and natural three-dimensional expression animation from collected expression capture data is a hot topic of current research.
1. Facial expression making principle
Although the film and animation field today has many methods for producing facial expressions, using different software, equipment, and techniques, the principles behind them all derive from research on face recognition and detection in computer graphics. The main theoretical bases of this research are the following:
(1) Facial expression animation based on model units
The Facial Action Coding System (FACS) establishes the relationship between facial expressions and local feature changes. According to the types and motion characteristics of the facial muscles, FACS defines 44 basic action units (AUs) that cause facial movement, and defines six basic expressions: anger, disgust, fear, joy, sadness, and surprise.
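For illustration, such a coding can be held in simple data. The AU groupings below are commonly cited FACS prototypes for the six basic expressions, not combinations disclosed in this patent, so treat them as an assumption of this sketch:

```python
# Commonly cited FACS prototypes mapping the six basic expressions
# to action units (AUs). Illustrative only; the specific AU
# combinations are not part of this patent's disclosure.
BASIC_EXPRESSION_AUS = {
    "anger":    {4, 5, 7, 23},
    "disgust":  {9, 15, 16},
    "fear":     {1, 2, 4, 5, 7, 20, 26},
    "joy":      {6, 12},
    "sadness":  {1, 4, 15},
    "surprise": {1, 2, 5, 26},
}

def expressions_using(au):
    """Return the basic expressions whose prototype involves the given AU."""
    return [name for name, aus in BASIC_EXPRESSION_AUS.items() if au in aus]

print(expressions_using(4))  # ['anger', 'fear', 'sadness']
```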
(2) Facial expression animation based on image texture
This method applies topology processing to the character's three-dimensional model and projects the captured dynamic images onto a two-dimensional UV texture, achieving frame-by-frame, pixel-level matching between the UV texture and the virtual model. Motion information is transferred and matched through the frame-to-frame pixel changes of the image feature points, thereby animating the facial expression.
(3) Facial expression animation based on data driving
The data-driven approach to facial expression is intuitive: ideal facial animation is produced mainly from existing facial expression and motion data. Marker points are pasted or drawn on the face, and their motion information is acquired by external cameras. After the motion data is optimized, it is applied to the virtual character, so that the character is driven by the motion data.
2. Existing facial expression making scheme
Blend shapes (fusion deformation): in three-dimensional software commonly used in the film and animation industry, such as Maya and 3ds Max, producing facial expressions with blend-shape tools is currently a common and convenient approach. Facial expression animation is achieved quickly by transitioning between and blending multiple poses.
Facial skeleton binding: currently a common way to realize character motion. For facial expression production, skeleton binding is one of the methods that can be used on its own, and it is also an indispensable means of adjusting and repairing expressions for the other production methods. First the facial skeleton is created, then skinning is performed so that the skeleton controls the head model; the range and precision of the skeleton's control over the skin are adjusted through the weights of the skeleton on the skin's component points.
Expression capture data: Marker-based facial expression capture relies on optical motion-capture equipment to locate and track reflective Marker points on the face; the motion information of the Marker points is stored and transferred to the virtual character.
3. Disadvantages of existing facial expression making schemes
Although expression-capture hardware and techniques are now fairly mature, can produce good facial expressions, and improve production efficiency, they ignore the expression information between Marker points and transmit data over only a single channel, so finer facial expression effects cannot be achieved.
Disclosure of Invention
The technical problem to be solved by the invention is to provide a 3D expression processing method and device for achieving fine facial expression effects.
To solve this technical problem, the invention adopts the following technical scheme. A 3D expression processing method comprises the following steps:
S10, obtaining an animation frame sequence of a human face;
S20, extracting motion feature point groups of the human face from the animation frame sequence, and performing data standardization on each extracted motion feature point group to obtain expression capture data;
S30, segmenting the expression capture data by rhythm, using the peaks of the animation frames as dividing lines, and extracting an animation frame segment for each expression;
S40, separating each animation frame of each expression into data channels according to the motion characteristics of facial expressions and the structure of facial motion;
and S50, reconstructing the animation frames missing from the animation frame segments of the various expressions.
Further, step S50 specifically comprises:
for an animation frame missing from an expression's animation frame segment, first obtaining the sparse representation coefficients of the frame's non-missing part over a dictionary composed of complete frames, and then reconstructing the missing information in the frame using the sparse representation coefficients and the dictionary.
Further, step S20 specifically comprises:
converting the animation frame sequence from numerical values into a curve; from the crossing points of the curve, obtaining the animation values nearest to and farthest from each crossing point; dividing the maximum distance by the minimum distance and then by the crossing-point value; and finally taking the mean of the crossing points and subtracting it from the crossing-point value to obtain the expression capture data.
Further, step S30 specifically comprises:
extracting an animation frame sequence of preset length that contains multiple expression types, and dividing the animation frames of each expression out of the sequence using the peaks of the animation frames as dividing lines.
Further, step S40 specifically comprises:
splitting each animation frame into several feature attributes according to the motion characteristics of facial expressions and the structure of facial motion, forming a multi-attribute channel for each expression.
Another technical solution adopted by the invention is a 3D expression processing apparatus, comprising:
an expression data acquisition module, configured to obtain an animation frame sequence of a human face;
a motion feature extraction module, configured to extract motion feature point groups of the human face from the animation frame sequence and perform data standardization on each extracted motion feature point group to obtain expression capture data;
a rhythm segmentation module, configured to segment the expression capture data by rhythm, using the peaks of the animation frames as dividing lines, and to extract an animation frame segment for each expression;
a data channel separation module, configured to separate each animation frame of each expression into data channels according to the motion characteristics of facial expressions and the structure of facial motion;
and an animation frame reconstruction module, configured to reconstruct the animation frames missing from the animation frame segments of the various expressions.
Further, the animation frame reconstruction module is specifically configured to:
for an animation frame missing from an expression's animation frame segment, first obtain the sparse representation coefficients of the frame's non-missing part over a dictionary composed of complete frames, and then reconstruct the missing information in the frame using the sparse representation coefficients and the dictionary.
Further, the motion feature extraction module is specifically configured to:
convert the animation frame sequence from numerical values into a curve; from the crossing points of the curve, obtain the animation values nearest to and farthest from each crossing point; divide the maximum distance by the minimum distance and then by the crossing-point value; and finally take the mean of the crossing points and subtract it from the crossing-point value to obtain the expression capture data.
Further, the rhythm segmentation module is specifically configured to:
extract an animation frame sequence of preset length that contains multiple expression types, and divide the animation frames of each expression out of the sequence using the peaks of the animation frames as dividing lines.
Further, the data channel separation module is specifically configured to:
split each animation frame into several feature attributes according to the motion characteristics of facial expressions and the structure of facial motion, forming a multi-attribute channel for each expression.
The invention has the following beneficial effects: motion feature points are extracted from an animation frame sequence of a human face; with the extracted features as key points, the expression capture data is segmented by rhythm; each animation frame of each expression is split from a single attribute into a multi-attribute channel that drives the final effect; and the data is finally migration-optimized along its motion curve. This not only achieves a lower reconstruction error but also reduces the time spent on repair and optimization, improving repair efficiency and quality.
Drawings
The following detailed description of the invention refers to the accompanying drawings.
FIG. 1 is a flow chart of a 3D expression processing method according to an embodiment of the present invention;
FIG. 2 is a block diagram of a 3D expression processing apparatus according to an embodiment of the present invention;
FIG. 3 is a schematic block diagram of a computer apparatus of an embodiment of the present invention;
fig. 4 is a point distribution diagram of expression data according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It will be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It is also to be understood that the terminology used in the description of the invention herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used in the specification of the present invention and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
It should be further understood that the term "and/or" as used in this specification and the appended claims refers to and includes any and all possible combinations of one or more of the associated listed items.
As shown in fig. 1, a first embodiment of the present invention is a 3D expression processing method comprising the following steps.
S10, obtaining an animation frame sequence of a human face.
In this step, expression data is collected by hardware expression-capture devices such as MOVA, Dynamixyz, Faceware, or the Animoji application on an iPhone.
S20, extracting motion feature point groups of the human face from the animation frame sequence, and performing data standardization on each extracted motion feature point group to obtain expression capture data.
Step S20 specifically comprises converting the animation frame sequence from numerical values into a curve; from the crossing points of the curve, obtaining the animation values nearest to and farthest from each crossing point; dividing the maximum distance by the minimum distance and then by the crossing-point value; and finally computing the mean of the crossing points with a mean function and subtracting it from the crossing-point value to obtain the expression capture data. A code sketch of this standardization follows.
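A minimal sketch of such per-channel standardization, assuming the farthest and nearest values supply the scaling distance and a mean function supplies the value to subtract; the function and variable names are hypothetical, since the patent describes the arithmetic only in prose:

```python
import numpy as np

def standardize_channel(values):
    """Standardize one motion-feature channel of an animation frame
    sequence (step S20 sketch): remove the mean and scale by the
    spread between the farthest and nearest values."""
    values = np.asarray(values, dtype=float)
    center = values.mean()                # mean obtained with a mean function
    spread = values.max() - values.min()  # farthest minus nearest value
    if spread == 0:
        return np.zeros_like(values)      # a flat channel carries no motion
    return (values - center) / spread

# Hypothetical usage on one marker coordinate track:
print(standardize_channel([0.1, 0.4, 0.9, 0.5, 0.2]))
```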
S30, segmenting the expression capture data by rhythm, using the peaks of the animation frames as dividing lines, and extracting an animation frame segment for each expression.
Step S30 specifically comprises extracting an animation frame sequence of preset length that contains multiple expression types, and dividing the animation frames of each expression out of the sequence using the peaks of the animation frames as dividing lines.
In this step, rational segmentation of the expression capture data is the basis of the motion synthesis process. To improve the quality of the captured motion data, a long data sequence is usually captured at a time, containing multiple expression types such as anger, disgust, fear, joy, sadness, and surprise. A character's emotion often shifts while speaking, for example from sadness to joy, and the peaks of the animation frames can serve as the basis for separating the rhythms. Rhythm segmentation extracts motion segments with specific semantics from the long motion sequence, which is not only better for storage but also improves data reuse: semantically specific data segments are more flexible to use than a long, complex motion sequence. A peak-based segmentation sketch follows.
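A sketch of peak-based rhythm segmentation under two assumptions not fixed by the patent: a per-frame intensity curve stands in for the animation-frame values, and scipy's find_peaks locates the dividing lines:

```python
import numpy as np
from scipy.signal import find_peaks

def split_by_peaks(intensity, min_distance=10):
    """Split a long capture sequence into per-expression segments
    (step S30 sketch), using animation-frame peaks as dividing lines."""
    peaks, _ = find_peaks(intensity, distance=min_distance)
    bounds = [0, *peaks.tolist(), len(intensity)]
    return [(bounds[i], bounds[i + 1]) for i in range(len(bounds) - 1)]

# Hypothetical usage: one expression rising and falling, then another
curve = np.concatenate([np.hanning(50), np.hanning(80)])
print(split_by_peaks(curve, min_distance=20))  # segments bounded at each apex
```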
S40, separating each animation frame of each expression into data channels according to the motion characteristics of facial expressions and the structure of facial motion.
Step S40 specifically comprises splitting each animation frame into several feature attributes according to the motion characteristics of facial expressions and the structure of facial motion, forming a multi-attribute channel for each expression.
In this step, as shown in fig. 4, a point P can be linearly represented by its surrounding points P_j (j = 1, 2, ..., k). For each motion sequence, the expression data of every frame can be regarded as a point in a high-dimensional space, and all frames of a simple motion sequence form a cluster in that space, scattered around the center of the sequence. Because expression motion is continuous and poses are similar, a frame pose in the motion data can be linearly represented by poses within the same motion sequence, or by frames of a motion sequence of a similar type. The data can therefore be split and optimized from a single attribute into a multi-attribute channel that drives the final effect. For example, the expression data of a smile is split into six groups: left and right upper lip, left and right lower lip, and left and right cheek. A sketch of this split follows.
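A sketch of the single-attribute-to-multi-attribute split for the smile example; the assignment of marker columns to the six facial regions is entirely hypothetical:

```python
import numpy as np

# Hypothetical layout: which marker columns belong to which region.
FACE_REGIONS = {
    "upper_lip_left":  [0, 1],  "upper_lip_right": [2, 3],
    "lower_lip_left":  [4, 5],  "lower_lip_right": [6, 7],
    "cheek_left":      [8, 9],  "cheek_right":     [10, 11],
}

def split_channels(frames, regions=FACE_REGIONS):
    """Split each animation frame (one row of marker data) into
    per-region attribute channels (step S40 sketch)."""
    frames = np.asarray(frames, dtype=float)
    return {name: frames[:, cols] for name, cols in regions.items()}

# Hypothetical usage: 100 frames, 12 marker coordinates per frame
channels = split_channels(np.random.rand(100, 12))
print({name: ch.shape for name, ch in channels.items()})
```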
S50, reconstructing the animation frames missing from the animation frame segments of the various expressions.
Step S50 specifically comprises, for an animation frame missing from an expression's animation frame segment, first obtaining the sparse representation coefficients of the frame's non-missing part over a dictionary composed of complete frames, and then reconstructing the missing information in the frame using the sparse representation coefficients and the dictionary.
In this step, the mean of the animation values before and after the lost frame is obtained by the motion feature extraction method described above, compared against the values in the animation sequence closest to that mean, and the intermediate value is then taken. A sparse-reconstruction sketch follows.
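A sketch of the sparse reconstruction. The patent specifies sparse representation coefficients over a dictionary of complete frames but names no solver; orthogonal matching pursuit from scikit-learn is this sketch's own choice, and all function and variable names are hypothetical:

```python
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

def reconstruct_frame(partial, observed_mask, dictionary, n_coefs=4):
    """Fill in a partially missing animation frame (step S50 sketch):
    fit sparse representation coefficients of the observed entries
    over a dictionary of complete frames, then rebuild the missing
    entries from those coefficients and the dictionary."""
    D = np.asarray(dictionary, dtype=float)       # (frame_dim, n_complete_frames)
    omp = OrthogonalMatchingPursuit(n_nonzero_coefs=n_coefs)
    omp.fit(D[observed_mask], partial[observed_mask])
    full = D @ omp.coef_ + omp.intercept_         # rebuild the whole frame
    full[observed_mask] = partial[observed_mask]  # keep observed data untouched
    return full

# Hypothetical usage: 12-value frames, dictionary of 20 complete frames
rng = np.random.default_rng(0)
D = rng.standard_normal((12, 20))
frame = D @ (rng.standard_normal(20) * (rng.random(20) < 0.2))
mask = np.ones(12, dtype=bool)
mask[3:6] = False                                 # values 3..5 are missing
print(reconstruct_frame(frame, mask, D))
```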
This embodiment provides a new simulation framework based on sparse representation theory, in which a data migration model is extracted from the original data to encode spatial and temporal deformation details. Motion feature points are extracted from the animation frame sequence of the human face; with the extracted features as key points, the expression capture data is segmented by rhythm; each animation frame of each expression is split from a single attribute into a multi-attribute channel that drives the final effect; and the data is finally migration-optimized along its motion curve.
As shown in fig. 2, another embodiment of the present invention is a 3D expression processing apparatus, comprising:
an expression data acquisition module 10, configured to obtain an animation frame sequence of a human face;
a motion feature extraction module 20, configured to extract motion feature point groups of the human face from the animation frame sequence and perform data standardization on each extracted motion feature point group to obtain expression capture data;
a rhythm segmentation module 30, configured to segment the expression capture data by rhythm, using the peaks of the animation frames as dividing lines, and to extract an animation frame segment for each expression;
a data channel separation module 40, configured to separate each animation frame of each expression into data channels according to the motion characteristics of facial expressions and the structure of facial motion;
and an animation frame reconstruction module 50, configured to reconstruct the animation frames missing from the animation frame segments of the various expressions.
The animation frame reconstruction module 50 is specifically configured to: for an animation frame missing from an expression's animation frame segment, first obtain the sparse representation coefficients of the frame's non-missing part over a dictionary composed of complete frames, and then reconstruct the missing information in the frame using the sparse representation coefficients and the dictionary.
The motion feature extraction module 20 is specifically configured to: convert the animation frame sequence from numerical values into a curve; from the crossing points of the curve, obtain the animation values nearest to and farthest from each crossing point; divide the maximum distance by the minimum distance and then by the crossing-point value; and finally take the mean of the crossing points and subtract it from the crossing-point value to obtain the expression capture data.
The rhythm segmentation module 30 is specifically configured to: extract an animation frame sequence of preset length that contains multiple expression types, and divide the animation frames of each expression out of the sequence using the peaks of the animation frames as dividing lines.
The data channel separation module 40 is specifically configured to: split each animation frame into several feature attributes according to the motion characteristics of facial expressions and the structure of facial motion, forming a multi-attribute channel for each expression.
It should be noted that, as can be clearly understood by those skilled in the art, the specific implementation process of the 3D expression processing apparatus may refer to the corresponding description in the foregoing method embodiment, and for convenience and brevity of description, no further description is provided herein.
The 3D expression processing apparatus may be implemented in the form of a computer program that can be run on a computer device as shown in fig. 3.
Referring to fig. 3, fig. 3 is a schematic block diagram of a computer device according to an embodiment of the present application. The computer device 500 may be a terminal or a server, where the terminal may be an electronic device with a communication function, such as a smart phone, a tablet computer, a notebook computer, a desktop computer, a personal digital assistant, and a wearable device. The server may be an independent server or a server cluster composed of a plurality of servers.
Referring to fig. 3, the computer device 500 includes a processor 502, memory, and a network interface 505 connected by a system bus 501, where the memory may include a non-volatile storage medium 503 and an internal memory 504.
The non-volatile storage medium 503 may store an operating system 5031 and a computer program 5032. The computer program 5032 comprises program instructions that, when executed, cause the processor 502 to perform a 3D expression processing method.
The processor 502 is used to provide computing and control capabilities to support the operation of the overall computer device 500.
The internal memory 504 provides an environment for running the computer program 5032 in the non-volatile storage medium 503, and when the computer program 5032 is executed by the processor 502, the processor 502 may be caused to execute a 3D expression processing method.
The network interface 505 is used for network communication with other devices. Those skilled in the art will appreciate that the configuration shown in fig. 3 is a block diagram of only part of the configuration relevant to the present application and does not limit the computer device 500 to which the present application is applied; a particular computer device 500 may include more or fewer components than shown, combine certain components, or arrange the components differently.
The processor 502 is configured to run the computer program 5032 stored in the memory to implement the 3D expression processing method.
It should be understood that in the embodiments of the present application, the processor 502 may be a central processing unit (CPU), another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component. A general-purpose processor may be a microprocessor, or any conventional processor.
It will be understood by those skilled in the art that all or part of the flow of the method implementing the above embodiments may be implemented by a computer program instructing associated hardware. The computer program includes program instructions, and the computer program may be stored in a storage medium, which is a computer-readable storage medium. The program instructions are executed by at least one processor in the computer system to implement the flow steps of the embodiments of the method described above.
Accordingly, the present invention also provides a storage medium. The storage medium may be a computer-readable storage medium. The storage medium stores a computer program, wherein the computer program comprises program instructions. The program instructions, when executed by the processor, cause the processor to perform the 3D expression processing method described above.
The storage medium may be a USB flash drive, a removable hard disk, a read-only memory (ROM), a magnetic disk, an optical disk, or any other computer-readable medium that can store program code.
Those of ordinary skill in the art will appreciate that the units and algorithm steps of the examples described in connection with the embodiments disclosed herein can be implemented in electronic hardware, computer software, or a combination of the two; the components and steps of the examples have been described above in terms of function to clearly illustrate the interchangeability of hardware and software. Whether such functionality is implemented as hardware or software depends on the particular application and the design constraints of the implementation. Skilled artisans may implement the described functionality in different ways for each particular application, but such implementation decisions should not be interpreted as departing from the scope of the present invention.
In the embodiments provided in the present invention, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative. For example, the division of each unit is only one logic function division, and there may be another division manner in actual implementation. For example, various elements or components may be combined or may be integrated into another system, or some features may be omitted, or not implemented.
The steps in the method of the embodiment of the invention can be sequentially adjusted, combined and deleted according to actual needs. The units in the device of the embodiment of the invention can be merged, divided and deleted according to actual needs. In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
The integrated unit, if implemented as a software functional unit and sold or used as a stand-alone product, may be stored in a storage medium. Based on this understanding, the part of the technical solution of the present invention that in essence contributes over the prior art, or all or part of the technical solution, may be embodied in the form of a software product stored in a storage medium, including instructions for causing a computer device (which may be a personal computer, a terminal, or a network device) to execute all or part of the steps of the methods of the embodiments of the present invention.
While the invention has been described with reference to specific embodiments, the invention is not limited thereto, and various equivalent modifications and substitutions can be easily made by those skilled in the art within the technical scope of the invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (10)

1. A 3D expression processing method, characterized by comprising the following steps:
S10, obtaining an animation frame sequence of a human face;
S20, extracting motion feature point groups of the human face from the animation frame sequence, and performing data standardization on each extracted motion feature point group to obtain expression capture data;
S30, segmenting the expression capture data by rhythm, using the peaks of the animation frames as dividing lines, and extracting an animation frame segment for each expression;
S40, separating each animation frame of each expression into data channels according to the motion characteristics of facial expressions and the structure of facial motion;
and S50, reconstructing the animation frames missing from the animation frame segments of the various expressions.
2. The 3D expression processing method according to claim 1, characterized in that step S50 specifically comprises:
for an animation frame missing from an expression's animation frame segment, first obtaining the sparse representation coefficients of the frame's non-missing part over a dictionary composed of complete frames, and then reconstructing the missing information in the frame using the sparse representation coefficients and the dictionary.
3. The 3D expression processing method according to claim 1, characterized in that step S20 specifically comprises:
converting the animation frame sequence from numerical values into a curve; from the crossing points of the curve, obtaining the animation values nearest to and farthest from each crossing point; dividing the maximum distance by the minimum distance and then by the crossing-point value; and finally taking the mean of the crossing points and subtracting it from the crossing-point value to obtain the expression capture data.
4. The 3D expression processing method according to claim 1, characterized in that step S30 specifically comprises:
extracting an animation frame sequence of preset length that contains multiple expression types, and dividing the animation frames of each expression out of the sequence using the peaks of the animation frames as dividing lines.
5. The 3D expression processing method according to claim 1, characterized in that step S40 specifically comprises:
splitting each animation frame into several feature attributes according to the motion characteristics of facial expressions and the structure of facial motion, forming a multi-attribute channel for each expression.
6. A 3D expression processing apparatus, characterized by comprising:
an expression data acquisition module, configured to obtain an animation frame sequence of a human face;
a motion feature extraction module, configured to extract motion feature point groups of the human face from the animation frame sequence and perform data standardization on each extracted motion feature point group to obtain expression capture data;
a rhythm segmentation module, configured to segment the expression capture data by rhythm, using the peaks of the animation frames as dividing lines, and to extract an animation frame segment for each expression;
a data channel separation module, configured to separate each animation frame of each expression into data channels according to the motion characteristics of facial expressions and the structure of facial motion;
and an animation frame reconstruction module, configured to reconstruct the animation frames missing from the animation frame segments of the various expressions.
7. The 3D expression processing apparatus according to claim 6, characterized in that the animation frame reconstruction module is specifically configured to:
for an animation frame missing from an expression's animation frame segment, first obtain the sparse representation coefficients of the frame's non-missing part over a dictionary composed of complete frames, and then reconstruct the missing information in the frame using the sparse representation coefficients and the dictionary.
8. The 3D expression processing apparatus according to claim 6, characterized in that the motion feature extraction module is specifically configured to:
convert the animation frame sequence from numerical values into a curve; from the crossing points of the curve, obtain the animation values nearest to and farthest from each crossing point; divide the maximum distance by the minimum distance and then by the crossing-point value; and finally take the mean of the crossing points and subtract it from the crossing-point value to obtain the expression capture data.
9. The 3D expression processing apparatus according to claim 6, characterized in that the rhythm segmentation module is specifically configured to:
extract an animation frame sequence of preset length that contains multiple expression types, and divide the animation frames of each expression out of the sequence using the peaks of the animation frames as dividing lines.
10. The 3D expression processing apparatus according to claim 6, characterized in that the data channel separation module is specifically configured to:
split each animation frame into several feature attributes according to the motion characteristics of facial expressions and the structure of facial motion, forming a multi-attribute channel for each expression.
CN202110310236.5A (filed 2021-03-23, priority date 2021-03-23): 3D expression processing method and device. Status: Pending.

Priority Applications (1)

Application Number: CN202110310236.5A
Priority Date / Filing Date: 2021-03-23
Title: 3D expression processing method and device

Publications (1)

Publication Number Publication Date
CN113066155A 2021-07-02

Family

ID=76561557

Family Applications (1)

Application Number: CN202110310236.5A
Priority Date / Filing Date: 2021-03-23
Title: 3D expression processing method and device (Pending)

Country Status (1)

Country Link
CN: CN113066155A

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2001034776A (en) * 1999-07-22 2001-02-09 Fujitsu Ltd Animation editing system and storage medium where animation editing program is recorded
CN1920886A (en) * 2006-09-14 2007-02-28 浙江大学 Video flow based three-dimensional dynamic human face expression model construction method
US20160275341A1 (en) * 2015-03-18 2016-09-22 Adobe Systems Incorporated Facial Expression Capture for Character Animation
WO2017094527A1 (en) * 2015-12-04 2017-06-08 日本電産株式会社 Moving image generating system and moving image display system
CN108876879A (en) * 2017-05-12 2018-11-23 腾讯科技(深圳)有限公司 Method, apparatus, computer equipment and the storage medium that human face animation is realized
CN111460945A (en) * 2020-03-25 2020-07-28 亿匀智行(深圳)科技有限公司 Algorithm for acquiring 3D expression in RGB video based on artificial intelligence
US20200302668A1 (en) * 2018-02-09 2020-09-24 Tencent Technology (Shenzhen) Company Limited Expression animation data processing method, computer device, and storage medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
张剑 (Zhang Jian), "融合SFM和动态纹理映射的视频流三维表情重建" (Three-dimensional expression reconstruction from video streams fusing SFM and dynamic texture mapping), 计算机辅助设计与图形学学报 (Journal of Computer-Aided Design & Computer Graphics), no. 06, 15 June 2010, pages 45-54 *


Legal Events

Code Title
PB01 Publication
SE01 Entry into force of request for substantive examination