CN109829965B - Action processing method and device of face model, storage medium and electronic equipment - Google Patents

Action processing method and device of face model, storage medium and electronic equipment

Info

Publication number
CN109829965B
CN109829965B (application CN201910145480.3A)
Authority
CN
China
Prior art keywords
face
model
expression
characteristic information
target object
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910145480.3A
Other languages
Chinese (zh)
Other versions
CN109829965A (en)
Inventor
王宇鹭
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd filed Critical Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN201910145480.3A priority Critical patent/CN109829965B/en
Publication of CN109829965A publication Critical patent/CN109829965A/en
Application granted granted Critical
Publication of CN109829965B publication Critical patent/CN109829965B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Processing Or Creating Images (AREA)

Abstract

The embodiment of the application discloses an action processing method and device for a face model, a storage medium and an electronic device. The method comprises the following steps: acquiring facial expression characteristic information of a target object; determining the expression action of each local model in the face model of the target object according to the facial expression characteristic information; and controlling the corresponding local model according to the expression action of each local model in the face model so as to form the expression action of the face model. By adopting the technical scheme, the embodiment of the application collects the facial expression characteristic information of the target object to control the expression action of each local model in the face model, forms the same expression action as the target object, enriches the expressions of the face model, and improves the personalization and distinctiveness of the face model's expressions.

Description

Action processing method and device of face model, storage medium and electronic equipment
Technical Field
The embodiment of the application relates to the technical field of electronic equipment, in particular to a method and a device for processing actions of a face model, a storage medium and electronic equipment.
Background
Three-dimensional modeling is one of the most valuable applications in the field of computer graphics, and three-dimensional models generated by three-dimensional modeling are also widely used in various fields.
An existing three-dimensional model is generally static, or only performs fixed, single, simple actions, so the user does not feel immersed in the scene. Meanwhile, an existing face model adopts a fixed expression mode when acting, so that every face looks the same.
Disclosure of Invention
The embodiment of the application provides an action processing method and device for a face model, a storage medium and an electronic device, which enrich the expression actions of the face model.
In a first aspect, an embodiment of the present application provides a method for processing actions of a face model, including:
acquiring facial expression characteristic information of a target object;
determining the expression actions of each local model in the face model of the target object according to the facial expression characteristic information;
and controlling the corresponding local model according to the expression actions of each local model in the face model so as to form the expression actions of the face model.
In a second aspect, an embodiment of the present application provides an action processing apparatus of a face model, including:
the expression feature information acquisition module is used for acquiring facial expression feature information of the target object;
the expression action determining module is used for determining the expression actions of each local model in the face model of the target object according to the facial expression characteristic information;
And the expression control module is used for controlling the corresponding local model according to the expression actions of each local model in the face model so as to form the expression actions of the face model.
In a third aspect, embodiments of the present application provide a computer-readable storage medium having stored thereon a computer program that, when executed by a processor, implements a method for processing actions of a face model according to embodiments of the present application.
In a fourth aspect, an embodiment of the present application provides an electronic device, including a memory, a processor, and a computer program stored on the memory and capable of running on the processor, where the processor implements a method for processing actions of a face model according to an embodiment of the present application when the processor executes the computer program.
According to the action processing method of the face model provided by the embodiment of the application, facial expression characteristic information of a target object is obtained; the expression action of each local model in the face model of the target object is determined according to the facial expression characteristic information; and the corresponding local model is controlled according to the expression action of each local model in the face model so as to form the expression action of the face model. By adopting the technical scheme, the embodiment of the application collects the facial expression characteristic information of the target object to control the expression action of each local model in the face model, forms the same expression action as the target object, enriches the expressions of the face model, and improves the personalization and distinctiveness of the face model's expressions.
Drawings
Fig. 1 is a flow chart of a motion processing method of a face model according to an embodiment of the present application;
fig. 2 is a flow chart of another motion processing method of a face model according to an embodiment of the present application;
fig. 3 is a flow chart of another motion processing method of a face model according to an embodiment of the present application;
fig. 4 is a flow chart of another motion processing method of a face model according to an embodiment of the present application;
fig. 5 is a schematic structural diagram of an action processing device of a face model according to an embodiment of the present application;
fig. 6 is a schematic structural diagram of an electronic device according to an embodiment of the present application;
fig. 7 is a schematic structural diagram of another electronic device according to an embodiment of the present application.
Detailed Description
The technical solution of the present application is further described below by means of specific embodiments in conjunction with the accompanying drawings. It is to be understood that the specific embodiments described herein are merely illustrative of the application and not limiting thereof. It should be further noted that, for convenience of description, only some, but not all of the structures related to the present application are shown in the drawings.
Before discussing exemplary embodiments in more detail, it should be mentioned that some exemplary embodiments are described as processes or methods depicted as flowcharts. Although a flowchart depicts steps as a sequential process, many of the steps may be implemented in parallel, concurrently, or with other steps. Furthermore, the order of the steps may be rearranged. The process may be terminated when its operations are completed, but may have additional steps not included in the figures. The processes may correspond to methods, functions, procedures, subroutines, and the like.
Fig. 1 is a flow chart of a method for processing actions of a face model according to an embodiment of the present application, where the method may be performed by an action processing device of the face model, where the device may be implemented by software and/or hardware, and may generally be integrated in an electronic device. As shown in fig. 1, the method includes:
step 101, facial expression characteristic information of a target object is obtained.
And 102, determining the expression actions of each local model in the face model of the target object according to the facial expression characteristic information.
And 103, controlling the corresponding local model according to the expression actions of each local model in the face model so as to form the expression actions of the face model.
For example, the electronic device in the embodiment of the application may include smart devices such as a mobile phone, a tablet computer, and a computer.
The target object is the expression reference object of the model. For example, facial expression characteristic information of user A may be collected to control the three-dimensional face model of user A to form the same expression as user A, which enhances the personalization and accuracy of the expression actions of the face model and avoids generic, uniform face-model expressions.
Optionally, obtaining facial expression characteristic information of the target object includes: acquiring a face image of the target object; identifying each key region in the face image, wherein the key regions comprise the five sense organ regions and the face cheek regions; and extracting expression characteristic information of each key region in the face image. The face image of the target object may be obtained by a device such as a dual camera, a structured-light camera or a depth camera; the face image contains depth information, and it may be acquired in real time or prestored in the electronic device. Image recognition is performed on the acquired face image to determine each key region in the face image, where the key regions may include a left eye region, a right eye region, a left eyebrow region, a right eyebrow region, a nose region, a mouth region, a left ear region, a right ear region, a left cheek region and a right cheek region. Optionally, key-region recognition may be performed based on a pre-trained neural network model that has the function of dividing the key regions of a face; that is, the face image is input into the pre-trained neural network model, and its output is the division result of each key region in the face image.
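As a non-limiting illustration of the key-region division described above, the following Python sketch groups a set of face feature points (each carrying horizontal position and depth) into key regions. The landmark index ranges are assumptions made for the example only; an actual pre-trained neural network model or landmark detector would define its own region division.

```python
import numpy as np

# Hypothetical landmark index ranges; a real region-division model would
# define its own layout and number of points.
KEY_REGION_INDICES = {
    "left_eyebrow": range(0, 5),
    "right_eyebrow": range(5, 10),
    "left_eye": range(10, 16),
    "right_eye": range(16, 22),
    "nose": range(22, 31),
    "mouth": range(31, 51),
    "left_ear": range(51, 55),
    "right_ear": range(55, 59),
    "left_cheek": range(59, 66),
    "right_cheek": range(66, 73),
}

def split_into_key_regions(landmarks: np.ndarray) -> dict:
    """Group face feature points into key regions.

    landmarks: (N, 3) array of (x, y, depth) points extracted from a face
    image that carries depth information (e.g. from a structured-light or
    depth camera), with N at least 73 under the assumed index layout.
    Returns a mapping from key-region name to its (M, 3) sub-array.
    """
    return {name: landmarks[list(idx)] for name, idx in KEY_REGION_INDICES.items()}
```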
After each key region in the face image is identified, the expression characteristic information of each key region in the face image is determined. Correspondingly, the expression characteristic information of each key region is static expression characteristic information, which may be formed by combining the spatial position information of the feature points in the key region. For example, the feature points of the mouth region may include, but are not limited to, the left and right mouth corners, the upper lip center point, the lower lip center point and the like. The spatial position information of a feature point may include horizontal position information and depth information.
Optionally, obtaining facial expression characteristic information of the target object may also be: acquiring a face video of the target object; identifying each key region in the face video, wherein the key regions comprise the five sense organ regions and the face cheek regions; and extracting expression characteristic information of each key region in the face video. Correspondingly, the expression characteristic information of each key region is dynamic expression characteristic information. In this embodiment, the key regions in each video frame of the face video are identified in turn, the expression characteristic information of the key regions in each video frame is determined, and the continuous expression characteristic information forms the dynamic expression characteristic information according to the order of the video frames.
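The following sketch illustrates, under the same assumed data layout as above, how per-frame static feature information might be stacked into dynamic expression characteristic information ordered by video frame; the frame-by-frame dictionary format is hypothetical.

```python
import numpy as np

def build_dynamic_features(frames_landmarks):
    """Turn per-frame static features into dynamic expression feature information.

    frames_landmarks: list of dicts, one per video frame in temporal order,
    each mapping a key-region name to an (M, 3) array of feature-point
    positions (x, y, depth) for that frame.
    Returns a dict mapping region name to a (T, M, 3) trajectory array,
    where T is the number of frames.
    """
    regions = frames_landmarks[0].keys()
    return {
        region: np.stack([frame[region] for frame in frames_landmarks], axis=0)
        for region in regions
    }
```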
The face model of the target object is a three-dimensional face model created according to the face characteristic information of the target object. Each local model in the face model corresponds one-to-one to a face region of the target object, and the face model at least comprises five sense organ models and a face cheek model; for example, the left eye model in the face model corresponds to the left eye region of the target object, and the mouth model in the face model corresponds to the mouth region of the target object. The facial expression characteristics of each face region of the target object are used to determine the corresponding expression action in the face model; the expression actions of the mouth model may include, but are not limited to, smiling, laughing, smirking, pouting, sticking out the tongue and the like. If the facial expression characteristic information is static expression characteristic information, the static expression action corresponding to the local model can be determined; if the facial expression characteristic information is dynamic expression characteristic information, the dynamic expression action corresponding to the local model can be determined. It should be noted that the expression action of each local model determined in this embodiment carries the expression parameters of the target object (such as the spatial position information of the feature points), rather than a unified expression mode, which improves the personalization and distinctiveness of the face model's expressions.
After the expression actions of the local models are determined, the corresponding local models are controlled to complete the expression actions, so that the same expression as that of the target object is formed on the face model, which enriches the expression actions of the face model and improves its usability. For example, the face model formed by the embodiment of the application can display the rich expression actions of the target user. In a video call, the three-dimensional pose and expression of the target object are displayed by the three-dimensional model, so that the other party has a realistic sense of being present; in electronic equipment such as a virtual try-on mirror, when products such as cosmetics are tried on through a three-dimensional face model, the three-dimensional face model can display the same expression as the target object as it changes, improving the realism of the user's try-on experience.
According to the action processing method of the face model provided by this embodiment, facial expression characteristic information of a target object is obtained, the expression action of each local model in the face model of the target object is determined according to the facial expression characteristic information, and the corresponding local model is controlled according to the expression action of each local model in the face model so as to form the expression action of the face model. With this scheme, the facial expression characteristic information of the target object is collected to control the expression action of each local model in the face model, forming the same expression action as the target object, enriching the expressions of the face model, and improving the personalization and distinctiveness of the face model's expressions.
Fig. 2 is a flow chart of another method for processing actions of a face model according to an embodiment of the present application, referring to fig. 2, the method of the present embodiment includes the following steps:
step 201, acquiring a face image of the target object.
Step 202, identifying each key area in the face image, wherein the key areas comprise a five-sense organ area and a face cheek area.
And 203, extracting static expression characteristic information of each key region in the face image.
And 204, comparing the static expression characteristic information of any key region with at least one expression action mode of the corresponding local model, and determining the current expression action mode of the corresponding local model according to the comparison result.
Step 205, identifying spatial position information of at least one feature point of the any key region in the static expression feature information of the any key region.
And 206, controlling the corresponding local model according to the spatial position information of at least one feature point of any key region and the current expression action mode of the corresponding local model to form a static expression action of the face model.
For each local model, a plurality of expression action modes are prestored in the electronic device. Each expression action mode is traversed, the similarity between the static expression characteristic information of the key region and each expression action mode is determined, and the current expression action mode of the local model is determined according to the similarity; for example, the expression action mode with the highest similarity may be determined as the current expression action mode of the local model. The static expression characteristic information of a key region may be an image of the key region or its expression parameters (for example, the spatial position information of the feature points in the key-region image). Specifically, after each key region in the face image is identified, the face image may be segmented by key region, and the similarity may be determined by matching the segmented key-region image against the expression action modes of the corresponding local model; the similarity may also be determined by matching the expression parameters of the key region against the expression parameters in the expression action modes of the corresponding local model.
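A minimal sketch of the parameter-based comparison described above, assuming each expression action mode is stored as a reference array of feature-point positions; the similarity measure used here (negative mean point-to-point distance) is only one possible choice, not the disclosed one.

```python
import numpy as np

def match_expression_pattern(region_features: np.ndarray, patterns: dict) -> str:
    """Pick the expression action mode whose stored parameters are most similar.

    region_features: (M, 3) feature-point positions of one key region.
    patterns: mapping from mode name (e.g. "smile", "pout") to an (M, 3)
    reference array of feature-point positions for that mode.
    """
    def similarity(reference: np.ndarray) -> float:
        # Negative mean Euclidean distance between corresponding feature points.
        return -float(np.mean(np.linalg.norm(region_features - reference, axis=1)))

    # The mode with the highest similarity becomes the current expression action mode.
    return max(patterns, key=lambda name: similarity(patterns[name]))
```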
In this embodiment, a plurality of feature points are stored for each key region. The feature points of a key region may be set according to user requirements; for example, the feature points of the mouth region may be selected in advance by the user and may include, but are not limited to, the left and right mouth corners, the upper lip center point, the lower lip center point and the like. Feature points of a key region may also be selected from the result of performing edge recognition on the key region, with more feature points selected around edge turning points and fewer feature points selected along smooth edge segments. For each key region, the more feature points there are, the higher the expression control accuracy; correspondingly, the fewer feature points there are, the lower the expression control accuracy.
In this embodiment, the face image contains the spatial position information of each feature point. Specifically, the spatial position information of each feature point of the target object may be determined from the horizontal position information and depth information of each feature point in the face image together with the distance between the face image acquisition device and the face; it may also be determined from the horizontal position information and depth information of each feature point in the face image together with the ratio between the face image and the face model.
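The following sketch illustrates the second option, mapping feature points from image space into model space using an assumed image-to-model ratio; deriving positions from the device-to-face distance would follow the same pattern with a different scale factor.

```python
import numpy as np

def to_model_space(points_px: np.ndarray, image_to_model_scale: float) -> np.ndarray:
    """Map feature points from image space to face-model space.

    points_px: (M, 3) array of (x_pixel, y_pixel, depth) values taken from a
    face image that carries depth information.
    image_to_model_scale: assumed ratio between image coordinates and model
    coordinates.
    """
    scaled = points_px.astype(float).copy()
    scaled[:, :2] *= image_to_model_scale   # horizontal position information
    scaled[:, 2] *= image_to_model_scale    # depth information
    return scaled
```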
For any local model in the face model, the local model is set to its current expression action mode, a spatial position difference is determined from the spatial position information of each feature point in the corresponding key region and the spatial position information of each feature point of the local model under the current expression action mode, and the corresponding feature points are adjusted according to the spatial position difference to form the same static expression as the target object. In this embodiment, because expression action modes are highly uniform, adjusting the current expression action mode with the spatial position information of each feature point in the face image improves the personalization and distinctiveness of the static expression.
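A hedged sketch of this adjustment step: the spatial position difference between the observed key-region feature points and the local model under its current expression action mode is applied to the model's feature points. The `weight` parameter is an illustrative addition (allowing partial adjustment) and is not part of the described method.

```python
import numpy as np

def personalize_local_model(mode_points: np.ndarray,
                            observed_points: np.ndarray,
                            weight: float = 1.0) -> np.ndarray:
    """Adjust a local model already set to its current expression action mode.

    mode_points: (M, 3) feature-point positions of the local model under the
    matched expression action mode.
    observed_points: (M, 3) feature-point positions of the corresponding key
    region, already mapped into model space.
    """
    delta = observed_points - mode_points     # spatial position difference
    return mode_points + weight * delta       # adjusted feature-point positions
```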
It should be noted that, in some embodiments, steps 204-206 may instead be: identifying the spatial position information of at least one feature point of any key region in the static expression characteristic information of that key region; and adjusting the spatial positions of the feature points in the corresponding local model according to the spatial position information of the at least one feature point of the key region, so that the spatial positions of the feature points in the local model are the same as, or in the same proportion to, the spatial position information of the corresponding feature points of the key region, thereby forming the static expression action of the face model.
According to the action processing method of the face model provided by this embodiment, the current expression action mode of each local model and the spatial position information of the plurality of feature points of the corresponding key region in the face image are determined; after each local model is set to its current expression action mode, its parameters are adjusted based on the spatial position information of the feature points, forming the personalized static expression of the target object, improving the personalization and distinctiveness of the face model, and improving the usability of the face model.
Fig. 3 is a flow chart of another action processing method of a face model provided in the embodiment of the present application, where the embodiment is an alternative of the foregoing embodiment, and correspondingly, as shown in fig. 3, the method of the embodiment includes the following steps:
step 301, acquiring a face video of the target object.
Step 302, identifying each key area in the face video, wherein the key areas comprise a five sense organs area and a face cheek area.
Step 303, extracting dynamic expression characteristic information of each key region in the face video.
Step 304, identifying the spatial position change track of at least one feature point in any key region according to the dynamic expression feature information of the key region.
And 305, determining the spatial position change track of at least one feature point in any key region as the expected spatial position change track of the corresponding feature point of the corresponding local model.
And 306, controlling the spatial positions of the corresponding feature points of any local model according to the expected spatial position change track of each feature point to form a dynamic expression action of the face model.
The face video may be prestored in the electronic device or acquired in real time. For example, the face video of the target object is collected by a device such as a dual camera, a structured-light camera or a depth camera, and the collected video data are sent to the electronic device at fixed intervals; the video acquisition device may be arranged on the electronic device or in another electronic device.
In this embodiment, the face region may first be cropped from each video frame of the video captured of the target object to obtain the face video of the target object, and each key region in each face video frame is then identified. The dynamic expression characteristic information of a key region includes the spatial position information of each feature point in the key region; the continuously changing spatial position information of each feature point is obtained from the continuity of the video frames, and the spatial position change track of the feature point is further obtained. The local model in the face model and the corresponding key region of the face image contain the same feature points, and the spatial position change track of a feature point in the key region is determined as the expected spatial position change track of the corresponding feature point in the local model. The expected spatial position change track includes the spatial position information of each feature point in the local model as it changes over time, that is, it includes timestamps and the spatial position information corresponding to each timestamp. Optionally, the expected spatial position change track comprises a plurality of data packets, where each data packet contains the spatial position information of all feature points sharing the same timestamp. Accordingly, controlling the spatial positions of the corresponding feature points according to the expected spatial position change track of each feature point may be: reading the data packets in order of their timestamps, and adjusting the feature points in the local model according to the spatial position information of each feature point in the data packet, so as to form the dynamic expression action.
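The following sketch illustrates one possible layout of the timestamped data packets and their playback in time order; the `apply_positions` callback stands in for whatever mechanism actually moves the feature points of the rendered model and is an assumption of the sketch, not part of the disclosure.

```python
from dataclasses import dataclass
from typing import Callable, Dict, Iterable
import numpy as np

@dataclass
class TrajectoryPacket:
    """One data packet of the expected spatial position change track:
    the positions of every feature point of a local model that share the
    same timestamp."""
    timestamp: float                     # seconds from the start of the action
    positions: Dict[str, np.ndarray]     # feature-point name -> (3,) position

def play_expected_trajectory(packets: Iterable[TrajectoryPacket],
                             apply_positions: Callable[[float, Dict[str, np.ndarray]], None]) -> None:
    """Read the packets in timestamp order and drive the local model."""
    for packet in sorted(packets, key=lambda p: p.timestamp):
        # In a real system this would update the rendered three-dimensional model.
        apply_positions(packet.timestamp, packet.positions)
```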
According to the action processing method of the face model provided by this embodiment, the expected spatial position change track of each feature point of each local model in the face model is obtained by collecting the face video, and the spatial position changes of the feature points in the face model are controlled to form the same dynamic expression as the target object, enriching the expression actions and realism of the face model and improving its usability.
Fig. 4 is a flow chart of another action processing method of a face model provided in the embodiment of the present application, where the embodiment is an alternative of the foregoing embodiment, and correspondingly, as shown in fig. 4, the method of the embodiment includes the following steps:
step 401, acquiring face feature information of the target object, and creating a face model of the target object according to the face feature information.
Step 402, facial expression characteristic information of a target object is obtained.
And step 403, determining the expression actions of each local model in the face model of the target object according to the facial expression characteristic information.
And step 404, controlling the corresponding local model according to the expression actions of each local model in the face model so as to form the expression actions of the face model.
For example, a face video or a face image of the target object may be acquired, face recognition may be performed on at least one video frame of the face video or on the face image, and the face characteristic information in the face image or the video frame may be determined. The face characteristic information may be structural parameters of each face region, where the face regions may include, but are not limited to, the eyebrows, eyes, nose, mouth, ears and cheeks, and the structural parameters may include, but are not limited to, length, width, height, diameter, color, position and depth. Optionally, the face video or face image of the target object is obtained by a device such as a dual camera, a structured-light camera or a depth camera, and the face characteristic information of the target object can be determined by performing face recognition on the face video or face image and taking into account the distance between the device and the target object.
In this embodiment, the creating of the face model of the target object based on the face feature information may be sequentially creating a local model based on feature information of each face region in the face feature information, for example, a face contour model, an eyebrow model, an eye model, a nose model, a mouth model, an ear model, and a cheek model, and obtaining the face model of the target object based on the combination of the local models. The step of creating the face model of the target object based on the face feature information may be to perform parameter adjustment on the created model (for example, the face model of other users or the historical face model of the target object) based on the face feature information, so as to obtain the face model of the target object. For example, creating a face model of the target object according to the face feature information includes: matching in a model database according to the face characteristic information, and determining a reference face model of the target object, wherein the model database comprises at least one created model; determining a local model to be adjusted in the reference face model according to the face characteristic information; determining a standard local model according to the characteristic information of the target object corresponding to the local model to be adjusted; and updating a local model to be adjusted in the reference face model based on the standard local model, and generating the face model of the target object.
The electronic device is provided with a model database for storing created historical face models. Parameter matching is performed between the face characteristic information and the created models stored in the model database, the similarity between the face characteristic information and each created model is determined, and the reference face model of the target object is determined according to the similarity. Specifically, the similarity between the face characteristic information and a created model may be determined as follows: dividing the face characteristic information into a plurality of standard local characteristic information items according to the local regions of the target object's face; matching each standard local characteristic information item against the corresponding local model in the created model, and determining the number of successfully matched local models; and generating the similarity between the face characteristic information and the created model according to the number of successfully matched local models, where the similarity is positively correlated with that number. Correspondingly, the created model with the maximum similarity, or the face model among the created models with the maximum similarity, may be determined as the reference face model of the target object.
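A minimal sketch of the similarity computation described above, assuming each local region is described by a vector of structural parameters and that "successfully matched" means the parameters agree within an assumed tolerance; both representations are illustrative.

```python
import numpy as np

def created_model_similarity(standard_local_features: dict,
                             created_model_features: dict,
                             tolerance: float = 0.05) -> float:
    """Similarity between the target object's face characteristic information
    and one created model stored in the model database.

    Both arguments map a local-region name (eyebrow, eye, nose, ...) to a
    vector of structural parameters (length, width, position, ...).
    The similarity is the fraction of successfully matched local models,
    so it grows with their number.
    """
    matched = sum(
        1
        for region, params in standard_local_features.items()
        if region in created_model_features
        and np.allclose(params, created_model_features[region], atol=tolerance)
    )
    return matched / max(len(standard_local_features), 1)
```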
Optionally, determining the standard local model according to the feature information of the target object corresponding to the local model to be adjusted may be: determining a parameter adjustment value according to the local characteristic information of the local model to be adjusted and the local characteristic information of the target object, and adjusting the local model to be adjusted based on the parameter adjustment value so as to update the local model to be adjusted into a standard local model; or modeling according to the characteristic information of the target object corresponding to the local model to be adjusted to generate the standard local model; or the standard local model matched with the local characteristic information of the target object can be determined by matching the characteristic information of the target object corresponding to the local model to be adjusted in the model database.
In this embodiment, the reference face model of the target object is determined through the similarity, and parameter adjustment is performed on the basis of an existing model, which improves the creation efficiency of the face model, makes full use of the created models, avoids repeatedly creating the same local model, and simplifies the face model creation process.
After the face model is created, the expression characteristic information of the target object is acquired in real time, and the face model is controlled according to it to form and display the same expression action. For example, during a video call between user A and user B, the first electronic device of user A (a video acquisition device in the first electronic device or one associated with it) collects the face video of user A and sends it to the second electronic device of user B. The second electronic device of user B creates a three-dimensional face model of user A according to the face characteristic information in the face video of user A, and controls the three-dimensional face model of user A to form the same expression actions according to the facial expression characteristic information in that face video, so that user B can see user A's personalized, accurate expression actions on the electronic device in real time. Correspondingly, the second electronic device of user B (a video acquisition device in the second electronic device or one associated with it) collects the face video of user B and sends it to the first electronic device of user A, where a three-dimensional face model carrying user B's expression actions is formed. This improves the realism and user experience of the video call.
According to the action processing method of the face model provided by this embodiment, the facial expression characteristic information of the target object is collected, the expression action of each local model in the face model is controlled to form the same expression action as the target object, the expressions of the face model are enriched, and the personalization and distinctiveness of the face model's expressions are improved.
Fig. 5 is a block diagram of an action processing apparatus for a face model according to an embodiment of the present application. The apparatus may be implemented by software and/or hardware, is generally integrated in an electronic device, and can control the expression of the face model by executing the action processing method of the face model on the electronic device. As shown in fig. 5, the apparatus includes: an expression characteristic information acquisition module 501, an expression action determination module 502 and an expression control module 503.
The expression feature information obtaining module 501 is configured to obtain facial expression feature information of a target object;
the expression action determining module 502 is configured to determine expression actions of each local model in the face model of the target object according to the facial expression feature information;
the expression control module 503 is configured to control the corresponding local model according to the expression actions of each local model in the face model, so as to form the expression actions of the face model.
According to the action processing apparatus of the face model provided by the embodiment of the application, the facial expression characteristic information of the target object is collected, the expression action of each local model in the face model is controlled to form the same expression action as the target object, the expressions of the face model are enriched, and the personalization and distinctiveness of the face model's expressions are improved.
On the basis of the above embodiment, the expression feature information acquisition module 501 is configured to:
acquiring a face image or a face video of the target object;
identifying each key region in the face image or the face video, wherein the key region comprises a five sense organs region and a face cheek region;
and extracting expression characteristic information of each key region in the face image or the face video.
On the basis of the above embodiment, the expression feature information of each key region includes static expression feature information or dynamic expression feature information.
On the basis of the above embodiment, the expression action determining module 502 includes:
the expression action pattern matching unit is used for comparing the static expression characteristic information of any key area with at least one expression action pattern of the corresponding local model;
And the current expression action mode determining unit is used for determining the current expression action mode of the corresponding local model according to the comparison result.
On the basis of the above embodiment, the expression control module 503 includes:
a spatial position information determining unit, configured to identify spatial position information of at least one feature point of the any one key region in the static expression feature information of the any one key region;
the first expression action processing module is used for controlling the corresponding local model according to the spatial position information of at least one characteristic point of any key region and the current expression action mode of the corresponding local model to form a static expression action of the face model.
On the basis of the above embodiment, the expression action determining module 502 includes:
the spatial position change track determining unit is used for identifying the spatial position change track of at least one feature point in any key region according to the dynamic expression feature information of the any key region;
and the expected spatial position change track determining unit is used for determining the spatial position change track of at least one characteristic point in any key region as the expected spatial position change track of the corresponding characteristic point of the corresponding local model.
On the basis of the embodiment, the expected spatial position change track includes spatial position information of each feature point in the local model changing with time.
The expression control module 503 is configured to:
and controlling the spatial positions of the corresponding feature points of any local model according to the expected spatial position change track of each feature point to form a dynamic expression action of the face model.
On the basis of the above embodiment, the method further comprises:
the face characteristic information acquisition module is used for acquiring face characteristic information of the target object;
and the face model creation module is used for creating a face model of the target object according to the face characteristic information.
On the basis of the above embodiment, the face model creation module is configured to:
matching in a model database according to the face characteristic information, and determining a reference face model of the target object, wherein the model database comprises at least one created model;
determining a local model to be adjusted in the reference face model according to the face characteristic information;
determining a standard local model according to the characteristic information of the target object corresponding to the local model to be adjusted;
and updating a local model to be adjusted in the reference face model based on the standard local model, and generating the face model of the target object.
The present embodiments also provide a storage medium containing computer executable instructions for performing a method of action processing of a face model when executed by a computer processor, the method comprising:
acquiring facial expression characteristic information of a target object;
determining the expression actions of each local model in the face model of the target object according to the facial expression characteristic information;
and controlling the corresponding local model according to the expression actions of each local model in the face model so as to form the expression actions of the face model.
Storage medium — any of various types of memory devices or storage devices. The term "storage medium" is intended to include: installation media such as CD-ROM, floppy disk or tape devices; computer system memory or random access memory such as DRAM, DDR RAM, SRAM, EDO RAM, Rambus RAM, etc.; non-volatile memory such as flash memory or magnetic media (e.g., a hard disk or optical storage); registers or other similar types of memory elements, etc. The storage medium may also include other types of memory or combinations thereof. In addition, the storage medium may be located in the first computer system in which the program is executed, or in a second, different computer system connected to the first computer system through a network such as the Internet. The second computer system may provide the program instructions to the first computer for execution. The term "storage medium" may include two or more storage media that may reside in different locations (e.g., in different computer systems connected by a network). The storage medium may store program instructions (e.g., embodied as a computer program) executable by one or more processors.
Of course, the storage medium containing the computer executable instructions provided in the embodiments of the present application is not limited to the operation of processing the motion of the face model as described above, and may also perform the related operations in the method of processing the motion of the face model provided in any embodiment of the present application.
The embodiment of the application provides an electronic device, and the action processing apparatus of the face model provided by the embodiment of the application can be integrated in the electronic device. Fig. 6 is a schematic structural diagram of an electronic device according to an embodiment of the present application. The electronic device 600 may include: a memory 601, a processor 602, and a computer program stored in the memory 601 and executable by the processor 602, where the processor 602 implements the action processing method of the face model according to the embodiment of the application when executing the computer program.
According to the electronic device provided by the embodiment of the application, the facial expression characteristic information of the target object is collected, the expression action of each local model in the face model is controlled to form the same expression action as the target object, the expressions of the face model are enriched, and the personalization and distinctiveness of the face model's expressions are improved.
Fig. 7 is a schematic structural diagram of another electronic device according to an embodiment of the present application. The electronic device may include: a housing (not shown in the figure), a memory 701, a central processing unit (CPU) 702 (also called a processor, hereinafter referred to as the CPU), a circuit board (not shown in the figure), and a power supply circuit (not shown in the figure). The circuit board is arranged in the space enclosed by the housing; the CPU 702 and the memory 701 are arranged on the circuit board; the power supply circuit is used for supplying power to each circuit or device of the electronic device; the memory 701 is used for storing executable program code; and the CPU 702 runs a computer program corresponding to the executable program code by reading the executable program code stored in the memory 701, so as to realize the following steps:
acquiring facial expression characteristic information of a target object;
determining the expression actions of each local model in the face model of the target object according to the facial expression characteristic information;
and controlling the corresponding local model according to the expression actions of each local model in the face model so as to form the expression actions of the face model.
The electronic device further includes: a peripheral interface 703, RF (Radio Frequency) circuitry 705, audio circuitry 706, a speaker 711, a power management chip 708, an input/output (I/O) subsystem 709, other input/control devices 710, a touch screen 712, and external ports 704, which communicate through one or more communication buses or signal lines 707.
It should be understood that the illustrated electronic device 700 is merely one example of an electronic device, and that the electronic device 700 may have more or fewer components than shown in the figures, may combine two or more components, or may have different configurations of components. The various components shown in the figures may be implemented in hardware, software, or a combination of hardware and software, including one or more signal processing and/or application specific integrated circuits.
The electronic device for performing the action processing operation on the face model provided in this embodiment is described in detail below, taking a mobile phone as an example.
The memory 701 may be accessed by the CPU 702, the peripheral interface 703 and the like. The memory 701 may include high-speed random access memory, and may also include non-volatile memory, such as one or more magnetic disk storage devices, flash memory devices, or other non-volatile solid-state memory devices.
A peripheral interface 703, said peripheral interface 703 may connect input and output peripherals of the device to the CPU702 and the memory 701.
I/O subsystem 709, which I/O subsystem 709 may connect input and output peripherals on the device, such as touch screen 712 and other input/control devices 710, to peripheral interface 703. The I/O subsystem 709 may include a display controller 7091 and one or more input controllers 7092 for controlling other input/control devices 710. Among other things, one or more input controllers 7092 receives electrical signals from other input/control devices 710 or sends electrical signals to other input/control devices 710, which other input/control devices 710 may include physical buttons (push buttons, rocker buttons, etc.), dials, slider switches, joysticks, click wheels. It should be noted that the input controller 7092 may be connected to any of the following: a keyboard, an infrared port, a USB interface, and a pointing device such as a mouse.
A touch screen 712, the touch screen 712 being an input interface and an output interface between the consumer electronic device and the user, displaying visual output to the user, which may include graphics, text, icons, video, and the like.
The display controller 7091 in the I/O subsystem 709 receives electrical signals from the touch screen 712 or sends electrical signals to the touch screen 712. The touch screen 712 detects contact on the touch screen, and the display controller 7091 converts the detected contact into interaction with a user interface object displayed on the touch screen 712, that is, realizes human-computer interaction. The user interface object displayed on the touch screen 712 may be an icon for running a game, an icon for connecting to a corresponding network, or the like. It should be noted that the device may also include a light mouse, which is a touch-sensitive surface that does not display visual output, or an extension of the touch-sensitive surface formed by the touch screen.
The RF circuit 705 is mainly used for establishing communication between the mobile phone and the wireless network (i.e. network side), so as to implement receiving and transmitting between the mobile phone and the wireless network. Such as sending and receiving short messages, emails, etc. Specifically, the RF circuit 705 receives and transmits RF signals, also referred to as electromagnetic signals, the RF circuit 705 converts electrical signals to electromagnetic signals or electromagnetic signals to electrical signals, and communicates with a communication network and other devices through the electromagnetic signals. RF circuitry 705 may include known circuitry for performing these functions including, but not limited to, an antenna system, an RF transceiver, one or more amplifiers, a tuner, one or more oscillators, a digital signal processor, a CODEC (COder-DECoder) chipset, a subscriber identity module (Subscriber Identity Module, SIM), and so forth.
An audio circuit 706 is mainly used to receive audio from the peripheral interface 703, convert the audio into an electrical signal, and send the electrical signal to the speaker 711.
A speaker 711 for reproducing a voice signal received from the wireless network by the mobile phone through the RF circuit 705 into sound and playing the sound to the user.
The power management chip 708 is used to power and power manage the hardware connected to the CPU702, I/O subsystem and peripheral interfaces.
The action processing device, the storage medium and the electronic equipment of the face model provided in the above embodiments can execute the action processing method of the face model provided in any embodiment of the application, and have the corresponding functional modules and beneficial effects of executing the method. Technical details not described in detail in the above embodiments may be referred to the action processing method of the face model provided in any embodiment of the present application.
Note that the above is only a preferred embodiment of the present application and the technical principle applied. Those skilled in the art will appreciate that the present application is not limited to the particular embodiments described herein, but is capable of numerous obvious changes, rearrangements and substitutions as will now become apparent to those skilled in the art without departing from the scope of the present application. Therefore, while the present application has been described in connection with the above embodiments, the present application is not limited to the above embodiments, but may include many other equivalent embodiments without departing from the spirit of the present application, the scope of which is defined by the scope of the appended claims.

Claims (8)

1. The motion processing method of the face model is characterized by comprising the following steps of:
acquiring a face image or a face video of a target object;
identifying each key region in the face image or the face video, wherein the key region comprises a five sense organs region and a face cheek region; the expression characteristic information of each key region comprises static expression characteristic information or dynamic expression characteristic information;
extracting expression characteristic information of each key region in the face image or the face video;
comparing the static expression characteristic information of any key region with at least one expression action mode of a corresponding local model; determining a current expression action mode of the corresponding local model according to the comparison result; or, identifying the space position change track of at least one feature point in any key region according to the dynamic expression feature information of the key region; determining the spatial position change track of at least one feature point in any key region as an expected spatial position change track of a corresponding feature point of a corresponding local model;
and controlling the corresponding local model according to the expression actions of each local model in the face model to form the expression actions of the face model.
2. The method of claim 1, wherein controlling the corresponding partial model according to the expression actions of each partial model in the face model to form the expression actions of the face model comprises:
identifying the spatial position information of at least one feature point of any key region in the static expression feature information of any key region;
and controlling the corresponding local model according to the spatial position information of at least one feature point of any key region and the current expression action mode of the corresponding local model to form a static expression action of the face model.
3. The method according to claim 1, wherein the desired spatial position change track comprises spatial position information of each feature point in the local model changing with time; controlling the corresponding local model according to the expression actions of each local model in the face model to form the expression actions of the face model, comprising:
and controlling the spatial positions of the corresponding feature points of any local model according to the expected spatial position change track of each feature point to form a dynamic expression action of the face model.
4. The method as recited in claim 1, further comprising:
Acquiring face characteristic information of the target object;
and creating a face model of the target object according to the face characteristic information.
5. The method of claim 4, wherein creating a face model of the target object from the face feature information comprises:
matching in a model database according to the face characteristic information, and determining a reference face model of the target object, wherein the model database comprises at least one created model;
determining a local model to be adjusted in the reference face model according to the face characteristic information;
determining a standard local model according to the characteristic information of the target object corresponding to the local model to be adjusted;
and updating a local model to be adjusted in the reference face model based on the standard local model, and generating the face model of the target object.
6. An action processing apparatus of a face model, comprising:
the expression characteristic information acquisition module is used for acquiring a face image or a face video of the target object;
identifying each key region in the face image or the face video, wherein the key region comprises a five sense organs region and a face cheek region; the expression characteristic information of each key region comprises static expression characteristic information or dynamic expression characteristic information; extracting expression characteristic information of each key region in the face image or the face video;
The expression action determining module is used for comparing the static expression characteristic information of any key area with at least one expression action mode of the corresponding local model; determining a current expression action mode of the corresponding local model according to the comparison result; or, identifying the space position change track of at least one feature point in any key region according to the dynamic expression feature information of the key region; determining the spatial position change track of at least one feature point in any key region as an expected spatial position change track of a corresponding feature point of a corresponding local model;
the expression control module is used for controlling the corresponding local model according to the expression actions of each local model in the face model so as to form the expression actions of the face model.
7. A computer-readable storage medium, on which a computer program is stored, characterized in that the program, when being executed by a processor, implements the action processing method of a face model as claimed in any one of claims 1-5.
8. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable by the processor, the processor implementing a method of motion processing of a face model according to any one of claims 1-5 when the computer program is executed by the processor.
CN201910145480.3A 2019-02-27 2019-02-27 Action processing method and device of face model, storage medium and electronic equipment Active CN109829965B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910145480.3A CN109829965B (en) 2019-02-27 2019-02-27 Action processing method and device of face model, storage medium and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910145480.3A CN109829965B (en) 2019-02-27 2019-02-27 Action processing method and device of face model, storage medium and electronic equipment

Publications (2)

Publication Number Publication Date
CN109829965A CN109829965A (en) 2019-05-31
CN109829965B true CN109829965B (en) 2023-06-27

Family

ID=66864622

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910145480.3A Active CN109829965B (en) 2019-02-27 2019-02-27 Action processing method and device of face model, storage medium and electronic equipment

Country Status (1)

Country Link
CN (1) CN109829965B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111405361B (en) * 2020-03-27 2022-06-14 咪咕文化科技有限公司 Video acquisition method, electronic equipment and computer readable storage medium
CN111638784B (en) * 2020-05-26 2023-07-18 浙江商汤科技开发有限公司 Facial expression interaction method, interaction device and computer storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107330371A (en) * 2017-06-02 2017-11-07 深圳奥比中光科技有限公司 Acquisition methods, device and the storage device of the countenance of 3D facial models
CN107368778A (en) * 2017-06-02 2017-11-21 深圳奥比中光科技有限公司 Method for catching, device and the storage device of human face expression
WO2018137455A1 (en) * 2017-01-25 2018-08-02 迈吉客科技(北京)有限公司 Image interaction method and interaction apparatus
CN108525305A (en) * 2018-03-26 2018-09-14 广东欧珀移动通信有限公司 Image processing method, device, storage medium and electronic equipment
CN108646920A (en) * 2018-05-16 2018-10-12 Oppo广东移动通信有限公司 Identify exchange method, device, storage medium and terminal device

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107180446B (en) * 2016-03-10 2020-06-16 腾讯科技(深圳)有限公司 Method and device for generating expression animation of character face model

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018137455A1 (en) * 2017-01-25 2018-08-02 迈吉客科技(北京)有限公司 Image interaction method and interaction apparatus
CN107330371A (en) * 2017-06-02 2017-11-07 深圳奥比中光科技有限公司 Acquisition methods, device and the storage device of the countenance of 3D facial models
CN107368778A (en) * 2017-06-02 2017-11-21 深圳奥比中光科技有限公司 Method for catching, device and the storage device of human face expression
CN108525305A (en) * 2018-03-26 2018-09-14 广东欧珀移动通信有限公司 Image processing method, device, storage medium and electronic equipment
CN108646920A (en) * 2018-05-16 2018-10-12 Oppo广东移动通信有限公司 Identify exchange method, device, storage medium and terminal device

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Design and Implementation of a Facial Expression Animation System Based on the Candide-3 Model; Zhang Zeqiang et al.; Fujian Computer; 2016-02-25 (No. 02); full text *

Also Published As

Publication number Publication date
CN109829965A (en) 2019-05-31


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant