CN117437334A - Expression coefficient determining method, device, equipment and storage medium

Info

Publication number
CN117437334A
Authority
CN
China
Prior art keywords: face, expression, determining, dimensional, neutral
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210832412.6A
Other languages
Chinese (zh)
Inventor
谢宗生
王乃洲
朱勋沐
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou Shiyuan Electronics Technology Co Ltd
Guangzhou Shiyuan Artificial Intelligence Innovation Research Institute Co Ltd
Original Assignee
Guangzhou Shiyuan Electronics Technology Co Ltd
Guangzhou Shiyuan Artificial Intelligence Innovation Research Institute Co Ltd
Application filed by Guangzhou Shiyuan Electronics Technology Co Ltd and Guangzhou Shiyuan Artificial Intelligence Innovation Research Institute Co Ltd
Priority: CN202210832412.6A
Publication: CN117437334A
Legal status: Pending

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 13/40 - 3D [Three Dimensional] animation of characters, e.g. humans, animals or virtual beings (under G06T 13/00 Animation; G06T 13/20 3D animation)
    • G06T 7/344 - Determination of transform parameters for the alignment of images, i.e. image registration, using feature-based methods involving models (under G06T 7/00 Image analysis; G06T 7/30; G06T 7/33)
    • G06T 7/593 - Depth or shape recovery from stereo images (under G06T 7/50 Depth or shape recovery; G06T 7/55 from multiple images)
    • G06T 2207/10024 - Color image (indexing scheme: image acquisition modality)
    • G06T 2207/10028 - Range image; depth image; 3D point clouds (indexing scheme: image acquisition modality)
    • G06T 2207/20081 - Training; learning (indexing scheme: special algorithmic details)
    • G06T 2207/20084 - Artificial neural networks [ANN] (indexing scheme: special algorithmic details)
    • G06T 2207/30201 - Face (indexing scheme: subject of image - human being; person)

Landscapes

  • Engineering & Computer Science
  • Physics & Mathematics
  • General Physics & Mathematics
  • Theoretical Computer Science
  • Computer Vision & Pattern Recognition
  • Processing Or Creating Images

Abstract

The invention discloses an expression coefficient determining method, device, equipment and storage medium. The method comprises the following steps: acquiring a color image to be detected and a corresponding depth image to be detected; determining a three-dimensional face key point set and a three-dimensional face point cloud according to the color image to be detected, the depth image to be detected and the reconstructed neutral-expression face model corresponding to the color image to be detected; and determining an expression coefficient set according to the three-dimensional face key point set, the three-dimensional face point cloud and the reconstructed neutral-expression face model. The reconstructed neutral-expression face model is a three-dimensional morphable face model that contains only shape features and corresponds to the face contained in the color image to be detected. This technical scheme improves the accuracy and precision of expression coefficient determination, requires no complex expression capturing equipment, and reduces the cost of expression capture.

Description

Expression coefficient determining method, device, equipment and storage medium
Technical Field
The present invention relates to the field of image processing technologies, and in particular, to an expression coefficient determining method, apparatus, device, and storage medium.
Background
Expression capturing is a technology that captures expression information from a source face through a color camera, a depth camera or other sensors. The captured expression information is usually represented by expression bases and expression coefficients, and can be reproduced on virtual characters (such as cartoon or game characters) so as to drive them; it has important application value in fields such as film, games and virtual reality.
Existing expression capturing techniques include face key point fitting and prediction by a deep convolutional network. Methods based on face key point fitting generally detect a number of two-dimensional face key points with a key point detection algorithm, align them directly with the key points of a three-dimensional morphable face model, and then solve for the expression coefficients by means of QR decomposition, least squares or similar methods. Methods that predict with a deep convolutional network feed the face image or the two-dimensional face key points into a deep convolutional network model and directly predict the expression coefficient of each expression base.
However, existing expression capturing schemes can only capture simple expressions: because the key points of the face model are aligned only through two-dimensional face key points, large-amplitude expressions are captured poorly, and the capturing precision of fine expressions such as frowning or pouting is low. Professional expression capturing equipment performs better, but expensive hardware must be deployed, and wearing the equipment interferes with facial movement, which greatly limits the range of application.
Disclosure of Invention
The invention provides an expression coefficient determining method, device, equipment and storage medium, which combine a two-dimensional color image with a three-dimensional depth image to realize key point alignment and key-point-based expression coefficient solving under three-dimensional conditions, reduce the cost of expression capture, and improve the capturing accuracy of both large-amplitude expressions and fine expressions.
In a first aspect, an embodiment of the present invention provides an expression coefficient determining method, including:
acquiring a color image to be detected and a depth image to be detected corresponding to the color image to be detected;
determining a three-dimensional face key point set and a three-dimensional face point cloud according to the color image to be detected, the depth image to be detected and the reconstructed neutral-expression face model corresponding to the color image to be detected;
determining an expression coefficient set according to the three-dimensional face key point set, the three-dimensional face point cloud and the reconstructed neutral-expression face model;
wherein the reconstructed neutral-expression face model is a three-dimensional morphable face model that contains only shape features and corresponds to the face contained in the color image to be detected.
In a second aspect, an embodiment of the present invention further provides an expression coefficient determining apparatus, including:
an image acquisition module, configured to acquire a color image to be detected and a depth image to be detected corresponding to the color image to be detected;
a point set and point cloud determining module, configured to determine a three-dimensional face key point set and a three-dimensional face point cloud according to the color image to be detected, the depth image to be detected and the reconstructed neutral-expression face model corresponding to the color image to be detected;
an expression coefficient determining module, configured to determine an expression coefficient set according to the three-dimensional face key point set, the three-dimensional face point cloud and the reconstructed neutral-expression face model;
wherein the reconstructed neutral-expression face model is a three-dimensional morphable face model that contains only shape features and corresponds to the face contained in the color image to be detected.
In a third aspect, an embodiment of the present invention further provides an expression coefficient determining apparatus, including:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores a computer program executable by the at least one processor, the computer program, when executed by the at least one processor, enabling the at least one processor to implement the expression coefficient determining method according to any embodiment of the invention.
In a fourth aspect, embodiments of the present invention further provide a computer-readable storage medium storing computer instructions which, when executed, cause a processor to implement the expression coefficient determining method of any embodiment of the present invention.
The embodiments of the present invention provide an expression coefficient determining method, device, equipment and storage medium: a color image to be detected and a corresponding depth image to be detected are acquired; a three-dimensional face key point set and a three-dimensional face point cloud are determined according to the color image to be detected, the depth image to be detected and the reconstructed neutral-expression face model corresponding to the color image to be detected; and an expression coefficient set is determined according to the three-dimensional face key point set, the three-dimensional face point cloud and the reconstructed neutral-expression face model, where the reconstructed neutral-expression face model is a three-dimensional morphable face model that contains only shape features and corresponds to the face in the color image to be detected. With this technical scheme, when expression coefficients are to be determined for a face in an acquired image, the two-dimensional color image, the corresponding three-dimensional depth image and the reconstructed neutral-expression face model are first combined: the face key points under two-dimensional conditions are detected in the color image to be detected, and the corresponding three-dimensional face key point set in three-dimensional space is determined. From this key point set, the three-dimensional point cloud of the face whose expression coefficients are needed is then determined in three-dimensional space. Finally, expression coefficients are solved separately for the three-dimensional face key point set and the three-dimensional face point cloud against the shape-only reconstructed neutral-expression face model, and the expression coefficient set is determined from the solving results. This solves the problems of low accuracy, and of the difficulty of capturing large-amplitude and fine expressions simultaneously, that arise when expression coefficients are solved only from face key points in a two-dimensional image and a three-dimensional morphable face model, or only with a deep convolutional neural network; it improves the accuracy and precision of expression coefficient determination, requires no complex expression capturing equipment, and reduces the cost of expression capture.
It should be understood that the description in this section is not intended to identify key or critical features of the embodiments of the invention or to delineate the scope of the invention. Other features of the present invention will become apparent from the description that follows.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings required for the description of the embodiments will be briefly described below, and it is apparent that the drawings in the following description are only some embodiments of the present invention, and other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a flowchart of an expression coefficient determining method according to a first embodiment of the present invention;
Fig. 2 is a flowchart of an expression coefficient determining method according to a second embodiment of the present invention;
Fig. 3 is a schematic flowchart of converting the intermediate three-dimensional face key point set and the intermediate three-dimensional face point cloud into the three-dimensional face key point set and the three-dimensional face point cloud according to their correspondence with the reconstructed neutral-expression face model corresponding to the color image to be detected, in the second embodiment of the present invention;
Fig. 4 is a schematic flowchart of solving the expression coefficients of the coarse expression face model according to the three-dimensional face key point set to determine the coarse expression coefficient set corresponding to the preset expression base set, in the second embodiment of the present invention;
Fig. 5 is a schematic flowchart of solving the expression coefficients of the fine expression face model according to the three-dimensional face point cloud to determine the fine expression coefficient set corresponding to the preset expression base set, in the second embodiment of the present invention;
Fig. 6 is a flowchart of a method for constructing the reconstructed neutral-expression face model in the second embodiment of the present invention;
Fig. 7 is a schematic flowchart of solving the shape coefficients of the neutral-expression face model according to the neutral face key point set and the neutral face point cloud to determine the shape coefficient set corresponding to the preset shape base set, in the second embodiment of the present invention;
Fig. 8 is a schematic structural diagram of an expression coefficient determining apparatus in a third embodiment of the present invention;
Fig. 9 is a schematic diagram of an expression coefficient determining device in a fourth embodiment of the present invention.
Detailed Description
In order that those skilled in the art will better understand the present invention, the technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the accompanying drawings. It is apparent that the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by those skilled in the art based on the embodiments of the present invention without inventive effort shall fall within the scope of the present invention.
It should be noted that the terms "first," "second," and the like in the description and the claims of the present invention and the above figures are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that the embodiments of the invention described herein may be implemented in sequences other than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
Example 1
Fig. 1 is a flowchart of an expression coefficient determining method according to the first embodiment of the present invention. The method may be performed by an expression coefficient determining apparatus, which may be implemented in software and/or hardware and configured on a computer device; the computer device may be a laptop, a desktop computer, a tablet, or the like.
As shown in fig. 1, the method for determining an expression coefficient according to the first embodiment of the present invention specifically includes the following steps:
s101, acquiring a color image to be detected and a depth image to be detected corresponding to the color image to be detected.
In this embodiment, the color image to be detected is understood as the image of the face of the person whose expression is to be captured, acquired by the color camera of a co-located and calibrated color camera and depth camera pair. The depth image to be detected is understood as the image of the same face acquired by the depth camera of that pair. Clearly, the color image to be detected and the depth image to be detected are acquired from the same viewing angle. For example, both images may be acquired by a Microsoft Kinect camera.
Specifically, when expression capturing needs to be performed on a target user, the target user is photographed from the same viewing angle by the co-located and calibrated color camera and depth camera; the image captured by the color camera is determined as the color image to be detected, and the image captured by the depth camera is determined as the depth image to be detected.
S102, determining a three-dimensional face key point set and a three-dimensional face point cloud according to the color image to be detected, the depth image to be detected and the reconstructed neutral-expression face model corresponding to the color image to be detected.
The reconstructed neutral-expression face model is a three-dimensional morphable face model that contains only shape features and corresponds to the face contained in the color image to be detected.
In this embodiment, the reconstructed neutral-expression face model is understood as the result of reconstructing the generic initial three-dimensional morphable face model (3D Morphable Face Model, 3DMM) according to a neutral-expression image of the face contained in the color image to be detected, yielding a three-dimensional morphable face model that is adapted to that face and contains only its shape features but no expression features. The three-dimensional face key point set is understood as the set, in three-dimensional space, of the key points of the face whose expression is to be captured in the color image to be detected and the depth image to be detected. The three-dimensional face point cloud is understood as the set of all points acquired, in three-dimensional space, on the face whose expression is to be captured.
Specifically, the position of the face is determined from the color image to be detected, and the face key points at that position are extracted to obtain a face key point set in two dimensions. According to the calibration relationship between the depth image to be detected and the color image to be detected, this two-dimensional key point set is converted into three-dimensional space and matched with the reconstructed neutral-expression face model in scale, offset and similar respects, yielding the three-dimensional face key point set corresponding to the two-dimensional set. Meanwhile, the position range in three-dimensional space of the face to be captured is determined from the three-dimensional face key point set, and all points within that range are matched with the reconstructed neutral-expression face model in scale, offset and similar respects, yielding the corresponding three-dimensional face point cloud.
The three-dimensional morphable face model may be constructed with models such as BFM, LSFM or FLAME; the embodiments of the present invention take the BFM model as an example, which is modeled as follows:
S = \bar{S} + \sum_{i=1}^{N} a_i S_i + \sum_{j=1}^{M} w_j B_j

where S denotes the face model with both facial shape features and expression features, \bar{S} denotes the neutral face model, S_i denotes the shape bases for different facial shape features, a_i is the weight of shape base S_i, i.e. the shape coefficient, N is the number of shape bases, B_j denotes the expression bases for different facial expression features, w_j is the weight of expression base B_j, i.e. the expression coefficient, and M is the number of expression bases.
Further, the three-dimensional morphable face model containing only shape features provided in the embodiments of the present invention may be expressed as:

S_{id} = \bar{S} + \sum_{i=1}^{N} a_i S_i

where S_{id} is the face model that corresponds to the face contained in the color image to be detected and contains only shape features.
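To make the linear structure of these formulas concrete, the following is a minimal numpy sketch of the blendshape evaluation; the array names and shapes are illustrative assumptions, not the patent's implementation:

```python
import numpy as np

def morphable_face(mean_shape, shape_bases, shape_coeffs,
                   expr_bases=None, expr_coeffs=None):
    # Linear 3DMM: S = S_bar + sum_i a_i * S_i (+ sum_j w_j * B_j).
    # mean_shape:  (V, 3) neutral mean face vertices
    # shape_bases: (N, V, 3) shape basis offsets; shape_coeffs: (N,)
    # expr_bases:  (M, V, 3) expression basis offsets; expr_coeffs: (M,)
    S = mean_shape + np.tensordot(shape_coeffs, shape_bases, axes=1)
    if expr_bases is not None and expr_coeffs is not None:
        S = S + np.tensordot(expr_coeffs, expr_bases, axes=1)
    return S
```

Calling this without expression arguments yields S_{id}; passing expression bases and coefficients on top of the fitted shape coefficients yields the full model S.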
In the embodiments of the present invention, the face key points obtained under two-dimensional conditions are combined with the corresponding depth information and converted into three-dimensional face key points; the three-dimensional face point cloud corresponding to the face position in three-dimensional space is determined from these three-dimensional face key points; and both are matched, according to the reconstructed neutral-expression face model, to the scale and pose required by the subsequent expression coefficient solving. This improves the accuracy of the matching between face key points and the face model, and because the reconstructed neutral-expression face model is adapted to the current face, the accuracy of expression coefficient determination is improved.
S103, determining an expression coefficient set according to the three-dimensional face key point set, the three-dimensional face point cloud and the reconstructed neutral form face model.
Specifically, the distances between the three-dimensional face key point set and the three-dimensional face point cloud, on the one hand, and three-dimensional morphable face models with unknown expression coefficients constructed from the reconstructed neutral-expression face model, on the other, are determined in turn. The expression coefficients in the morphable model are solved by minimizing these distances: the expression coefficients corresponding to the minimum distance between the three-dimensional face key point set and the model are obtained, as are the expression coefficients corresponding to the minimum distance between the three-dimensional face point cloud and the model, and the set of these expression coefficients is determined as the expression coefficient set corresponding to the face to be captured in the color image to be detected.
According to the technical scheme of this embodiment, a color image to be detected and a corresponding depth image to be detected are acquired; a three-dimensional face key point set and a three-dimensional face point cloud are determined according to the two images and the reconstructed neutral-expression face model corresponding to the color image to be detected; and an expression coefficient set is determined according to the key point set, the point cloud and the reconstructed neutral-expression face model, where the reconstructed neutral-expression face model is a three-dimensional morphable face model that contains only shape features and corresponds to the face in the color image to be detected. With this scheme, face key points detected under two-dimensional conditions are first lifted into three-dimensional space with the help of the depth image; the three-dimensional point cloud of the face is then delimited from the resulting three-dimensional face key point set; and expression coefficients are finally solved separately for the key point set and the point cloud against the shape-only reconstructed model, the expression coefficient set being determined from the solving results. This solves the problems of low accuracy, and of the difficulty of capturing large-amplitude and fine expressions simultaneously, that arise when expression coefficients are solved only from face key points in a two-dimensional image and a three-dimensional morphable face model, or only with a deep convolutional neural network; it improves the accuracy and precision of expression coefficient determination, requires no complex expression capturing equipment, and reduces the cost of expression capture.
Example two
Fig. 2 is a flowchart of an expression coefficient determining method provided by a second embodiment of the present invention. The technical solution of this embodiment is further optimized on the basis of the alternatives above. Face detection and face key point detection are performed on the color image to be detected to determine a two-dimensional face key point set to be detected. According to the registration relationship between the color image to be detected and the depth image to be detected, the intermediate three-dimensional face key point set corresponding to the face key point set to be detected in three-dimensional space is determined; this intermediate set is used to delimit the three-dimensional space range of the face to be detected in the point cloud corresponding to the depth image to be detected, and the points inside that range form the intermediate three-dimensional face point cloud. According to the correspondence between the intermediate three-dimensional face key point set and the reconstructed neutral-expression face model, the intermediate key point set and the intermediate point cloud are converted into a three-dimensional face key point set and a three-dimensional face point cloud matched with the reconstructed neutral-expression face model. A coarse expression face model is then built from the reconstructed neutral-expression face model and a preset expression base set; following the minimum-distance principle, its expression coefficients are solved with the three-dimensional face key point set to obtain the corresponding coarse expression coefficient set. The face model is reconstructed with the determined coarse expression coefficient set to obtain a fine expression face model, whose expression coefficients are solved, again by the minimum-distance principle, with the three-dimensional face point cloud to obtain the corresponding fine expression coefficient set. Combining the coarse and fine expression coefficient sets yields the final required expression coefficient set. Because the face key points in two-dimensional space are converted into three-dimensional space on the basis of the depth image and put in correspondence with the face model, and because coarse expressions are solved from the three-dimensional face key points while fine expressions are solved from the three-dimensional face point cloud, large-amplitude and fine expressions in the face are attended to simultaneously, and the accuracy and precision of expression coefficient determination are improved.
As shown in fig. 2, the method for determining an expression coefficient provided in the second embodiment of the present invention specifically includes the following steps:
s201, acquiring a color image to be detected and a depth image to be detected corresponding to the color image to be detected.
S202, face detection and face key point detection are carried out on the color image to be detected, and a face key point set to be detected is determined.
In this embodiment, the face key point set to be detected is understood as the set, in two-dimensional space, of the face key points detected in the color image to be detected for the face whose expression is to be captured.
Specifically, the color image to be detected is input into a pre-trained face detection model, the coordinates of the face bounding box are determined from the output, and the portion of the color image within the bounding box is determined as the face image to be detected. Face key point recognition is then performed on the face image to be detected by a pre-trained face key point detection model, and the set of recognized face key points is determined as the face key point set to be detected.
The pre-trained face detection model may be a trained deep neural network such as RetinaFace, or another neural network model capable of detecting face images; the embodiments of the present invention are not limited in this respect. The pre-trained face key point detection model may be a deep convolutional neural network built from a ResNet backbone network and fully connected layers, predicting more than 1000 face key point coordinates by regression, or another neural network model capable of face key point detection. A batch of color images containing faces is collected in advance and their face key points are annotated to obtain a training sample set; the configured deep convolutional neural network is trained on this set, and the trained network is determined as the face key point detection model. A schematic sketch of this two-stage pipeline is given below.
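This sketch assumes two hypothetical callables, face_detector and keypoint_net, standing in for the trained detector (e.g. a RetinaFace wrapper) and the keypoint regressor; neither is a concrete library API:

```python
import numpy as np

def detect_face_keypoints(color_img, face_detector, keypoint_net):
    # face_detector(img) -> (x0, y0, x1, y1) face bounding box
    #   (hypothetical wrapper around a trained detector).
    # keypoint_net(crop) -> (K, 2) key points normalized to [0, 1]
    #   within the crop (hypothetical wrapper around a ResNet regressor).
    x0, y0, x1, y1 = face_detector(color_img)
    crop = color_img[y0:y1, x0:x1]
    kps = np.asarray(keypoint_net(crop))
    # Map normalized crop coordinates back to full-image pixel coordinates.
    kps = kps * np.array([x1 - x0, y1 - y0]) + np.array([x0, y0])
    return kps  # face key point set to be detected (2D)
```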
S203, registering the color image to be detected with the depth image to be detected, and determining the face key point set to be detected, with depth values assigned after registration, as an intermediate three-dimensional face key point set.
In this embodiment, the intermediate three-dimensional face key point set is understood as the face key point set of the color image to be detected after it has been converted into three-dimensional space by combining the face key points with the depth information of the depth image to be detected, but before it has been matched in scale and pose with the pre-constructed expression face model.
Specifically, since the color camera acquiring the color image to be detected and the depth camera acquiring the depth image to be detected are calibrated, the two images can be registered through the intrinsic and extrinsic parameters of the two cameras. Once the depth image to be detected is aligned to the color image to be detected, the depth value of each pixel of the color image can be read from the depth image; in particular, the depth value corresponding to each face key point to be detected is obtained. Combining the two-dimensional coordinates of each face key point with its depth value gives its coordinates in three-dimensional space; the corresponding three-dimensional point of each face key point to be detected is determined as an intermediate three-dimensional face key point, and the set of these points is determined as the intermediate three-dimensional face key point set.
For example, assume the coordinates of the i-th face key point to be detected in two-dimensional space are (x_i, y_i), where x_i and y_i are its horizontal and vertical coordinates in the pixel coordinate system, and its registered depth value is d_i. The corresponding point in three-dimensional space is obtained by back-projecting (x_i, y_i) with depth d_i, and the intermediate three-dimensional face key point P_i^{w} in the world coordinate system can then be calculated from the camera intrinsic and extrinsic parameters determined by camera calibration.
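A minimal sketch of this back-projection, assuming a pinhole intrinsic matrix K, a camera-to-world extrinsic (R_cw, t_cw), and metric depth already registered to the color image:

```python
import numpy as np

def backproject_keypoints(kps_2d, depth_map, K, R_cw, t_cw):
    # kps_2d:    (N, 2) pixel coordinates in the color image
    # depth_map: (H, W) depth registered to the color image (meters)
    # K:         (3, 3) color camera intrinsics
    # R_cw, t_cw: rotation (3, 3) and translation (3,) camera -> world
    fx, fy = K[0, 0], K[1, 1]
    cx, cy = K[0, 2], K[1, 2]
    pts = []
    for u, v in kps_2d.astype(int):
        d = depth_map[v, u]              # depth at the key point
        x = (u - cx) * d / fx            # pinhole back-projection
        y = (v - cy) * d / fy
        pts.append((x, y, d))
    pts_cam = np.asarray(pts)            # camera-space 3D points
    return pts_cam @ R_cw.T + t_cw       # intermediate 3D face key points (world)
```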
S204, determining the three-dimensional space range of the face to be detected in the point cloud corresponding to the depth image to be detected according to the intermediate three-dimensional face key point set, and determining the points of the point cloud within that range as an intermediate three-dimensional face point cloud.
In this embodiment, the three-dimensional space range of the face to be detected is understood as the position range, in three-dimensional space, of the face whose expression is to be captured. The intermediate three-dimensional face point cloud is understood as the set of points, in the world coordinate system corresponding to the depth image to be detected, that lie within this range; it has not yet been matched in scale and pose with the pre-constructed expression face model.
Specifically, in the world coordinate system determined from the depth image to be detected, the point cloud containing the face to be detected is determined; the edge positions of the face in the world coordinate system are determined from the intermediate three-dimensional face key point set, fixing the three-dimensional space range of the face within the point cloud; and the points inside this range are determined as the intermediate three-dimensional face point cloud. Optionally, points inside and near the range may both be extracted, to ensure that the points of the face to be detected are extracted completely; the embodiments of the present invention are not limited in this respect. A cropping sketch follows.
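One way to realize this cropping, as a sketch: take an axis-aligned bounding box around the intermediate three-dimensional key points, optionally padded (the margin value is an illustrative assumption):

```python
import numpy as np

def crop_face_cloud(cloud_world, kps_3d, margin=0.02):
    # Keep only cloud points inside the padded 3D bounding box of the
    # intermediate face key points (margin in meters, assumed value).
    lo = kps_3d.min(axis=0) - margin
    hi = kps_3d.max(axis=0) + margin
    mask = np.all((cloud_world >= lo) & (cloud_world <= hi), axis=1)
    return cloud_world[mask]  # intermediate three-dimensional face point cloud
```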
S205, converting the intermediate three-dimensional face key point set and the intermediate three-dimensional face point cloud into the three-dimensional face key point set and the three-dimensional face point cloud according to the correspondence between the intermediate three-dimensional face key point set and the reconstructed neutral-expression face model corresponding to the color image to be detected.
Specifically, the reconstructed neutral-expression face model corresponding to the image to be detected is a face model containing no expression coefficients, and the expression coefficients are solved on this model in combination with the face whose expression is to be captured. The face key point set and the corresponding point cloud in three-dimensional space must therefore be converted so that their specification matches the reconstructed neutral-expression face model before they can be used for expression coefficient solving. To this end, the scaling ratio and rotation angle can be determined from the correspondence of points at the same spatial positions between the intermediate three-dimensional face key point set and the reconstructed neutral-expression face model, and the intermediate key point set and the intermediate point cloud are then converted into the same coordinate system as the reconstructed model, yielding the corresponding three-dimensional face key point set and three-dimensional face point cloud.
Further, Fig. 3 is a schematic flowchart, provided in the second embodiment of the present invention, of converting the intermediate three-dimensional face key point set and the intermediate three-dimensional face point cloud into the three-dimensional face key point set and the three-dimensional face point cloud according to their correspondence with the reconstructed neutral-expression face model corresponding to the color image to be detected. As shown in Fig. 3, this specifically includes the following steps:
S2051, determining a scaling coefficient and an alignment coefficient according to the correspondence between the intermediate three-dimensional face key point set and points at the same spatial positions in the reconstructed neutral-expression face model.
In this embodiment, the scaling coefficient is understood as the factor by which the intermediate three-dimensional face key point set must be enlarged or reduced so that its size matches the reconstructed neutral-expression face model. The alignment coefficient is understood as the rotation and translation required to bring the position and pose of the intermediate three-dimensional face key point set into agreement with the reconstructed neutral-expression face model.
Specifically, several groups of point pairs are randomly selected from the intermediate three-dimensional face key point set and the first point-pair distance of each group is determined; the corresponding point pairs at the same spatial positions are selected in the reconstructed neutral-expression face model and the second point-pair distances between them are determined; and the ratios of the second point-pair distances to the corresponding first point-pair distances are averaged to obtain the scaling coefficient. Using the correspondence of points at the same spatial positions and angles between the intermediate three-dimensional face key point set and the reconstructed neutral-expression face model, the rotation matrix and translation matrix of the key point set relative to the model are determined by singular value decomposition and determined as the alignment coefficient.
For example, assume the first point-pair distance between the i-th group of point pairs selected in the intermediate three-dimensional face key point set is d_i^{(1)}, and the second point-pair distance between the corresponding point pair in the reconstructed neutral-expression face model is d_i^{(2)}. With K groups selected, the final scaling coefficient s can be expressed as:

s = \frac{1}{K} \sum_{i=1}^{K} \frac{d_i^{(2)}}{d_i^{(1)}}
S2052, converting the intermediate three-dimensional face key point set into the three-dimensional face key point set, and the intermediate three-dimensional face point cloud into the three-dimensional face point cloud, according to the scaling coefficient and the alignment coefficient.
The three-dimensional face key point set and the three-dimensional face point cloud then have the same size and pose as the reconstructed neutral-expression face model.
Specifically, each point of the intermediate three-dimensional face key point set is multiplied by the scaling coefficient to obtain a corrected intermediate three-dimensional face key point set of the same size as the reconstructed neutral-expression face model; the corrected set is then transformed with the rotation matrix and translation matrix of the alignment coefficient, yielding the three-dimensional face key point set with the same pose as the reconstructed model. The same operations applied to the intermediate three-dimensional face point cloud yield the three-dimensional face point cloud.
For example, assume the rotation matrix is R and the translation matrix is T, and the i-th point in the corrected intermediate three-dimensional face key point set is (x_i, y_i, z_i). The i-th point (x'_i, y'_i, z'_i) in the final three-dimensional face key point set can then be represented by the following formula:

(x'_i, y'_i, z'_i)^{T} = R \, (x_i, y_i, z_i)^{T} + T
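The whole similarity alignment can be sketched as follows; the scale is estimated here from the ratio of centered point spreads (a closed-form stand-in for the averaged point-pair distance ratios described above), R and T follow from the SVD-based (Kabsch) solution, and point correspondences between src and dst are assumed given:

```python
import numpy as np

def similarity_align(src, dst):
    # Estimate scale s, rotation R, translation T with s*R@src + T ~= dst.
    # src: (N, 3) intermediate 3D face key points
    # dst: (N, 3) corresponding vertices of the reconstructed neutral model
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    s = np.linalg.norm(dst_c) / np.linalg.norm(src_c)   # scale estimate

    # Rotation via SVD of the cross-covariance (Kabsch algorithm).
    H = (s * src_c).T @ dst_c
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T                                   # proper rotation

    T = dst.mean(axis=0) - R @ (s * src.mean(axis=0))
    return s, R, T

def apply_similarity(pts, s, R, T):
    return (s * pts) @ R.T + T   # works for key points and the point cloud alike
```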
Note that, because a depth camera can suffer partial information distortion at positions such as the face edges or hair when acquiring the depth map, interference should be avoided: before the three-dimensional face key point set and the three-dimensional face point cloud are determined from the intermediate three-dimensional face key point set and the intermediate three-dimensional face point cloud, the intermediate sets are filtered by a preset outlier filtering algorithm, and the three-dimensional face key point set and point cloud are then determined from the filtered results.
For example, the outlier filtering algorithm may determine, for each point in the intermediate three-dimensional face key point set and the intermediate three-dimensional face point cloud, the mean distance to a preset number of nearby points; when the difference between a point's mean distance and the overall mean exceeds a preset distance threshold, the point is considered an outlier and is deleted from the intermediate sets.
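A sketch of the described mean-neighbor-distance filter using scipy's KD-tree; the neighbor count k and the distance threshold stand in for the "preset" values and are illustrative assumptions:

```python
import numpy as np
from scipy.spatial import cKDTree

def filter_outliers(points, k=8, dist_threshold=0.01):
    # Drop points whose mean distance to their k nearest neighbors deviates
    # from the cloud-wide mean by more than dist_threshold (assumed units).
    tree = cKDTree(points)
    d, _ = tree.query(points, k=k + 1)   # first neighbor is the point itself
    mean_d = d[:, 1:].mean(axis=1)       # per-point mean neighbor distance
    keep = np.abs(mean_d - mean_d.mean()) <= dist_threshold
    return points[keep]
```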
S206, constructing a coarse expression face model according to the reconstructed neutral-expression face model and a preset expression base set.
In this embodiment, the preset expression base set is understood as a set of bases, each representing a single expression, used to construct the expression features of the face model. It may be the expression bases of the initial three-dimensional morphable face model, or expression bases configured by the user for the actual situation, such as bases of a cartoon-character type; the embodiments of the present invention are not limited in this respect. The coarse expression face model is understood as a three-dimensional morphable face model that adds expression features on top of the reconstructed neutral-expression face model and is used to represent the large-amplitude expressions of the face in the color image to be detected.
Specifically, an initial expression coefficient is set for each preset expression base in the preset expression base set; the sum of the products of each initial expression coefficient and its corresponding preset expression base is added to the reconstructed neutral-expression face model, and the result is determined as the coarse expression face model.
Illustratively, taking the BFM model as an example, assume the reconstructed neutral-expression face model is denoted S_{id}, B_j denotes the preset expression bases for different facial expression features, w_j is the weight of expression base B_j, i.e. the expression coefficient, and M is the number of preset expression bases. The constructed coarse expression face model S_c can then be expressed as:

S_c = S_{id} + \sum_{j=1}^{M} w_j B_j
s207, carrying out expression coefficient solving on the rough surface emotion face model according to the three-dimensional face key point set, and determining a rough expression coefficient set corresponding to the preset expression base set.
Specifically, determining each three-dimensional face key point in the three-dimensional face key point set, and determining an expression coefficient set with the minimum distance between the rough surface emotion face model and a face corresponding to the three-dimensional key point set as a rough expression coefficient set corresponding to a preset expression base set relative to the distances of the rough surface emotion face model under different expression coefficients.
Further, Fig. 4 is a schematic flowchart of solving the expression coefficients of the coarse expression face model according to the three-dimensional face key point set to determine the coarse expression coefficient set corresponding to the preset expression base set. As shown in Fig. 4, this specifically includes the following steps:
S2071, determining, according to the mapping relationship between the three-dimensional face key point set and the coarse expression face model, the first vertex set corresponding to the key point set in the reconstructed neutral-expression face model, and the second vertex set corresponding to each preset expression base.
Specifically, according to the mapping relationship between the three-dimensional face key point set and the points of the reconstructed neutral-expression face model, the first vertex corresponding to each three-dimensional face key point is determined in the reconstructed model, giving the first vertex set. The mapping relationship between the three-dimensional face key point set and the points of the neutral expression base among the preset expression bases is determined; from it, the second vertex corresponding to each key point is determined in each preset expression base, giving the second vertex set of each preset expression base.
Optionally, the mapping relationships between the three-dimensional face key point set and the reconstructed neutral-expression face model, and between the key point set and the points of the neutral expression base, may be represented by indices; since all preset expression bases have consistent size and vertex count, once the indices for the neutral expression base are determined, the second vertex sets can be read off from the other preset expression bases by the same indices. The mapping relationship may be determined by an iterative closest point algorithm or in other ways; the embodiments of the present invention are not limited in this respect.
S2072, determining the first distance between the three-dimensional face key point set and the coarse expression face model according to the key point set, the first vertex set and each second vertex set.
Specifically, the first vertex set and the second vertex sets are substituted into the coarse expression face model, and the first distance between the three-dimensional face key point set and the resulting model is determined under different expression coefficients.
Following the above example, the first distance between the three-dimensional face key point set and the coarse expression face model, and its dependence on the expression coefficients, can be expressed by a coarse expression loss function of the following form:

E_{c}(w) = \sum_{k} \left\| p_{lmk}^{(k)} - \left( V^{(k)} + \sum_{j=1}^{M} w_j B_j^{(k)} \right) \right\|_2^2 + \lambda_{b1} \sum_{j=1}^{M} \left( \frac{w_j}{\sigma_{b1}} \right)^2

where p_{lmk}^{(k)} is the k-th point of the three-dimensional face key point set, V^{(k)} is the corresponding first vertex, B_j^{(k)} is the corresponding second vertex of the j-th preset expression base, the second term is the regularization term, and \lambda_{b1} and \sigma_{b1} are adjustable parameters.
S2073, determining the expression coefficient set of the coarse expression face model at which the first distance is minimal as the coarse expression coefficient set corresponding to the preset expression base set.
Following the above example, the coarse expression loss function is minimized with respect to the expression coefficients, for instance by differentiating it and setting the derivative to zero. At the minimum, the facial expression corresponding to the three-dimensional face key point set is considered most similar to the expression of the current coarse expression face model, and the corresponding set of expression coefficients w_j is determined as the coarse expression coefficient set W_c corresponding to the preset expression base set. A sketch of this solve is given below.
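Because the model is linear in the coefficients, this minimization is a regularized linear least-squares problem with a closed-form solution. A minimal sketch, assuming the expression bases are offset vectors relative to the neutral vertices; the clamp to [0, 1] is an extra assumption about the blendshape coefficient range, not stated in the patent:

```python
import numpy as np

def solve_expression_coeffs(p_target, v_base, b_exprs, lam=1e-3, sigma=1.0):
    # Minimize ||p_target - (v_base + B w)||^2 + lam * ||w / sigma||^2.
    # p_target: (K, 3) 3D face key points (or corresponded cloud points)
    # v_base:   (K, 3) matching vertices of the neutral/coarse model
    # b_exprs:  (M, K, 3) matching offset vertices of each expression base
    M = b_exprs.shape[0]
    B = b_exprs.reshape(M, -1).T            # (3K, M) design matrix
    r = (p_target - v_base).reshape(-1)     # (3K,) residual to explain
    A = B.T @ B + (lam / sigma**2) * np.eye(M)
    w = np.linalg.solve(A, B.T @ r)         # normal equations
    return np.clip(w, 0.0, 1.0)             # assumed coefficient range
```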
In the embodiments of the present invention, the coarse expression coefficients are solved with the three-dimensional face key point set, making full use of the semantic information carried by the key points of the face to be detected. Because each point of the three-dimensional face key point set corresponds to a vertex of the coarse expression face model, various large-amplitude expressions can be recognized; at the same time, this correspondence greatly reduces the number of iterative optimizations and improves the efficiency of the coarse expression coefficient solving.
S208, determining the coarse expression face model with the coarse expression coefficient set substituted in as the reconstructed coarse expression face model, and constructing a fine expression face model according to the reconstructed coarse expression face model and the preset expression base set.
In this embodiment, the reconstructed coarse expression face model is understood as a three-dimensional morphable face model adapted to the face in the color image to be detected that already contains the large-amplitude expression features of that face. The fine expression face model is understood as a three-dimensional morphable face model that adds further expression features on top of the reconstructed coarse expression face model and is used to represent the fine-amplitude expressions of the face in the color image to be detected.
Specifically, the coefficients in the coarse expression coefficient set represent the large-amplitude expression features of the face currently being determined; however, because the registration of the color and depth images and the key point detection carry certain errors, the coarse expression coefficient set can hardly capture the fine expressions within the real expression. The determined coarse expression coefficient set is therefore substituted into the coarse expression face model to obtain the reconstructed coarse expression face model containing the coarse expression features. On this basis, an initial expression coefficient is set for each preset expression base in the preset expression base set, the sum of the products of each initial coefficient and its corresponding base is added to the reconstructed coarse expression face model, and the result is determined as the fine expression face model, whose expression coefficients are used to determine the fine expressions of the face.
Following the above example, the reconstructed coarse expression face model S_c^{*} may be expressed as:

S_c^{*} = S_{id} + \sum_{j=1}^{M} w_j^{c} B_j

where w_j^{c} is the coarse expression coefficient corresponding to the j-th preset expression base in the coarse expression coefficient set. The fine expression face model S_f constructed from the reconstructed coarse expression face model may then be expressed as:

S_f = S_c^{*} + \sum_{j=1}^{M} w_j B_j
S209, carrying out expression coefficient solving on the fine-surface-emotion face model according to the three-dimensional face point cloud, and determining a fine expression coefficient set corresponding to the preset expression base set.
Specifically, determining the distance between each three-dimensional face point in the three-dimensional face point cloud and the fine expression face model under different expression coefficients, and determining an expression coefficient set with the minimum distance between the fine expression face model and the face corresponding to the three-dimensional face point cloud as a fine expression coefficient set corresponding to a preset expression base set.
Further, fig. 5 is a schematic flow chart of determining a fine expression coefficient set corresponding to a preset expression base set according to the method for solving the expression coefficient of the fine expression model according to the three-dimensional face point cloud, as shown in fig. 5, which specifically includes the following steps:
S2091, determining a third vertex set corresponding to the three-dimensional face point cloud in the reconstructed coarse-expression face model according to the mapping relation between the three-dimensional face point cloud and the fine-expression face model, and determining a fourth vertex set corresponding to each preset expression base.
Specifically, the mapping relation between each point of the reconstructed coarse-expression face model and the three-dimensional face point cloud is determined through a preset algorithm, the third vertices of the reconstructed coarse-expression face model that can form a correspondence with the three-dimensional face point cloud are determined, and the corresponding third vertex set is obtained. Likewise, the mapping relation between each point of the neutral expression base in the preset expression base set and the three-dimensional face point cloud is determined through the preset algorithm, the fourth vertices of each preset expression base corresponding to the three-dimensional face point cloud are determined according to this mapping relation, and the fourth vertex set corresponding to each expression base is obtained. Alternatively, the preset algorithm may be the iterative closest point algorithm, or another algorithm that can achieve the same purpose, which is not limited in this embodiment of the present invention.
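For illustration only, one nearest-neighbour correspondence step, the building block of an iterative-closest-point loop, could be sketched as below; the use of `scipy.spatial.cKDTree` is an assumption of this sketch, not something specified by the patent:

```python
import numpy as np
from scipy.spatial import cKDTree

def correspond(model_vertices, point_cloud):
    """For each model vertex, find the nearest point of the face point cloud.

    model_vertices : (V, 3) vertices of the face model (or of one expression base)
    point_cloud    : (P, 3) three-dimensional face point cloud
    Returns the matched cloud point indices and the matched points themselves.
    """
    tree = cKDTree(point_cloud)          # spatial index over the point cloud
    _, idx = tree.query(model_vertices)  # nearest cloud point per model vertex
    return idx, point_cloud[idx]
```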
S2092, determining a second distance between the three-dimensional face point cloud and the fine-expression face model according to the three-dimensional face point cloud, the third vertex set and each fourth vertex set.
Specifically, the third vertex set and each fourth vertex set are substituted into the fine-expression face model, and the second distance between the three-dimensional face point cloud and the fine-expression face model with each vertex set substituted in is determined under different expression coefficients.
Following the above example, a fine expression loss function may be set to represent the mathematical relationship between the three-dimensional face point cloud, the second distance, and the expression coefficients of the fine-expression face model; it may be represented by the following formula:

$$L_{fine}(w) = \sum_{p \in p_{pcd}} \left\| p - S_p(w) \right\|^2 + \lambda_{b2} \sum_{j} \left( \frac{w_j}{\sigma_{b2}} \right)^2$$

where $p_{pcd}$ is the point cloud subset of the three-dimensional face point cloud corresponding to the vertices of the fine-expression face model, $S_p(w)$ is the corresponding vertex of the fine-expression face model under expression coefficients $w$, the term $\lambda_{b2}\sum_j (w_j/\sigma_{b2})^2$ is the regularization term, and $\lambda_{b2}$ and $\sigma_{b2}$ are adjustable parameters.
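Because the fine-expression face model is linear in the expression coefficients, minimizing a loss of this form reduces to regularized linear least squares; the closed-form sketch below is an assumption of this illustration (the patent does not specify a solver):

```python
import numpy as np

def solve_coeffs(targets, base_vertices, bases, lam, sigma):
    """Minimize ||targets - (base + sum_j w_j B_j)||^2 + lam * sum_j (w_j / sigma)^2.

    targets       : (K, 3) matched point cloud subset (p_pcd)
    base_vertices : (K, 3) corresponding vertices of the current base model
    bases         : (M, K, 3) expression bases restricted to the matched vertices
    """
    M = bases.shape[0]
    A = bases.reshape(M, -1).T              # (3K, M) linearized basis matrix
    r = (targets - base_vertices).ravel()   # (3K,) residual to be explained
    reg = (lam / sigma**2) * np.eye(M)      # Tikhonov regularization term
    return np.linalg.solve(A.T @ A + reg, A.T @ r)
```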
S2093, determining the expression coefficient set corresponding to the fine-expression face model when the second distance is minimum.
Following the above example, the fine expression loss function is differentiated; when the loss reaches its minimum, the facial expression corresponding to the three-dimensional face point cloud can be considered most similar to the expression corresponding to the current fine-expression face model. At this point, the set of expression coefficients $w_j$ corresponding to the fine-expression face model is acquired, for updating the fine-expression face model or determining the fine expression coefficient set.
S2094, judging whether the second distance is smaller than a preset threshold; if yes, executing step S2095; if not, executing step S2096.
In this embodiment, the preset threshold may be specifically understood as a judgment condition, preset according to the actual situation, for determining whether the expression predicted by the fine-expression face model is close enough to the real facial expression.
Specifically, it is judged whether the second distance is smaller than the preset threshold. If yes, the fine-expression face model under the current expression coefficient set may be considered sufficiently similar to the facial expression corresponding to the three-dimensional face point cloud, and step S2095 is executed; otherwise, the fine-expression face model under the current expression coefficient set may be considered to still have a certain gap from the facial expression corresponding to the three-dimensional face point cloud, and step S2096 is executed.
S2095, determining the sum of the expression coefficient set and each accumulated expression coefficient set as a fine expression coefficient set corresponding to the preset expression base set.
In this embodiment, the accumulated expression coefficient sets may be specifically understood as the expression coefficient sets obtained by the rounds of expression coefficient solving carried out before the currently determined expression coefficient set.
Specifically, for each preset expression base, the expression coefficients corresponding to that base in the current expression coefficient set and in each accumulated expression coefficient set are summed to determine the fine expression coefficient corresponding to that base, and the set of fine expression coefficients corresponding to all preset expression bases is determined as the finally obtained fine expression coefficient set.
S2096, updating the fine-expression face model according to the expression coefficient set, determining the expression coefficient set as an accumulated expression coefficient set, and returning to execute step S2091.
Specifically, since the fine-expression face model under the current expression coefficient set still has a certain gap from the facial expression corresponding to the three-dimensional face point cloud, the fine expression coefficients need to be solved again to obtain expression coefficients containing more fine expression information. At this time, the fine-expression face model with the current expression coefficient set substituted in is used as a new base, a new fine-expression face model is reconstructed on it, the current expression coefficient set is determined as an accumulated expression coefficient set, and execution returns to step S2091 to solve the expression coefficients again according to the new fine-expression face model.
Following the above example, the current expression coefficient set may be substituted into the fine-expression face model, and the substituted fine-expression face model, $S' = S_{coarse} + \sum_{j} w_j B_j$, may be used as the new base model for the subsequent expression coefficient solving.
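Putting steps S2091-S2096 together, the iterative refinement could be sketched as follows, reusing the hypothetical `correspond` and `solve_coeffs` helpers above (the loop structure and the `max_iters` safeguard are assumptions of this sketch):

```python
import numpy as np

def solve_fine_coeffs(S_coarse, B, point_cloud, lam, sigma, threshold, max_iters=10):
    """Iterative fine expression coefficient solving (sketch of S2091-S2096)."""
    base = S_coarse.copy()              # current fine-expression base model
    accumulated = np.zeros(B.shape[0])  # sum of the accumulated coefficient sets
    for _ in range(max_iters):
        _, targets = correspond(base, point_cloud)       # S2091: matched subset
        w = solve_coeffs(targets, base, B, lam, sigma)   # S2092/S2093
        base = base + np.tensordot(w, B, axes=1)         # substitute coefficients
        dist = np.linalg.norm(targets - base)            # second distance
        if dist < threshold:                             # S2094
            return accumulated + w                       # S2095: fine coefficient set
        accumulated = accumulated + w                    # S2096: accumulate and iterate
    return accumulated
```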
In this embodiment of the invention, the three-dimensional face point cloud contains sufficient expression information, and repeatedly solving the expression coefficients of the fine-expression face model according to the three-dimensional face point cloud avoids the errors introduced by steps such as image registration and key point detection, so that the obtained fine expression coefficient set is finer and more accurate.
S210, determining the sum of the coarse expression coefficient set and the fine expression coefficient set as an expression coefficient set.
Specifically, the coarse expression coefficient and the fine expression coefficient belonging to the same preset expression base are summed and taken as the final expression coefficient corresponding to that preset expression base, yielding the final expression coefficient set.
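Under the same illustrative conventions as the sketches above, step S210 is then a single per-base vector sum:

```python
# S210 (sketch): per-base sum of the coarse and fine expression coefficient sets.
w_final = w_coarse + w_fine  # both (M,) arrays indexed by preset expression base
```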
In this embodiment of the invention, the finally determined expression coefficient set contains both the coarse expression coefficients and the fine expression coefficients, so that the facial expression determined from these expression coefficients contains both the large-amplitude expressions and the fine expressions, thereby improving the accuracy of expression determination.
Further, before the corresponding three-dimensional face key point set and three-dimensional face point cloud are determined from the color image to be detected and the depth image to be detected, a three-dimensional morphable face model for carrying out expression coefficient solving needs to be constructed in advance. To ensure that the constructed three-dimensional morphable face model is better suited to determining the facial expression in the color image to be detected, a generic three-dimensional morphable face model must first be reconstructed. Fig. 6 is a schematic flow chart of the method for constructing the reconstructed neutral-expression face model; as shown in fig. 6, it specifically includes the following steps:
S301, acquiring a neutral-expression face color image and a neutral-expression face depth image corresponding to the color image to be detected.
In this embodiment, the neutral-expression face color image may be specifically understood as a color image, acquired in advance by a calibrated color camera, of the face in the color image to be detected that needs expression capture, taken while that face bears no expression. The neutral-expression face depth image may likewise be understood as a depth image of the same expressionless face, acquired in advance by a calibrated depth camera. It should be clear that the neutral-expression face color image and the neutral-expression face depth image are acquired simultaneously, from the same viewing angle, by a color camera and a depth camera that are placed together and calibrated.
Specifically, before the color image to be detected is acquired, or before the three-dimensional face key point set and three-dimensional face point cloud are determined from the acquired color image to be detected and depth image to be detected, a neutral-expression face model for determining the expression therein needs to be reconstructed. At this time, a color camera and a depth camera that are placed together and calibrated shoot the target user in an expressionless state from the same viewing angle; the image shot by the color camera is determined as the neutral-expression face color image, and the image shot by the depth camera is determined as the neutral-expression face depth image.
S302, determining a neutral face key point set and a neutral face point cloud according to the neutral-expression face color image, the neutral-expression face depth image and a preset initial neutral-expression face model.
In this embodiment, the initial neutral-expression face model may be specifically understood as the neutral-expression face model that comes with the three-dimensional morphable face model, determined according to a pre-selected three-dimensional morphable face model construction method.
It should be clear that the determination of the neutral face key point set and the neutral face point cloud is substantially similar to that in steps S202-S205, and is not repeated in this embodiment of the present invention.
S303, constructing a neutral-expression face model according to the preset initial neutral-expression face model and a preset shape base set.
In this embodiment, the preset shape base set may be specifically understood as a set of bases, each containing only one shape feature, used for constructing the shape features of the face model. The neutral-expression face model can be understood as a three-dimensional morphable face model containing only the appearance and shape features of the face in the color image to be detected.
Specifically, an initial shape coefficient is set for each preset shape base in the preset shape base set, the sum of the products of each initial shape coefficient and its corresponding preset shape base is added to the initial neutral-expression face model, and the summation result is determined as the neutral-expression face model.
Illustratively, taking the BFM model as an example, assume that the initial neutral-expression face model is represented as $\bar{S}$, that $S_i$ represents the preset shape bases corresponding to different facial shape features, that $a_i$ represents the weight of the shape base $S_i$ (i.e., the shape coefficient), and that N is the number of shape bases; the constructed neutral-expression face model $S_{id}$, which contains no expression features, can then be expressed as:

$$S_{id} = \bar{S} + \sum_{i=1}^{N} a_i S_i$$
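A sketch of this linear shape model under the same illustrative array conventions as above (the names are assumptions of this sketch, not BFM's own API):

```python
import numpy as np

def build_neutral_model(S_mean, shape_bases, a):
    """Neutral-expression face model S_id = S̄ + Σ a_i S_i (no expression features).

    S_mean      : (V, 3) initial neutral-expression face model (e.g. the mean face)
    shape_bases : (N, V, 3) preset shape bases
    a           : (N,) shape coefficients (weights of the shape bases)
    """
    return S_mean + np.tensordot(a, shape_bases, axes=1)
```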
S304, carrying out shape coefficient solving on the neutral-expression face model according to the neutral face key point set and the neutral face point cloud, and determining a shape coefficient set corresponding to the preset shape base set.
Specifically, the distances between the neutral face key point set, as well as each point in the neutral face point cloud, and the neutral-expression face model are determined under different shape coefficients, and the shape coefficient set that minimizes the distance between the neutral-expression face model and the face corresponding to the neutral face key point set and point cloud is determined as the shape coefficient set corresponding to the preset shape base set.
Further, fig. 7 is a schematic flow chart, provided in the second embodiment of the present invention, of the method for carrying out shape coefficient solving on the neutral-expression face model according to the neutral face key point set and the neutral face point cloud and determining the shape coefficient set corresponding to the preset shape base set; it specifically includes the following steps:
S3041, determining a fifth vertex set corresponding to the neutral face key point set in the initial neutral-expression face model according to the mapping relation between the neutral face key point set and the initial neutral-expression face model.
Specifically, according to the mapping relation between the neutral face key point set and each point of the initial neutral-expression face model, the fifth vertex corresponding to each neutral face key point in the neutral face key point set is determined in the initial neutral-expression face model, and the corresponding fifth vertex set is determined.
For example, the fifth vertices corresponding to the neutral face key point set may be determined in the initial neutral-expression face model according to the iterative closest point algorithm, and then further corrected manually to obtain the corresponding fifth vertex set; the mapping relation corresponding to the fifth vertex set may be represented by way of indices.
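For illustration, such an index-style mapping could be stored simply as an array of model vertex indices, one per key point; the indices and sizes below are placeholders, not values from the patent:

```python
import numpy as np

# Stand-in for the initial neutral-expression face model's vertices.
initial_model_vertices = np.zeros((5000, 3))

# Hypothetical example: the k-th neutral face key point maps to the model vertex
# with index landmark_idx[k]; the fifth vertex set is then a direct gather.
landmark_idx = np.array([3102, 1887, 4160])              # placeholder indices only
fifth_vertex_set = initial_model_vertices[landmark_idx]  # (K, 3) fifth vertex set
```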
S3042, determining a sixth vertex set corresponding to each preset shape base, and a neutral face point cloud subset of the neutral face point cloud corresponding to the neutral-expression face model, according to the mapping relation between the neutral face point cloud and the neutral-expression face model.
Specifically, the mapping relation between each point of the neutral-expression face model and the neutral face point cloud is determined through a preset algorithm, and the neutral face point cloud subset formed by the points of the neutral face point cloud that have a correspondence with the neutral-expression face model is determined; the mapping relation between each shape base in the preset shape base set and the neutral face point cloud is then determined through the preset algorithm, and the sixth vertex set corresponding to each preset shape base is further determined according to this mapping relation. Alternatively, the preset algorithm may be the iterative closest point algorithm, or another algorithm that can achieve the same purpose, which is not limited in this embodiment of the present invention.
S3043, determining a third distance between the neutral face point cloud and the neutral-expression face model according to the fifth vertex set, each sixth vertex set and the neutral face point cloud subset.
Specifically, the fifth vertex set and each sixth vertex set are substituted into the neutral-expression face model, and the third distance between the neutral face point cloud subset and the neutral-expression face model with each vertex set substituted in is determined under different shape coefficients.
Following the above example, a reconstruction loss function may be set to represent the mathematical relationship between the neutral face point cloud, the third distance, and the shape coefficients; the reconstruction loss function may be represented by the following formula:

$$L_{id}(a) = \sum_{p \in p_{id}} \left\| p - S_{id,p}(a) \right\|^2 + \lambda_{id} \sum_{i} \left( \frac{a_i}{\sigma_{id}} \right)^2$$

where $p_{id}$ is the neutral face point cloud subset of the neutral face point cloud corresponding to the vertices of the neutral-expression face model, $S_{id,p}(a)$ is the corresponding model vertex under shape coefficients $a$, the term $\lambda_{id}\sum_i (a_i/\sigma_{id})^2$ is the regularization term, and $\lambda_{id}$ and $\sigma_{id}$ are adjustable parameters.
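Since this loss has the same regularized least-squares form as the fine expression loss above, the same hypothetical `solve_coeffs` sketch applies, with the shape bases and the identity-specific adjustable parameters substituted (all names below are the matched quantities from the preceding steps, labeled illustratively):

```python
# Shape coefficient solving (sketch of S3043/S3044): same regularized least
# squares as for the expression coefficients, with shape bases substituted.
a = solve_coeffs(p_id_subset,          # matched neutral face point cloud subset
                 S_mean_matched,       # matched vertices of the initial model
                 shape_bases_matched,  # (N, K, 3) shape bases at those vertices
                 lam=lambda_id, sigma=sigma_id)
```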
S3044, determining the shape coefficient set corresponding to the neutral-expression face model when the third distance is minimum as the shape coefficient set corresponding to the preset shape base set.
Following the above example, the reconstruction loss function is differentiated; when the loss reaches its minimum, the facial appearance structure corresponding to the neutral face point cloud can be considered most similar to the facial appearance structure corresponding to the current neutral-expression face model. At this point, the set of shape coefficients corresponding to the neutral-expression face model is acquired, and the shape coefficient set is determined according to the correspondence between this set of shape coefficients and the preset shape base set.
S305, determining the neutral-expression face model with the shape coefficient set substituted in as the reconstructed neutral-expression face model.
In this embodiment of the invention, the neutral-expression face model corresponding to the color image to be detected is reconstructed, so that the appearance-structure difference between the reconstructed three-dimensional face model and the face whose expression coefficients are to be solved is minimized; this avoids the influence of appearance-structure differences on the subsequent expression coefficient solving and improves the accuracy and precision of expression coefficient determination.
According to the technical scheme of this embodiment, face detection and face key point detection are performed on the color image to be detected to determine the two-dimensional face key point set to be detected. Then, according to the registration relation between the color image to be detected and the depth image to be detected, the intermediate three-dimensional face key point set corresponding to the face key point set to be detected in three-dimensional space is determined; the intermediate three-dimensional face key point set is used to delimit the three-dimensional spatial range of the face to be detected in the point cloud corresponding to the depth image to be detected, and the point cloud formed by the points located within that spatial range is determined as the intermediate three-dimensional face point cloud. Next, according to the correspondence between the intermediate three-dimensional face key point set and points at the same spatial positions in the reconstructed neutral-expression face model, the intermediate three-dimensional face key point set and the intermediate three-dimensional face point cloud are converted into the three-dimensional face key point set and three-dimensional face point cloud matched with the reconstructed neutral-expression face model, so that the point set and point cloud to be recognized are adjusted in three-dimensional space to a size and pose matching the face model used for expression coefficient determination, which facilitates the subsequent coefficient solving. Meanwhile, a coarse-expression face model is constructed according to the reconstructed neutral-expression face model and the preset expression base set; the coarse-expression face model is solved for its expression coefficients with the three-dimensional face key point set according to the distance-minimum principle, yielding the corresponding coarse expression coefficient set; the face model is reconstructed according to the determined coarse expression coefficient set to obtain the fine-expression face model, which is solved for its expression coefficients with the three-dimensional face point cloud, again according to the distance-minimum principle, yielding the corresponding fine expression coefficient set; and the coarse expression coefficient set and the fine expression coefficient set are combined to obtain the finally required expression coefficient set. The face key points in two-dimensional space are thus converted into three-dimensional space on the basis of the depth image and put into correspondence with the face model, while coarse expression solving is performed with the three-dimensional face key points and fine expression solving with the three-dimensional face point cloud, attending simultaneously to the large-amplitude expressions and the fine expressions of the face and improving the accuracy and precision of expression coefficient determination.
Embodiment III
Fig. 8 is a schematic structural diagram of an expression coefficient determining apparatus according to a third embodiment of the present invention, where the expression coefficient determining apparatus includes: an image acquisition module 41, a point set point cloud determining module 42 and an expression coefficient determining module 43.
The image acquisition module 41 is configured to acquire a color image to be detected and a depth image to be detected corresponding to the color image to be detected; the point set point cloud determining module 42 is configured to determine a three-dimensional face key point set and a three-dimensional face point cloud according to the color image to be detected, the depth image to be detected, and a reconstructed neutral-expression face model corresponding to the color image to be detected; the expression coefficient determining module 43 is configured to determine an expression coefficient set according to the three-dimensional face key point set, the three-dimensional face point cloud and the reconstructed neutral-expression face model. The reconstructed neutral-expression face model is a three-dimensional morphable face model that corresponds to the face contained in the color image to be detected and contains only shape features.
According to the technical scheme of this embodiment, the problems of low accuracy, and of difficulty in capturing large-amplitude and fine expressions simultaneously, that arise when facial expression coefficients are solved relying only on the face key points of a two-dimensional image and a three-dimensional morphable face model, or only on a deep convolutional neural network, are solved; the accuracy and precision of facial expression coefficient determination are improved, complex expression capture equipment is not needed, and the cost of expression capture is reduced.
Optionally, the point set point cloud determining module 42 includes:
the to-be-detected point set determining unit is used for carrying out face detection and face key point detection on the color image to be detected and determining a face key point set to be detected;
the intermediate point set determining unit is used for registering the color image to be detected with the depth image to be detected, and determining the set of face key points to be detected that are configured with depth values after registration as an intermediate three-dimensional face key point set;
the intermediate point cloud determining unit is used for determining the three-dimensional spatial range of the face to be detected in the point cloud corresponding to the depth image to be detected according to the intermediate three-dimensional face key point set, and determining the points of the point cloud located within the three-dimensional spatial range of the face to be detected as an intermediate three-dimensional face point cloud;
and the point set point cloud determining unit is used for determining the three-dimensional face key point set and the three-dimensional face point cloud according to the correspondence between the intermediate three-dimensional face key point set and the intermediate three-dimensional face point cloud and the reconstructed neutral-expression face model corresponding to the color image to be detected.
Optionally, the point set point cloud determining unit is specifically configured to: determine a scaling coefficient and an alignment coefficient according to the correspondence between the intermediate three-dimensional face key point set and points at the same spatial positions in the reconstructed neutral-expression face model; and convert, according to the scaling coefficient and the alignment coefficient, the intermediate three-dimensional face key point set into the three-dimensional face key point set and the intermediate three-dimensional face point cloud into the three-dimensional face point cloud, where the three-dimensional face key point set and the three-dimensional face point cloud have the same size and pose as the reconstructed neutral-expression face model.
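The patent does not specify how the scaling and alignment coefficients are solved; one common choice for this kind of correspondence-based similarity alignment is an Umeyama-style closed form, sketched here purely as an illustration:

```python
import numpy as np

def similarity_align(src, dst):
    """Estimate scale s, rotation R, translation t with s * R @ src_i + t ≈ dst_i.

    src, dst : (K, 3) corresponding points, e.g. the intermediate three-dimensional
               face key points and the same spatial positions on the reconstructed
               neutral-expression face model.
    """
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    src_c, dst_c = src - mu_s, dst - mu_d
    U, S, Vt = np.linalg.svd(dst_c.T @ src_c)        # 3x3 cross-covariance
    d = np.sign(np.linalg.det(U @ Vt))               # guard against reflections
    D = np.diag([1.0, 1.0, d])
    R = U @ D @ Vt                                   # alignment (rotation)
    s = (S * np.diag(D)).sum() / (src_c ** 2).sum()  # scaling coefficient
    t = mu_d - s * R @ mu_s                          # translation
    return s, R, t

# Applying the transform to key points or the point cloud:
# aligned = s * points @ R.T + t
```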
Optionally, the expression coefficient determining module 43 includes:
the coarse model construction unit is used for constructing a coarse-expression face model according to the reconstructed neutral-expression face model and a preset expression base set;
the coarse coefficient determining unit is used for carrying out expression coefficient solving on the coarse-expression face model according to the three-dimensional face key point set and determining a coarse expression coefficient set corresponding to the preset expression base set;
the fine model construction unit is used for determining the coarse-expression face model with the coarse expression coefficient set substituted in as a reconstructed coarse-expression face model, and constructing a fine-expression face model according to the reconstructed coarse-expression face model and the preset expression base set;
the fine coefficient determining unit is used for carrying out expression coefficient solving on the fine-expression face model according to the three-dimensional face point cloud and determining a fine expression coefficient set corresponding to the preset expression base set;
and the expression coefficient determining unit is used for determining the sum of the coarse expression coefficient set and the fine expression coefficient set as the expression coefficient set.
Optionally, the coarse coefficient determining unit is specifically configured to:
determine, according to the mapping relation between the three-dimensional face key point set and the coarse-expression face model, a first vertex set corresponding to the three-dimensional face key point set in the reconstructed neutral-expression face model, and determine a second vertex set corresponding to each preset expression base;
determine a first distance between the three-dimensional face key point set and the coarse-expression face model according to the three-dimensional face key point set, the first vertex set and each second vertex set;
and determine the expression coefficient set corresponding to the coarse-expression face model when the first distance is minimum as the coarse expression coefficient set corresponding to the preset expression base set.
Optionally, the fine coefficient determining unit is specifically configured to:
determine, according to the mapping relation between the three-dimensional face point cloud and the fine-expression face model, a third vertex set corresponding to the three-dimensional face point cloud in the reconstructed coarse-expression face model, and determine a fourth vertex set corresponding to each preset expression base;
determine a second distance between the three-dimensional face point cloud and the fine-expression face model according to the three-dimensional face point cloud, the third vertex set and each fourth vertex set;
determine an expression coefficient set corresponding to the fine-expression face model when the second distance is minimum;
if the second distance is greater than or equal to a preset threshold, update the fine-expression face model according to the expression coefficient set, determine the expression coefficient set as an accumulated expression coefficient set, and return to the step of determining, according to the mapping relation between the three-dimensional face point cloud and the fine-expression face model, a third vertex set corresponding to the three-dimensional face point cloud in the reconstructed coarse-expression face model and a fourth vertex set corresponding to each preset expression base;
and if the second distance is smaller than the preset threshold, determine the sum of the expression coefficient set and each accumulated expression coefficient set as the fine expression coefficient set corresponding to the preset expression base set.
Optionally, the expression coefficient determining device further includes a reconstruction model determining module.
The reconstruction model determining module is used for: acquiring a neutral-expression face color image and a neutral-expression face depth image corresponding to the color image to be detected; determining a neutral face key point set and a neutral face point cloud according to the neutral-expression face color image, the neutral-expression face depth image and a preset initial neutral-expression face model; constructing a neutral-expression face model according to the preset initial neutral-expression face model and a preset shape base set; carrying out shape coefficient solving on the neutral-expression face model according to the neutral face key point set and the neutral face point cloud, and determining a shape coefficient set corresponding to the preset shape base set; and determining the neutral-expression face model with the shape coefficient set substituted in as the reconstructed neutral-expression face model.
Carrying out shape coefficient solving on the neutral-expression face model according to the neutral face key point set and the neutral face point cloud to determine the shape coefficient set corresponding to the preset shape base set includes the following steps:
determining a fifth vertex set corresponding to the initial neutral-expression face model according to the mapping relation between the neutral face key point set and the initial neutral-expression face model;
determining, according to the mapping relation between the neutral face point cloud and the neutral-expression face model, a sixth vertex set corresponding to each preset shape base and a neutral face point cloud subset of the neutral face point cloud corresponding to the neutral-expression face model;
determining a third distance between the neutral face point cloud and the neutral-expression face model according to the fifth vertex set, each sixth vertex set and the neutral face point cloud subset;
and determining the shape coefficient set corresponding to the neutral-expression face model when the third distance is minimum as the shape coefficient set corresponding to the preset shape base set.
The expression coefficient determining device provided by this embodiment of the invention can execute the expression coefficient determining method provided by any embodiment of the invention, and has the functional modules and beneficial effects corresponding to the executed method.
Embodiment IV
Fig. 9 is a schematic structural diagram of an expression coefficient determining device according to a fourth embodiment of the present invention. The expression coefficient determining device 50 may be an electronic device intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. Electronic devices may also represent various forms of mobile devices, such as personal digital assistants, cellular telephones, smartphones, wearable devices (e.g., helmets, glasses, watches, etc.), and other similar computing devices. The components shown herein, their connections and relationships, and their functions are meant as examples only and are not meant to limit the implementations of the invention described and/or claimed herein.
As shown in fig. 9, the expression coefficient determining device 50 includes at least one processor 51, and a memory communicatively connected to the at least one processor 51, such as a read-only memory (ROM) 52 and a random access memory (RAM) 53; the memory stores a computer program executable by the at least one processor, and the processor 51 can perform various appropriate actions and processes according to the computer program stored in the ROM 52 or the computer program loaded from the storage unit 58 into the RAM 53. The RAM 53 can also store the various programs and data required for the operation of the expression coefficient determining device 50. The processor 51, the ROM 52 and the RAM 53 are connected to each other via a bus 54. An input/output (I/O) interface 55 is also connected to the bus 54.
The various components of the expression coefficient determining device 50 are connected to the I/O interface 55, including: an input unit 56 such as a keyboard or a mouse; an output unit 57 such as various types of displays and speakers; a storage unit 58 such as a magnetic disk or an optical disk; and a communication unit 59 such as a network card, a modem, or a wireless communication transceiver. The communication unit 59 allows the expression coefficient determining device 50 to exchange information/data with other devices via a computer network such as the internet and/or various telecommunication networks.
The processor 51 may be any of various general-purpose and/or special-purpose processing components having processing and computing capabilities. Some examples of the processor 51 include, but are not limited to, a central processing unit (CPU), a graphics processing unit (GPU), various dedicated artificial intelligence (AI) computing chips, various processors running machine learning model algorithms, digital signal processors (DSPs), and any other suitable processor, controller, or microcontroller. The processor 51 performs the respective methods and processes described above, such as the expression coefficient determination method.
In some embodiments, the expression coefficient determination method may be implemented as a computer program that is tangibly embodied on a computer-readable storage medium, such as the storage unit 58. In some embodiments, part or all of the computer program may be loaded and/or installed onto the expression coefficient determining device 50 via the ROM 52 and/or the communication unit 59. When the computer program is loaded into the RAM 53 and executed by the processor 51, one or more steps of the expression coefficient determination method described above may be performed. Alternatively, in other embodiments, the processor 51 may be configured to perform the expression coefficient determination method in any other suitable manner (e.g., by means of firmware).
Various implementations of the systems and techniques described herein may be realized in digital electronic circuitry, integrated circuit systems, field programmable gate arrays (FPGAs), application-specific integrated circuits (ASICs), application-specific standard products (ASSPs), systems on chip (SOCs), complex programmable logic devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: being implemented in one or more computer programs that may be executed and/or interpreted on a programmable system including at least one programmable processor, which may be a special-purpose or general-purpose programmable processor that can receive data and instructions from, and transmit data and instructions to, a storage system, at least one input device, and at least one output device.
A computer program for carrying out methods of the present invention may be written in any combination of one or more programming languages. These computer programs may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the computer programs, when executed by the processor, cause the functions/acts specified in the flowchart and/or block diagram block or blocks to be implemented. The computer program may execute entirely on the machine, partly on the machine, as a stand-alone software package, partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of the present invention, a computer-readable storage medium may be a tangible medium that can contain, or store a computer program for use by or in connection with an instruction execution system, apparatus, or device. The computer readable storage medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. Alternatively, the computer readable storage medium may be a machine readable signal medium. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on an electronic device having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) through which a user can provide input to the electronic device. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user may be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic input, speech input, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local area networks (LANs), wide area networks (WANs), blockchain networks, and the internet.
The computing system may include clients and servers. A client and a server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server can be a cloud server, also called a cloud computing server or cloud host, a host product in a cloud computing service system that overcomes the defects of high management difficulty and weak service scalability found in traditional physical hosts and VPS (virtual private server) services.
It should be appreciated that various forms of the flows shown above may be used to reorder, add, or delete steps. For example, the steps described in the present invention may be performed in parallel, sequentially, or in a different order, so long as the desired results of the technical solution of the present invention are achieved, and the present invention is not limited herein.
The above embodiments do not limit the scope of the present invention. It will be apparent to those skilled in the art that various modifications, combinations, sub-combinations and alternatives are possible, depending on design requirements and other factors. Any modifications, equivalent substitutions and improvements made within the spirit and principles of the present invention should be included in the scope of the present invention.

Claims (11)

1. An expression coefficient determination method, comprising:
acquiring a color image to be detected and a depth image to be detected corresponding to the color image to be detected;
determining a three-dimensional face key point set and a three-dimensional face point cloud according to the color image to be detected, the depth image to be detected and a reconstructed neutral-expression face model corresponding to the color image to be detected;
determining an expression coefficient set according to the three-dimensional face key point set, the three-dimensional face point cloud and the reconstructed neutral-expression face model;
wherein the reconstructed neutral-expression face model is a three-dimensional morphable face model that corresponds to the face contained in the color image to be detected and contains only shape features.
2. The method of claim 1, wherein the determining of the three-dimensional face key point set and the three-dimensional face point cloud according to the color image to be detected, the depth image to be detected, and the reconstructed neutral-expression face model corresponding to the color image to be detected comprises:
performing face detection and face key point detection on the color image to be detected, and determining a face key point set to be detected;
registering the color image to be detected with the depth image to be detected, and determining the set of face key points to be detected that are configured with depth values after registration as an intermediate three-dimensional face key point set;
determining a three-dimensional spatial range of the face to be detected in the point cloud corresponding to the depth image to be detected according to the intermediate three-dimensional face key point set, and determining the points of the point cloud located within the three-dimensional spatial range of the face to be detected as an intermediate three-dimensional face point cloud;
and determining the three-dimensional face key point set and the three-dimensional face point cloud according to the correspondence between the intermediate three-dimensional face key point set and the intermediate three-dimensional face point cloud and the reconstructed neutral-expression face model corresponding to the color image to be detected.
3. The method according to claim 2, wherein the determining of the three-dimensional face key point set and the three-dimensional face point cloud according to the correspondence between the intermediate three-dimensional face key point set and the intermediate three-dimensional face point cloud and the reconstructed neutral-expression face model corresponding to the color image to be detected comprises:
determining a scaling coefficient and an alignment coefficient according to the correspondence between the intermediate three-dimensional face key point set and points at the same spatial positions in the reconstructed neutral-expression face model;
and converting, according to the scaling coefficient and the alignment coefficient, the intermediate three-dimensional face key point set into the three-dimensional face key point set and the intermediate three-dimensional face point cloud into the three-dimensional face point cloud;
wherein the three-dimensional face key point set and the three-dimensional face point cloud have the same size and pose as the reconstructed neutral-expression face model.
4. The method of claim 1, wherein the determining of the expression coefficient set according to the three-dimensional face key point set, the three-dimensional face point cloud and the reconstructed neutral-expression face model comprises:
constructing a coarse-expression face model according to the reconstructed neutral-expression face model and a preset expression base set;
carrying out expression coefficient solving on the coarse-expression face model according to the three-dimensional face key point set, and determining a coarse expression coefficient set corresponding to the preset expression base set;
determining the coarse-expression face model with the coarse expression coefficient set substituted in as a reconstructed coarse-expression face model, and constructing a fine-expression face model according to the reconstructed coarse-expression face model and the preset expression base set;
carrying out expression coefficient solving on the fine-expression face model according to the three-dimensional face point cloud, and determining a fine expression coefficient set corresponding to the preset expression base set;
and determining the sum of the coarse expression coefficient set and the fine expression coefficient set as the expression coefficient set.
5. The method of claim 4, wherein the carrying out of expression coefficient solving on the coarse-expression face model according to the three-dimensional face key point set and the determining of the coarse expression coefficient set corresponding to the preset expression base set comprise:
determining a first vertex set corresponding to the three-dimensional face key point set in the reconstructed neutral-expression face model according to the mapping relation between the three-dimensional face key point set and the coarse-expression face model, and determining a second vertex set corresponding to each preset expression base;
determining a first distance between the three-dimensional face key point set and the coarse-expression face model according to the three-dimensional face key point set, the first vertex set and each second vertex set;
and determining the expression coefficient set corresponding to the coarse-expression face model when the first distance is minimum as the coarse expression coefficient set corresponding to the preset expression base set.
6. The method of claim 4, wherein the carrying out of expression coefficient solving on the fine-expression face model according to the three-dimensional face point cloud and the determining of the fine expression coefficient set corresponding to the preset expression base set comprise:
determining a third vertex set corresponding to the three-dimensional face point cloud in the reconstructed coarse-expression face model according to the mapping relation between the three-dimensional face point cloud and the fine-expression face model, and determining a fourth vertex set corresponding to each preset expression base;
determining a second distance between the three-dimensional face point cloud and the fine-expression face model according to the three-dimensional face point cloud, the third vertex set and each fourth vertex set;
determining an expression coefficient set corresponding to the fine-expression face model when the second distance is minimum;
if the second distance is greater than or equal to a preset threshold, updating the fine-expression face model according to the expression coefficient set, determining the expression coefficient set as an accumulated expression coefficient set, and returning to the step of determining, according to the mapping relation between the three-dimensional face point cloud and the fine-expression face model, a third vertex set corresponding to the three-dimensional face point cloud in the reconstructed coarse-expression face model and a fourth vertex set corresponding to each preset expression base;
and if the second distance is smaller than the preset threshold, determining the sum of the expression coefficient set and each accumulated expression coefficient set as the fine expression coefficient set corresponding to the preset expression base set.
7. The method of claim 1, further comprising, before the determining of the three-dimensional face key point set and the three-dimensional face point cloud according to the color image to be detected, the depth image to be detected, and the reconstructed neutral-expression face model corresponding to the color image to be detected:
acquiring a neutral-expression face color image and a neutral-expression face depth image corresponding to the color image to be detected;
determining a neutral face key point set and a neutral face point cloud according to the neutral-expression face color image, the neutral-expression face depth image and a preset initial neutral-expression face model;
constructing a neutral-expression face model according to the preset initial neutral-expression face model and a preset shape base set;
carrying out shape coefficient solving on the neutral-expression face model according to the neutral face key point set and the neutral face point cloud, and determining a shape coefficient set corresponding to the preset shape base set;
and determining the neutral-expression face model with the shape coefficient set substituted in as the reconstructed neutral-expression face model.
8. The method of claim 7, wherein the carrying out of shape coefficient solving on the neutral-expression face model according to the neutral face key point set and the neutral face point cloud and the determining of the shape coefficient set corresponding to the preset shape base set comprise:
determining a fifth vertex set corresponding to the initial neutral-expression face model according to the mapping relation between the neutral face key point set and the initial neutral-expression face model;
determining, according to the mapping relation between the neutral face point cloud and the neutral-expression face model, a sixth vertex set corresponding to each preset shape base and a neutral face point cloud subset of the neutral face point cloud corresponding to the neutral-expression face model;
determining a third distance between the neutral face point cloud and the neutral-expression face model according to the fifth vertex set, each sixth vertex set and the neutral face point cloud subset;
and determining the shape coefficient set corresponding to the neutral-expression face model when the third distance is minimum as the shape coefficient set corresponding to the preset shape base set.
9. An expression coefficient determination device, comprising:
the image acquisition module is used for acquiring a color image to be detected and a depth image to be detected corresponding to the color image to be detected;
the point set point cloud determining module is used for determining a three-dimensional face key point set and a three-dimensional face point cloud according to the color image to be detected, the depth image to be detected and a reconstructed neutral-expression face model corresponding to the color image to be detected;
the expression coefficient determining module is used for determining an expression coefficient set according to the three-dimensional face key point set, the three-dimensional face point cloud and the reconstructed neutral-expression face model;
wherein the reconstructed neutral-expression face model is a three-dimensional morphable face model that corresponds to the face contained in the color image to be detected and contains only shape features.
10. An expression coefficient determination apparatus, characterized in that the expression coefficient determination apparatus comprises:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores a computer program executable by the at least one processor to enable the at least one processor to perform the expression coefficient determination method of any one of claims 1-8.
11. A computer-readable storage medium storing computer instructions for causing a processor to implement the expression coefficient determination method of any one of claims 1-8 when executed.