CN109376698B - Face modeling method and device, electronic equipment, storage medium and product

Info

Publication number: CN109376698B
Authority: CN (China)
Prior art keywords: face, model, offset, standard, key point
Legal status: Active (the legal status is an assumption, not a legal conclusion)
Application number: CN201811446394.8A
Other languages: Chinese (zh)
Other versions: CN109376698A
Inventors: 朴镜潭, 王权, 钱晨
Assignee (current and original): Beijing Sensetime Technology Development Co Ltd
Application filed by Beijing Sensetime Technology Development Co Ltd
Priority: CN201811446394.8A
Publications: CN109376698A (application), CN109376698B (grant)

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; body parts, e.g. hands
    • G06V40/16: Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168: Feature extraction; face representation
    • G06V40/171: Local features and components; facial parts; occluding parts, e.g. glasses; geometrical relationships
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00: Three dimensional [3D] modelling, e.g. data description of 3D objects

Abstract

The embodiments of the present application disclose a face modeling method and apparatus, an electronic device, a storage medium, and a product. The method includes: obtaining a face statistical model from a plurality of face models and a standard face model; obtaining a face dense point cloud of a target face; and transforming the face statistical model based on the face dense point cloud to obtain a target face model corresponding to the target face. Because the face statistical model obtained in this way is more accurate, more accurate face key points can be obtained, which in turn improves the quality of the resulting target face model.

Description

Face modeling method and device, electronic equipment, storage medium and product
Technical Field
The present application relates to computer vision technologies, and in particular, to a method and an apparatus for face modeling, an electronic device, a storage medium, and a product.
Background
Face modeling and registration is a key technology for bringing real captured faces into applications such as animation and image synthesis and enhancement. After a face model is obtained by geometric methods, parts carrying semantic information, such as the facial features, must be mapped to consistent positions before further work such as editing and animation production can be carried out.
General face registration techniques require a relatively smooth and complete face point cloud or a large number of face images, extensive iterative computation, and a relatively high-resolution original face model. Existing modeling methods are therefore usually limited by insufficient acquired data or insufficient computational performance, and the modeling results are not ideal.
Disclosure of Invention
The embodiment of the application provides a face modeling method and device, electronic equipment, a storage medium and a product.
According to an aspect of the embodiments of the present application, a face modeling method is provided, including:
obtaining a face statistical model according to a plurality of face models and a standard face model, wherein face key points are known in the face models in the plurality of face models;
obtaining face dense point cloud of a target face;
and transforming the face statistical model based on the face dense point cloud to obtain a target face model corresponding to the target face.
Optionally, in any of the method embodiments described above in the present application, the obtaining a face statistical model according to a plurality of face models and a standard face model includes:
completing incomplete face parts of the plurality of face models based on the standard face model to obtain a plurality of complete face models;
and performing model fusion on the obtained complete face models and the standard face model based on a face center line to obtain the face statistical model.
Optionally, in any one of the method embodiments of the present application, the performing model fusion on the obtained complete face model and the standard face model based on a face centerline to obtain a face statistical model includes:
acquiring at least two symmetric vertices in the complete face model to determine a face center line, and making the face center line coincide with the face center line of the standard face model to obtain a key point set comprising the face key points of the complete face models and the face key points of the standard face model;
decomposing the key point set by a principal component analysis method to obtain a main key point set and other key point sets in the key point set;
and determining the face statistical model based on the other key point sets.
Optionally, in any of the method embodiments described above in the present application, the method further includes:
determining the offset variance of the main key points in the face statistical model based on the main key point set;
determining the offset direction of at least one main key point in the face statistical model based on the offset variance of the main key points.
Optionally, in any one of the method embodiments of the present application, before the offset direction of at least one main key point in the face statistical model is determined based on the offset variance of the main key points, the method further includes:
normalizing the offset variance.
Optionally, in any one of the method embodiments of the present application, the performing incomplete face part completion on the plurality of face models based on the standard face model to obtain a plurality of complete face models includes:
acquiring an extension part of the standard face model, and acquiring a parameter relation between the extension part and a key point part based on the relation between the key point part and the extension part in the standard face model;
extending a face model of the plurality of face models based on the parameter relationship to obtain a corresponding model extension part;
obtaining the complete face model based on a face model of the plurality of face models and the model extension.
Optionally, in any one of the method embodiments of the present application, the transforming the face statistical model based on the face dense point cloud to obtain a target face model corresponding to the target face includes:
obtaining a transformation relation between points in the face dense point cloud and key points in the face statistical model by using an iterative closest point algorithm;
and transforming the face statistical model based on the transformation relation to obtain the target face model.
Optionally, in any of the above method embodiments of the present application, the transformation relation includes an offset of at least one key point;
the transforming the face statistical model based on the transformation relation to obtain the target face model comprises:
offsetting the corresponding key points in the face statistical model in the offset direction based on the offset amount of the at least one key point to obtain the target face model.
Optionally, in any one of the method embodiments of the present application, before transforming the face statistical model based on the transformation relation to obtain the target face model, the method further includes:
and optimizing the transformation relation according to a set proportion based on the projection coordinates of the feature points, the correspondence error of the face on the dense point cloud, and the face deformation constraint.
According to another aspect of the embodiments of the present application, there is provided a face modeling apparatus, including:
the statistical model unit is used for obtaining a face statistical model according to a plurality of face models and a standard face model, wherein face key points are known in the face models in the plurality of face models;
the point cloud obtaining unit is used for obtaining face dense point cloud of the target face;
and the target face model acquisition unit is used for transforming the face statistical model based on the face dense point cloud to acquire a target face model corresponding to the target face.
Optionally, in any one of the apparatus embodiments described above in the present application, the statistical model unit includes:
the face complementing module is used for complementing incomplete face parts of the plurality of face models based on the standard face model to obtain a plurality of complete face models;
and the face fusion module is used for carrying out model fusion on the obtained complete face model and the standard face model based on a face central line to obtain a face statistical model.
Optionally, in any of the apparatus embodiments of the present application, the face fusion module is specifically configured to obtain at least two symmetric vertices in the complete face model to determine a face center line, and to make the face center line coincide with the face center line of the standard face model to obtain a key point set comprising the face key points of the multiple complete face models and the face key points of the standard face model; to decompose the key point set by principal component analysis to obtain a main key point set and other key point sets in the key point set; and to determine the face statistical model based on the other key point sets.
Optionally, in any apparatus embodiment of the present application, the face fusion module is further configured to determine the offset variance of the main key points in the face statistical model based on the main key point set, and to determine the offset direction of at least one main key point in the face statistical model based on the offset variance of the main key points.
Optionally, in any apparatus embodiment of the present application, the face fusion module is further configured to normalize the offset variance before determining the offset direction of at least one main key point in the face statistical model based on the offset variance of the main key points.
Optionally, in an embodiment of any apparatus of the present application, the face complementing module is specifically configured to obtain an extension part of the standard face model, and obtain a parameter relationship between the extension part and the key point part based on a relationship between the key point part and the extension part in the standard face model; extending a face model of the plurality of face models based on the parameter relationship to obtain a corresponding model extension part; obtaining the complete face model based on a face model of the plurality of face models and the model extension.
Optionally, in any apparatus embodiment of the present application, the target face model obtaining unit includes:
the transformation relation obtaining module is used for obtaining a transformation relation between points in the face dense point cloud and key points in the face statistical model by using an iterative closest point algorithm;
and the model transformation module is used for transforming the face statistical model based on the transformation relation to obtain the target face model.
Optionally, in any of the apparatus embodiments described above in this application, the transformation relation includes an offset of at least one key point;
the model transformation module is specifically configured to offset the corresponding key points in the face statistical model in the offset direction based on the offset amount of the at least one key point, so as to obtain the target face model.
Optionally, in any apparatus embodiment of the present application, the target face model obtaining unit further includes:
and the transformation relation optimization module is used for optimizing the transformation relation according to a set proportion based on the feature point projection coordinates, the correspondence error of the face on the dense point cloud, and the face deformation constraint.
According to another aspect of the embodiments of the present application, there is provided an electronic device, including a processor, where the processor includes the face modeling apparatus as described in any one of the above.
According to still another aspect of an embodiment of the present application, there is provided an electronic device including: a memory for storing executable instructions;
and a processor in communication with the memory for executing the executable instructions to perform the operations of the face modeling method as described in any one of the above.
According to still another aspect of the embodiments of the present application, there is provided a computer-readable storage medium for storing computer-readable instructions, which when executed, perform the operations of the face modeling method according to any one of the above.
According to yet another aspect of embodiments of the present application, there is provided a computer program product comprising computer readable code which, when run on a device, executes instructions for implementing a face modeling method as described in any one of the above.
Based on the face modeling method and apparatus, the electronic device, the storage medium, and the product provided by the embodiments of the present application, a face statistical model is obtained from a plurality of face models and a standard face model; a face dense point cloud of a target face is obtained; and the face statistical model is transformed based on the face dense point cloud to obtain a target face model corresponding to the target face. Because the face statistical model obtained in this way is more accurate, more accurate face key points can be obtained, which ensures that the resulting target face model has a better effect.
The technical solution of the present application is further described in detail by the accompanying drawings and examples.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments of the application and together with the description, serve to explain the principles of the application.
The present application may be more clearly understood from the following detailed description with reference to the accompanying drawings, in which:
fig. 1 is a schematic flow chart of a face modeling method according to an embodiment of the present application.
Fig. 2 is a schematic structural diagram of a face modeling apparatus according to an embodiment of the present application.
Fig. 3 is a schematic structural diagram of an electronic device suitable for implementing the terminal device or the server according to the embodiment of the present application.
Detailed Description
Various exemplary embodiments of the present application will now be described in detail with reference to the accompanying drawings. It should be noted that: the relative arrangement of the components and steps, the numerical expressions, and numerical values set forth in these embodiments do not limit the scope of the present application unless specifically stated otherwise.
Meanwhile, it should be understood that the sizes of the respective portions shown in the drawings are not drawn in an actual proportional relationship for the convenience of description.
The following description of at least one exemplary embodiment is merely illustrative in nature and is in no way intended to limit the application, its application, or uses.
Techniques, methods, and apparatus known to those of ordinary skill in the relevant art may not be discussed in detail but are intended to be part of the specification where appropriate.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, further discussion thereof is not required in subsequent figures.
Fig. 1 is a schematic flow chart of a face modeling method according to an embodiment of the present application. The method may be performed by any electronic device, such as a terminal device, a server, a mobile device, etc. As shown in fig. 1, the method of this embodiment includes:
and step 110, obtaining a face statistical model according to the plurality of face models and the standard face model.
Here, the face key points of each of the plurality of face models are known.
Optionally, the face models with known face key points are superimposed on the standard face model, and the face statistical model is obtained from the superposition result. The face statistical model includes not only the features of the standard face model but also the features of the plurality of face models, so it better reflects the characteristics of real faces; it also provides the directions in which each key point in the face statistical model may be offset, which ensures that a corresponding target face model can be obtained for every target face.
Step 120: obtaining a face dense point cloud of the target face.
In reverse engineering, the set of data points measured on a product's surface is called a point cloud. A point cloud obtained with a three-dimensional coordinate measuring machine contains few, widely spaced points and is called a sparse point cloud; point clouds obtained with a three-dimensional laser scanner or a photographic scanner contain more, densely spaced points and are called dense point clouds.
The embodiments of the present application do not limit the manner of acquiring the face dense point cloud; it may be obtained by scanning or photographing. Optionally, the face dense point cloud in the embodiments of the present application describes the face with relatively dense key points.
Step 130: transforming the face statistical model based on the face dense point cloud to obtain a target face model corresponding to the target face.
According to the face modeling method provided by the embodiments of the present application, a face statistical model is obtained from a plurality of face models and a standard face model; a face dense point cloud of a target face is obtained; and the face statistical model is transformed based on the face dense point cloud to obtain a target face model corresponding to the target face. Because the face statistical model obtained in this way is more accurate, more accurate face key points can be obtained, which ensures that the resulting target face model has a better effect.
In one or more alternative embodiments, step 110 includes:
and performing incomplete face part filling on the plurality of face models based on the standard face model to obtain a plurality of complete face models.
The face models with known face key points obtained in the embodiments of the present application may be incomplete; for example, a face model may lack the forehead and ear parts. To obtain a complete face model, the missing parts of the face model need to be filled in. In the embodiments of the present application, the plurality of face models are completed using the standard face model; face part completion may also be achieved in other ways (for example, averaging a number of complete faces to obtain a mean complete face model and completing the plurality of face models with that mean model). The embodiments of the present application do not limit the specific manner of completing incomplete face parts.
Model fusion is then performed on the obtained complete face models and the standard face model based on the face center line to obtain the face statistical model.
Optionally, a face can in general be regarded as approximately symmetrical. The embodiments of the present application perform symmetry recognition on the plurality of complete face models to obtain at least two symmetric vertices (points that have no mirror counterpart); typically the symmetric vertices include the nose tip and at least one other point on the same central axis as the nose. Since two points determine a straight line, the face center line of a complete face model can be obtained from at least two symmetric vertices. The face center line of the standard face model is known, so the complete face models and the standard face model can be superimposed accurately by making their face center lines coincide. On the basis of the standard face model, the resulting face statistical model can then reflect, for each key point, the offset directions that may occur in different faces relative to the corresponding key point in the standard face model. A concrete sketch of this construction follows.
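As an illustration, the following Python/NumPy sketch fits a center line through the on-axis (symmetric) vertices and rigidly aligns it with the standard model's center line. The array shapes, the index list axis_idx, and the function names are assumptions introduced here for illustration; the patent does not prescribe a concrete implementation.

```python
import numpy as np

def face_centerline(keypoints, axis_idx):
    """Fit a line through the symmetric (on-axis) vertices, e.g. the nose
    tip and other points on the same central axis. Returns a point on the
    line and a unit direction vector."""
    pts = keypoints[axis_idx]                    # (M, 3), M >= 2
    centroid = pts.mean(axis=0)
    _, _, vt = np.linalg.svd(pts - centroid)     # dominant axis = line direction
    return centroid, vt[0]

def align_centerlines(keypoints, src_line, dst_line):
    """Rigidly move `keypoints` so its center line coincides with the
    standard model's: translate point-to-point, then rotate the source
    direction onto the destination direction (Rodrigues' formula).
    The antiparallel case (opposite directions) is not handled here."""
    (p_src, d_src), (p_dst, d_dst) = src_line, dst_line
    moved = keypoints - p_src + p_dst
    v = np.cross(d_src, d_dst)
    c = float(np.dot(d_src, d_dst))
    if np.linalg.norm(v) < 1e-12:                # already parallel
        return moved
    vx = np.array([[0.0, -v[2], v[1]],
                   [v[2], 0.0, -v[0]],
                   [-v[1], v[0], 0.0]])
    R = np.eye(3) + vx + vx @ vx / (1.0 + c)
    return (moved - p_dst) @ R.T + p_dst
```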
Optionally, performing model fusion on the obtained complete face model and the standard face model based on a face center line to obtain a face statistical model, including:
acquiring at least two symmetric vertices in the complete face model to determine a face center line, and making the face center line coincide with the face center line of the standard face model to obtain a key point set comprising the face key points of the plurality of complete face models and the face key points of the standard face model;
decomposing the key point set by a Principal Component Analysis (PCA) method to obtain a main key point set and other key point sets in the key point set;
and determining a face statistical model based on the other key point sets.
Principal Component Analysis (PCA) is one of the most widely used data compression algorithms. PCA transforms the data from the original coordinate system into a new coordinate system that is determined by the data itself. The direction of largest variance is taken as the first coordinate axis, because the largest variance carries the most important information in the data; the second axis is the direction orthogonal to the first with the second-largest variance; and the process is repeated up to the feature dimension of the original data.
In the embodiments of the present application, PCA decomposition identifies the key points of the face models whose variance relative to the standard face model is large as the main key point set, and the key points whose variance is small as the other key point sets. The key points in the other key point sets can then be regarded as coinciding with the corresponding key points of the standard face model, so the number of main key points that must be handled is reduced. A minimal sketch of this step follows.
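This is a minimal sketch of the decomposition, assuming the complete face models have already been aligned to the standard model by their center lines and stacked into an array. The variance threshold var_keep and all shapes are illustrative assumptions, not values from the patent.

```python
import numpy as np

def decompose_keypoints(aligned_models, standard_kps, var_keep=0.95):
    """aligned_models: (F, K, 3) complete face models aligned to the
    standard model; standard_kps: (K, 3) standard-model key points.
    Returns the high-variance ('main') principal directions and the
    low-variance remainder ('other'), per the PCA split described above."""
    offsets = (aligned_models - standard_kps).reshape(len(aligned_models), -1)
    offsets -= offsets.mean(axis=0)              # center the data for PCA
    _, s, vt = np.linalg.svd(offsets, full_matrices=False)
    var = s ** 2 / max(len(offsets) - 1, 1)      # variance per component
    keep = int(np.searchsorted(np.cumsum(var) / var.sum(), var_keep)) + 1
    main_dirs, main_var = vt[:keep], var[:keep]  # large variance: main set
    other_dirs = vt[keep:]                       # small variance: ~standard
    return main_dirs, main_var, other_dirs
```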
Optionally, after the face statistical model is determined based on the other key point sets, the method further includes:
determining the offset variance of the main key points in the face statistical model based on the main key point set;
and determining the offset direction of at least one main key point in the face statistical model based on the offset variance of the main key points.
According to the embodiments of the present application, the offset position and direction of at least one main key point in the obtained face statistical model can be determined from the variance between the main key point set and the corresponding main key points in the standard face model.
Optionally, before determining the offset direction of at least one main key point in the face statistical model based on the offset variance of the main key point, the method further includes:
the offset variance is normalized.
Optionally, in the embodiments of the present application only the offset direction is determined from the main key points; the specific offset amount is not involved at this stage. Normalizing the offset variance can therefore reduce the amount of data the offset variance occupies, as the following sketch illustrates.
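A tiny sketch of this normalization under the same assumed shapes as the PCA example above: since only the offset direction per key point is kept, each per-key-point direction is rescaled to unit length and the component variances to the [0, 1] range.

```python
import numpy as np

def normalized_offset_directions(main_dirs, main_var, n_keypoints):
    """main_dirs: (C, 3K) principal directions; main_var: (C,) variances."""
    dirs = main_dirs.reshape(len(main_dirs), n_keypoints, 3)
    norms = np.linalg.norm(dirs, axis=-1, keepdims=True)
    unit = np.divide(dirs, norms, out=np.zeros_like(dirs), where=norms > 0)
    weights = main_var / main_var.max()          # normalized variances
    return unit, weights                         # directions only, no amounts
```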
In one or more alternative embodiments, completing the incomplete face parts of the plurality of face models based on the standard face model to obtain a plurality of complete face models includes:
acquiring an extension part of the standard face model, and obtaining a parameter relationship between the extension part and the key point part based on the relation between the key point part and the extension part in the standard face model;
extending the face model based on the parameter relationship to obtain a corresponding model extension part;
and obtaining the complete face model based on the face model and the model extension part.
Optionally, because of hair, hats, and other occlusions, a captured face model may not be complete; for example, it may lack the forehead and/or ear parts. The parameter relationship between the key point part and the extension part is obtained by statistics on the standard face model; the face model is extended based on this parameter relationship to obtain the model extension part, and the extension part is added to the face model to obtain a complete face model covering the whole face.
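One concrete (assumed) reading of the "parameter relationship" is an affine map from the key point part to the extension part, fitted once on the standard face model and reused on incomplete models. The sketch below implements that reading; the affine choice is this document's illustration, not a formula given in the patent.

```python
import numpy as np

def fit_extension_coeffs(std_kps, std_ext):
    """Fit coefficients C so that [std_ext | 1] ~= C @ [std_kps | 1]:
    each extension vertex (forehead, ears, ...) becomes an affine
    combination of the standard model's key point vertices."""
    A = np.hstack([std_kps, np.ones((len(std_kps), 1))])   # (K, 4)
    B = np.hstack([std_ext, np.ones((len(std_ext), 1))])   # (E, 4)
    return B @ np.linalg.pinv(A)                           # (E, K)

def complete_face(incomplete_kps, coeffs):
    """Synthesize the missing extension part of an incomplete model with
    the coefficients learned from the standard model, then append it."""
    A = np.hstack([incomplete_kps, np.ones((len(incomplete_kps), 1))])
    ext = (coeffs @ A)[:, :3]                              # model extension part
    return np.vstack([incomplete_kps, ext])                # complete face model
```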
In one or more optional embodiments, transforming the face statistical model based on the face dense point cloud to obtain a target face model corresponding to a target face, including:
obtaining a transformation relation between points in the face dense point cloud and the main key points in the face statistical model by using an iterative closest point algorithm;
and transforming the face statistical model based on the transformation relation to obtain a target face model.
The Iterative Closest Point (ICP) algorithm is a point-set-to-point-set registration method whose goal is to obtain the transformation relation between two point clouds. In the embodiments of the present application, the key points of the face statistical model are regarded as one point cloud; the transformation relation between the face dense point cloud and the face statistical model can be obtained by the ICP algorithm, and transforming the face statistical model by this transformation relation yields the target face model corresponding to the target face.
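For concreteness, here is a minimal point-to-point ICP in Python (NumPy plus SciPy's KD-tree), treating the statistical model's key points as one cloud and the scanned dense cloud as the other. The iteration count, convergence tolerance, and rigid (rotation plus translation) model are illustrative choices; production ICP variants add outlier rejection and point-to-plane terms.

```python
import numpy as np
from scipy.spatial import cKDTree

def icp(src, dst, iters=50, tol=1e-6):
    """Estimate R, t such that R @ src_i + t lands near dst.
    src: (K, 3) statistical-model key points; dst: (N, 3) dense cloud."""
    R, t = np.eye(3), np.zeros(3)
    tree = cKDTree(dst)
    cur = src.copy()
    prev_err = np.inf
    for _ in range(iters):
        dist, idx = tree.query(cur)              # closest-point correspondences
        matched = dst[idx]
        mu_s, mu_d = cur.mean(0), matched.mean(0)
        H = (cur - mu_s).T @ (matched - mu_d)    # Kabsch: best rigid fit
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        R_step = Vt.T @ D @ U.T
        t_step = mu_d - R_step @ mu_s
        cur = cur @ R_step.T + t_step
        R, t = R_step @ R, R_step @ t + t_step   # compose with running transform
        err = dist.mean()
        if abs(prev_err - err) < tol:
            break
        prev_err = err
    return R, t
```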
Optionally, the transformation relation includes the offset of at least one key point;
transforming the face statistical model based on the transformation relation to obtain the target face model includes:
offsetting the corresponding key points in the face statistical model in the offset direction based on the offset amount of the at least one key point to obtain the target face model.
The ICP algorithm can obtain at least one transformation between the two point clouds, for example a translation transformation or a rotation transformation. In the embodiments of the present application, the transformation relation is per key point: for each key point, at least one offset between the two point clouds (for example, a translation offset and a rotation offset) can be determined.
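The final move of step 130 then reduces to a per-key-point update: shift each key point of the statistical model along its learned offset direction by the amount the registration produced. The sketch below is a schematic reading of that sentence; the shapes are assumptions.

```python
import numpy as np

def apply_offsets(stat_kps, offset_dirs, offset_amounts):
    """stat_kps, offset_dirs: (K, 3) key points and their unit offset
    directions; offset_amounts: (K,) signed amounts from registration."""
    return stat_kps + offset_amounts[:, None] * offset_dirs
```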
In one or more optional embodiments, before the face statistical model is transformed based on the transformation relation to obtain the target face model, the method further includes:
optimizing the transformation relation according to a set proportion based on the projection coordinates of the feature points, the correspondence error of the face on the dense point cloud, and the face deformation constraint.
Optionally, the transformation relation is optimized using the Levenberg-Marquardt (LM) algorithm, a highly efficient iterative nonlinear optimization algorithm that is applicable in most cases. The individual loss terms are explained below:
1) Projection coordinates of the feature points: the loss between the positions of the 2D key points that a convolutional neural network (CNN) detects on the RGB image and the corresponding 3D key points of the modeled 3D model, computed through the projection equation;
2) Dense point cloud correspondence: a complete face geometric model without correspondences is obtained by a multi-frame fusion technique; correspondences to the standard model are found by closest-point iterative search (for example, ICP), and the 3D distance between the geometric model and the corresponding points of the standard model is computed;
3) Face deformation constraint: determined from the variances of faces over the different deformation coefficients, obtained from the prior statistical model.
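A hedged sketch of the LM refinement over these three weighted residual groups, using SciPy's Levenberg-Marquardt solver. The weights w, the pinhole projection with intrinsics K_cam (points assumed already in camera coordinates), and the linear deformation model (mean key points plus direction-weighted coefficients) are all assumptions made for illustration.

```python
import numpy as np
from scipy.optimize import least_squares

def refine(coeffs0, mean_kps, dirs, kps2d, K_cam, matched3d, sigma,
           w=(1.0, 1.0, 0.1)):
    """coeffs0: (C,) initial deformation coefficients; mean_kps: (K, 3);
    dirs: (C, K, 3) offset directions; kps2d: (K, 2) CNN-detected 2D key
    points; K_cam: (3, 3) camera intrinsics; matched3d: (K, 3) closest
    dense-cloud points; sigma: (C,) prior variances per coefficient."""

    def model(c):
        return mean_kps + np.tensordot(c, dirs, axes=1)     # (K, 3)

    def residuals(c):
        pts = model(c)
        proj = pts @ K_cam.T
        proj = proj[:, :2] / proj[:, 2:3]                   # 1) 2D projection term
        r1 = w[0] * (proj - kps2d).ravel()
        r2 = w[1] * (pts - matched3d).ravel()               # 2) dense 3D term
        r3 = w[2] * (c / np.sqrt(sigma))                    # 3) deformation prior
        return np.concatenate([r1, r2, r3])

    return least_squares(residuals, coeffs0, method="lm").x
```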
Those of ordinary skill in the art will understand that all or part of the steps of the method embodiments may be implemented by program instructions running on relevant hardware. The program may be stored in a computer-readable storage medium and, when executed, performs the steps of the method embodiments; the aforementioned storage media include various media that can store program code, such as ROM, RAM, magnetic disks, or optical disks.
Fig. 2 is a schematic structural diagram of a face modeling apparatus according to an embodiment of the present application. The apparatus of this embodiment may be used to implement the method embodiments described above in this application. As shown in fig. 2, the apparatus of this embodiment includes:
and the statistical model unit 21 is configured to obtain a face statistical model according to the plurality of face models and the standard face model.
Here, the face key points of each of the plurality of face models are known.
The point cloud obtaining unit 22 is configured to obtain the face dense point cloud of the target face.
The target face model obtaining unit 23 is configured to transform the face statistical model based on the face dense point cloud to obtain the target face model corresponding to the target face.
According to the face modeling apparatus provided by the embodiments of the present application, a face statistical model is obtained from a plurality of face models and a standard face model; a face dense point cloud of a target face is obtained; and the face statistical model is transformed based on the face dense point cloud to obtain a target face model corresponding to the target face. Because the face statistical model obtained in this way is more accurate, more accurate face key points can be obtained, which ensures that the resulting target face model has a better effect.
In one or more alternative embodiments, the statistical model unit 21 includes:
the face complementing module is used for complementing incomplete face parts of the plurality of face models based on the standard face model to obtain a plurality of complete face models;
and the face fusion module is used for carrying out model fusion on the obtained complete face model and the standard face model based on the face central line to obtain a face statistical model.
Optionally, a face can in general be regarded as approximately symmetrical. The embodiments of the present application perform symmetry recognition on the plurality of complete face models to obtain at least two symmetric vertices (points that have no mirror counterpart); typically the symmetric vertices include the nose tip and at least one other point on the same central axis as the nose. Since two points determine a straight line, the face center line of a complete face model can be obtained from at least two symmetric vertices. The face center line of the standard face model is known, so the complete face models and the standard face model can be superimposed accurately by making their face center lines coincide. On the basis of the standard face model, the resulting face statistical model can then reflect, for each key point, the offset directions that may occur in different faces relative to the corresponding key point in the standard face model.
Optionally, the face fusion module is specifically configured to obtain at least two symmetric vertices in the complete face model to determine a face center line, and to make the face center line coincide with the face center line of the standard face model to obtain a key point set comprising the face key points of the plurality of complete face models and the face key points of the standard face model; to decompose the key point set by principal component analysis to obtain a main key point set and other key point sets in the key point set; and to determine the face statistical model based on the other key point sets.
Optionally, the face fusion module is further configured to determine an offset variance of the main key points in the face statistical model based on the main key point set; and determining the offset direction of at least one main key point in the face statistical model based on the offset variance of the main key points.
Optionally, the face fusion module is further configured to normalize the offset variance before determining the offset direction of at least one main keypoint in the face statistical model based on the offset variance of the main keypoint.
Optionally, the face complementing module is specifically configured to acquire the extension part of the standard face model and obtain a parameter relationship between the extension part and the key point part based on the relation between the key point part and the extension part in the standard face model; to extend the face model based on the parameter relationship to obtain a corresponding model extension part; and to obtain the complete face model based on the face model and the model extension part.
In one or more alternative embodiments, the target face model obtaining unit 23 includes:
the transformation relation obtaining module is used for obtaining a transformation relation between points in the dense point cloud of the face and key points in the face statistical model by using an iterative closest point algorithm;
and the model transformation module is used for transforming the face statistical model based on the transformation relation to obtain a target face model.
The Iterative Closest Point (ICP) algorithm is a point-set-to-point-set registration method whose goal is to obtain the transformation relation between two point clouds. In the embodiments of the present application, the key points of the face statistical model are regarded as one point cloud; the transformation relation between the face dense point cloud and the face statistical model can be obtained by the ICP algorithm, and transforming the face statistical model by this transformation relation yields the target face model corresponding to the target face.
Optionally, the transformation relation includes the offset of at least one key point;
and the model transformation module is specifically configured to offset the corresponding key points in the face statistical model in the offset direction based on the offset amount of the at least one key point to obtain the target face model.
Optionally, the target face model obtaining unit further includes:
and the transformation relation optimization module is used for optimizing the transformation relation according to a set proportion based on the feature point projection coordinates, the correspondence error of the face on the dense point cloud, and the face deformation constraint.
According to another aspect of the embodiments of the present application, there is provided an electronic device, including a processor, where the processor includes the face modeling apparatus as described in any one of the above.
According to still another aspect of an embodiment of the present application, there is provided an electronic device including: a memory for storing executable instructions;
and a processor, for communicating with the memory to execute the executable instructions to perform the operations of the face modeling method provided by any of the above embodiments.
According to still another aspect of the embodiments of the present application, a computer-readable storage medium is provided for storing computer-readable instructions, which when executed, perform the operations of the face modeling method provided in any one of the above embodiments.
According to a further aspect of the embodiments of the present application, there is provided a computer program product, which includes computer readable code, and when the computer readable code runs on a device, a processor in the device executes instructions for implementing the face modeling method provided in any one of the above embodiments.
It should be understood that the description of the embodiments of the present application emphasizes the differences between the embodiments, and the same or similar parts may be referred to each other, and therefore, for brevity, the description is not repeated.
The embodiments of the present application also provide an electronic device, which may be, for example, a mobile terminal, a personal computer (PC), a tablet computer, or a server. Referring now to fig. 3, there is shown a schematic structural diagram of an electronic device 300 suitable for implementing a terminal device or a server of an embodiment of the present application. As shown in fig. 3, the electronic device 300 includes one or more processors, a communication section, and the like, for example: one or more central processing units (CPUs) 301 and/or one or more graphics processing units (GPUs) 313, which can perform various appropriate actions and processes according to executable instructions stored in a read-only memory (ROM) 302 or loaded from a storage section 308 into a random access memory (RAM) 303. The communication section 312 may include, but is not limited to, a network card, which may include, but is not limited to, an InfiniBand (IB) network card.
The processor may communicate with the read-only memory 302 and/or the random access memory 303 to execute the executable instructions, connect with the communication part 312 through the bus 304, and communicate with other target devices through the communication part 312, so as to complete the operation corresponding to any one of the methods provided by the embodiments of the present application, for example, obtaining a face statistic model according to a plurality of face models and a standard face model; obtaining face dense point cloud of a target face; and transforming the face statistical model based on the face dense point cloud to obtain a target face model corresponding to the target face.
Further, the RAM 303 can also store various programs and data necessary for the operation of the apparatus. The CPU 301, the ROM 302, and the RAM 303 are connected to each other via a bus 304. When RAM 303 is present, ROM 302 is an optional module: at runtime, RAM 303 stores executable instructions, or executable instructions are written into ROM 302, and the instructions cause the central processing unit 301 to perform the operations corresponding to the above-described method. An input/output (I/O) interface 305 is also connected to the bus 304. The communication section 312 may be integrated, or may be provided with a plurality of sub-modules (e.g., a plurality of IB network cards) connected to the bus link.
The following components are connected to the I/O interface 305: an input portion 306 including a keyboard, a mouse, and the like; an output section 307 including a display such as a Cathode Ray Tube (CRT), a Liquid Crystal Display (LCD), and the like, and a speaker; a storage section 308 including a hard disk and the like; and a communication section 309 including a network interface card such as a LAN card, a modem, or the like. The communication section 309 performs communication processing via a network such as the internet. A drive 310 is also connected to the I/O interface 305 as needed. A removable medium 311 such as a magnetic disk, an optical disk, a magneto-optical disk, a semiconductor memory, or the like is mounted on the drive 310 as necessary, so that a computer program read out therefrom is mounted into the storage section 308 as necessary.
It should be noted that the architecture shown in fig. 3 is only an optional implementation manner, and in a specific practical process, the number and types of the components in fig. 3 may be selected, deleted, added or replaced according to actual needs; in different functional component settings, separate settings or integrated settings may also be used, for example, GPU313 and CPU301 may be separately provided or GPU313 may be integrated on CPU301, the communication part may be separately provided or integrated on CPU301 or GPU313, and so on. These alternative embodiments are all within the scope of the present disclosure.
In particular, according to embodiments of the application, the processes described above with reference to the flow diagrams may be implemented as computer software programs. For example, embodiments of the present application include a computer program product comprising a computer program tangibly embodied on a machine-readable medium, the computer program comprising program code for performing the method illustrated in the flowchart, the program code may include instructions corresponding to performing the steps of the method provided by embodiments of the present application, for example, obtaining a face statistical model from a plurality of face models and a standard face model; obtaining face dense point cloud of a target face; and transforming the face statistical model based on the face dense point cloud to obtain a target face model corresponding to the target face. In such an embodiment, the computer program may be downloaded and installed from a network through the communication section 309, and/or installed from the removable medium 311. The operations of the above-described functions defined in the method of the present application are performed when the computer program is executed by a Central Processing Unit (CPU) 301.
The methods and apparatus of the present application may be implemented in a number of ways. For example, the methods and apparatus of the present application may be implemented by software, hardware, firmware, or any combination of software, hardware, and firmware. The above-described order for the steps of the method is for illustration only, and the steps of the method of the present application are not limited to the order specifically described above unless specifically stated otherwise. Further, in some embodiments, the present application may also be embodied as a program recorded in a recording medium, the program including machine-readable instructions for implementing a method according to the present application. Thus, the present application also covers a recording medium storing a program for executing the method according to the present application.
The description of the present application has been presented for purposes of illustration and description, and is not intended to be exhaustive or limited to the application in the form disclosed. Many modifications and variations will be apparent to practitioners skilled in this art. The embodiment was chosen and described in order to best explain the principles of the application and the practical application, and to enable others of ordinary skill in the art to understand the application for various embodiments with various modifications as are suited to the particular use contemplated.

Claims (22)

1. A face modeling method, comprising:
obtaining a face statistical model according to a superposition result of a plurality of face models and a known standard face model, wherein the face key points of the face models in the plurality of face models are known;
obtaining face dense point cloud of a target face;
transforming the face statistical model based on the offset between the key points of the face statistical model relative to the corresponding key points of the dense point cloud of the face by using an iterative closest point algorithm to obtain a target face model corresponding to the target face;
wherein the obtaining of the face statistical model according to the superposition result of the plurality of face models and the known standard face model comprises:
for a face model of the plurality of face models, in response to the face model being an incomplete face model, completing, based on the standard face model, the incomplete face part of the incomplete face model relative to the standard face model to obtain a plurality of complete face models;
performing model fusion on the obtained multiple complete face models and the standard face model based on a face center line to obtain the face statistical model, wherein the obtained complete face models comprise: the complete face models in the plurality of face models and the complete face models obtained by completing the incomplete face parts of the incomplete face models.
2. The method of claim 1, wherein the performing model fusion on the obtained multiple complete face models and the standard face model based on the face center line to obtain the face statistical model comprises:
obtaining at least two symmetric vertices in the obtained multiple complete face models to determine the face center line, and making the face center line coincide with the face center line of the standard face model to obtain a key point set comprising the face key points of the multiple complete face models and the face key points of the standard face model;
decomposing the key point set by a principal component analysis method to obtain a main key point set and other key point sets in the key point set;
for each other key point in the other key point sets, determining the key point corresponding to that other key point from the face key points of the standard face model, and obtaining the face statistical model based on the offset direction between the other key point and its corresponding key point; wherein the offset direction is the direction in which each key point in different faces is offset relative to the corresponding key point in the standard face model.
3. The method of claim 2, further comprising:
determining the offset variance of a main key point in the face statistical model based on the main key point set, wherein the offset variance of the main key point is the offset variance of the main key point relative to a standard face model;
determining the offset direction of at least one main key point in the face statistical model based on the offset variance of the main key point.
4. The method of claim 3, wherein before the offset direction of at least one main key point in the face statistical model is determined based on the offset variance of the main key point, the method further comprises:
normalizing the offset variance.
5. The method according to any one of claims 3-4, wherein said completing the incomplete face part of the incomplete face model based on the standard face model to obtain a complete face model comprises:
acquiring an extension part of the standard face model, and obtaining a parameter relationship between the extension part and a key point part based on a relationship between the key point part and the extension part in the standard face model, wherein the parameter relationship is a relationship between the extension part and the key point part characterized by parameters, and the extension part of the standard face model is a part which is included in the standard face model and is not included in a face model in the plurality of face models;
extending a face model of the plurality of face models based on the parameter relationship to obtain a corresponding model extension part;
the complete face model is obtained based on the incomplete face model and the model extension.
6. The method of claim 3 or 4, wherein the offset comprises a translational offset and a rotational offset.
7. The method of claim 5, wherein the keypoint offset comprises a translational offset and a rotational offset of a keypoint.
8. The method of claim 1, wherein before transforming the face statistical model based on the offset between the key points of the face statistical model relative to the corresponding key points of the dense point cloud of the face by using the iterative closest point algorithm to obtain the target face model corresponding to the target face, the method further comprises:
and optimizing the offset between the key points of the face statistical model relative to the corresponding key points of the face dense point cloud according to a set proportion based on the projection coordinates of the feature points, the corresponding error of the face on the dense point cloud and the face deformation constraint.
9. The method according to claim 3 or 4, wherein before transforming the face statistical model based on the offset between the key points of the face statistical model relative to the corresponding key points of the dense point cloud of the face by using the iterative closest point algorithm to obtain the target face model corresponding to the target face, the method further comprises:
and optimizing the offset between the key points of the face statistical model relative to the corresponding key points of the face dense point cloud according to a set proportion based on the projection coordinates of the feature points, the corresponding error of the face on the dense point cloud and the face deformation constraint.
10. The method according to any one of claims 6 to 7, wherein before transforming the face statistical model based on the offset between the key points of the face statistical model relative to the corresponding key points of the dense point cloud of the face by using the iterative closest point algorithm to obtain the target face model corresponding to the target face, the method further comprises:
and optimizing the offset between the key points of the face statistical model relative to the corresponding key points of the face dense point cloud according to a set proportion based on the projection coordinates of the feature points, the corresponding error of the face on the dense point cloud and the face deformation constraint.
11. A face modeling apparatus, comprising:
the statistical model unit is used for obtaining a face statistical model according to a superposition result of a plurality of face models and a known standard face model, wherein the face key points of the face models in the plurality of face models are known;
the point cloud obtaining unit is used for obtaining face dense point cloud of the target face;
a target face model obtaining unit, configured to transform the face statistical model based on an offset between key points of the face statistical model and corresponding key points of the dense point cloud of the face by using an iterative closest point algorithm, so as to obtain a target face model corresponding to the target face;
wherein the statistical model unit includes:
a face complementing module, configured to, for a face model of the plurality of face models, in response to the face model being an incomplete face model, complete, based on the standard face model, the incomplete face part of the incomplete face model relative to the standard face model to obtain a complete face model;
a face fusion module, configured to perform model fusion on the obtained multiple complete face models and the standard face model based on a face center line to obtain a face statistical model, where the obtained complete face model includes: the complete face model in the plurality of face models and the complete face model obtained by filling the incomplete face part of the incomplete face model.
12. The apparatus according to claim 11, wherein the face fusion module is specifically configured to: obtain at least two symmetric vertices in the obtained multiple complete face models to determine the face center line, and make the face center line coincide with the face center line of the standard face model to obtain a key point set comprising the face key points of the multiple complete face models and the face key points of the standard face model; decompose the key point set by principal component analysis to obtain a main key point set and other key point sets in the key point set; and, for each other key point in the other key point sets, determine the key point corresponding to that other key point from the face key points of the standard face model, and obtain the face statistical model based on the offset direction between the other key point and its corresponding key point; wherein the offset direction is the direction in which each key point in different faces is offset relative to the corresponding key point in the standard face model.
13. The apparatus of claim 12, wherein the face fusion module is further configured to determine the offset variance of the main key points in the face statistical model based on the main key point set; and to determine the offset direction of at least one main key point in the face statistical model based on the offset variance of the main key point, wherein the offset variance of the main key point is the offset variance of the main key point relative to the standard face model.
14. The apparatus of claim 13, wherein the face fusion module is further configured to normalize the offset variance before the offset direction of at least one main key point in the face statistical model is determined based on the offset variance of the main key point.
15. The apparatus according to any of the claims 13-14, wherein the face complementing module is specifically configured to obtain a parametric relationship between an extension portion and a keypoint portion based on a relationship between the keypoint portion and the extension portion in the standard face model by obtaining the extension portion of the standard face model, wherein the parametric relationship is a relationship between the extension portion and the keypoint portion characterized by a parameter, and the extension portion of the standard face model is a portion included in the standard face model and not included in a face model in the plurality of face models; extending a face model of the plurality of face models based on the parameter relationship to obtain a corresponding model extension part; the complete face model is obtained based on the incomplete face model and the model extension.
16. The apparatus of any one of claims 12-13, wherein the offset comprises a translational offset and a rotational offset.
17. The apparatus of claim 11, wherein the offset comprises a translational offset and a rotational offset.
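Claims 16-17 split the offset into translational and rotational parts. A standard way to recover such a split between two corresponding key-point sets is the Kabsch algorithm, sketched below; the function name and array shapes are assumptions.

```python
import numpy as np

def translational_rotational_offset(src, dst):
    # src, dst: (n, 3) corresponding key points.
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    H = (src - mu_s).T @ (dst - mu_d)
    U, _, Vt = np.linalg.svd(H)
    # Guard against reflections so R is a proper rotation.
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T   # rotational offset
    t = mu_d - R @ mu_s                        # translational offset
    return R, t
```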
18. The apparatus according to claim 11, wherein the target face model obtaining unit further comprises:
a transformation relation optimization module, configured to optimize, in a set proportion, the offsets of the key points of the face statistical model relative to the corresponding key points of the face dense point cloud, based on feature point projection coordinates, the correspondence error of the face on the dense point cloud, and a face deformation constraint.
19. The apparatus according to any one of claims 13-14, wherein the target face model obtaining unit further comprises:
a transformation relation optimization module, configured to optimize, in a set proportion, the offsets of the key points of the face statistical model relative to the corresponding key points of the face dense point cloud, based on feature point projection coordinates, the correspondence error of the face on the dense point cloud, and a face deformation constraint.
20. The apparatus according to claim 11, wherein the target face model obtaining unit further comprises:
a transformation relation optimization module, configured to optimize, in a set proportion, the offsets of the key points of the face statistical model relative to the corresponding key points of the face dense point cloud, based on feature point projection coordinates, the correspondence error of the face on the dense point cloud, and a face deformation constraint.
21. An electronic device, comprising: a memory for storing executable instructions;
and a processor in communication with the memory, configured to execute the executable instructions to perform the operations of the face modeling method of any one of claims 1 to 10.
22. A computer-readable storage medium storing computer-readable instructions that, when executed, perform the operations of the face modeling method of any one of claims 1 to 10.
CN201811446394.8A 2018-11-29 2018-11-29 Face modeling method and device, electronic equipment, storage medium and product Active CN109376698B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811446394.8A CN109376698B (en) 2018-11-29 2018-11-29 Face modeling method and device, electronic equipment, storage medium and product

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811446394.8A CN109376698B (en) 2018-11-29 2018-11-29 Face modeling method and device, electronic equipment, storage medium and product

Publications (2)

Publication Number Publication Date
CN109376698A CN109376698A (en) 2019-02-22
CN109376698B true CN109376698B (en) 2022-02-01

Family

ID=65374755

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811446394.8A Active CN109376698B (en) 2018-11-29 2018-11-29 Face modeling method and device, electronic equipment, storage medium and product

Country Status (1)

Country Link
CN (1) CN109376698B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110059660A (en) * 2019-04-26 2019-07-26 北京迈格威科技有限公司 Mobile terminal platform 3D face registration method and device
CN110826501B (en) * 2019-11-08 2022-04-05 杭州小影创新科技股份有限公司 Face key point detection method and system based on sparse key point calibration
CN112419144B (en) * 2020-11-25 2024-05-24 上海商汤智能科技有限公司 Face image processing method and device, electronic equipment and storage medium
CN112396693A (en) * 2020-11-25 2021-02-23 上海商汤智能科技有限公司 Face information processing method and device, electronic equipment and storage medium
CN112396692B (en) * 2020-11-25 2023-11-28 北京市商汤科技开发有限公司 Face reconstruction method, device, computer equipment and storage medium

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103116902A (en) * 2011-11-16 2013-05-22 华为软件技术有限公司 Three-dimensional virtual human head image generation method, and method and device of human head image motion tracking
CN106599836A (en) * 2016-12-13 2017-04-26 北京智慧眼科技股份有限公司 Multi-face tracking method and tracking system

Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI226589B (en) * 2003-04-28 2005-01-11 Ind Tech Res Inst Statistical facial feature extraction method
JP2007257324A (en) * 2006-03-23 2007-10-04 Space Vision:Kk Face model creating system
CN101556701A (en) * 2009-05-15 2009-10-14 陕西盛世辉煌智能科技有限公司 Human face image age changing method based on average face and aging scale map
JP5463866B2 (en) * 2009-11-16 2014-04-09 ソニー株式会社 Image processing apparatus, image processing method, and program
FR2998402B1 (en) * 2012-11-20 2014-11-14 Morpho METHOD FOR GENERATING A FACE MODEL IN THREE DIMENSIONS
CN104036546B (en) * 2014-06-30 2017-01-11 清华大学 Method for carrying out face three-dimensional reconstruction at any viewing angle on basis of self-adaptive deformable model
KR101997500B1 (en) * 2014-11-25 2019-07-08 삼성전자주식회사 Method and apparatus for generating personalized 3d face model
CN106780591B (en) * 2016-11-21 2019-10-25 北京师范大学 A kind of craniofacial shape analysis and Facial restoration method based on the dense corresponding points cloud in cranium face
CN107146199B (en) * 2017-05-02 2020-01-17 厦门美图之家科技有限公司 Fusion method and device of face images and computing equipment
CN107506717B (en) * 2017-08-17 2020-11-27 南京东方网信网络科技有限公司 Face recognition method based on depth transformation learning in unconstrained scene

Also Published As

Publication number Publication date
CN109376698A (en) 2019-02-22

Similar Documents

Publication Publication Date Title
CN109376698B (en) Face modeling method and device, electronic equipment, storage medium and product
US8405659B2 (en) System and method for establishing correspondence, matching and repairing three dimensional surfaces of arbitrary genus and arbitrary topology in two dimensions using global parameterization
US9117267B2 (en) Systems and methods for marking images for three-dimensional image generation
WO2020042975A1 (en) Face pose estimation/three-dimensional face reconstruction method and apparatus, and electronic device
CN111768477B (en) Three-dimensional facial expression base establishment method and device, storage medium and electronic equipment
JP2018195309A (en) Training method and training device for image processing device for face recognition
US11568601B2 (en) Real-time hand modeling and tracking using convolution models
US11615587B2 (en) Object reconstruction with texture parsing
EP3991140A1 (en) Portrait editing and synthesis
US11049288B2 (en) Cross-device supervisory computer vision system
CN110910433A (en) Point cloud matching method based on deep learning
WO2020240497A1 (en) System and method of generating a 3d representation of an object
WO2018039936A1 (en) Fast uv atlas generation and texture mapping
CN115330980A (en) Expression migration method and device, electronic equipment and storage medium
Barath et al. Relative pose from sift features
Yin et al. [Retracted] Virtual Reconstruction Method of Regional 3D Image Based on Visual Transmission Effect
He et al. Data-driven 3D human head reconstruction
EP4086853A2 (en) Method and apparatus for generating object model, electronic device and storage medium
US20220198707A1 (en) Method and apparatus with object pose estimation
Ma et al. A lighting robust fitting approach of 3D morphable model for face reconstruction
CN114332603A (en) Appearance processing method and device for dialogue module and electronic equipment
Huang et al. A surface approximation method for image and video correspondences
Benseddik et al. Direct method for rotation estimation from spherical images using 3D mesh surfaces with SPHARM representation
Maronidis et al. Designing and evaluating an expert system for restoring damaged byzantine icons
US20240153046A1 (en) Multi-view segmentation and perceptual inpainting with neural radiance fields

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant