CN107945267B - Method and equipment for fusing textures of three-dimensional model of human face - Google Patents

Method and equipment for fusing textures of three-dimensional model of human face

Info

Publication number: CN107945267B
Authority: CN (China)
Prior art keywords: texture, face, dimensional, camera, triangular
Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis)
Application number: CN201711328399.6A
Other languages: Chinese (zh)
Other versions: CN107945267A
Inventors: 傅可人 (Fu Keren), 荆海龙 (Jing Hailong), 熊伟 (Xiong Wei)
Current and original assignee: Wisesoft Co Ltd (the listed assignees may be inaccurate; Google has not performed a legal analysis)
Application filed by Wisesoft Co Ltd
Priority to CN201711328399.6A
Publication of CN107945267A
Application granted
Publication of CN107945267B

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 17/00: Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T 15/00: 3D [Three Dimensional] image rendering
    • G06T 15/04: Texture mapping
    • G06T 3/00: Geometric image transformations in the plane of the image
    • G06T 3/40: Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T 3/4038: Image mosaicing, e.g. composing plane images from plane sub-images
    • G06T 5/00: Image enhancement or restoration
    • G06T 5/50: Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G06T 7/00: Image analysis
    • G06T 7/90: Determination of colour characteristics

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Processing (AREA)
  • Image Generation (AREA)

Abstract

The invention discloses a method and equipment for fusing the textures of a three-dimensional model of a human face, which make the fused texture transition more naturally and avoid the uneven, unnatural transitions that easily appear on both sides of the nose wings during texture mapping. The method comprises the following steps: inputting three-dimensional model data of a human face; inputting a plurality of face texture images at different viewing angles; performing color correction on the face texture images; calculating the visibility of each triangular patch in each texture camera; calculating the texture weight of each triangular patch relative to each texture camera; correcting the texture weights, relative to the front face texture camera, of the triangular patches in a predetermined area on the human face three-dimensional model; smoothing and normalizing the texture weights relative to each texture camera; and performing texture fusion according to the texture weight of each triangular patch relative to each texture camera to obtain the three-dimensional texture of the face.

Description

Method and equipment for fusing textures of three-dimensional model of human face
Technical Field
The invention relates to the technical field of computer vision, in particular to a method and equipment for fusing textures of a human face three-dimensional model.
Background
Three-dimensional modeling has important research significance and application value in industrial design, artistic design, architectural design, three-dimensional measurement, and other areas. Three-dimensional modeling of the human face is widely applied in the biometrics field, including building three-dimensional face databases and three-dimensional face recognition. In general, three-dimensional modeling of an object comprises two parts: modeling the surface three-dimensional model, and texture mapping. The former has gained widespread attention in the field of computer vision, while the latter, texture mapping, is an important direction that researchers have relatively overlooked. Texture mapping refers to computing a high-quality texture image and mapping it onto the surface of the three-dimensional model. Texture fusion and mapping under multiple viewing angles are important for three-dimensional modeling and rendering, and the quality of the result directly influences the realism of the three-dimensional model.
The essence of the texture fusion problem is how to stitch texture fragments together: when the three-dimensional model and the texture camera parameters are known, there is a one-to-one correspondence between points on the model's three-dimensional surface and coordinates on each texture camera's texture image; in other words, the texture fragments on a texture image can be reverse-mapped onto the three-dimensional model. With multiple texture cameras, texture fragments from several cameras are mapped onto the same three-dimensional surface, producing overlapping information, and the fragments are likely to differ in illumination, shadow, reflection, and other characteristics, so they need to be fused.
Existing methods and patents for texture fusion and mapping are reviewed below. Lempitsky et al. proposed "Seamless Mosaicing of Image-Based Texture Maps" in 2007, performing texture fusion with an "optimal mosaic" approach: the object surface is divided into different regions by image segmentation, and the texture seams caused by the optimal mosaic are eliminated with a Markov random field (MRF), so that the surface color of the model is smooth. However, geometric shapes with complex topology generally make it difficult to obtain ideal smooth fusion parameters, so a few thin seams remain on the surface of the textured model. Gal et al. proposed "Seamless Montage for Texturing Models" in 2010, using an MRF formulation that accounts for inaccurate texture-triangle mappings. Although it tolerates deviations in texture-fragment mapping, the MRF optimization is time-consuming, and the method offers little benefit when the texture cameras' texture mapping is accurate. In addition, neither of these existing methods is designed for face texture fusion under uneven ambient illumination.
The Chinese patent application with application number 201511025408.5 discloses a "high fidelity full face texture fusion method in a three-dimensional full face texture camera". It may achieve high-fidelity texture fusion under ideal conditions, but the texture weights it computes are not suited to surface occlusion and similar situations in real natural environments, and it always uses linear weighting when fusing the textures of all regions. When the texture cameras are not illuminated uniformly (for example, when the face is bright and the background is dark under a flash lamp), artificial shadows are easily introduced on the sides of the face, leading to uneven transitions and unnatural fusion results.
Disclosure of Invention
At least one objective of the present invention is to overcome the above problems in the prior art and provide a method and an apparatus for texture fusion of a three-dimensional model of a human face which make the fused texture transition more naturally and avoid the uneven, unnatural transitions that easily appear on both sides of the nose wings during texture mapping.
In order to achieve the above object, the present invention adopts the following aspects.
A method for texture fusion of a three-dimensional model of a human face, comprising:
inputting human face three-dimensional model data, and acquiring three-dimensional point cloud index data of each triangular patch on the model surface; inputting a plurality of face texture images at different viewing angles, and acquiring mapping data of the human face three-dimensional model on the face texture images according to the corresponding texture camera parameters; performing color correction on the face texture images; calculating the visibility of each triangular patch in each texture camera; calculating the texture weight of each triangular patch relative to each texture camera; correcting the texture weights, relative to the front face texture camera, of the triangular patches in a predetermined area on the human face three-dimensional model; smoothing and normalizing the texture weights relative to each texture camera; and performing texture fusion according to the texture weight of each triangular patch relative to each texture camera to obtain the three-dimensional texture of the face.
Preferably, the three-dimensional model data of the human face comprises three-dimensional point cloud data in the three-dimensional model of the human face; the method further comprises the step of reconstructing the three-dimensional point cloud according to the three-dimensional coordinates of three vertexes of each triangular patch on the surface of the human face three-dimensional model, and obtaining the three-dimensional point cloud index data of each triangular patch.
Preferably, the texture camera parameters include three-dimensional coordinates of the optical center of each texture camera relative to the three-dimensional model of the human face.
Preferably, the color correction includes performing white balance and brightness normalization on each of the face texture images, so that the color differences between the texture images at the plurality of different viewing angles are smaller than a preset threshold and their brightness is consistent.
Preferably, determining the texture weight includes: traversing each triangular patch, calculating the normal vector of the triangular patch, and calculating the included angle θ_j^i between the normal vector and the line connecting the center of the triangular patch to the optical center of each texture camera, where the subscript j represents the jth triangular patch, the superscript i represents the corresponding texture camera, and the number of texture cameras is greater than or equal to three; and, for the jth triangular patch, calculating the texture weight w_j^i corresponding to each texture camera i from the included angle θ_j^i and the visibility v_j^i as

w_j^i = v_j^i · cos θ_j^i
Preferably, the predetermined area is the region whose distance on either side of the plane determined by the median line of the human face three-dimensional model and the optical axis of the front face texture camera is less than or equal to r, with r = 50 mm;

the correction includes: setting the texture weight of each triangular patch in the predetermined area relative to the front face texture camera to 1, and setting its weights relative to the other texture cameras to 0, wherein the subscript j represents the serial number of a triangular patch located in the predetermined area.
Preferably, the smoothing process includes: traversing each triangular patch, determining the set M of triangular patches neighboring it in the three-dimensional face model, and updating the texture weight according to

w_j^i ← (1/|M|) · Σ_{m∈M} w_m^i

wherein |M| is the number of neighboring triangular patches;

the normalization process includes: traversing each triangular patch and normalizing the texture weights according to

w_j^i ← w_j^i / Σ_k w_j^k

so that the sum of the weights of the texture cameras corresponding to each triangular patch is 1.
Preferably, the texture fusion comprises: traversing each triangular patch, performing an affine transformation on the triangular patch under the viewing angle of each texture camera, and determining the texture triangle T_j^i from the face texture image acquired by the corresponding texture camera; performing a weighted summation of the texture triangles according to

T_j = Σ_i w_j^i · T_j^i

to obtain the fused texture triangle T_j; and mapping the fused texture triangle T_j onto the corresponding triangular patch of the human face three-dimensional model to obtain the human face three-dimensional texture.
Preferably, the calculating the visibility of each triangular patch in each texture camera comprises: constructing a two-dimensional recording matrix and initializing the value of each element to infinity; calculating a two-dimensional projection triangle of each triangular patch in each texture camera; calculating the distance from the central point of each triangular patch to the optical center of each texture camera; updating the values of the elements of the two-dimensional recording matrix according to the distance from the center of each triangular patch to the optical center of each texture camera and the two-dimensional projection triangle; and determining the visibility of each triangular patch in each texture camera according to the values of the two-dimensional recording matrix elements.
An apparatus for texture fusion of a three-dimensional model of a human face, comprising at least one processor and a memory communicatively coupled to the at least one processor, wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform any of the methods described above.
In summary, due to the adoption of the technical scheme, the invention at least has the following beneficial effects:
Through robust image preprocessing, weight calculation, and smoothing, color correction is performed on the multi-view texture camera images, and the visibility of the triangular patches on the three-dimensional model surface is taken into account so that each patch is mapped from the texture of the best texture camera, making the fused texture more natural. The middle region of the face, including the nose, receives special treatment in which the texture image of the front face texture camera is used directly for mapping, avoiding the uneven and unnatural transitions that easily appear on both sides of the nose wings during texture mapping.
Drawings
Fig. 1 is a flow chart of a method for texture fusion of a three-dimensional model of a human face according to an embodiment of the invention.
Fig. 2 is a schematic diagram of a three-dimensional model of a human face according to an embodiment of the invention.
FIG. 3 is a schematic diagram of a multi-view texture camera and corresponding texture images according to an embodiment of the invention.
Fig. 4 is a schematic diagram of an angle between a normal vector of a triangular patch and a line connecting the center of the triangular patch to the optical center of the texture camera according to an embodiment of the present invention.
Fig. 5 is a schematic diagram of a predetermined region on a three-dimensional model of a human face according to an embodiment of the invention.
FIG. 6 is a diagram illustrating an example of a fused three-dimensional texture of a human face according to an embodiment of the present invention.
Fig. 7 is a flow chart illustrating the calculation of the visibility of each triangular patch in each texture camera according to an embodiment of the present invention.
Fig. 8 is a schematic structural diagram of an apparatus for texture fusion of a three-dimensional model of a human face according to an embodiment of the invention.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings and embodiments, so that the objects, technical solutions and advantages of the present invention will be more clearly understood. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
The method provided by the embodiment of the invention performs realistic and natural three-dimensional face texture mapping after a three-dimensional face model has been obtained by an image-based three-dimensional reconstruction method (including binocular vision, depth measurement, and the like). The method applies color correction to the multi-view texture camera images and determines the texture weights needed for fusion by combining the visibility of each triangular patch on the three-dimensional model surface with the included angle between the patch's normal vector and the line from the patch to each texture camera's optical center. The texture weights of the middle region relative to the front face texture camera are then corrected, and the weights are smoothed in three-dimensional space to eliminate the transition traces of texture stitching.
Fig. 1 is a flow chart of a method for texture fusion of a three-dimensional model of a human face according to an embodiment of the invention. The method comprises the following steps:
step 101: inputting human face three-dimensional model data, and acquiring three-dimensional point cloud index data of each triangular patch on the surface of the model
Wherein, the human face three-dimensional model (as shown in fig. 2) can be obtained by reading the human face three-dimensional model stored in advance, or be established by separately executing the human face three-dimensional surface modeling process. The input human face three-dimensional model data comprises three-dimensional point cloud data in the human face three-dimensional model, such as three-dimensional coordinate values of each point, the total number of points in the point cloud and the like. And further reconstructing the three-dimensional point cloud according to the three-dimensional coordinates of the three vertexes of each triangular patch on the surface of the human face three-dimensional model to obtain the three-dimensional point cloud index data of each triangular patch.
Step 102: inputting a plurality of face texture images under different visual angles, and acquiring mapping data of a face three-dimensional model on the face texture images according to corresponding texture camera parameters
For example, the face texture images may be denoted I^0, I^{+1}, I^{-1}, ..., captured by a corresponding plurality of texture cameras, where the superscript "0" denotes the texture camera directly facing the face, "+1" denotes the first texture camera to the right of the front face texture camera, "-1" the first texture camera to its left, and "+2", "-2", and so on denote further face texture images and their corresponding texture cameras. Fig. 3 is a schematic diagram of three texture cameras acquiring three face texture images from different viewing angles, and the following describes an embodiment of the present invention in detail on this basis. The texture camera parameters include the three-dimensional coordinates of each texture camera's optical center O^{-1}, O^0, O^{+1} relative to the human face three-dimensional model. From the texture camera parameters and the three-dimensional point cloud, the two-dimensional coordinates to which each three-dimensional point on the model maps in each texture image can be obtained, and the texel value at those coordinates can then be read from the face texture images I^0, I^{+1}, I^{-1}. The step of obtaining the texel values may, however, also be performed after the following steps.
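As an illustration of this mapping step, the following sketch projects the model's three-dimensional points into one texture camera with a standard pinhole model; the intrinsic matrix K and the extrinsics R, t stand in for the texture camera parameters, and all names are illustrative rather than taken from the patent:

```python
import numpy as np

def project_points(points, K, R, t):
    """Project three-dimensional model points into a texture camera's image.

    points : (N, 3) point cloud of the face model (world coordinates)
    K      : (3, 3) camera intrinsic matrix (hypothetical calibration output)
    R, t   : (3, 3) rotation and (3,) translation mapping world -> camera
    Returns (N, 2) pixel coordinates of the mapped points.
    """
    cam = points @ R.T + t            # world -> camera coordinates
    uv = cam @ K.T                    # apply the intrinsics
    return uv[:, :2] / uv[:, 2:3]     # perspective divide
```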
Step 103: color correction of face texture images
Specifically, each of the face texture images I^0, I^{+1}, I^{-1} may be subjected to white balance and brightness normalization processing, so that the color difference between the texture images at the plurality of different viewing angles is smaller than a preset threshold (for example, 0.041) and their brightness is consistent (for example, the difference in average brightness is smaller than 2).
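The patent does not fix a particular white-balance or normalization algorithm; the sketch below assumes a simple gray-world white balance followed by a global brightness gain, purely as one plausible realization:

```python
import numpy as np

def color_correct(img, target_mean=128.0):
    """Gray-world white balance followed by global brightness normalization.

    img : (H, W, 3) image as a float array in [0, 255]
    """
    img = img.astype(np.float64)
    means = img.reshape(-1, 3).mean(axis=0)     # per-channel means
    img = img * (means.mean() / means)          # gray-world: equalize channel means
    img = img * (target_mean / img.mean())      # pull average brightness to target
    return np.clip(img, 0.0, 255.0)
```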
Step 104: calculating visibility of each triangular patch in respective texture cameras
The visibility v_j^i of each triangular patch in each texture camera can be obtained by calculating the distance from the center of each triangular patch to the optical center of each texture camera together with the two-dimensional projection triangle of each triangular patch in each texture camera, where the subscript j denotes the jth triangular patch and the superscript i denotes the corresponding texture camera number (i ∈ {0, -1, +1, -2, +2, ...}). The visibility is assigned a binary value: v_j^i = 1 indicates that the patch is visible in that texture camera, and v_j^i = 0 indicates that it is invisible. The steps for calculating the visibility v_j^i are explained in detail in the embodiment of fig. 7 below.
Step 105: calculating texture weight of each triangular face relative to each texture camera
The texture weight is determined from the included angle between the normal vector of the triangular patch and the line connecting the center of the triangular patch to the optical center of the texture camera, together with the corresponding visibility. Specifically, each triangular patch is traversed, its normal vector is calculated, and the included angle θ_j^i between the normal vector and the line from the patch center to the optical center of each texture camera is calculated (as shown in fig. 4), where the subscript j denotes the jth triangular patch and the superscript i denotes the corresponding texture camera (i ∈ {0, -1, +1, -2, +2, ...}). Then, for the jth triangular patch, the texture weight w_j^i corresponding to each texture camera i is calculated from the included angle θ_j^i and the visibility v_j^i:

w_j^i = v_j^i · cos θ_j^i
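A NumPy sketch of this weight computation, assuming the visibility-masked cosine weight reconstructed above (negative cosines, i.e. back-facing patches, are clamped to zero; variable names are illustrative):

```python
import numpy as np

def texture_weights(tri_vertices, cam_centers, visibility):
    """Texture weight w_j^i = v_j^i * cos(theta_j^i) for every patch/camera pair.

    tri_vertices : (J, 3, 3) the three 3-D vertices of each triangular patch
    cam_centers  : (I, 3) optical centers of the texture cameras
    visibility   : (J, I) binary visibility flags v_j^i
    """
    centers = tri_vertices.mean(axis=1)                        # (J, 3) patch centers
    normals = np.cross(tri_vertices[:, 1] - tri_vertices[:, 0],
                       tri_vertices[:, 2] - tri_vertices[:, 0])
    normals /= np.linalg.norm(normals, axis=1, keepdims=True)  # unit normal per patch
    to_cam = cam_centers[None, :, :] - centers[:, None, :]     # (J, I, 3) patch -> camera
    to_cam /= np.linalg.norm(to_cam, axis=2, keepdims=True)
    cos_theta = np.einsum('jk,jik->ji', normals, to_cam)       # cos of the included angle
    return visibility * np.clip(cos_theta, 0.0, None)          # clamp back-facing patches
```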
Step 106: correcting texture weight of a triangular face in a preset area on a human face three-dimensional model relative to a front face texture camera
The predetermined area is the region of the human face three-dimensional model whose distance to either side of the plane determined by the median line of the model and the optical axis of the front face texture camera is less than or equal to r (e.g., r = 50 mm); fig. 5 shows a schematic diagram of this region. The correction specifically comprises: setting the texture weight of each triangular patch in the region relative to the front face texture camera to 1, i.e. w_j^0 = 1, and setting its weights relative to the other texture cameras to 0, i.e. w_j^i = 0 for i ≠ 0, where the subscript j denotes the serial number of a triangular patch located in the predetermined area. In particular, for a three-dimensional coordinate system that has been corrected by rotation, as shown in fig. 5, in which the z-axis is the direction of the optical axis of the texture camera, the x-axis is the horizontal direction, and the y-axis is orthogonal to both the z-axis and the x-axis, the median line of the human face three-dimensional model along the optical-axis direction can be obtained as the average x-coordinate of the model's three-dimensional point cloud. Through this step, the middle part of the three-dimensional face texture (the region where the nose is located) is taken entirely from the front face texture camera.
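A sketch of this correction step, assuming the rotation-corrected coordinate frame described above (x horizontal, z along the frontal camera's optical axis), that camera index 0 is the front face texture camera, and the r = 50 mm default from the text:

```python
import numpy as np

def correct_frontal_weights(weights, tri_vertices, r=50.0):
    """Force the front camera's weight to 1 inside the strip around the median line.

    weights      : (J, I) texture weights; column 0 is the front face texture camera
    tri_vertices : (J, 3, 3) patch vertices in the rotation-corrected frame (mm)
    r            : half-width of the strip on either side of the median plane
    """
    centers_x = tri_vertices.mean(axis=1)[:, 0]            # x coordinate of patch centers
    median_x = tri_vertices.reshape(-1, 3)[:, 0].mean()    # median line as mean x of cloud
    in_strip = np.abs(centers_x - median_x) <= r
    weights[in_strip] = 0.0                                # zero all cameras in the strip...
    weights[in_strip, 0] = 1.0                             # ...then give the front camera 1
    return weights
```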
Step 107: smoothing and normalizing the texture weight value relative to each texture camera
The smoothing process includes: traversing each triangular patch, determining the set M of triangular patches neighboring it in the three-dimensional face model, and updating the texture weight according to

w_j^i ← (1/|M|) · Σ_{m∈M} w_m^i

For example, a set M of 500 neighboring triangular patches on the three-dimensional model may be found for a given patch, so that the cardinality |M| in this embodiment is 500, and the texture weight is smoothed according to the foregoing formula.

The normalization process includes: traversing each triangular patch and normalizing the texture weights according to

w_j^i ← w_j^i / Σ_k w_j^k

so that the sum of the weights of the texture cameras corresponding to each triangular patch is 1. Smoothing and normalizing the texture weights in this way effectively eliminates texture-stitching seams.
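A sketch of one smoothing pass plus normalization, assuming patch adjacency is supplied as precomputed neighbor-index lists (the text does not specify how the neighbor set M is gathered):

```python
import numpy as np

def smooth_and_normalize(weights, neighbors):
    """One smoothing pass over neighbor sets M, then per-patch normalization.

    weights   : (J, I) texture weights
    neighbors : list of J index arrays; neighbors[j] holds the patches in M for patch j
    """
    smoothed = np.empty_like(weights)
    for j, nbrs in enumerate(neighbors):
        smoothed[j] = weights[nbrs].mean(axis=0)   # w_j^i <- (1/|M|) * sum over M
    totals = smoothed.sum(axis=1, keepdims=True)
    totals[totals == 0.0] = 1.0                    # guard patches invisible everywhere
    return smoothed / totals                       # each patch's weights now sum to 1
```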
Step 108: performing texture fusion according to the texture weight of each triangular face relative to each texture camera, and acquiring the three-dimensional texture of the face
Specifically, texture fusion includes: traversing each triangular patch, performing an affine transformation on the triangular patch under the viewing angle of each texture camera, and determining the texture triangle T_j^i from the face texture image acquired by the corresponding texture camera; performing a weighted summation of the texture triangles according to

T_j = Σ_i w_j^i · T_j^i

to obtain the fused texture triangle T_j; and mapping the fused texture triangle T_j onto the corresponding triangular patch of the human face three-dimensional model to obtain the human face three-dimensional texture, with a result as shown in fig. 6.
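A sketch of the per-patch fusion using OpenCV, which warps each camera's texture triangle into a common patch-local frame before the weighted sum; the common frame and its resolution are illustrative choices not fixed by the text:

```python
import cv2
import numpy as np

def fuse_patch_texture(images, tri_uv, weights, size=32):
    """Blend one patch's texture triangles from all cameras into one texture square.

    images  : list of I texture images, each (H, W, 3) uint8
    tri_uv  : (I, 3, 2) the patch's projected triangle corners in each image
    weights : (I,) normalized weights w_j^i of this patch
    size    : side length of the patch-local texture square
    """
    dst = np.float32([[0, 0], [size - 1, 0], [0, size - 1]])  # common target triangle
    fused = np.zeros((size, size, 3), np.float64)
    for img, uv, w in zip(images, tri_uv, weights):
        if w == 0.0:
            continue                                          # skip invisible cameras
        A = cv2.getAffineTransform(np.float32(uv), dst)       # triangle-to-triangle affine
        fused += w * cv2.warpAffine(img, A, (size, size)).astype(np.float64)
    return np.clip(fused, 0, 255).astype(np.uint8)
```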
In this embodiment, color-correcting the texture images removes color cast and makes the images acquired by the different texture cameras consistent in brightness, which improves the consistency of the subsequent texture fusion. The texture weights needed for fusion are determined jointly from the visibility of each triangular patch on the model surface and the included angle of its normal vector; the texture weights of the middle region relative to the front face texture camera are then corrected, and the weights are further smoothed in three-dimensional space to eliminate the traces of texture stitching. Because the middle region of the face (the nose and its surroundings) is treated specially, i.e. the texture image of the front face texture camera is used directly for mapping, the uneven and unnatural transitions that easily appear on both sides of the nose wings during texture mapping are avoided, yielding a vivid and natural fused three-dimensional face texture.
Fig. 7 shows the detailed steps of calculating the visibility of each triangular patch in each texture camera according to an embodiment of the present invention. They specifically comprise the following steps, which may be performed in sequence, partially repeated, or in parallel:
step 201: constructing a two-dimensional recording matrix and initializing the value of each element to infinity
Specifically, each texture camera i is traversed in turn, and a two-dimensional recording matrix R^i of the same size as the corresponding texture image is constructed, with every element R_n^i initialized to +∞. Here "the same size" means that the number of elements of the two-dimensional recording matrix is the same as the number of pixels of the texture image, and the two-dimensional coordinates of the elements correspond one-to-one with the corresponding pixels.
Step 202: calculating two-dimensional projection triangle of each triangular patch in each texture camera
Each triangular patch and each texture camera are traversed in turn, and the two-dimensional projection triangle t_j^i of the jth triangular patch in texture camera i is calculated; it may be defined by three two-dimensional coordinate points.
Step 203: calculating the distance from the central point of each triangular patch to the optical center of each texture camera
Each triangular patch and each texture camera are traversed in turn, and the distance d_j^i from the center point of the jth triangular patch to the optical center O^i of texture camera i is calculated.
Step 204: updating the values of the elements of the two-dimensional recording matrix according to the distance from the center of each triangular patch to the optical center of each texture camera and the two-dimensional projection triangle
The element values of the two-dimensional recording matrix are updated as follows: each element of the two-dimensional recording matrix, each triangular patch, and each texture camera are traversed in turn; if the two-dimensional coordinate l_n of the nth element of the recording matrix R^i lies inside the two-dimensional projection triangle t_j^i (i.e., l_n ∈ t_j^i) and the distance d_j^i from the center of the triangular patch to the optical center of the texture camera is less than R_n^i, then the recording-matrix element is set equal to that distance, i.e., R_n^i = d_j^i.
Step 205: determining the visibility of each triangular patch in the respective texture camera based on the values of the elements of the two-dimensional recording matrix
Specifically, each element of the two-dimensional recording matrix, each triangular patch, and each texture camera are traversed in turn. If the two-dimensional coordinate l_n of the nth element of the recording matrix R^i lies inside the two-dimensional projection triangle t_j^i and the distance d_j^i from the center of the triangular patch to the optical center of the texture camera is greater than R_n^i, the triangular patch is not visible in that texture camera and the visibility is v_j^i = 0; if the distance d_j^i is less than or equal to R_n^i, the patch is visible and v_j^i = 1. The visibility v_j^i of each triangular patch in each texture camera is thereby obtained.
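Putting steps 201-205 together for a single camera, the sketch below implements the two-pass depth-record test; the barycentric point-in-triangle check and the small tolerance eps are illustrative details the text does not specify:

```python
import numpy as np

def patch_visibility(proj_tris, cam_dists, shape, eps=1e-6):
    """Visibility v_j of every patch in one texture camera via a 2-D depth record.

    proj_tris : (J, 3, 2) projected triangle of each patch in this camera's image
    cam_dists : (J,) distance from each patch center to the camera's optical center
    shape     : (H, W) size of the texture image / recording matrix
    """
    H, W = shape
    record = np.full((H, W), np.inf)        # R^i with every element set to +infinity

    def covered_pixels(tri):
        """Integer pixel coordinates inside the triangle (barycentric test)."""
        (x0, y0), (x1, y1), (x2, y2) = tri
        xs = np.arange(max(0, int(tri[:, 0].min())), min(W, int(tri[:, 0].max()) + 2))
        ys = np.arange(max(0, int(tri[:, 1].min())), min(H, int(tri[:, 1].max()) + 2))
        gx, gy = np.meshgrid(xs, ys)
        det = (y1 - y2) * (x0 - x2) + (x2 - x1) * (y0 - y2)
        if det == 0:                        # degenerate (edge-on) projection
            return gx[:0].ravel(), gy[:0].ravel()
        a = ((y1 - y2) * (gx - x2) + (x2 - x1) * (gy - y2)) / det
        b = ((y2 - y0) * (gx - x2) + (x0 - x2) * (gy - y2)) / det
        inside = (a >= 0) & (b >= 0) & (a + b <= 1)
        return gx[inside], gy[inside]

    pixels = [covered_pixels(np.asarray(tri, float)) for tri in proj_tris]

    # Steps 201-204: record the nearest patch distance at every covered pixel
    for (px, py), d in zip(pixels, cam_dists):
        np.minimum.at(record, (py, px), d)

    # Step 205: a patch is visible iff no nearer patch covers any of its pixels
    visible = np.zeros(len(proj_tris))
    for j, ((px, py), d) in enumerate(zip(pixels, cam_dists)):
        if px.size and not np.any(d > record[py, px] + eps):
            visible[j] = 1.0                # patches projecting off-image stay invisible
    return visible
```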
FIG. 8 illustrates an apparatus, namely an electronic device 310 (e.g., a computer server with program execution functionality) for texture fusion of a three-dimensional model of a human face according to an embodiment of the present invention, which includes at least one processor 311, a power supply 314, and a memory 312 and an input-output interface 313 communicatively connected to the at least one processor 311; the memory 312 stores instructions executable by the at least one processor 311, the instructions being executable by the at least one processor 311 to enable the at least one processor 311 to perform a method disclosed in any one of the embodiments; the input/output interface 313 may include a display, a keyboard, a mouse, and a USB interface, and is used for inputting three-dimensional model data of a human face; the power supply 314 is used to provide power to the electronic device 310.
The foregoing is merely a detailed description of specific embodiments of the invention and is not intended to limit the invention. Various alterations, modifications and improvements will occur to those skilled in the art without departing from the spirit and scope of the invention.

Claims (7)

1. A method for texture fusion of a three-dimensional model of a human face, the method comprising:
inputting human face three-dimensional model data, and acquiring three-dimensional point cloud index data of each triangular patch on the surface of the model; inputting a plurality of face texture images at different viewing angles, and acquiring mapping data of a face three-dimensional model on the face texture images according to corresponding texture camera parameters; carrying out color correction on the face texture image; calculating the visibility of each triangular patch in each texture camera;
traversing each triangular patch, calculating the normal vector of the triangular patch, and calculating the included angle θ_j^i between the normal vector and the line connecting the center of the triangular patch to the optical center of each texture camera, wherein the subscript j represents the jth triangular patch, the superscript i represents the corresponding texture camera, and the number of texture cameras is greater than or equal to three;

for the jth triangular patch, calculating the texture weight w_j^i corresponding to each texture camera i from the included angle θ_j^i and the visibility v_j^i as w_j^i = v_j^i · cos θ_j^i;

correcting the texture weights, relative to the front face texture camera, of the triangular patches in a predetermined area on the human face three-dimensional model: setting the texture weight of each triangular patch in the predetermined area relative to the front face texture camera to 1, and setting its weights relative to the other texture cameras to 0, wherein the subscript j represents the serial number of a triangular patch located in the predetermined area; the predetermined area is the region whose distance on either side of the plane determined by the median line of the human face three-dimensional model and the optical axis of the front face texture camera is less than or equal to r, with r = 50 mm; and

smoothing and normalizing the texture weights relative to each texture camera: traversing each triangular patch, determining the set M of triangular patches neighboring it in the three-dimensional face model, and updating the texture weight according to w_j^i ← (1/|M|) · Σ_{m∈M} w_m^i, wherein |M| is the number of neighboring triangular patches; the normalization process includes traversing each triangular patch and normalizing the texture weights according to w_j^i ← w_j^i / Σ_k w_j^k, so that the sum of the weights of the texture cameras corresponding to each triangular patch is 1;
and performing texture fusion according to the texture weight of each triangular face relative to each texture camera, and acquiring the three-dimensional texture of the face.
2. The method of claim 1, wherein the three-dimensional model data of the human face comprises three-dimensional point cloud data in the three-dimensional model of the human face; the method further comprises the step of reconstructing the three-dimensional point cloud according to the three-dimensional coordinates of three vertexes of each triangular patch on the surface of the human face three-dimensional model, and obtaining the three-dimensional point cloud index data of each triangular patch.
3. The method of claim 1, wherein the texture camera parameters comprise three-dimensional coordinates of the respective texture camera optical center with respect to the three-dimensional model of the human face.
4. The method of claim 1, wherein the color correction comprises performing white balance and brightness normalization on each of the facial texture images to make color differences of the facial texture camera texture images at different viewing angles less than a predetermined threshold and brightness consistent.
5. The method of claim 1, wherein the texture fusion comprises: traversing each triangular patch, performing an affine transformation on the triangular patch under the viewing angle of each texture camera, and determining the texture triangle T_j^i from the face texture image acquired by the corresponding texture camera; performing a weighted summation of the texture triangles according to T_j = Σ_i w_j^i · T_j^i to obtain the fused texture triangle T_j; and mapping the fused texture triangle T_j onto the corresponding triangular patch of the human face three-dimensional model to obtain the human face three-dimensional texture.
6. The method of claim 1, wherein the calculating the visibility of each triangular patch in each texture camera comprises: constructing a two-dimensional recording matrix and initializing the value of each element to infinity; calculating a two-dimensional projection triangle of each triangular patch in each texture camera; calculating the distance from the central point of each triangular patch to the optical center of each texture camera; updating the values of the elements of the two-dimensional recording matrix according to the distance from the center of each triangular patch to the optical center of each texture camera and the two-dimensional projection triangle; and determining the visibility of each triangular patch in each texture camera according to the values of the two-dimensional recording matrix elements.
7. An apparatus for texture fusion of a three-dimensional model of a human face, the apparatus comprising at least one processor, and a memory communicatively coupled to the at least one processor; the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1 to 6.
CN201711328399.6A 2017-12-13 2017-12-13 Method and equipment for fusing textures of three-dimensional model of human face Active CN107945267B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711328399.6A CN107945267B (en) 2017-12-13 2017-12-13 Method and equipment for fusing textures of three-dimensional model of human face

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201711328399.6A CN107945267B (en) 2017-12-13 2017-12-13 Method and equipment for fusing textures of three-dimensional model of human face

Publications (2)

Publication Number Publication Date
CN107945267A CN107945267A (en) 2018-04-20
CN107945267B true CN107945267B (en) 2021-02-26

Family

ID=61942882

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711328399.6A Active CN107945267B (en) 2017-12-13 2017-12-13 Method and equipment for fusing textures of three-dimensional model of human face

Country Status (1)

Country Link
CN (1) CN107945267B (en)

Families Citing this family (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110458932B (en) * 2018-05-07 2023-08-22 阿里巴巴集团控股有限公司 Image processing method, device, system, storage medium and image scanning apparatus
CN108682030B (en) * 2018-05-21 2022-04-26 北京微播视界科技有限公司 Face replacement method and device and computer equipment
CN109118578A (en) * 2018-08-01 2019-01-01 浙江大学 A kind of multiview three-dimensional reconstruction texture mapping method of stratification
CN109271923A (en) * 2018-09-14 2019-01-25 曜科智能科技(上海)有限公司 Human face posture detection method, system, electric terminal and storage medium
CN109242961B (en) 2018-09-26 2021-08-10 北京旷视科技有限公司 Face modeling method and device, electronic equipment and computer readable medium
CN111369651A (en) * 2018-12-25 2020-07-03 浙江舜宇智能光学技术有限公司 Three-dimensional expression animation generation method and system
CN110153582B (en) * 2019-05-09 2020-05-19 清华大学 Welding scheme generation method and device and welding system
CN110232730B (en) * 2019-06-03 2024-01-19 深圳市三维人工智能科技有限公司 Three-dimensional face model mapping fusion method and computer processing equipment
CN110569768B (en) * 2019-08-29 2022-09-02 四川大学 Construction method of face model, face recognition method, device and equipment
CN111179210B (en) * 2019-12-27 2023-10-20 浙江工业大学之江学院 Face texture map generation method and system and electronic equipment
CN111192223B (en) * 2020-01-07 2022-09-30 腾讯科技(深圳)有限公司 Method, device and equipment for processing face texture image and storage medium
CN111340959B (en) * 2020-02-17 2021-09-14 天目爱视(北京)科技有限公司 Three-dimensional model seamless texture mapping method based on histogram matching
CN111710036B (en) * 2020-07-16 2023-10-17 腾讯科技(深圳)有限公司 Method, device, equipment and storage medium for constructing three-dimensional face model
CN112257657B (en) * 2020-11-11 2024-02-27 网易(杭州)网络有限公司 Face image fusion method and device, storage medium and electronic equipment
CN112402973B (en) * 2020-11-18 2022-11-04 芯勍(上海)智能化科技股份有限公司 Model detail judging method, terminal device and computer readable storage medium
CN113824879B (en) * 2021-08-23 2023-03-24 成都中鱼互动科技有限公司 Scanning device and normal map generation method
CN113838176B (en) * 2021-09-16 2023-09-15 网易(杭州)网络有限公司 Model training method, three-dimensional face image generation method and three-dimensional face image generation equipment
CN114742956B (en) * 2022-06-09 2022-09-13 腾讯科技(深圳)有限公司 Model processing method, device, equipment and computer readable storage medium
CN115761154A (en) * 2022-10-20 2023-03-07 中铁第四勘察设计院集团有限公司 Three-dimensional model generation method and device and electronic equipment


Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20080018407A (en) * 2006-08-24 2008-02-28 한국문화콘텐츠진흥원 Computer-readable recording medium for recording of 3d character deformation program
CN101958008A (en) * 2010-10-12 2011-01-26 上海交通大学 Automatic texture mapping method in three-dimensional reconstruction of sequence image
CN105550992A (en) * 2015-12-30 2016-05-04 四川川大智胜软件股份有限公司 High fidelity full face texture fusing method of three-dimensional full face camera
CN107346425A (en) * 2017-07-04 2017-11-14 四川大学 A kind of three-D grain photographic system, scaling method and imaging method

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
"一种改进的人脸纹理映射方法";魏衍君等;《计算机仿真》;20120131;第29卷(第1期);第253-256页 *
"多视角三维人脸建模的关键技术研究";宗智勇;《中国优秀硕士学位论文全文数据库信息科技辑》;20130315(第03期);第29-36页 *

Also Published As

Publication number Publication date
CN107945267A (en) 2018-04-20

Similar Documents

Publication Publication Date Title
CN107945267B (en) Method and equipment for fusing textures of three-dimensional model of human face
CN109003325B (en) Three-dimensional reconstruction method, medium, device and computing equipment
US11410320B2 (en) Image processing method, apparatus, and storage medium
CN109872397B (en) Three-dimensional reconstruction method of airplane parts based on multi-view stereo vision
WO2021077720A1 (en) Method, apparatus, and system for acquiring three-dimensional model of object, and electronic device
CN106600686B (en) Three-dimensional point cloud reconstruction method based on multiple uncalibrated images
JP5099965B2 (en) Surface reconstruction and registration using Helmholtz mutual image pairs
US9036898B1 (en) High-quality passive performance capture using anchor frames
US9437034B1 (en) Multiview texturing for three-dimensional models
KR100681320B1 (en) Method for modelling three dimensional shape of objects using level set solutions on partial difference equation derived from helmholtz reciprocity condition
WO2010133007A1 (en) Techniques for rapid stereo reconstruction from images
US10169891B2 (en) Producing three-dimensional representation based on images of a person
CN108364292B (en) Illumination estimation method based on multiple visual angle images
US9147279B1 (en) Systems and methods for merging textures
Mori et al. Efficient use of textured 3D model for pre-observation-based diminished reality
CN111462030A (en) Multi-image fused stereoscopic set vision new angle construction drawing method
CN110033509B (en) Method for constructing three-dimensional face normal based on diffuse reflection gradient polarized light
Xu et al. Survey of 3D modeling using depth cameras
CN111382618B (en) Illumination detection method, device, equipment and storage medium for face image
Beeler et al. Improved reconstruction of deforming surfaces by cancelling ambient occlusion
Birkbeck et al. Variational shape and reflectance estimation under changing light and viewpoints
CN111354077A (en) Three-dimensional face reconstruction method based on binocular vision
EP3756163A1 (en) Methods, devices, and computer program products for gradient based depth reconstructions with robust statistics
CN113345063A (en) PBR three-dimensional reconstruction method, system and computer storage medium based on deep learning
CN111145341A (en) Single light source-based virtual-real fusion illumination consistency drawing method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant