CN107274493B - Three-dimensional virtual hairstyle try-on face reconstruction method based on a mobile platform - Google Patents

Three-dimensional virtual hairstyle try-on face reconstruction method based on a mobile platform

Info

Publication number
CN107274493B
CN107274493B (application CN201710506496.3A)
Authority
CN
China
Prior art keywords
face
image
dimensional
basic
reconstruction
Prior art date
Legal status
Active
Application number
CN201710506496.3A
Other languages
Chinese (zh)
Other versions
CN107274493A (en)
Inventor
童晶
邹晓
朱红强
Current Assignee
Changzhou Campus of Hohai University
Original Assignee
Changzhou Campus of Hohai University
Priority date
Filing date
Publication date
Application filed by Changzhou Campus of Hohai University filed Critical Changzhou Campus of Hohai University
Priority to CN201710506496.3A priority Critical patent/CN107274493B/en
Publication of CN107274493A publication Critical patent/CN107274493A/en
Application granted granted Critical
Publication of CN107274493B publication Critical patent/CN107274493B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00 Manipulating 3D models or images for computer graphics
    • G06T19/006 Mixed reality
    • G06T19/20 Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 Detection; Localisation; Normalisation
    • G06V40/165 Detection; Localisation; Normalisation using facial parts and geometric relationships

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Graphics (AREA)
  • Health & Medical Sciences (AREA)
  • Software Systems (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Hardware Design (AREA)
  • General Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • Geometry (AREA)
  • Architecture (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention discloses a three-dimensional virtual hairstyle try-on face reconstruction method based on a mobile platform. A frontal photograph of the user's face is obtained from the camera or the photo album and uploaded to a server; the server runs the reconstruction algorithm and returns the face-shape parameters together with a reconstructed face texture map, from which a new three-dimensional face model is instantiated and displayed. Different three-dimensional virtual hairstyles, accessories and display backgrounds are then loaded according to the user's selection and instantiated on screen, producing the virtual hairstyle try-on effect. The invention gives the user a realistic virtual try-on experience: in terms of functional innovation it realizes three-dimensional face simulation, hairstyle and accessory replacement, and personal-information storage, hosts the selectable hairstyles and accessories on the server, and instantiates the chosen items for display, thereby giving the user a basis for trying hairstyles and reflecting the actual effect of different hairstyles more faithfully.

Description

Three-dimensional virtual trial type face reconstruction method based on mobile platform
Technical Field
The invention relates to the technical field of virtual-reality image processing, and in particular to a three-dimensional virtual hairstyle try-on face reconstruction method based on a mobile platform.
Background
With economic development, people pay increasing attention to their personal image and are more willing to spend on improving it; hairstyle and clothing play a key role in shaping that image. Unlike clothing, however, a hairstyle is difficult to undo in the short term once it has been changed. Two main solutions currently address the inability to preview a hairstyle in advance: hairstyle albums with hair-color templates, and try-on applications based on two-dimensional images. Both have significant drawbacks.
Hairstyle albums and hair-color templates cannot show a hairstyle on a particular user. Try-on applications based on two-dimensional images generally require the user's photographing angle to match the angle of the hairstyle image; otherwise the face contour does not fit and the face shape does not match the user. Moreover, the same hairstyle looks different on different users, so the existing methods cannot satisfy the consumer's need to preview a hairstyle before changing it.
Disclosure of Invention
In view of the above defects in the prior art, the technical problem to be solved by the present invention is to provide a three-dimensional virtual hairstyle try-on face reconstruction method based on a mobile platform. The method is built on the mobile platform with system optimization performed in cooperation with a server; the try-on effect is presented in three dimensions, can be observed under multi-angle rotation, and satisfies the user's needs for changing hairstyle, hair color and accessories. Applying face detection, image deformation, image fusion and three-dimensional model deformation techniques, the invention reconstructs the user's three-dimensional face from a single photograph; the reconstructed three-dimensional face is used for the hairstyle try-on, realistically restores the user's try-on effect, and offers good experience and practicability.
In order to achieve the above object, the present invention provides a three-dimensional virtual hairstyle try-on face reconstruction method based on a mobile platform, comprising the following steps:
establishing a three-dimensional face library of face-shape meshes and a basic face texture map for mesh reconstruction and texture-map reconstruction of the face image, wherein the goal of mesh reconstruction is to obtain face-shape fitting parameters and the goal of texture-map reconstruction is to obtain a face texture map based on the user's photograph;
collecting a face image, identifying the feature points of the face contour and of the eyebrow, eye, nose and mouth contours in the image, determining the user's face shape from the face-contour feature points, and determining the range of image elements to extract from the facial-feature contour points;
after the face-shape data are determined, calculating the face-shape fitting parameters according to the face-shape fitting formula (prior art, not detailed here) to realize mesh reconstruction; after the facial-feature positions are determined, extracting the image elements and obtaining a new face texture map through an image-deformation algorithm (prior art, not detailed here) and Poisson fusion;
and combining the reconstructed mesh and texture map and instantiating them for display, obtaining the user's complete three-dimensional face.
Establishing the three-dimensional face library of face-shape meshes and the basic face texture map specifically comprises:
acquiring a preliminary three-dimensional face with FaceShift Studio and a Kinect camera, adjusting and correcting the facial-feature positions of the basic meshes with 3ds Max and ZBrush, redrawing the basic texture-map materials for fusion in Photoshop according to the existing UV unwrap, and combining each trimmed mesh model with its texture map to complete the basic face-shape three-dimensional face library.
The three-dimensional face library comprises seven basic face-shape mesh models (long, oval, square, pear-shaped, diamond, heart-shaped and round) and a basic face texture map. The seven mesh models are pre-marked with feature points at the corresponding face-contour positions; the feature points are projected onto the plane XOY and normalized to obtain the reference feature vectors $T_1, T_2, T_3, \ldots, T_7$.
After the face-shape data are determined, the fitting parameters are calculated according to the face-shape fitting formula to realize mesh reconstruction, specifically: recognizing the face-contour feature points of the input face image and normalizing their coordinates to obtain a vector $T_{\text{input}}$; projecting the seven standard face models of the face-shape library onto the XOY plane and normalizing to obtain $T_1, T_2, T_3, \ldots, T_7$; then, according to the formula:
$$T_{\text{input}} = \sum_{i=1}^{7} \alpha_i T_i$$
wherein
$$\sum_{i=1}^{7} \alpha_i = 1, \qquad \alpha_i \ge 0,$$
and $\alpha_1, \alpha_2, \ldots, \alpha_7$ are the unknowns to be solved; solving
$$\min_{\alpha_1, \ldots, \alpha_7} \Big\| T_{\text{input}} - \sum_{i=1}^{7} \alpha_i T_i \Big\|^2$$
to obtain the corresponding face-shape fitting parameters, which are transmitted to the mobile terminal to complete the mesh reconstruction.
After the facial-feature positions are determined, the image elements are extracted and a new face texture map is obtained after image-algorithm processing, specifically:
detecting the facial-feature positions from the feature points, dilating (expanding) the corresponding face-image regions, cropping the dilated regions and applying rotation and scaling deformation, fusing the deformed image onto the basic texture-map material, and removing highlights and shadows to obtain the reconstructed user face texture map.
Dilating the corresponding face-image regions specifically comprises:
setting different numbers of dilation iterations for different parts of the face image to achieve the desired expansion.
Applying the rotation and scaling deformation to the region cropped after dilation specifically comprises:
performing the rotation and scaling deformation on the cropped region while moving the facial-feature control points according to the translation and scaling coefficients, and computing the change matrices of all remaining points by inverse-distance weighting from the before-and-after change of each control point, with the interpolation function:
$$\Delta P = \frac{\sum_{i=1}^{N} \Delta_i / \operatorname{dis}_i}{\sum_{i=1}^{N} 1 / \operatorname{dis}_i}$$
where the $N$ control points represent the four corners of the picture, $\Delta_i$ is the displacement of control point $i$, and $\operatorname{dis}_1, \ldots, \operatorname{dis}_N$ are the distances from the point currently being computed to control points $1, \ldots, N$;
after the deformation, the blank areas produced by stretching in the image are filled by inverse interpolation.
The deformed image is fused onto the basic texture-map material; the image-fusion algorithm is the Poisson fusion algorithm.
The invention has the following beneficial effects:
The invention gives the user a realistic virtual try-on experience. In terms of functional innovation it realizes three-dimensional face simulation, hairstyle and accessory replacement, and personal-information storage, driven by interaction on the mobile terminal. The system hosts the selectable hairstyles, accessories and other items on the server; after selection, the corresponding material files are loaded and instantiated for display. The three-dimensional face reconstruction algorithm based on a single photograph gives the user a basis for trying hairstyles and reflects the actual effect of different hairstyles more faithfully.
The conception, specific structure and technical effects of the present invention are further described below with reference to the accompanying drawings, so that its objects, features and effects can be fully understood.
Drawings
FIG. 1 is a functional and structural framework diagram of the three-dimensional virtual try-on system of the present invention;
FIG. 2 is a flowchart illustrating the interaction between the mobile terminal and the server according to the present invention;
FIG. 3 is a general framework diagram of the system program of the present invention.
Detailed Description
As shown in FIGS. 1, 2 and 3, a three-dimensional virtual hairstyle try-on face reconstruction method based on a mobile platform comprises the following steps:
establishing a three-dimensional face library of face-shape meshes and a basic face texture map for mesh reconstruction and texture-map reconstruction of the face image, wherein the goal of mesh reconstruction is to obtain face-shape fitting parameters and the goal of texture-map reconstruction is to obtain a face texture map based on the user's photograph;
collecting a face image, identifying the feature points of the face contour and of the facial features, determining the user's face shape from the face-contour feature points, and determining the range of image elements to extract from the facial-feature points;
after the face-shape data are determined, calculating the face-shape fitting parameters according to the formula to realize mesh reconstruction; after the facial-feature positions are determined, extracting the image elements and obtaining a new face texture map after image-algorithm processing;
and combining the reconstructed mesh and texture map and instantiating them for display, obtaining the user's complete three-dimensional face.
In this embodiment, establishing the three-dimensional face library of face-shape meshes and the basic face texture map specifically comprises:
acquiring a preliminary three-dimensional face with FaceShift Studio and a Kinect camera, appropriately adjusting and correcting the facial-feature positions of the basic meshes with 3ds Max and ZBrush, redrawing the basic texture-map materials for fusion in Photoshop according to the existing UV unwrap, and combining each trimmed mesh model with its texture map to complete the basic face-shape three-dimensional face library.
In this embodiment, the three-dimensional face library comprises seven basic face-shape mesh models (long, oval, square, pear-shaped, diamond, heart-shaped and round) and a basic face texture map. The seven mesh models are pre-marked with feature points at the corresponding face-contour positions; the feature points are projected onto the plane XOY and normalized to obtain the reference feature vectors $T_1, T_2, T_3, \ldots, T_7$.
In this embodiment, after the face-shape data are determined, the fitting parameters are calculated according to the formula to realize mesh reconstruction, specifically, according to the formula:
$$T_{\text{input}} = \sum_{i=1}^{7} \alpha_i T_i$$
wherein
$$\sum_{i=1}^{7} \alpha_i = 1, \qquad \alpha_i \ge 0,$$
and $\alpha_1, \alpha_2, \ldots, \alpha_7$ are the unknowns to be solved; solving
$$\min_{\alpha_1, \ldots, \alpha_7} \Big\| T_{\text{input}} - \sum_{i=1}^{7} \alpha_i T_i \Big\|^2$$
to obtain the corresponding face-shape fitting parameters, which are transmitted to the mobile terminal to complete the mesh reconstruction.
In this embodiment, after the facial-feature positions are determined, the image elements are extracted and a new face texture map is obtained after image-algorithm processing, specifically:
detecting the facial-feature positions from the feature points, dilating (expanding) the corresponding face-image regions, cropping the dilated regions and applying rotation and scaling deformation, fusing the deformed image onto the basic texture-map material, and removing highlights and shadows to obtain the reconstructed user face texture map.
In this embodiment, dilating the corresponding face-image region specifically comprises: setting different numbers of dilation iterations for different parts of the face image to achieve a reasonable expansion.
In this embodiment, applying the rotation and scaling deformation to the region cropped after dilation specifically comprises:
performing the rotation and scaling deformation on the cropped region while moving the facial-feature control points according to the translation and scaling coefficients, and computing the change matrices of all remaining points by inverse-distance weighting from the before-and-after change of each control point, with the interpolation function:
$$\Delta P = \frac{\sum_{i=1}^{N} \Delta_i / \operatorname{dis}_i}{\sum_{i=1}^{N} 1 / \operatorname{dis}_i}$$
where the $N$ control points represent the four corners of the picture, $\Delta_i$ is the displacement of control point $i$, and $\operatorname{dis}_1, \ldots, \operatorname{dis}_N$ are the distances from the point currently being computed to control points $1, \ldots, N$;
after the deformation, the blank areas produced by stretching in the image are filled by inverse interpolation.
In this embodiment, the deformed image is fused onto the basic texture-map material; the image-fusion algorithm is the Poisson fusion algorithm.
The specific process of the invention is as follows:
The invention adopts an overall architecture in which resources, logic and presentation are separated, and uses the MVC pattern as the overall program framework to keep the system flexible and extensible.
The resource layer manages the related art resources, including three-dimensional models, pictures of different types and two-dimensional images, through Unity3D resource-management techniques combined with an art-resource naming convention.
The logic layer performs the logic processing for the different functional requirements of the system and is divided into three modules: the virtual try-on module, the information-acquisition module and the personal-center module. The virtual try-on center allows hairstyles, hair colors, accessories and backgrounds to be replaced, lets the model be dragged by touch for multi-angle observation, and supports saving and sharing screenshots of the try-on effect. The popular-hairstyle and hair-care-product recommendation module provides information on fashionable hairstyles, styling matches and products. The personal-center module manages and stores the user's personal collection, try-on screenshots and other data for convenient later review.
The presentation layer uses Unity3D for graphic rendering of the try-on scene, with UGUI as the interface development tool, completing the visualization and interaction of the whole system.
In operation, a frontal photograph of the user's face is obtained from the camera or photo album and uploaded to the server; the server runs the algorithm and returns the face parameters and the reconstructed face texture map, from which a new three-dimensional face model is instantiated and displayed. Different three-dimensional virtual hairstyles, accessories and display backgrounds are then loaded according to the user's selection and instantiated on screen to present the virtual hairstyle effect. The steps are as follows:
and (I) three-dimensional scanning and establishment of a basic facial form three-dimensional face library.
The female has higher demand on the virtual trial type, so the three-dimensional face reconstruction method is mainly aimed at Asian females. Investigation and analysis show that the facial contour of most Asian women can be obtained by fitting seven basic facial shapes, namely a long face, a goose egg face, a square face, a pear-shaped face, a diamond-shaped face, a heart-shaped face and a round face. Therefore, a three-dimensional face library containing the seven basic facial forms is established to serve as a fitting basis for grid model reconstruction, and the method is more convenient and faster.
The FaceShift Studio is a real-time facial expression capture tool that is used primarily to replicate human facial movements and convert them into three-dimensional models or animations. The method realizes the initial acquisition of the three-dimensional face by combining the faceShift Studio camera with the Kinect camera. And ensuring the correct connection of the Kinect camera, and selecting a card mine sensor mode to receive information such as depth and images. In the training mode, a scanned person creates own expression file by simulating preset natural expression, and the captured expression file can be processed to generate a smooth three-dimensional model.
The basic model obtained by three-dimensional scanning has a simple topological structure and reasonable UV expansion, but the mapping image has the problems of dislocation deformation, color blurring and unevenness and the like. Combining the structures and contour features of seven different face types, properly adjusting and correcting the face feature positions of the basic grids by using 3dsMax and Zbrew software, redrawing basic chartlet materials for fusion by using Photoshop according to the existing UV expansion diagram, and combining the trimmed grid models with the chartlets to complete the establishment of the basic face type three-dimensional head portrait library. In addition, for the seven basic materials of different face shapes in the three-dimensional head portrait model library, some feature points at the corresponding positions of the face contour need to be marked to prepare for mesh reconstruction.
(II) Reconstruction of the three-dimensional head mesh model.
A complete three-dimensional model usually consists of two parts, the mesh model and the texture map; the invention reconstructs the mesh and the texture map separately, taking a single photograph of the user's face as the basis. The goal of the three-dimensional face mesh reconstruction algorithm is to obtain the seven fusion coefficients, which are then transmitted back to the mobile terminal for mesh fitting and reconstruction.
After the input photograph is received, the feature points of the outermost face contour are first detected and their positions in the photograph stored. For the three-dimensional basic face materials whose face-contour feature points have been pre-marked, the feature points are projected onto the plane XOY and normalized to obtain the reference feature vectors $T_1, T_2, T_3, \ldots, T_7$. According to the formula:
$$T_{\text{input}} = \sum_{i=1}^{7} \alpha_i T_i$$
wherein
$$\sum_{i=1}^{7} \alpha_i = 1, \qquad \alpha_i \ge 0,$$
and $\alpha_1, \alpha_2, \ldots, \alpha_7$ are the unknowns to be solved; solving
$$\min_{\alpha_1, \ldots, \alpha_7} \Big\| T_{\text{input}} - \sum_{i=1}^{7} \alpha_i T_i \Big\|^2$$
Corresponding face fitting parameters can be obtained and transmitted to the mobile terminal to complete the reconstruction of the grid.
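The fitting formula above is an equality-constrained least-squares problem. The following NumPy sketch is illustrative, not the patent's actual implementation: the sum-to-one constraint is handled with a Lagrange multiplier via the KKT system, and the non-negativity of the coefficients, which a full implementation would also enforce, is omitted for brevity.

```python
import numpy as np

def fit_face_shape(t_input, basis):
    """Solve  min || t_input - sum_i alpha_i T_i ||^2  s.t.  sum_i alpha_i = 1.

    t_input: (m,) normalized contour feature vector from the user photo.
    basis:   (7, m) normalized contour vectors T_1..T_7 of the base faces.
    The sum-to-one constraint is handled with a Lagrange multiplier (KKT
    system); the non-negativity constraint is not enforced in this sketch.
    """
    A = basis.T                      # (m, 7): column i is T_i
    k = A.shape[1]
    kkt = np.zeros((k + 1, k + 1))
    kkt[:k, :k] = A.T @ A            # normal-equation block
    kkt[:k, k] = 1.0                 # gradient of the equality constraint
    kkt[k, :k] = 1.0                 # the constraint row itself
    rhs = np.append(A.T @ t_input, 1.0)
    return np.linalg.solve(kkt, rhs)[:k]

# toy check: an input equal to one basis face should get all its weight
basis = np.eye(7)
alpha = fit_face_shape(basis[2], basis)
print(np.round(alpha, 6))
```

A production version would solve the same system with a non-negative or simplex-constrained solver so the seven coefficients stay valid blending weights.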
(III) Reconstruction of the three-dimensional head texture map.
Texture mapping makes the model more detailed and visually realistic. To show the try-on effect for different users, not only must the user's face shape be fitted; more importantly, a texture map restoring the user's real face must be reconstructed.
The key points of the facial-feature regions of the input photograph are extracted through the interface provided by the face-detection library Face++, giving the approximate ranges of the eyes, nose, eyebrows and mouth. Each region is then dilated, i.e. the corresponding area is enlarged; the more iterations, the larger the expansion, and different iteration counts can be set for different parts of the face to achieve a reasonable effect. The whole dilated feature region is then extracted from the photograph.
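The repeated per-part region expansion can be illustrated with a toy 4-neighbour dilation on pixel sets; the function and the per-part iteration counts below are illustrative assumptions, not the patent's actual values.

```python
def dilate(region, times=1):
    """Grow a set of (row, col) pixels by `times` rounds of 4-neighbour
    dilation -- a stand-in for the repeated region expansion applied
    around each detected facial feature before cropping."""
    for _ in range(times):
        grown = set(region)
        for r, c in region:
            grown.update({(r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)})
        region = grown
    return region

# illustrative per-part iteration counts (not the patent's values)
repeats = {"eyebrows": 2, "eyes": 2, "nose": 1, "mouth": 3}
mouth = dilate({(10, 10)}, repeats["mouth"])
eyes = dilate({(4, 4)}, repeats["eyes"])
print(len(mouth), len(eyes))   # 3 rounds -> 25 pixels, 2 rounds -> 13 pixels
```

In practice the same effect is obtained with a morphology routine (e.g. an OpenCV-style dilate with an `iterations` argument) applied to a binary mask of each feature region.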
Translation, scaling and deformation operations are applied to the key regions while the facial-feature control points are moved according to the translation and scaling coefficients; the change matrices of all remaining points are computed by inverse-distance weighting from the before-and-after change of each control point, with the interpolation function:
$$\Delta P = \frac{\sum_{i=1}^{N} \Delta_i / \operatorname{dis}_i}{\sum_{i=1}^{N} 1 / \operatorname{dis}_i}$$
where the $N$ control points represent the four corners of the picture, $\Delta_i$ is the displacement of control point $i$, and $\operatorname{dis}_1, \ldots, \operatorname{dis}_N$ are the distances from the point currently being computed to control points $1, \ldots, N$.
After the deformation, the blank areas produced by stretching in the image must be filled by inverse interpolation, whose basic idea is the reverse of the above process.
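The inverse-distance weighting used for the control-point interpolation can be sketched in plain Python; the function name and the four-corner setup are illustrative.

```python
def idw_displacement(p, controls, deltas):
    """Inverse-distance-weighted displacement of point p, interpolated from
    N control points (here the four picture corners) whose own displacements
    after the translate/scale/rotate step are known."""
    weights = []
    for i, c in enumerate(controls):
        d = ((p[0] - c[0]) ** 2 + (p[1] - c[1]) ** 2) ** 0.5
        if d == 0.0:                # p sits exactly on a control point
            return deltas[i]
        weights.append(1.0 / d)
    total = sum(weights)
    dx = sum(w * m[0] for w, m in zip(weights, deltas)) / total
    dy = sum(w * m[1] for w, m in zip(weights, deltas)) / total
    return (dx, dy)

corners = [(0, 0), (0, 100), (100, 0), (100, 100)]
moves = [(5, 0)] * 4                # every corner shifts 5 px to the right
center = idw_displacement((50, 50), corners, moves)
print(center)
```

When every control point moves identically, every interpolated point inherits that same shift; nearer control points dominate otherwise, which is what makes the warp smooth.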
The deformed image is seamlessly fused onto the basic face texture-map material through the Poisson fusion algorithm, which handles the overlap region well. Let $g$ be the source image function of the facial features, with gradient field
$$\mathbf{v} = \nabla g,$$
let $f^{*}$ be the function of the background texture-map image $S$, let $f$ be the function to be solved over the region $\Omega$, and let $\partial\Omega$ be the fusion boundary. The fusion has two goals: preserving the basic texture-map material information, and seamless stitching. According to the Poisson image editing algorithm (Poisson Image Editing), this is converted into the optimization problem:
$$\min_{f} \iint_{\Omega} \lvert \nabla f - \mathbf{v} \rvert^{2} \quad \text{with} \quad f\rvert_{\partial\Omega} = f^{*}\rvert_{\partial\Omega},$$
whose solution is the solution of the Poisson equation:
$$\Delta f = \operatorname{div} \mathbf{v} \ \text{ over } \Omega, \qquad f\rvert_{\partial\Omega} = f^{*}\rvert_{\partial\Omega}.$$
The discrete form of the above equation is:
$$\lvert N_p \rvert\, f_p - \sum_{q \in N_p \cap \Omega} f_q = \sum_{q \in N_p \cap \partial\Omega} f^{*}_{q} + \sum_{q \in N_p} v_{pq},$$
wherein $v_{pq} = g_p - g_q$ and $\lvert N_p \rvert = 4$, $N_p$ being the 4-neighbourhood of pixel $p$.
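The discrete Poisson equation above can be solved iteratively; the following single-channel Gauss-Seidel sketch is a toy illustration (production code typically solves the sparse linear system directly), not the patent's implementation.

```python
def poisson_blend(target, source, omega, iters=500):
    """Gauss-Seidel solve of the discrete Poisson equation
    |N_p| f_p - sum_{q in N_p ∩ Ω} f_q = sum_{q in N_p ∩ ∂Ω} f*_q + sum_q v_pq
    with v_pq = g_p - g_q, for one channel.  `target` (f*) and `source` (g)
    are dicts (row, col) -> value; `omega` is the set of pixels to re-solve."""
    f = {p: source[p] for p in omega}         # initial guess: the source
    for _ in range(iters):
        for (r, c) in omega:
            acc = 0.0
            for q in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
                acc += f[q] if q in omega else target[q]   # f_q or f*_q
                acc += source[(r, c)] - source[q]          # v_pq
            f[(r, c)] = acc / 4.0                          # |N_p| = 4
    return f

# toy: a flat source (zero gradient) blended into a flat target of value 10
# must converge to 10 everywhere -- only the boundary values survive
target = {(r, c): 10.0 for r in range(5) for c in range(5)}
source = {(r, c): 99.0 for r in range(5) for c in range(5)}
omega = {(r, c) for r in range(1, 4) for c in range(1, 4)}
f = poisson_blend(target, source, omega)
print(round(f[(2, 2)], 4))
```

The toy run shows the two fusion goals at work: the source's gradients are reproduced inside Ω (here, zero gradients give a flat result) while the boundary values of the base texture map are preserved exactly.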
(IV) Optimization of running efficiency.
To run the system efficiently and reduce the computing and storage load on the mobile terminal, the three-dimensional head reconstruction algorithm is deployed on the server: photo processing, parameter acquisition, optimization and so on are performed server-side, which protects the system well and effectively improves running efficiency. The server, as the computing platform for the core algorithms, extracts the face feature points, runs the face texture-map generation algorithm, obtains the face-shape weight data, and sends the synthesized face texture map and weight data to the mobile terminal. In addition, large three-dimensional art resources such as models and texture maps are packaged into AssetBundles with Unity3D, placed on the server, and loaded dynamically at run time, freeing more memory.
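The client/server exchange described above might look like the following stdlib-only sketch; the endpoint, JSON field names and placeholder weights are assumptions for illustration, not the patent's actual protocol.

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import Request, urlopen

class ReconstructHandler(BaseHTTPRequestHandler):
    """Accepts an uploaded photo and replies with the seven face-shape
    weights and a texture-map location for the mobile client to load."""
    def do_POST(self):
        n = int(self.headers.get("Content-Length", 0))
        photo = self.rfile.read(n)
        # ... landmark detection, shape fitting and texture synthesis
        #     would run here; placeholder results are returned instead ...
        reply = {"alphas": [0.1] * 6 + [0.4],
                 "texture_url": "/textures/user_face.png",
                 "photo_bytes": len(photo)}
        body = json.dumps(reply).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)
    def log_message(self, *args):    # silence request logging for the demo
        pass

server = HTTPServer(("127.0.0.1", 0), ReconstructHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# the mobile client uploads the photo and gets the parameters back
req = Request("http://127.0.0.1:%d/reconstruct" % server.server_port,
              data=b"fake-jpeg-bytes", method="POST")
result = json.loads(urlopen(req).read())
server.shutdown()
print(sorted(result))
```

Keeping the payload to a few weights plus a texture reference is what makes the split worthwhile: the phone never runs the reconstruction, it only instantiates the returned model.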
(V) Optimization of the art materials.
The computing power, memory and rendering capability of the mobile terminal are limited, so the data volume of the model resources should be reduced as far as possible to keep the whole system fluent. The invention optimizes the models by reducing the model and face counts, merging and sharing texture maps, and expressing model detail with normal maps, light maps and similar techniques.
The human head is rich in detail, and a lifelike three-dimensional face usually has very high face and vertex counts. After detail sculpting in ZBrush on the scanned face, the high-poly model is retopologized so that the total face count stays within 2,500, with vertices distributed densely around the facial features and sparsely elsewhere, preserving facial detail while reducing the face count. The hair model uses a similar approach: the layering and texture of the hair are first expressed with a high-precision model, and the face count of the hair model is then reduced automatically or manually to the required level.
Texture maps are another important resource; each model typically carries one or two. Maps for the mobile terminal can be reduced in resolution to cut the resource size, and different models can share maps through reasonable UV partitioning. Expressing the concavity and convexity of clothing with geometry would require many extra vertices and patches, so normal maps are used to achieve the visual bump effect with less data. Light maps are also applied to enhance the lighting of static scenes, making static models look more realistic, richer and more three-dimensional at low performance cost.
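The resolution reduction mentioned above has a simple arithmetic payoff, sketched here for uncompressed RGBA textures (an assumption; mobile builds usually also apply compressed texture formats):

```python
def texture_bytes(width, height, bytes_per_texel=4, mipmaps=False):
    """Uncompressed RGBA texture footprint; a full mip chain adds ~1/3."""
    base = width * height * bytes_per_texel
    return base * 4 // 3 if mipmaps else base

full = texture_bytes(2048, 2048)      # full-resolution map
mobile = texture_bytes(1024, 1024)    # half resolution per side for mobile
print(full // mobile)                 # halving each side quarters the memory
```

The quadratic relationship is why per-side resolution is the first knob to turn when trimming mobile texture budgets.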
The functions realized by the invention are as follows:
The invention gives the user a realistic virtual try-on experience; in terms of functional innovation it realizes three-dimensional face simulation, hairstyle and accessory replacement, and personal-information storage, driven by interaction on the mobile terminal.
Three-dimensional face reconstruction of the user is realized: after the user takes a face photograph, the system generates the corresponding three-dimensional face model. The reconstruction algorithm based on a single photograph gives the user a basis for trying hairstyles and reflects the actual effect of different hairstyles more faithfully.
The system hosts the selectable hairstyles, accessories and other items on the server; after selection, the corresponding material files are loaded, instantiated and displayed.
The system stores the selected hairstyle data in a temporary server directory and records the corresponding information in the database so that the user can review it later.
The foregoing describes the preferred embodiments of the invention in detail. It should be understood that those skilled in the art could devise numerous modifications and variations in light of the present teachings without departing from the inventive concept. Therefore, technical solutions obtainable by those skilled in the art through logical analysis, reasoning or limited experiment on the basis of the prior art and the concept of the present invention shall fall within the scope of protection defined by the claims.

Claims (2)

1. A three-dimensional virtual trial type face reconstruction method based on a mobile platform is characterized by comprising the following steps:
establishing a three-dimensional face library of face type grids and basic face texture maps for face image grid reconstruction and texture map reconstruction, wherein the goal of grid reconstruction is to obtain face shape fitting parameters, and the goal of texture map reconstruction is to obtain a face texture map based on the user's photo;
collecting a face image, identifying the feature points corresponding to the face contour position and the eyebrow, eye, nose and mouth contour positions in the image, determining the user's face shape from the face contour feature points, and determining the range of image elements to extract from the facial-feature contour points;
after the face shape data are determined, calculating face shape fitting parameters according to a face shape fitting formula to realize grid reconstruction; after the positions of the facial features are determined, extracting the image elements and obtaining a new face texture map through an image deformation algorithm and Poisson fusion;
merging the reconstructed grid and texture maps, and performing instantiation display to obtain a complete three-dimensional face of the user;
the three-dimensional face library for establishing the face type grids and the basic face texture maps specifically comprises the following steps:
the method comprises the steps of achieving preliminary acquisition of a three-dimensional face by means of FaceShift Studio and a Kinect camera, adjusting and correcting the facial feature positions of the basic grids using 3ds Max and ZBrush, redrawing the basic map materials for fusion in Photoshop according to the existing UV unwrap, and combining the trimmed grid models with the maps to complete the establishment of the basic three-dimensional face library;
the three-dimensional face library comprises seven basic face type grid models — long, oval, square, pear-shaped, diamond, heart-shaped and round — together with a basic face texture map; feature points are marked in advance at the corresponding face contour positions of the seven basic grid models, projected onto the plane XOY, and normalized to obtain the reference feature point vectors T_1, T_2, T_3, ..., T_7;
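The normalization that makes contour feature points from different photos and models comparable can be sketched as centering the points and scaling them to unit size. This is one plausible reading; the patent does not spell out the exact normalization used:

```python
import numpy as np

def normalize_contour(points):
    """Normalize 2D contour feature points: translate the centroid to the
    origin and scale so the point set has unit RMS radius.

    This removes position and overall face size, leaving only shape, so
    contours from different photos become directly comparable vectors.
    """
    pts = np.asarray(points, dtype=np.float64)
    pts = pts - pts.mean(axis=0)                     # remove translation
    scale = np.sqrt((pts ** 2).sum(axis=1).mean())   # RMS distance to centroid
    return pts / scale

# Two squares of different position and size normalize to the same shape.
square = [(0, 0), (2, 0), (2, 2), (0, 2)]
big_square = [(10, 10), (14, 10), (14, 14), (10, 14)]
```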
after the face shape data are determined, fitting parameters are calculated according to the face shape fitting formula to realize grid reconstruction, specifically comprising: recognizing facial contour feature points from the input face image and normalizing the coordinates of the contour feature points to obtain a vector T_input; projecting the seven standard face models in the face shape library onto the XOY plane and normalizing to obtain T_1, T_2, T_3, ..., T_7; according to the formula:

T_input = α_1·T_1 + α_2·T_2 + ... + α_7·T_7,

wherein

α_1 + α_2 + ... + α_7 = 1 and α_i ≥ 0,

and α_1, α_2, ..., α_7 are the unknowns to be solved; solving

min over (α_1, ..., α_7) of ‖T_input − Σ_{i=1}^{7} α_i·T_i‖²

obtains the corresponding face shape fitting parameters, which are transmitted to the mobile terminal to complete the reconstruction of the grid;
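The fitting step — finding weights over the seven basic face shapes that sum to one and best reproduce the user's contour vector — can be sketched with an equality-constrained least-squares solve. This is an illustrative formulation, not necessarily the patent's solver, and for brevity it enforces only the sum-to-one constraint:

```python
import numpy as np

def fit_face_shape(t_input, basis):
    """Grid-reconstruction fit: min ||t_input - sum_i a_i * basis_i||^2
    subject to sum(a_i) == 1, solved through the KKT linear system.
    (The a_i >= 0 constraint is not enforced here; a full solver would
    add it, e.g. with an active-set or SLSQP method.)
    """
    B = np.stack([np.asarray(b, dtype=np.float64) for b in basis])  # n x d
    n = B.shape[0]
    K = np.zeros((n + 1, n + 1))
    K[:n, :n] = 2.0 * B @ B.T      # normal-equations block
    K[:n, n] = 1.0                 # constraint gradient column
    K[n, :n] = 1.0                 # sum(a) row
    rhs = np.zeros(n + 1)
    rhs[:n] = 2.0 * B @ np.asarray(t_input, dtype=np.float64)
    rhs[n] = 1.0                   # sum(a) == 1
    return np.linalg.solve(K, rhs)[:n]   # drop the Lagrange multiplier

# Recover known blend weights from a synthetic contour vector.
rng = np.random.default_rng(0)
basis = [rng.normal(size=10) for _ in range(7)]
true_a = np.array([0.5, 0.3, 0.2, 0.0, 0.0, 0.0, 0.0])
t_input = true_a @ np.stack(basis)
alpha = fit_face_shape(t_input, basis)
```

Because the synthetic input is itself a convex combination of the basis, the constrained optimum recovers the original weights.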
after the positions of the five facial features are determined, image elements are extracted and a new face texture map is obtained through image-algorithm processing, specifically comprising:
detecting the positions of the facial features according to the feature points, eroding the corresponding face image areas, cropping the eroded areas, performing rotation and scaling deformation operations, fusing the deformed images onto the basic map material, and removing highlights and shadows to obtain the reconstructed user face map;
the eroding of the corresponding face image areas specifically comprises:
setting different numbers of erosion iterations for different parts of the face image to achieve the desired erosion effect;
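Per-region morphological erosion with different iteration counts can be sketched with a minimal numpy implementation. Production code would more likely call OpenCV's erode; the 3×3 structuring element and the per-region counts below are illustrative:

```python
import numpy as np

def erode(mask, iterations=1):
    """Binary erosion with a 3x3 square structuring element: a pixel
    survives only if its entire 3x3 neighbourhood is set."""
    m = mask.astype(bool)
    for _ in range(iterations):
        # Zero padding makes border pixels erode away.
        p = np.pad(m, 1, constant_values=False)
        m = np.ones_like(m, dtype=bool)
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                m &= p[1 + dy: p.shape[0] - 1 + dy,
                       1 + dx: p.shape[1] - 1 + dx]
    return m

# Different facial regions get different erosion strengths, e.g. shrink
# the mouth mask more aggressively than the eye mask.
region = np.ones((9, 9), dtype=bool)
eye_mask = erode(region, iterations=1)    # mild shrink: 9x9 -> 7x7 core
mouth_mask = erode(region, iterations=2)  # stronger shrink: 9x9 -> 5x5 core
```

Each iteration peels one pixel off the region boundary, so the iteration count directly controls how far each facial region is shrunk before cropping.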
the performing of rotation and scaling deformation on the areas obtained after cropping and erosion specifically comprises:
performing the rotation and scaling deformation, simultaneously moving the facial feature control points according to the translation and scaling coefficients, and computing the change matrices of all the remaining control points by inverse distance weighting from the before-and-after change of one control point, wherein the interpolation function is:
f(p) = [ Σ_{i=1}^{N} (1/dis_i) · f_i ] / [ Σ_{i=1}^{N} (1/dis_i) ]

where the N control points represent the four corners of the picture, f_i is the change at control point i, and dis_1, ..., dis_N denote the distances from the point p currently being calculated to control points 1, ..., N;
after the deformation, the blank regions created by stretching in the image are filled by inverse interpolation.
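The inverse-distance-weighted propagation of control-point changes can be sketched as follows. This is a generic IDW interpolator with weight 1/distance; the patent does not specify the exact weighting exponent, so this choice is an assumption:

```python
import numpy as np

def idw_displacement(p, controls, displacements, eps=1e-9):
    """Interpolate a displacement at point p from the displacements of
    the control points, weighting each control by 1/distance."""
    p = np.asarray(p, dtype=np.float64)
    controls = np.asarray(controls, dtype=np.float64)
    displacements = np.asarray(displacements, dtype=np.float64)
    dis = np.linalg.norm(controls - p, axis=1)
    if dis.min() < eps:                      # exactly on a control point
        return displacements[dis.argmin()]
    w = 1.0 / dis                            # inverse-distance weights
    w /= w.sum()                             # normalize to sum to 1
    return w @ displacements

# Four control points at the picture corners; only one corner moves.
corners = [(0, 0), (10, 0), (10, 10), (0, 10)]
moves = [(2.0, 0.0), (0, 0), (0, 0), (0, 0)]
d_center = idw_displacement((5, 5), corners, moves)  # equidistant blend
d_corner = idw_displacement((0, 0), corners, moves)  # exact control hit
```

At the image center all four corners are equidistant, so the moved corner contributes a quarter of its displacement; at a control point the interpolant reproduces that point's displacement exactly.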
2. The three-dimensional virtual trial-type face reconstruction method based on the mobile platform as claimed in claim 1, wherein the deformed image is fused onto the basic map material, and the image fusion algorithm is the Poisson fusion algorithm.
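Poisson fusion, as cited in claim 2, pastes the source patch into the base map by matching the source's gradients while keeping the target's values on the boundary; OpenCV exposes this as seamlessClone, but the core idea can be sketched with a small Jacobi solver (illustrative, grayscale, full-frame interior only):

```python
import numpy as np

def poisson_blend(source, target, iters=500):
    """Blend `source` into `target` (same-shape grayscale float arrays):
    solve the discrete Poisson equation so the result keeps the source's
    gradients in the interior and the target's values on the boundary.
    Plain Jacobi iteration over the interior pixels."""
    src = source.astype(np.float64)
    result = target.astype(np.float64).copy()
    # Start the interior from the pasted source patch (the visible seam).
    result[1:-1, 1:-1] = src[1:-1, 1:-1]
    # The discrete Laplacian of the source is the guidance field.
    lap = (4 * src[1:-1, 1:-1]
           - src[:-2, 1:-1] - src[2:, 1:-1]
           - src[1:-1, :-2] - src[1:-1, 2:])
    for _ in range(iters):
        # RHS is evaluated fully before assignment, so this is Jacobi.
        result[1:-1, 1:-1] = (result[:-2, 1:-1] + result[2:, 1:-1]
                              + result[1:-1, :-2] + result[1:-1, 2:]
                              + lap) / 4.0
    return result

# A constant source has zero gradients, so blending it into a constant
# target relaxes the pasted interior toward the target boundary value,
# erasing the seam entirely.
target = np.full((8, 8), 10.0)
source = np.full((8, 8), 99.0)
blended = poisson_blend(source, target)
```

This is why the fused facial features inherit the base map's skin tone at the seam instead of showing a hard paste boundary.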
CN201710506496.3A 2017-06-28 2017-06-28 Three-dimensional virtual trial type face reconstruction method based on mobile platform Active CN107274493B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710506496.3A CN107274493B (en) 2017-06-28 2017-06-28 Three-dimensional virtual trial type face reconstruction method based on mobile platform


Publications (2)

Publication Number Publication Date
CN107274493A CN107274493A (en) 2017-10-20
CN107274493B true CN107274493B (en) 2020-06-19

Family

ID=60071180

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710506496.3A Active CN107274493B (en) 2017-06-28 2017-06-28 Three-dimensional virtual trial type face reconstruction method based on mobile platform

Country Status (1)

Country Link
CN (1) CN107274493B (en)

Families Citing this family (28)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108171789B (en) * 2017-12-21 2022-01-18 迈吉客科技(北京)有限公司 Virtual image generation method and system
CN108537861B (en) * 2018-04-09 2023-04-18 网易(杭州)网络有限公司 Map generation method, device, equipment and storage medium
CN108711180B (en) * 2018-05-02 2021-08-06 北京市商汤科技开发有限公司 Method and device for generating makeup and/or face-changing special effect program file package and method and device for generating makeup and/or face-changing special effect
CN108776983A (en) * 2018-05-31 2018-11-09 北京市商汤科技开发有限公司 Based on the facial reconstruction method and device, equipment, medium, product for rebuilding network
CN109410315A (en) * 2018-08-31 2019-03-01 南昌理工学院 Hair styling method, device, readable storage medium storing program for executing and intelligent terminal
CN109035380B (en) * 2018-09-11 2023-03-10 北京旷视科技有限公司 Face modification method, device and equipment based on three-dimensional reconstruction and storage medium
CN109409274B (en) * 2018-10-18 2020-09-04 四川云从天府人工智能科技有限公司 Face image transformation method based on face three-dimensional reconstruction and face alignment
CN109523345A (en) * 2018-10-18 2019-03-26 河海大学常州校区 WebGL virtual fitting system and method based on virtual reality technology
CN109377557B (en) * 2018-11-26 2022-12-27 中山大学 Real-time three-dimensional face reconstruction method based on single-frame face image
CN109671159A (en) * 2018-12-26 2019-04-23 贵州锦微科技信息有限公司 The virtual try-in method of ethnic group's hairdressing based on 3D VR technology
CN109685892A (en) * 2018-12-31 2019-04-26 南京邮电大学盐城大数据研究院有限公司 A kind of quick 3D face building system and construction method
CN109886144B (en) * 2019-01-29 2021-08-13 深圳市云之梦科技有限公司 Virtual trial sending method and device, computer equipment and storage medium
CN109859134A (en) * 2019-01-30 2019-06-07 珠海天燕科技有限公司 A kind of processing method and terminal of makeups material
CN109857311A (en) * 2019-02-14 2019-06-07 北京达佳互联信息技术有限公司 Generate method, apparatus, terminal and the storage medium of human face three-dimensional model
CN110009725B (en) * 2019-03-06 2021-04-09 浙江大学 Face reconstruction method based on multiple RGB images
CN110021064A (en) * 2019-03-07 2019-07-16 李辉 A kind of aestheticism face system and method
US10650564B1 (en) * 2019-04-21 2020-05-12 XRSpace CO., LTD. Method of generating 3D facial model for an avatar and related device
CN110120053A (en) * 2019-05-15 2019-08-13 北京市商汤科技开发有限公司 Face's dressing processing method, device and equipment
CN110544149A (en) * 2019-08-06 2019-12-06 尚尚珍宝(北京)网络科技有限公司 Virtual wearing method and device of wearable product
CN110543826A (en) * 2019-08-06 2019-12-06 尚尚珍宝(北京)网络科技有限公司 Image processing method and device for virtual wearing of wearable product
CN111179411B (en) * 2019-11-25 2023-03-28 郭宗源 Visual facial cosmetology plastic simulation method, system and equipment based on social platform
CN111640182B (en) * 2020-04-20 2023-04-07 南京征帆信息科技有限公司 Wall surface texture drawing system and method
CN111899159B (en) * 2020-07-31 2023-12-22 北京百度网讯科技有限公司 Method, device, apparatus and storage medium for changing hairstyle
CN112052158B (en) * 2020-08-05 2022-09-30 腾讯科技(成都)有限公司 Art resource operation information acquisition method and device
CN112116699B (en) * 2020-08-14 2023-05-16 浙江工商大学 Real-time real-person virtual trial sending method based on 3D face tracking
CN112991523B (en) * 2021-04-02 2023-06-30 福建天晴在线互动科技有限公司 Efficient and automatic hair matching head shape generation method and generation device thereof
CN113724396A (en) * 2021-09-10 2021-11-30 广州帕克西软件开发有限公司 Virtual face-lifting method and device based on face mesh
CN117389676B (en) * 2023-12-13 2024-02-13 成都白泽智汇科技有限公司 Intelligent hairstyle adaptive display method based on display interface

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102663820A (en) * 2012-04-28 2012-09-12 清华大学 Three-dimensional head model reconstruction method
CN103065360A (en) * 2013-01-16 2013-04-24 重庆绿色智能技术研究院 Generation method and generation system of hair style effect pictures
CN104157010A (en) * 2014-08-29 2014-11-19 厦门幻世网络科技有限公司 3D human face reconstruction method and device
CN104376594A (en) * 2014-11-25 2015-02-25 福建天晴数码有限公司 Three-dimensional face modeling method and device
CN105045968A (en) * 2015-06-30 2015-11-11 青岛理工大学 Hairstyle design method and system
CN106652025A (en) * 2016-12-20 2017-05-10 五邑大学 Three-dimensional face modeling method and three-dimensional face modeling printing device based on video streaming and face multi-attribute matching


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
"Image Warping with Scattered Data Interpolation"; Detlef Ruprecht, Heinrich Muller; IEEE Computer Graphics and Applications; 1995-12-31; see page 38 *
"Three-Dimensional Face Modeling Based on a Single Two-Dimensional Image"; Gong Xun; China Doctoral Dissertations Full-text Database, Information Science and Technology; 2009-06-15; pp. 33-35 *
"Research and Application of a Mobile-Platform Hairstyle Replacement Engine Based on Single-View Face Reconstruction"; Wang Chao; China Master's Theses Full-text Database, Information Science and Technology; 2016-02-15; pp. 10-11, 23-29 *


Similar Documents

Publication Publication Date Title
CN107274493B (en) Three-dimensional virtual trial type face reconstruction method based on mobile platform
Sýkora et al. Ink-and-ray: Bas-relief meshes for adding global illumination effects to hand-drawn characters
CN108305312B (en) Method and device for generating 3D virtual image
CN109377557B (en) Real-time three-dimensional face reconstruction method based on single-frame face image
KR101199475B1 (en) Method and apparatus for reconstruction 3 dimension model
CN113298936B (en) Multi-RGB-D full-face material recovery method based on deep learning
Lu et al. Illustrative interactive stipple rendering
JP2024522287A (en) 3D human body reconstruction method, apparatus, device and storage medium
Hu et al. Capturing braided hairstyles
JP2001268594A (en) Client server system for three-dimensional beauty simulation
WO2021063271A1 (en) Human body model reconstruction method and reconstruction system, and storage medium
WO2002013144A1 (en) 3d facial modeling system and modeling method
Li et al. In-home application (App) for 3D virtual garment fitting dressing room
Hudon et al. Deep normal estimation for automatic shading of hand-drawn characters
CN112102480B (en) Image data processing method, apparatus, device and medium
CN113628327A (en) Head three-dimensional reconstruction method and equipment
Thalmann et al. Modeling of populations
CN105913496A (en) Method and system for fast conversion of real clothes to three-dimensional virtual clothes
Verhoeven Computer graphics meets image fusion: The power of texture baking to simultaneously visualise 3D surface features and colour
WO2020104990A1 (en) Virtually trying cloths & accessories on body model
Tarini et al. Texturing faces
CN113870404B (en) Skin rendering method of 3D model and display equipment
Lu et al. Parametric shape estimation of human body under wide clothing
CN116385619B (en) Object model rendering method, device, computer equipment and storage medium
Teng et al. Image-based tree modeling from a few images with very narrow viewing range

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant