CN112419487A - Three-dimensional hair reconstruction method and device, electronic equipment and storage medium - Google Patents


Info

Publication number
CN112419487A
CN112419487A (application CN202011413467.0A; granted publication CN112419487B)
Authority
CN
China
Prior art keywords
hair
dimensional
image
data
target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202011413467.0A
Other languages
Chinese (zh)
Other versions
CN112419487B (en)
Inventor
郑彦波
宋新慧
袁燚
胡志鹏
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Netease Hangzhou Network Co Ltd
Original Assignee
Netease Hangzhou Network Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Netease Hangzhou Network Co Ltd filed Critical Netease Hangzhou Network Co Ltd
Priority to CN202011413467.0A priority Critical patent/CN112419487B/en
Publication of CN112419487A publication Critical patent/CN112419487A/en
Application granted granted Critical
Publication of CN112419487B publication Critical patent/CN112419487B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00Three dimensional [3D] modelling, e.g. data description of 3D objects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/084Backpropagation, e.g. using gradient descent
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/70Denoising; Smoothing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/13Edge detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/10Internal combustion engine [ICE] based vehicles
    • Y02T10/40Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Biomedical Technology (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Biophysics (AREA)
  • Mathematical Physics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The application provides a three-dimensional hair reconstruction method, an apparatus, an electronic device, and a storage medium, wherein the method comprises the following steps: acquiring an original hair image, detecting the hair direction of the original hair image, and generating a hair direction diagram; constructing initial three-dimensional hair data according to the hair direction diagram and a preset three-dimensional target model; optimizing the hair shape of the initial three-dimensional hair data through a hair generation model to obtain target three-dimensional hair data; and rendering the target three-dimensional hair data into a two-dimensional hair image via differentiable rendering, and minimizing the difference between the two-dimensional hair image and the original hair image by optimizing the hair generation model. The resulting target three-dimensional hair data is more accurate and better conforms to the growth patterns of real hair.

Description

Three-dimensional hair reconstruction method and device, electronic equipment and storage medium
Technical Field
The present disclosure relates to the field of image processing technologies, and in particular, to a method and an apparatus for three-dimensional hair reconstruction, an electronic device, and a storage medium.
Background
Modeling virtual three-dimensional characters and animals is a very important part of games, science-fiction movies, VR, and similar applications. For characters and animals to appear lifelike, high-precision three-dimensional modeling of hair is indispensable, and constructing a realistic three-dimensional hair model requires accurately recognizing the hair in an image.
In general three-dimensional hair reconstruction, a deep neural network model is trained on paired data of images and three-dimensional hair. After the model is trained, a two-dimensional image is input and three-dimensional hair data is output after model computation.
However, because hair varies widely between individuals, the training samples cannot cover all subjects; when the three-dimensional hair reconstruction result is output directly by the deep neural network model in this way, its accuracy is low.
Disclosure of Invention
The embodiment of the application provides a hair three-dimensional reconstruction method, which is used for improving the accuracy of hair three-dimensional reconstruction.
The embodiment of the application provides a hair three-dimensional reconstruction method, which comprises the following steps:
acquiring an original hair image, detecting the hair direction of the original hair image, and generating a hair directional diagram;
constructing initial three-dimensional hair data according to the hair directional diagram and a preset three-dimensional target model;
optimizing the hair shape of the initial three-dimensional hair data through a hair generation model to obtain target three-dimensional hair data;
rendering the target three-dimensional hair data into a two-dimensional hair image via differentiable rendering, and minimizing the difference between the two-dimensional hair image and the original hair image by optimizing the hair generation model.
In one embodiment, the acquiring the original hair image and detecting the hair direction of the original hair image to generate the hair direction diagram includes:
performing hair edge detection on a target image to obtain an original hair image;
and filtering the original hair image by using a linear filter to obtain hair directions of different positions of the original hair image, and forming the hair directional diagram.
In an embodiment, the initial three-dimensional hair data comprises: three-dimensional position coordinates of a plurality of points corresponding to each virtual hair; according to the hair directional diagram and a preset three-dimensional target model, constructing initial three-dimensional hair data, wherein the method comprises the following steps:
and according to the hair directions of different positions indicated by the hair directional diagram, extending from the preset hair root position of the three-dimensional target model to obtain three-dimensional position coordinates of a plurality of points corresponding to each virtual hair.
In an embodiment, the optimizing the hair shape of the initial three-dimensional hair data by a hair generation model to obtain target three-dimensional hair data includes:
for each virtual hair, taking the three-dimensional position coordinates of a plurality of points corresponding to the virtual hair as the input of the hair generation model, and obtaining the optimized three-dimensional position coordinates of the plurality of points output by the hair generation model;
and obtaining the target three-dimensional hair data according to the three-dimensional position coordinates of the optimized points of each virtual hair.
In one embodiment, obtaining the optimized three-dimensional position coordinates of the plurality of points output by the hair generation model using the three-dimensional position coordinates of the plurality of points as an input to the hair generation model comprises:
inputting the three-dimensional position coordinates of the plurality of points into a coding module of the hair generation model, and outputting a hair feature vector;
and inputting the hair feature vector into a decoding module of the hair generation model, and outputting the three-dimensional position coordinates of the optimized points.
In an embodiment, before obtaining the optimized three-dimensional position coordinates of the plurality of points output by the hair generation model using the three-dimensional position coordinates of the plurality of points as an input to the hair generation model, the method further comprises:
acquiring position coordinates of a plurality of points belonging to the same real hair;
and performing machine learning by using the position coordinates of the plurality of points belonging to the same real hair, and training to obtain the hair generation model.
In one embodiment, rendering the target three-dimensional hair data into a two-dimensional hair image via differentiable rendering comprises:
constructing a virtual camera facing the three-dimensional target model;
and projecting the target three-dimensional hair data to a two-dimensional plane from the viewpoint of the virtual camera to form a two-dimensional hair image.
In an embodiment, minimizing the difference between the two-dimensional hair image and the original hair image by optimizing the hair generation model comprises:
calculating a first difference between a hair contour of the two-dimensional hair image and a hair contour of the original hair image;
calculating a second difference between the hair direction diagram of the two-dimensional hair image and the hair direction diagram of the original hair image;
iteratively optimizing the hair generation model to minimize a sum of the first difference and the second difference.
The embodiment of the present application further provides a hair three-dimensional reconstruction device, including:
the direction detection module is used for acquiring an original hair image, detecting the hair direction of the original hair image and generating a hair directional diagram;
the model building module is used for building initial three-dimensional hair data according to the hair directional diagram and a preset three-dimensional target model;
the hair optimization module is used for optimizing the hair shape of the initial three-dimensional hair data through a hair generation model to obtain target three-dimensional hair data;
and the back-propagation module is used for rendering the target three-dimensional hair data into a two-dimensional hair image via differentiable rendering, and minimizing the difference between the two-dimensional hair image and the original hair image by optimizing the hair generation model.
An embodiment of the present application further provides an electronic device, where the electronic device includes:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to perform the above-described hair three-dimensional reconstruction method.
Embodiments of the present application further provide a computer-readable storage medium, which stores a computer program, which is executable by a processor to perform the above-mentioned hair three-dimensional reconstruction method.
According to the technical scheme provided by the embodiments of the application, the hair direction of the original hair image is detected, and the initial three-dimensional hair data is constructed based on the hair direction. The hair shape of the initial three-dimensional hair data is then optimized through the hair generation model to obtain the target three-dimensional hair data. The target three-dimensional hair data can be rendered into a two-dimensional hair image via differentiable rendering, and the difference between the two-dimensional hair image and the original hair image is minimized by optimizing the hair generation model, so that the resulting target three-dimensional hair data is more accurate and better conforms to the growth patterns of real hair.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings required to be used in the embodiments of the present application will be briefly described below.
Fig. 1 is a schematic structural diagram of an electronic device provided in an embodiment of the present application;
fig. 2 is a schematic flow chart of a hair three-dimensional reconstruction method provided in an embodiment of the present application;
FIG. 3 is a comparison of results before and after processing by the hair generation model provided in an embodiment of the present application;
FIG. 4 is a detailed flowchart of step S340 in the corresponding embodiment of FIG. 2;
FIG. 5 is a schematic diagram of hair texture optimization provided in an embodiment of the present application;
fig. 6 is a schematic diagram of a three-dimensional hair reconstruction process provided by an embodiment of the present application;
fig. 7 is a block diagram of a hair three-dimensional reconstruction apparatus provided in an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described below with reference to the drawings in the embodiments of the present application.
Like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, it need not be further defined and explained in subsequent figures. Meanwhile, in the description of the present application, the terms "first", "second", and the like are used only for distinguishing the description, and are not to be construed as indicating or implying relative importance.
Fig. 1 is a schematic structural diagram of an electronic device provided in an embodiment of the present application. The electronic device 100 may be used to perform the hair three-dimensional reconstruction method provided by the embodiment of the application. As shown in fig. 1, the electronic device 100 includes: one or more processors 102, and one or more memories 104 storing processor-executable instructions. Wherein the processor 102 is configured to execute a hair three-dimensional reconstruction method provided by the following embodiments of the present application.
The processor 102 may be a gateway, or may be an intelligent terminal, or may be a device including a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), or other form of processing unit having data processing capability and/or instruction execution capability, and may process data of other components in the electronic device 100, and may control other components in the electronic device 100 to perform desired functions.
The memory 104 may include one or more computer program products that may include various forms of computer-readable storage media, such as volatile memory and/or non-volatile memory. The volatile memory may include, for example, Random Access Memory (RAM), cache memory (cache), and/or the like. The non-volatile memory may include, for example, Read Only Memory (ROM), hard disk, flash memory, etc. On which one or more computer program instructions may be stored that may be executed by processor 102 to implement the hair three-dimensional reconstruction method described below. Various applications and various data, such as various data used and/or generated by the applications, may also be stored in the computer-readable storage medium.
In one embodiment, the electronic device 100 shown in FIG. 1 may also include an input device 106, an output device 108, and a data acquisition device 110, which are interconnected via a bus system 112 and/or other form of connection mechanism (not shown). It should be noted that the components and structure of the electronic device 100 shown in fig. 1 are exemplary only, and not limiting, and the electronic device 100 may have other components and structures as desired.
The input device 106 may be a device used by a user to input instructions and may include one or more of a keyboard, a mouse, a microphone, a touch screen, and the like. The output device 108 may output various information (e.g., images or sounds) to the outside (e.g., a user), and may include one or more of a display, a speaker, and the like. The data acquisition device 110 may acquire an image of a subject and store the acquired image in the memory 104 for use by other components. Illustratively, the data acquisition device 110 may be a camera.
In an embodiment, the devices in the exemplary electronic apparatus 100 for implementing the hair three-dimensional reconstruction method of the embodiment of the present application may be integrally disposed, or may be separately disposed, such as integrally disposing the processor 102, the memory 104, the input device 106 and the output device 108, and disposing the data acquisition device 110 separately.
In an embodiment, the example electronic device 100 for implementing the hair three-dimensional reconstruction method of the embodiment of the present application may be implemented as a smart terminal, such as a smart phone, a tablet computer, a desktop computer, a smart watch, a vehicle-mounted device, and the like.
Fig. 2 is a schematic flow chart of a hair three-dimensional reconstruction method provided in an embodiment of the present application. As shown in fig. 2, the method includes the following steps S310 to S340.
Step S310, acquiring an original hair image, detecting the hair direction of the original hair image, and generating a hair directional diagram.
The hair may be human hair or animal hair. The following embodiments mainly take human hair as an example; the three-dimensional reconstruction of animal hair can follow the same process as the three-dimensional reconstruction of human hair.
The original hair image may be a two-dimensional image of human hair or of an animal. It may be captured by a camera, or acquired from local storage or another device. The acquired hair image is referred to here as the original hair image to distinguish it from the two-dimensional hair image re-rendered below. In one embodiment, hair edge detection may be performed on a target image to obtain the original hair image. The target image may be a human image or an animal image; the original hair image may be a hair image segmented from the person image, or an image of the area where the animal is located cropped from the animal image (i.e., with interference from the surrounding environment removed).
The hair direction diagram is used to indicate the hair direction at different positions in the original hair image. The original hair image is filtered with a linear filter to obtain the hair directions at different positions, which together form the hair direction diagram.
In one embodiment, the linear filter may be a Gabor filter; filtering the original hair image with a Gabor filter yields a direction value representing the texture direction at each pixel. The direction value can be regarded as the angle of the texture direction in the local area around each pixel, i.e., the hair direction. The angle values corresponding to all pixels form the hair direction diagram.
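As a rough illustration of the Gabor-based direction detection described above (this is not code from the patent; the kernel parameters and the orientation-bank approach are assumptions), a per-pixel direction map can be estimated by filtering with a bank of oriented Gabor kernels and taking the angle of the strongest response:

```python
import numpy as np

def gabor_kernel(theta, ksize=9, sigma=2.0, lambd=4.0, gamma=0.5):
    """Real-valued Gabor kernel oriented at angle theta (radians)."""
    half = ksize // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(np.float64)
    x_t = x * np.cos(theta) + y * np.sin(theta)      # along the oscillation
    y_t = -x * np.sin(theta) + y * np.cos(theta)     # across the oscillation
    envelope = np.exp(-(x_t ** 2 + (gamma * y_t) ** 2) / (2 * sigma ** 2))
    return envelope * np.cos(2 * np.pi * x_t / lambd)

def filter2d(img, kernel):
    """Cross-correlation with edge padding; the Gabor kernel above is
    symmetric under 180-degree rotation, so this equals convolution."""
    kh, kw = kernel.shape
    padded = np.pad(img, ((kh // 2, kh // 2), (kw // 2, kw // 2)), mode="edge")
    windows = np.lib.stride_tricks.sliding_window_view(padded, kernel.shape)
    return np.einsum("ijkl,kl->ij", windows, kernel)

def hair_direction_map(gray, n_orient=32):
    """Per-pixel direction value: angle of the strongest Gabor response."""
    thetas = np.linspace(0.0, np.pi, n_orient, endpoint=False)
    responses = np.stack([np.abs(filter2d(gray, gabor_kernel(t)))
                          for t in thetas])
    return thetas[np.argmax(responses, axis=0)]
```

Each entry of the returned map is an angle in [0, π), matching the "angle value of the texture direction" described above.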
And S320, constructing initial three-dimensional hair data according to the hair directional diagram and a preset three-dimensional target model.
In one embodiment, the three-dimensional target model may be a three-dimensional head model when the hair is human hair, and a three-dimensional animal model when the hair is animal hair. The initial three-dimensional hair data is the preliminarily reconstructed hair spatial position data, as opposed to the target three-dimensional hair data of step S330, which is the hair spatial position data obtained by optimizing the initial three-dimensional hair data.
The initial three-dimensional hair data may include three-dimensional position coordinates of a plurality of points corresponding to each virtual hair. For example, three-dimensional hair root positions of 1024 hair roots may be set in advance. And according to the hair directions of different positions indicated by the hair directional diagram, extending from the preset hair root position of the three-dimensional target model to obtain three-dimensional position coordinates of a plurality of points corresponding to each virtual hair.
Extending from a hair root position means increasing or decreasing the x, y coordinates from the root position along the hair direction to obtain new x, y coordinates. The z coordinate is calculated by projecting the new x, y coordinates onto the three-dimensional target model and taking the z coordinate of the projected point as the coordinate of the hair at that location. For example, in the case of bangs, a hair on the forehead grows downward along the scalp.
A virtual hair is a computer-drawn hair, as opposed to a real hair. For example, a virtual hair can be constructed by extending one segment from the hair root position to obtain a three-dimensional point coordinate, then extending the next segment to obtain the next three-dimensional point coordinate, and so on. In one embodiment, each virtual hair may have 100 point coordinates, which constitute the three-dimensional position coordinates of the plurality of points corresponding to that virtual hair.
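A minimal sketch of this root-extension procedure might look as follows (all names are illustrative assumptions; a simple z = f(x, y) callback stands in for projection onto the three-dimensional head model):

```python
import numpy as np

def grow_strand(root_xy, direction_map, surface_z, n_points=100, step=1.0):
    """Grow one virtual hair from a 2-D root position.

    At each step the x, y coordinates advance along the local hair direction
    read from the direction map, and z is taken from the underlying head
    surface via the surface_z(x, y) lookup.
    """
    h, w = direction_map.shape
    pts = []
    x, y = root_xy
    for _ in range(n_points):
        xi = int(np.clip(round(x), 0, w - 1))
        yi = int(np.clip(round(y), 0, h - 1))
        theta = direction_map[yi, xi]          # local hair direction (radians)
        pts.append((x, y, surface_z(x, y)))    # z from the target model surface
        x += step * np.cos(theta)              # advance along the direction
        y += step * np.sin(theta)
    return np.array(pts)                        # shape: (n_points, 3)
```

Repeating this for each preset root (e.g., 1024 roots) yields the initial three-dimensional hair data described above.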
And step S330, optimizing the hair shape of the initial three-dimensional hair data through a hair generation model to obtain target three-dimensional hair data.
In one embodiment, the hair generation model may be trained as a VAE (Variational Autoencoder) model. Specifically, the position coordinates of a plurality of points belonging to the same real hair are obtained; machine learning is performed using these position coordinates, and the hair generation model is obtained by training. For example, the known position coordinates (x, y, z) of 100 points on one hair may be used as input (i.e., 100 × 3 numbers), an 8-dimensional feature vector is obtained through the encoding module (encoder) of the VAE model, and the decoding module (decoder) maps the 8-dimensional feature vector back to the position coordinates of 100 points. The parameters of the encoder and decoder are adjusted so that the output position coordinates match the input as closely as possible.
The VAE model may learn a distribution relationship between coordinates of a plurality of points on a single hair strand. And then inputting the three-dimensional position coordinates of the plurality of points of the virtual hair into the hair generation model aiming at each virtual hair, and obtaining the optimized three-dimensional position coordinates of the plurality of points output by the hair generation model. In one embodiment, three-dimensional position coordinates of a plurality of points of the same virtual hair may be input into an encoding module of the hair generation model, and the hair feature vector may be output. The hair feature vector is used to characterize the shape features of an individual hair. And inputting the hair feature vector into a decoding module of the hair generation model, and outputting the optimized three-dimensional position coordinates of the points. While the encoding and decoding modules of the hair generation model may be trained in the above machine learning manner.
For example, with 1024 virtual hairs and the three-dimensional coordinates of 100 points per hair, the encoding module produces 1024 (hairs) × 8 (dimensions of each hair's feature vector) values. After this data passes through the decoding module, the optimized three-dimensional position coordinates of 1024 × 100 points are obtained. After optimization by the hair generation model, the three-dimensional position coordinates of the virtual hairs conform to hair distribution patterns and the hairs are no longer twisted. Assuming 100 points per virtual hair and 1024 virtual hairs in total, the target three-dimensional hair data may include the optimized three-dimensional position coordinates (x, y, z) of 1024 × 100 points; the line connecting the 100 points belonging to one virtual hair is one hair.
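For illustration only, the encode/decode shape flow described above might be sketched as below. The weights here are random stand-ins for a trained VAE encoder and decoder, so the "optimized" output is purely illustrative of the data shapes, not of real optimization:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy weights standing in for the trained VAE modules;
# a real model would be trained on points sampled from real hair strands.
W_enc = rng.standard_normal((300, 8)) * 0.01   # 100 points x 3 coords -> 8-dim code
W_dec = rng.standard_normal((8, 300)) * 0.01   # 8-dim code -> 100 points x 3 coords

def encode(strands):
    """strands: (n_strands, 100, 3) -> per-strand 8-dim feature vectors."""
    flat = strands.reshape(len(strands), -1)    # (n_strands, 300)
    return flat @ W_enc                          # (n_strands, 8)

def decode(codes):
    """codes: (n_strands, 8) -> reconstructed point coordinates."""
    return (codes @ W_dec).reshape(len(codes), 100, 3)

strands = rng.standard_normal((1024, 100, 3))   # initial three-dimensional hair data
codes = encode(strands)                          # 1024 x 8 feature vectors
optimized = decode(codes)                        # 1024 x 100 x 3 point coordinates
```

The 1024 × 8 and 1024 × 100 × 3 shapes match the quantities given in the paragraph above.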
As shown in fig. 3, (a) shows virtual hair constructed according to the hair direction diagram, and (b) shows the optimized virtual hair. It can be seen from (a) that virtual hair constructed directly from the direction diagram differs greatly from real hair, does not conform to hair distribution patterns, and exhibits strange distortions, whereas the optimized virtual hair is more natural and closer to real hair.
Step S340, rendering the target three-dimensional hair data into a two-dimensional hair image via differentiable rendering, and minimizing the difference between the two-dimensional hair image and the original hair image by optimizing the hair generation model.
The two-dimensional hair image may be a 2D (two-dimensional) image obtained by photographing the 3D (three-dimensional) hair model corresponding to the target three-dimensional hair data. Differentiable rendering is a technique for rendering a 3D scene into a 2D image in such a way that gradients can flow back through the rendering process. Since the target three-dimensional hair data is reconstructed from the original hair image, and the two-dimensional hair image is obtained by photographing the corresponding 3D hair model, the two images should be identical in the ideal case. The parameters of the hair generation model can therefore be readjusted based on the two-dimensional hair image to minimize its difference from the original hair image. The target three-dimensional hair data produced by the hair generation model with the smallest difference can be regarded as the result of the three-dimensional hair reconstruction.
According to the technical scheme provided by the embodiments of the application, the hair direction of the original hair image is detected, and the initial three-dimensional hair data is constructed based on the hair direction. The hair shape of the initial three-dimensional hair data is then optimized through the hair generation model to obtain the target three-dimensional hair data. The target three-dimensional hair data can be rendered into a two-dimensional hair image via differentiable rendering, and the difference between the two-dimensional hair image and the original hair image is minimized by optimizing the hair generation model, so that the resulting target three-dimensional hair data is more accurate and better conforms to the growth patterns of real hair.
In an embodiment, as shown in fig. 4, the step S340 may include the following steps: step S341-step S342.
Step S341: and constructing a virtual camera facing the three-dimensional target model.
For example, assuming the three-dimensional target model is a head model, the virtual camera may be oriented towards the face for two-dimensional image acquisition. The virtual camera is the counterpart of a real camera: it is constructed by setting camera parameters such as position, focal length, and field of view so as to simulate the effect of real photography.
Step S342: and projecting the target three-dimensional hair data to a two-dimensional plane from the viewpoint of the virtual camera to form a two-dimensional hair image.
For example, assuming the target three-dimensional hair data contains the three-dimensional position coordinates of 1024 × 100 points (100 points per virtual hair), the coordinates of each point relative to the virtual camera can be calculated. That is, a coordinate system is established with the virtual camera at the origin; the target three-dimensional hair data is converted from the world coordinate system to the camera coordinate system, and a perspective projection transform then converts it from the camera coordinate system to the image coordinate system, yielding the position of each three-dimensional point in the two-dimensional hair image. Since the hair model is composed of strands rather than ordinary surface patches, and strands overlap one another, only the strand closest to the camera (i.e., the outermost) is rendered. If required, the virtual hair corresponding to each pixel in the two-dimensional hair image can be recorded.
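The world-to-camera-to-image chain described above can be sketched as a simple pinhole projection (all parameter names here are illustrative assumptions; the returned camera-space depth z can drive the "nearest strand wins" rule):

```python
import numpy as np

def project_points(points_world, R, t, f, cx, cy):
    """Project 3-D points into a virtual pinhole camera.

    R, t: world-to-camera rotation and translation; f: focal length in
    pixels; (cx, cy): principal point. Returns pixel coordinates and
    camera-space depth for each point.
    """
    cam = points_world @ R.T + t           # world -> camera coordinates
    z = cam[:, 2]
    u = f * cam[:, 0] / z + cx             # perspective division -> image plane
    v = f * cam[:, 1] / z + cy
    return np.stack([u, v], axis=1), z
```

Applying this to all 1024 × 100 points gives the position of each strand point in the two-dimensional hair image.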
Step S343: a first difference between a hair contour of the two-dimensional hair image and a hair contour of the original hair image is calculated.
Step S344: a second difference between the hair direction diagram of the two-dimensional hair image and the hair direction diagram of the original hair image is calculated.
Steps S343 and S344 may be performed in either order. The hair contour may be extracted by an edge detection algorithm; the hair contour is equivalent to the hair boundary. The first difference characterizes the discrepancy between the hair contour of the original hair image and that of the two-dimensional hair image; in one embodiment, its magnitude can be characterized by computing the Euclidean distance between corresponding pixels. The hair direction of the two-dimensional hair image can likewise be detected with a Gabor filter. The second difference characterizes the discrepancy between the hair direction diagram of the two-dimensional hair image and that of the original hair image; in one embodiment, it can be characterized by computing the differences of the direction values of corresponding pixels.
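As a hedged illustration of the two differences, simple per-pixel stand-ins can be used (a squared silhouette difference instead of the patent's per-pixel Euclidean contour distance, and an unsigned angular error for the direction maps; both simplifications are assumptions):

```python
import numpy as np

def silhouette_loss(mask_rendered, mask_original):
    """First difference: per-pixel discrepancy between the two hair
    silhouettes (boolean masks of the hair region)."""
    diff = mask_rendered.astype(float) - mask_original.astype(float)
    return float(np.mean(diff ** 2))

def direction_loss(dir_rendered, dir_original, hair_mask):
    """Second difference: mean angular error between the two direction maps
    inside the hair region, folded into [0, pi/2] because a hair direction
    of theta and theta + pi describe the same unsigned texture direction."""
    d = np.abs(dir_rendered - dir_original) % np.pi
    d = np.minimum(d, np.pi - d)
    return float(np.mean(d[hair_mask]))
```

Both losses are zero when the rendered image exactly matches the original, which is the ideal case described in step S340.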
Step S345: iteratively optimizing the hair generation model to minimize a sum of the first difference and the second difference.
In one embodiment, the sum of the first difference and the second difference is used as the loss, the gradient is calculated by a back propagation algorithm, and the parameters of the hair generation model are adjusted. This changes the 1024 × 8 feature vectors (one 8-dimensional feature vector per hair), which are then decoded by the VAE to obtain target three-dimensional hair data that conforms both to the hair contour and texture of the photo and to the rules of real hair, until the sum of the first difference and the second difference is minimized. The target three-dimensional hair data with the smallest sum of the first difference and the second difference can be regarded as the three-dimensional reconstruction result of the hair in the original hair image.
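The iterative optimization of step S345 can be sketched as follows. The patent backpropagates through a differentiable renderer and a VAE decoder; as a self-contained stand-in, this sketch minimizes a loss over a latent vector by finite-difference gradient descent, with a toy quadratic loss playing the role of the first-plus-second difference. The function names and hyperparameters are illustrative assumptions.

```python
import numpy as np

def optimize_latent(loss_fn, z0, lr=0.1, steps=200, eps=1e-4):
    """Iteratively adjust latent hair features z to minimize loss_fn(z).
    The patent computes gradients by back propagation; here a
    finite-difference gradient keeps the sketch dependency-free."""
    z = z0.astype(float).copy()
    for _ in range(steps):
        grad = np.zeros_like(z)
        for i in range(z.size):
            dz = np.zeros_like(z); dz.flat[i] = eps
            grad.flat[i] = (loss_fn(z + dz) - loss_fn(z - dz)) / (2 * eps)
        z -= lr * grad  # gradient descent step on the latent features
    return z

# Toy loss standing in for (first difference + second difference).
target = np.array([0.5, -1.0, 2.0])
loss = lambda z: float(((z - target) ** 2).sum())
z_opt = optimize_latent(loss, np.zeros(3))
```

In the patent's setting, `z` would be the 1024 × 8 feature vectors, and `loss_fn` would decode them with the VAE, render the strands, and compare contours and direction diagrams against the original hair image.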
As shown in fig. 5, (a) is the hair contour segmented from the real photograph, (b) is the currently grown hair (i.e., the initial three-dimensional hair data), and (c) is the hair direction diagram, with different colors representing different directions.
See (d): comparing a and b, they differ in contour, so the excess portions need to be compressed inward and the insufficient portions stretched outward. Comparing c and b: for each hair in the 3D data of b, the direction of the hair is calculated from two adjacent points, and the direction at the corresponding position in c is then obtained from the projection result of b. See (e), where line 52 is the hair direction of b and line 51 is the direction detected from the real hair; line 52 needs to become line 51, and arrow 53 indicates this trend. Through the hair generation model, the hair directions of b can be optimized to be closer to the directions of the real hair.
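The per-strand direction described here, computed from two adjacent points and then compared at the projected position, can be sketched as follows. The angle convention and the use of already-projected 2D points are illustrative assumptions.

```python
import numpy as np

def segment_directions_2d(strand_2d):
    """Direction of each hair segment, from pairs of adjacent projected
    points, expressed as an orientation angle in [0, pi)."""
    deltas = np.diff(strand_2d, axis=0)              # (n_points - 1, 2)
    angles = np.arctan2(deltas[:, 1], deltas[:, 0])  # signed angle in (-pi, pi]
    return np.mod(angles, np.pi)                     # orientation only, period pi

# A strand growing diagonally: every segment points at 45 degrees.
strand = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 2.0]])
angles = segment_directions_2d(strand)
```

Comparing such per-segment orientations against the detected direction diagram at each projected position yields the correction indicated by arrow 53.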
As shown in fig. 6, a face-and-head image is input, and its hair region can be used as the original hair image. With the scheme provided by the embodiment of the present application, through a series of comparison-and-adjustment iterations, the difference between the hair region in the finally output two-dimensional image and the hair region of the input image becomes smaller and smaller.
Through the technical scheme provided by the embodiment of the present application, the projection of the generated target three-dimensional hair data (namely the two-dimensional hair image) is closer to the contour and texture of the real photo (namely the original hair image), the shape of each hair strand better conforms to the rules of real hair, and the three-dimensional reconstruction result of the hair is more accurate.
The following are embodiments of the apparatus of the present application, which can be used to perform the above-mentioned embodiments of the method for three-dimensional reconstruction of hair of the present application. For details which are not disclosed in the embodiments of the apparatus of the present application, please refer to the embodiments of the three-dimensional hair reconstruction method of the present application.
Fig. 7 is a block diagram of a hair three-dimensional reconstruction apparatus according to an embodiment of the present application. As shown in fig. 7, the apparatus includes: direction detection module 810, model building module 820, hair optimization module 830, and reverse pass module 840.
A direction detecting module 810, configured to obtain an original hair image, detect a hair direction of the original hair image, and generate a hair direction diagram;
a model construction module 820, configured to construct initial three-dimensional hair data according to the hair directional diagram and a preset three-dimensional target model;
a hair optimization module 830, configured to optimize a hair shape of the initial three-dimensional hair data through a hair generation model to obtain target three-dimensional hair data;
a reverse transfer module 840 configured to differentiably render the target three-dimensional hair data into a two-dimensional hair image, and minimize a difference between the two-dimensional hair image and the original hair image by optimizing the hair generation model.
The implementation process of the functions and actions of each module in the device is specifically detailed in the implementation process of the corresponding step in the hair three-dimensional reconstruction method, and is not described herein again.
In the embodiments provided in the present application, the disclosed apparatus and method can be implemented in other ways. The apparatus embodiments described above are merely illustrative, and for example, the flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of apparatus, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
In addition, functional modules in the embodiments of the present application may be integrated together to form an independent part, or each module may exist separately, or two or more modules may be integrated to form an independent part.
The functions, if implemented in the form of software functional modules and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present application or portions thereof that substantially contribute to the prior art may be embodied in the form of a software product stored in a storage medium and including instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present application. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.

Claims (11)

1. A method of three-dimensional reconstruction of hair, comprising:
acquiring an original hair image, detecting the hair direction of the original hair image, and generating a hair directional diagram;
constructing initial three-dimensional hair data according to the hair directional diagram and a preset three-dimensional target model;
optimizing the hair shape of the initial three-dimensional hair data through a hair generation model to obtain target three-dimensional hair data;
the target three-dimensional hair data may be micro-rendered into a two-dimensional hair image, with minimal difference between the two-dimensional hair image and the original hair image by optimizing the hair generation model.
2. The method of claim 1, wherein obtaining the raw hair image and detecting a hair orientation of the raw hair image to generate a hair orientation map comprises:
performing hair edge detection on a target image to obtain an original hair image;
and filtering the original hair image by using a linear filter to obtain hair directions of different positions of the original hair image, and forming the hair directional diagram.
3. The method of claim 1, wherein the initial three-dimensional hair data comprises: three-dimensional position coordinates of a plurality of points corresponding to each virtual hair; according to the hair directional diagram and a preset three-dimensional target model, constructing initial three-dimensional hair data, wherein the method comprises the following steps:
and according to the hair directions of different positions indicated by the hair directional diagram, extending from the preset hair root position of the three-dimensional target model to obtain three-dimensional position coordinates of a plurality of points corresponding to each virtual hair.
4. The method according to claim 3, wherein optimizing the hair shape of the initial three-dimensional hair data by the hair generation model to obtain target three-dimensional hair data comprises:
for each virtual hair, taking the three-dimensional position coordinates of a plurality of points corresponding to the virtual hair as the input of the hair generation model, and obtaining the optimized three-dimensional position coordinates of the plurality of points output by the hair generation model;
and obtaining the target three-dimensional hair data according to the three-dimensional position coordinates of the optimized points of each virtual hair.
5. The method of claim 4, wherein obtaining the optimized three-dimensional position coordinates of the plurality of points output by the hair generation model using the three-dimensional position coordinates of the plurality of points as an input to the hair generation model comprises:
inputting the three-dimensional position coordinates of the plurality of points into a coding module of the hair generation model, and outputting a hair feature vector;
and inputting the hair feature vector into a decoding module of the hair generation model, and outputting the three-dimensional position coordinates of the optimized points.
6. The method of claim 4, wherein before obtaining the optimized three-dimensional position coordinates of the plurality of points output by the hair generation model using the three-dimensional position coordinates of the plurality of points as input to the hair generation model, the method further comprises:
acquiring position coordinates of a plurality of points belonging to the same real hair;
and performing machine learning by using the position coordinates of the plurality of points belonging to the same real hair, and training to obtain the hair generation model.
7. The method of claim 1, wherein said differentiably rendering said target three-dimensional hair data into a two-dimensional hair image comprises:
constructing a virtual camera facing the three-dimensional target model;
and projecting the target three-dimensional hair data to a two-dimensional plane from the viewpoint of the virtual camera to form a two-dimensional hair image.
8. The method of claim 1, wherein minimizing the difference between the two-dimensional hair image and the original hair image by optimizing the hair generation model comprises:
calculating a first difference between a hair contour of the two-dimensional hair image and a hair contour of the original hair image;
calculating a second difference between the hair direction diagram of the two-dimensional hair image and the hair direction diagram of the original hair image;
iteratively optimizing the hair generation model to minimize a sum of the first difference and the second difference.
9. A device for three-dimensional reconstruction of hair, comprising:
the direction detection module is used for acquiring an original hair image, detecting the hair direction of the original hair image and generating a hair directional diagram;
the model building module is used for building initial three-dimensional hair data according to the hair directional diagram and a preset three-dimensional target model;
the hair optimization module is used for optimizing the hair shape of the initial three-dimensional hair data through a hair generation model to obtain target three-dimensional hair data;
and the reverse transfer module is used for differentiably rendering the target three-dimensional hair data into a two-dimensional hair image, and minimizing the difference between the two-dimensional hair image and the original hair image by optimizing the hair generation model.
10. An electronic device, characterized in that the electronic device comprises:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to perform the hair three-dimensional reconstruction method of any one of claims 1-8.
11. A computer-readable storage medium, characterized in that the storage medium stores a computer program executable by a processor to perform the method of three-dimensional reconstruction of hair according to any one of claims 1 to 8.
CN202011413467.0A 2020-12-02 2020-12-02 Three-dimensional hair reconstruction method, device, electronic equipment and storage medium Active CN112419487B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011413467.0A CN112419487B (en) 2020-12-02 2020-12-02 Three-dimensional hair reconstruction method, device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011413467.0A CN112419487B (en) 2020-12-02 2020-12-02 Three-dimensional hair reconstruction method, device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN112419487A true CN112419487A (en) 2021-02-26
CN112419487B CN112419487B (en) 2023-08-22

Family

ID=74776306

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011413467.0A Active CN112419487B (en) 2020-12-02 2020-12-02 Three-dimensional hair reconstruction method, device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN112419487B (en)

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112907715A (en) * 2021-03-19 2021-06-04 网易(杭州)网络有限公司 Hair model making method and device, storage medium and computer equipment
CN113129347A (en) * 2021-04-26 2021-07-16 南京大学 Self-supervision single-view three-dimensional hairline model reconstruction method and system
CN113658326A (en) * 2021-08-05 2021-11-16 北京奇艺世纪科技有限公司 Three-dimensional hair reconstruction method and device
CN113850904A (en) * 2021-09-27 2021-12-28 北京百度网讯科技有限公司 Method and device for determining hair model, electronic equipment and readable storage medium
CN114187633A (en) * 2021-12-07 2022-03-15 北京百度网讯科技有限公司 Image processing method and device, and training method and device of image generation model
CN114693856A (en) * 2022-05-30 2022-07-01 腾讯科技(深圳)有限公司 Object generation method and device, computer equipment and storage medium
CN114723888A (en) * 2022-04-08 2022-07-08 北京百度网讯科技有限公司 Three-dimensional hair model generation method, device, equipment, storage medium and product
CN114758391A (en) * 2022-04-08 2022-07-15 北京百度网讯科技有限公司 Hairstyle image determining method and device, electronic equipment, storage medium and product
WO2022247179A1 (en) * 2021-05-25 2022-12-01 完美世界(北京)软件科技发展有限公司 Image rendering method and apparatus, device, and storage medium
CN116051729A (en) * 2022-12-15 2023-05-02 北京百度网讯科技有限公司 Three-dimensional content generation method and device and electronic equipment

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140233849A1 (en) * 2012-06-20 2014-08-21 Zhejiang University Method for single-view hair modeling and portrait editing
US20160328886A1 (en) * 2014-12-23 2016-11-10 Intel Corporation Sketch selection for rendering 3d model avatar
CN106960465A (en) * 2016-12-30 2017-07-18 北京航空航天大学 A kind of single image hair method for reconstructing based on the field of direction and spiral lines matching
WO2017185301A1 (en) * 2016-04-28 2017-11-02 华为技术有限公司 Three-dimensional hair modelling method and device
CN109064547A (en) * 2018-06-28 2018-12-21 北京航空航天大学 A kind of single image hair method for reconstructing based on data-driven
US20190051048A1 (en) * 2016-04-19 2019-02-14 Zhejiang University Method for single-image-based fully automatic three-dimensional hair modeling
CN109685876A (en) * 2018-12-21 2019-04-26 北京达佳互联信息技术有限公司 Fur rendering method, apparatus, electronic equipment and storage medium
CN110060323A (en) * 2019-03-18 2019-07-26 叠境数字科技(上海)有限公司 The rendering method of three-dimensional hair model opacity
CN110766799A (en) * 2018-07-27 2020-02-07 网易(杭州)网络有限公司 Method and device for processing hair of virtual object, electronic device and storage medium
CN111540021A (en) * 2020-04-29 2020-08-14 网易(杭州)网络有限公司 Hair data processing method and device and electronic equipment

Patent Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140233849A1 (en) * 2012-06-20 2014-08-21 Zhejiang University Method for single-view hair modeling and portrait editing
US20160328886A1 (en) * 2014-12-23 2016-11-10 Intel Corporation Sketch selection for rendering 3d model avatar
US20190051048A1 (en) * 2016-04-19 2019-02-14 Zhejiang University Method for single-image-based fully automatic three-dimensional hair modeling
WO2017185301A1 (en) * 2016-04-28 2017-11-02 华为技术有限公司 Three-dimensional hair modelling method and device
CN106960465A (en) * 2016-12-30 2017-07-18 北京航空航天大学 A kind of single image hair method for reconstructing based on the field of direction and spiral lines matching
CN109064547A (en) * 2018-06-28 2018-12-21 北京航空航天大学 A kind of single image hair method for reconstructing based on data-driven
CN110766799A (en) * 2018-07-27 2020-02-07 网易(杭州)网络有限公司 Method and device for processing hair of virtual object, electronic device and storage medium
CN109685876A (en) * 2018-12-21 2019-04-26 北京达佳互联信息技术有限公司 Fur rendering method, apparatus, electronic equipment and storage medium
US20210312695A1 (en) * 2018-12-21 2021-10-07 Beijing Dajia Internet Information Technology Co., Ltd. Hair rendering method, device, electronic apparatus, and storage medium
CN110060323A (en) * 2019-03-18 2019-07-26 叠境数字科技(上海)有限公司 The rendering method of three-dimensional hair model opacity
CN111540021A (en) * 2020-04-29 2020-08-14 网易(杭州)网络有限公司 Hair data processing method and device and electronic equipment

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Li Kang; Geng Guohua; Zhou Mingquan; Han Yi: "A fast and reusable three-dimensional hair model modeling method", Journal of Northwest University (Natural Science Edition), no. 02, pages 209 - 213 *

Cited By (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112907715B (en) * 2021-03-19 2024-04-12 网易(杭州)网络有限公司 Hair model making method, device, storage medium and computer equipment
CN112907715A (en) * 2021-03-19 2021-06-04 网易(杭州)网络有限公司 Hair model making method and device, storage medium and computer equipment
CN113129347A (en) * 2021-04-26 2021-07-16 南京大学 Self-supervision single-view three-dimensional hairline model reconstruction method and system
CN113129347B (en) * 2021-04-26 2023-12-12 南京大学 Self-supervision single-view three-dimensional hairline model reconstruction method and system
WO2022247179A1 (en) * 2021-05-25 2022-12-01 完美世界(北京)软件科技发展有限公司 Image rendering method and apparatus, device, and storage medium
CN113658326A (en) * 2021-08-05 2021-11-16 北京奇艺世纪科技有限公司 Three-dimensional hair reconstruction method and device
CN113850904A (en) * 2021-09-27 2021-12-28 北京百度网讯科技有限公司 Method and device for determining hair model, electronic equipment and readable storage medium
CN114187633A (en) * 2021-12-07 2022-03-15 北京百度网讯科技有限公司 Image processing method and device, and training method and device of image generation model
CN114723888A (en) * 2022-04-08 2022-07-08 北京百度网讯科技有限公司 Three-dimensional hair model generation method, device, equipment, storage medium and product
CN114758391B (en) * 2022-04-08 2023-09-12 北京百度网讯科技有限公司 Hair style image determining method, device, electronic equipment, storage medium and product
CN114758391A (en) * 2022-04-08 2022-07-15 北京百度网讯科技有限公司 Hairstyle image determining method and device, electronic equipment, storage medium and product
CN114693856A (en) * 2022-05-30 2022-07-01 腾讯科技(深圳)有限公司 Object generation method and device, computer equipment and storage medium
CN116051729A (en) * 2022-12-15 2023-05-02 北京百度网讯科技有限公司 Three-dimensional content generation method and device and electronic equipment
CN116051729B (en) * 2022-12-15 2024-02-13 北京百度网讯科技有限公司 Three-dimensional content generation method and device and electronic equipment

Also Published As

Publication number Publication date
CN112419487B (en) 2023-08-22

Similar Documents

Publication Publication Date Title
CN112419487B (en) Three-dimensional hair reconstruction method, device, electronic equipment and storage medium
CN109325437B (en) Image processing method, device and system
CN111243093B (en) Three-dimensional face grid generation method, device, equipment and storage medium
Bao et al. High-fidelity 3d digital human head creation from rgb-d selfies
JP7526412B2 (en) Method for training a parameter estimation model, apparatus for training a parameter estimation model, device and storage medium
KR101635730B1 (en) Apparatus and method for generating montage, recording medium for performing the method
EP3912085A1 (en) Systems and methods for face reenactment
CN109961507A (en) A kind of Face image synthesis method, apparatus, equipment and storage medium
EP3980974A1 (en) Single image-based real-time body animation
CN111369428B (en) Virtual head portrait generation method and device
EP4036863A1 (en) Human body model reconstruction method and reconstruction system, and storage medium
CN113838176A (en) Model training method, three-dimensional face image generation method and equipment
CN111680544B (en) Face recognition method, device, system, equipment and medium
CN107633544B (en) Processing method and device for ambient light shielding
CN113808277B (en) Image processing method and related device
CN117315211B (en) Digital human synthesis and model training method, device, equipment and storage medium thereof
CN110930503A (en) Method and system for establishing three-dimensional model of clothing, storage medium and electronic equipment
US11403801B2 (en) Systems and methods for building a pseudo-muscle topology of a live actor in computer animation
CN115239861A (en) Face data enhancement method and device, computer equipment and storage medium
CN116416376A (en) Three-dimensional hair reconstruction method, system, electronic equipment and storage medium
CN113658324A (en) Image processing method and related equipment, migration network training method and related equipment
CN111680573A (en) Face recognition method and device, electronic equipment and storage medium
CN112288861B (en) Single-photo-based automatic construction method and system for three-dimensional model of human face
CN112184611A (en) Image generation model training method and device
US11783516B2 (en) Method for controlling digital feather generations through a user interface in a computer modeling system

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant