CN117274344B - Model training method, texture synthesis and mapping method for texture of real material - Google Patents


Info

Publication number
CN117274344B
Authority
CN
China
Prior art keywords
texture
real
basic shape
dimensional
points
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202311558857.0A
Other languages
Chinese (zh)
Other versions
CN117274344A (en)
Inventor
方顺
张志恒
冯星
崔铭
吕艳娜
张亚男
胡梓楠
范佳佳
刘锦
刘熠
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Xuanguang Technology Co ltd
Original Assignee
Beijing Xuanguang Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Xuanguang Technology Co ltd filed Critical Beijing Xuanguang Technology Co ltd
Priority to CN202311558857.0A priority Critical patent/CN117274344B/en
Publication of CN117274344A publication Critical patent/CN117274344A/en
Application granted granted Critical
Publication of CN117274344B publication Critical patent/CN117274344B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G — PHYSICS
      • G06 — COMPUTING; CALCULATING OR COUNTING
        • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
          • G06T 7/00 — Image analysis
            • G06T 7/40 — Analysis of texture
              • G06T 7/41 — Analysis of texture based on statistical description of texture
          • G06T 15/00 — 3D [Three Dimensional] image rendering
            • G06T 15/04 — Texture mapping
            • G06T 15/08 — Volume rendering
            • G06T 15/10 — Geometric effects
              • G06T 15/20 — Perspective computation
                • G06T 15/205 — Image-based rendering
            • G06T 15/50 — Lighting effects
              • G06T 15/506 — Illumination models
        • G06N — COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
          • G06N 3/00 — Computing arrangements based on biological models
            • G06N 3/02 — Neural networks
              • G06N 3/04 — Architecture, e.g. interconnection topology
                • G06N 3/045 — Combinations of networks
              • G06N 3/08 — Learning methods
                • G06N 3/0985 — Hyperparameter optimisation; Meta-learning; Learning-to-learn
    • Y — GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC
      • Y02 — TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
        • Y02P — CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
          • Y02P 90/00 — Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
            • Y02P 90/30 — Computing systems specially adapted for manufacturing

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Graphics (AREA)
  • Computing Systems (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Data Mining & Analysis (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Mathematical Physics (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Probability & Statistics with Applications (AREA)
  • Geometry (AREA)
  • Image Generation (AREA)

Abstract

The invention discloses a model training method and a texture synthesis and mapping method for textures of real materials. The method comprises: obtaining a three-dimensional model and processing it to separate the basic shape and the real three-dimensional material it contains; processing the basic shape and the real three-dimensional material to determine sampling points on the real three-dimensional material, the projection points of those sampling points on the basic shape, and preset parameters; and, based on a preset loss function, training a pre-established neural network model with the sampling points, the projection points and the preset parameters. The trained neural network model achieves a good effect when processing textures with a hierarchical structure, overcoming the drawback of the related art that such textures cannot be processed well and that texture mapping quality is poor.

Description

Model training method, texture synthesis and mapping method for texture of real material
Technical Field
The present disclosure relates to the field of image processing technologies, and in particular to a model training method and a texture synthesis and mapping method for textures of real materials.
Background
In the related art, textures with complex structure are modeled poorly: only textures that map smoothly to a 2D parameter space can be handled, texture synthesis on non-closed structures is limited, textures with a hierarchical structure cannot be processed well, and the resulting texture mapping quality is poor.
Disclosure of Invention
The application provides a model training method and a texture synthesis and mapping method for textures of real materials, which are used to solve the above technical problems in the related art.
In a first aspect, the present invention provides a model training method for textures of real materials, comprising: obtaining a three-dimensional model and processing it to separate the basic shape and the real three-dimensional material it contains; processing the basic shape and the real three-dimensional material to determine sampling points on the real three-dimensional material, the projection points of those sampling points on the basic shape, and preset parameters; and, based on a preset loss function, training a pre-established neural network model with the sampling points, the projection points and the preset parameters.
Optionally, processing the basic shape and the real three-dimensional material to determine the sampling points on the real three-dimensional material, the projection points of the sampling points on the basic shape, and the preset parameters comprises: determining sampling points on the real three-dimensional material based on camera rays; projecting each sampling point onto the basic shape along the normal of the basic shape to obtain its corresponding projection point; determining a signed distance field value for the sampling point based on the distance between the sampling point and its projection point; and determining a local tangent space representation based on the projection point, the normal of the basic shape, and the signed distance field.
Optionally, the neural network model comprises a multilayer perceptron network; the input layer of the multilayer perceptron network is a hash mapping layer, and the projection points are processed by the hash mapping layer to obtain latent features;
The latent features, the normal of the basic shape, the signed distance field and the local tangent space representation are fed to different nodes of a hidden layer in the multilayer perceptron network; after the hidden layer processes them, an output layer outputs preset physical attribute parameters, which comprise the volume density at the sampling point, reflection coefficients, a pitch angle, an azimuth angle, and the normal of the sampling point determined from the pitch and azimuth angles.
Optionally, after the neural network model obtains the preset attribute parameters, rendering is performed based on them to obtain the color corresponding to each sampling point. This comprises inputting the reflection coefficients, the normal of the sampling point and the normal of the basic shape into a spherical harmonic renderer for rendering, obtaining the color corresponding to the sampling point.
Optionally, the neural network model performs volume rendering based on the sampling point volume density and the color corresponding to the sampling point.
Optionally, when training the neural network model, the total loss function is:

$$\mathcal{L} = \lambda_{rec}\,\mathcal{L}_{rec} + \lambda_{cluster}\,\mathcal{L}_{cluster} + \lambda_{dist}\,\mathcal{L}_{dist} + \lambda_{normal}\,\mathcal{L}_{normal},$$

where $\mathcal{L}_{rec}$ is the reconstruction loss, i.e. the loss between the RGB of the output and of the input, and $\lambda_{rec}$ is the reconstruction hyperparameter; $\mathcal{L}_{cluster}$ is the clustering loss, used to ensure that similar textures are represented by similar latent features, and $\lambda_{cluster}$ is the clustering hyperparameter; $\mathcal{L}_{dist}$ is the distortion loss, used to remove floating artifacts, and $\lambda_{dist}$ is the distortion hyperparameter; $\mathcal{L}_{normal}$ is the normal loss, used to supervise the pitch and azimuth angles with the negative gradient of the volume density, and $\lambda_{normal}$ is the normal hyperparameter.
In a second aspect, the present invention provides a texture synthesis method for real materials, comprising: obtaining a three-dimensional model to be processed that contains a real material, and processing it to separate the basic shape and the real three-dimensional material it contains; extracting a patch and processing it to determine sampling points on the patch, the projection points of those sampling points on the basic shape, and preset parameters; inputting the sampling points on the patch, their projection points on the basic shape and the preset parameters into the trained neural network model to output the color of each sampling point on the patch; and matching the predicted patches based on a patch matching algorithm to generate the texture of the material.
In a third aspect, the present invention provides a method for texture mapping of real materials, comprising constructing texture features directly on a new shape surface based on the texture of the material, and controlling the mapping of the texture onto the new shape surface by means of UV parameterization.
In a fourth aspect, the present invention provides a computer readable storage medium, wherein the storage medium stores a computer program, and the computer program when executed by a processor implements the method according to any one of the above-mentioned implementations.
In a fifth aspect, the present invention provides an electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the method provided in the first aspect when executing the program.
The invention discloses a model training method and a texture synthesis and mapping method for textures of real materials. The method comprises: obtaining a three-dimensional model and processing it to separate the basic shape and the real three-dimensional material it contains; processing the basic shape and the real three-dimensional material to determine sampling points on the real three-dimensional material, the projection points of those sampling points on the basic shape, and preset parameters; and, based on a preset loss function, training a pre-established neural network model with the sampling points, the projection points and the preset parameters. The trained neural network model achieves a good effect when processing textures with a hierarchical structure, overcoming the drawback of the related art that such textures cannot be processed well and that texture mapping quality is poor.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this application, illustrate embodiments of the application and together with the description serve to explain the application and do not constitute an undue limitation to the application. In the drawings:
FIG. 1 is a flow chart of a model training method for textures of real materials in the present application;
FIG. 2 is a basic shape extraction schematic diagram of the present application;
FIG. 3 is a schematic view of a sample point projected onto a basic shape in the present application;
FIG. 4 is a schematic structural diagram of a neural network model in the present application;
FIG. 5 is a flow chart of a method for synthesizing texture of a real material in the present application;
FIG. 6 is a schematic view of patch extraction in the present application;
fig. 7 is a schematic diagram of an electronic device corresponding to fig. 1 provided in the present application.
Detailed Description
For the purposes, technical solutions and advantages of the present application, the technical solutions of the present application will be clearly and completely described below with reference to specific embodiments of the present application and corresponding drawings. It will be apparent that the described embodiments are only some, but not all, of the embodiments of the present application. All other embodiments, which can be made by one of ordinary skill in the art without undue burden from the present disclosure, are within the scope of the present disclosure.
The following is an exemplary description of a model training method for texture of a real material with reference to fig. 1. The method comprises the following steps:
step 101: and obtaining a three-dimensional model, and processing the three-dimensional model to separate a basic shape and a real three-dimensional material contained in the three-dimensional model.
In this embodiment, the texture of a real material is not just a picture, but has a thin three-dimensional structure, which is called the hierarchical structure. Beneath the hierarchical structure lies the original basic shape, such as a basic shape trunk (whose outer layer is the bark texture), a basic shape wooden barrel body (whose outer layer is the barrel texture), a basic shape ground (whose outer layer is the marble texture), or a basic shape banana body (whose outer layer is the banana skin texture). The real three-dimensional material consists of the hierarchical structure and a three-dimensional texture, where the three-dimensional texture is the color at different points of the hierarchical structure; the real three-dimensional material is the unification of the hierarchical structure and the three-dimensional texture.
Different three-dimensional models can be obtained as training samples; each is processed to extract its basic shape and thereby separate out the real three-dimensional material. Referring to fig. 2, the left-to-right sequence in fig. 2 illustrates an exemplary implementation of the basic shape extraction. The input is a 3D model whose surface carries a true three-dimensional texture; the density field can be estimated with COLMAP given camera parameters. A mesh is then reconstructed from the density field rendered by an Instant-NGP NeRF network, using the Marching Cubes algorithm. After the mesh is constructed, it can be converted into a convex hull structure through approximate convex decomposition (ACD). Finally, the mesh is remeshed so that the vertices are distributed uniformly over the surface and the mesh becomes smoother; this mesh is the basic shape. A model is thus composed of a real three-dimensional material and a basic shape, and the real three-dimensional material is composed of a hierarchical structure and the texture of the material.
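The extraction pipeline above (density field, surface meshing, convex decomposition, remeshing) relies on heavyweight tools such as COLMAP and Instant-NGP. Purely for illustration, the minimal sketch below builds a toy spherical density field and extracts its shell voxels — a crude analogue of the surface that a Marching Cubes pass would mesh. All names and the sphere field are assumptions of this sketch, not the patent's implementation.

```python
import numpy as np

def sphere_density(res=32, radius=0.35):
    """Toy density field standing in for the COLMAP / Instant-NGP estimate."""
    lin = np.linspace(-0.5, 0.5, res)
    x, y, z = np.meshgrid(lin, lin, lin, indexing="ij")
    return (np.sqrt(x ** 2 + y ** 2 + z ** 2) < radius).astype(np.float32)

def extract_surface_voxels(density, threshold=0.5):
    """Crude stand-in for Marching Cubes: keep occupied voxels with at least
    one empty 6-neighbour, i.e. the shell that would be meshed."""
    occ = density > threshold
    padded = np.pad(occ, 1, constant_values=False)
    interior = (
        padded[2:, 1:-1, 1:-1] & padded[:-2, 1:-1, 1:-1]
        & padded[1:-1, 2:, 1:-1] & padded[1:-1, :-2, 1:-1]
        & padded[1:-1, 1:-1, 2:] & padded[1:-1, 1:-1, :-2]
    )
    return occ & ~interior   # occupied but not fully surrounded

density = sphere_density()
shell = extract_surface_voxels(density)
```

In a real pipeline the shell would be meshed, decomposed and remeshed; here it only demonstrates that the surface is a thin subset of the occupied volume.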
By separating shape and appearance attributes with a mesh-based neural implicit representation, geometry and texture can be edited independently. Compared with methods that use predicted signed distance fields, this approach is better able to handle non-closed hierarchical structures and to learn texture properties.
Step 102: and processing the basic shape and the real three-dimensional material to determine sampling points on the real three-dimensional material, projection points of the sampling points on the basic shape and preset parameters.
In this embodiment, referring to fig. 3, the real three-dimensional material and the basic shape have been separated; on this basis, the sampling points, projection points and preset parameters are determined. A number of sampling points lie on each camera ray; combining uniform sampling with importance sampling yields the final sampling points, for example the 4 black points in fig. 3.
After the sampling points are determined, corresponding projection points can be determined, and then preset parameters are determined based on the sampling points and the projection points.
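The patent does not spell out the sampling procedure beyond naming uniform and importance sampling; the sketch below illustrates the generic NeRF-style combination of stratified uniform sampling along a ray followed by inverse-CDF importance resampling. The function names and the synthetic density peak are assumptions made for this illustration.

```python
import numpy as np

def sample_ray_uniform(t_near, t_far, n, rng):
    """Stratified uniform samples along one camera ray."""
    edges = np.linspace(t_near, t_far, n + 1)
    return edges[:-1] + rng.random(n) * (edges[1:] - edges[:-1])

def importance_resample(t_coarse, weights, n_fine, rng):
    """Inverse-CDF resampling: draw more samples where weights are high."""
    w = weights + 1e-5                       # avoid zero-probability bins
    cdf = np.cumsum(w) / np.sum(w)
    u = rng.random(n_fine)
    idx = np.clip(np.searchsorted(cdf, u), 0, len(t_coarse) - 1)
    return t_coarse[idx]

rng = np.random.default_rng(0)
t_coarse = sample_ray_uniform(0.0, 1.0, 64, rng)
weights = np.exp(-((t_coarse - 0.7) ** 2) / 0.001)   # fake density peak at t = 0.7
t_fine = importance_resample(t_coarse, weights, 128, rng)
```

The fine samples concentrate near the density peak, which is where the hierarchical material surface would be.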
The present embodiment captures, models and synthesizes textures with a hierarchical structure by using a neural radiance field (NeRF). Compared with the parameterized texture mapping of traditional methods, it handles textures with complex structure better and provides more accurate texture modeling capability. The related art generally uses parameterization-based texture mapping, whose modeling of complex structures is poor: traditional methods can usually only handle textures that map smoothly to a 2D parameter space, have limited synthesis capability on non-closed structures, and cannot process textures with a hierarchical structure well.
As an optional implementation manner of this embodiment, the processing the basic shape and the real three-dimensional material to determine a sampling point on the real three-dimensional material, a projection point of the sampling point on the basic shape, and preset parameters include: determining sampling points on the real three-dimensional material based on camera rays; projecting the sampling points onto the basic shape along the normal line of the basic shape to obtain projection points corresponding to the sampling points;
determining a signed distance field corresponding to the sampling point based on a distance between the sampling point and the corresponding projection point; a local tangent spatial representation is determined based on the projection point, a normal to the base shape, and the signed distance field.
In this alternative implementation, a sampling point $x$ is projected onto the basic shape at a point $\hat{x}$. The projection is performed along the normal direction of the basic shape, where the normal passes through the point $x$ (see the normal $n$ in fig. 3); therefore, to determine $\hat{x}$, the normal must be found first.
Illustratively, the $N = 8$ points closest to $x$ on the basic shape, denoted $x_k$, may be determined with a KNN algorithm, and the normal computed as a weighted average of the adjacent vertex normals:

$$\bar{n} = \frac{\sum_{k=1}^{N} w_k\, n_k}{\left\lVert \sum_{k=1}^{N} w_k\, n_k \right\rVert}, \qquad w_k = \frac{1}{\lVert x - x_k \rVert + w},$$

where $N = 8$, $w = 0.01$, $x_k$ is the $k$-th adjacent point, $n_k$ its vertex normal, and $\bar{n}$ the weighted average of the adjacent vertex normals. The constant $w$ increases robustness: when $\lVert x - x_k \rVert$ approaches zero, the distance term alone is not reliable, and $w$ then dominates the denominator.
Further, after the normal $\bar{n}$ is obtained, $x$ is projected onto the basic shape through the intersection of the normal with the basic shape, i.e. along the opposite direction of the normal, $-\bar{n}$, yielding the projection point $\hat{x}$.
Further, from $x$ and $\hat{x}$, the signed distance field (SDF) value of $x$ is obtained as $h = \pm\lVert x - \hat{x} \rVert$, with the sign determined by which side of the surface $x$ lies on. The SDF value of all points on the basic shape is 0.
Further, referring to fig. 3, the intersection of the camera ray with the basic shape does not generally coincide with the normal projection point $\hat{x}$, which would make $\hat{x}$ non-differentiable. To ensure differentiability, the normal can be estimated through a gradient descent algorithm, which benefits physically based rendering: gradients can then be back-propagated through the coordinate $x$ to the camera parameters. This ability to refine the camera pose is important and improves reconstruction quality.
Illustratively, in constructing a normal-based differentiable projection, it may be constructed as follows:

$$\hat{x} = x - h\,\bar{n}, \qquad \frac{\partial \hat{x}}{\partial x} = I - \bar{n}\,\bar{n}^{\mathsf{T}},$$

where $I$ is an identity matrix, $h$ is the signed distance of $x$, and $\bar{n}$ is treated as locally constant.
In this alternative implementation, structure extraction yields, for each sampling point, the projection point $\hat{x}$ on the basic shape, the basic shape normal $\bar{n}$, the SDF value $h$, and a local tangent space representation $[t, b, n]$, where $t$, $b$ and $n$ are the tangent, bitangent and normal at $\hat{x}$; these are fixed and can be precomputed. These parameters are also the input parameters of the neural network, whose outputs are the volume density, shading parameters, pitch angle and azimuth angle. As in NeRF, the most primitive inputs of the network are views of the three-dimensional object from multiple perspectives, and each view is then used to compute the loss function.
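The structure-extraction quantities just listed (projection point, signed distance and tangent frame) can be sketched for the simple case of a locally planar basic shape. The KNN weighting below mirrors the weighted normal described above with w = 0.01; the flat test patch and all function names are illustrative assumptions.

```python
import numpy as np

def knn_normal(x, verts, vert_normals, k=8, w=0.01):
    """Inverse-distance-weighted average of the k nearest vertex normals."""
    d = np.linalg.norm(verts - x, axis=1)
    idx = np.argsort(d)[:k]
    wk = 1.0 / (d[idx] + w)              # w keeps weights finite as d -> 0
    n = (wk[:, None] * vert_normals[idx]).sum(axis=0)
    return n / np.linalg.norm(n)

def project_to_shape(x, surf_point, n_bar):
    """Project x along -n_bar onto the tangent plane through surf_point;
    h is the signed distance (SDF value) of x."""
    h = np.dot(x - surf_point, n_bar)
    x_hat = x - h * n_bar
    return x_hat, h

def tangent_frame(n):
    """Build an orthonormal local tangent space [t, b, n] around a unit normal."""
    helper = np.array([1.0, 0.0, 0.0]) if abs(n[0]) < 0.9 else np.array([0.0, 1.0, 0.0])
    t = np.cross(helper, n)
    t = t / np.linalg.norm(t)
    b = np.cross(n, t)
    return t, b, n

# Toy basic shape: a flat patch z = 0 with upward normals (illustrative only).
rng = np.random.default_rng(0)
verts = np.column_stack([rng.random((100, 2)), np.zeros(100)])
vert_normals = np.tile(np.array([0.0, 0.0, 1.0]), (100, 1))
x = np.array([0.3, 0.4, 0.25])
n_bar = knn_normal(x, verts, vert_normals)
x_hat, h = project_to_shape(x, verts[0], n_bar)
```

On this flat patch the sampling point projects straight down, its SDF value is its height, and the tangent frame is orthonormal by construction.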
Step 103: based on a preset loss function, training a pre-established neural network model by using the sampling points, the projection points and the preset parameters.
In this embodiment, the real three-dimensional material consists of a hierarchical structure and a texture. The texture is the color of each point on the hierarchical structure, different colors represent different textures, and the hierarchical structure itself can be expressed through the transparency of the color; the hierarchical structure and texture can therefore be obtained from the colors of all sampling points along the camera rays. This embodiment predicts the color of the sampling points with the neural network model.
As an alternative implementation of this embodiment, the neural network model comprises a multilayer perceptron network;
The input layer of the multilayer perceptron network is a hash mapping layer, and the projection points are processed by the hash mapping layer to obtain latent features; the latent features, the normal of the basic shape, the signed distance field and the local tangent space representation are fed to different nodes of a hidden layer in the multilayer perceptron network, and after the hidden layer processes them, an output layer outputs preset physical attribute parameters, which comprise the volume density at the sampling point, reflection coefficients, a pitch angle, an azimuth angle, and the normal of the sampling point determined from the pitch and azimuth angles.
This alternative implementation obtains the latent image features quickly by looking up a hash table, and the required parameters can then be obtained with simple MLPs.
In this alternative implementation, referring to fig. 4, the neural network model comprises a hash mapping layer and a multilayer perceptron, where the output of the hash mapping layer forms the input of the multilayer perceptron. The hash table in the hash mapping layer is learned continuously during training; its initial values can be drawn randomly or from a truncated normal distribution. Then, under the constraint of the loss function, the values in the hash table gradually become genuinely useful latent features, and the latent features obtained by table lookup are input to the hidden layer to obtain the different parameters.
In hash mapping, for an input projection point $\hat{x}$, the 8 mesh grid vertices adjacent to the projection point are determined, and the hash table is then looked up to obtain the latent features of the input image. Preferably two latent features are obtained, denoted $f_1$ and $f_2$ respectively.
The image feature $f_1$ and the SDF value $h$ are input to the first MLP of the hidden layer, which predicts the volume density of the sampling point. The image feature $f_1$ and the SDF value $h$ are also input to a second MLP, which predicts the reflection coefficients; because a Phong shading model is used, these comprise the diffuse reflection coefficient, the specular reflection coefficient and the glossiness. The pitch angle $\theta$ is predicted from the latent image feature and the signed distance field value $h$ of the point $x$: the image feature $f_2$ and the SDF value $h$ are input to a third MLP, which predicts the pitch angle, i.e. the angle between the two normals $n$ and $\bar{n}$, an attribute independent of the local tangent space. The azimuth angle $\varphi$ is related to the tangent $t$ of the local tangent space, so the other image feature $f_2$ and the SDF value $h$ are input to a fourth MLP, which predicts the azimuth angle. From the pitch angle $\theta$ and the azimuth angle $\varphi$, the normal of the real three-dimensional material, i.e. the normal $n$ of the point $x$, can be computed. This embodiment thus separates the illumination parameters from the material colors, solving the problem that conventional methods usually entangle illumination and material representation, making flexible texture rendering difficult.
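A toy version of the hash-mapping layer and the prediction heads might look as follows. The single shared MLP (the patent describes four separate MLPs), the sum-based spatial hash (Instant-NGP hashes with XOR of per-axis prime products), the table size and the activations are all simplifying assumptions made for illustration; only the output ranges (non-negative density, bounded coefficients and angles) reflect the description above.

```python
import numpy as np

rng = np.random.default_rng(1)
TABLE_SIZE, FEAT_DIM = 2 ** 14, 4
# Learnable hash table, initialised small (truncated-normal-style in the patent).
hash_table = rng.normal(0.0, 1e-2, (TABLE_SIZE, FEAT_DIM))

PRIMES = np.array([1, 2654435761, 805459861], dtype=np.uint64)

def hash_lookup(x_hat, resolution=64):
    """Look up the 8 grid vertices around the projection point x_hat and
    average their features (sum-based hash used here purely for brevity)."""
    base = np.floor(x_hat * resolution).astype(np.uint64)
    feats = []
    for corner in np.ndindex(2, 2, 2):
        c = base + np.array(corner, dtype=np.uint64)
        slot = int((c * PRIMES).sum() % TABLE_SIZE)
        feats.append(hash_table[slot])
    return np.mean(feats, axis=0)

W1 = rng.normal(0.0, 0.5, (FEAT_DIM + 1, 16))
W2 = rng.normal(0.0, 0.5, (16, 4))

def heads(x_hat, sdf_h):
    """One toy MLP standing in for the four heads described above:
    volume density, diffuse coefficient, pitch angle, azimuth angle."""
    z = np.concatenate([hash_lookup(x_hat), [sdf_h]])
    hid = np.tanh(z @ W1)
    out = hid @ W2
    sigma = np.log1p(np.exp(out[0]))               # softplus: density >= 0
    k_d = 1.0 / (1.0 + np.exp(-out[1]))            # diffuse coefficient in (0, 1)
    theta = (np.pi / 2) / (1.0 + np.exp(-out[2]))  # pitch angle in (0, pi/2)
    phi = (2 * np.pi) / (1.0 + np.exp(-out[3]))    # azimuth angle in (0, 2*pi)
    return sigma, k_d, theta, phi

sigma, k_d, theta, phi = heads(np.array([0.3, 0.5, 0.7]), 0.02)
```

The weights here are random stand-ins; in training they would be optimized jointly with the hash table under the loss function described below.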
This embodiment overcomes the drawbacks of the related art that training from scratch converges with difficulty, requires high-resolution meshes and large amounts of training data, and involves extensive computation and optimization, making the computational complexity too high for real-time rendering.
As an optional implementation manner of this embodiment, after the neural network model obtains the preset attribute parameter, rendering and outputting are performed based on the preset attribute parameter, so as to obtain a color corresponding to the sampling point;
rendering and outputting based on the preset attribute parameters, wherein obtaining the color corresponding to the sampling point comprises: and inputting the reflection coefficient, the normal line of the sampling point and the normal line of the basic shape to a spherical harmonic renderer for rendering to obtain the color corresponding to the sampling point.
In this alternative implementation, the rendering output may include two parts, one is spherical harmonic rendering, i.e. outputting the sample point color with spherical harmonics, and the other is volume rendering, i.e. accumulating the colors of all sample points in the camera ray direction into the color of the screen.
The spherical harmonic renderer uses spherical harmonic functions to describe the shape and lighting of the object surface and renders it onto the screen. Different rendering effects can be achieved by changing the coefficients and orders of the spherical harmonic functions. In implementation, the coefficients of the spherical harmonic functions are defined first; these determine the shape and amplitude of the basis. Second, the parameters of the illumination model are input: a Phong illumination model is adopted, the MLPs predict the diffuse reflection coefficient $k_d$, the specular reflection coefficient $k_s$ and the glossiness $g$ to model the material reflection, and these illumination model parameters are input to the spherical harmonic renderer. Then the parameters of the real three-dimensional material and the basic shape are input, including the normal $n$ of the real three-dimensional material and the basic shape normal $\bar{n}$. Other parameters, including the direction vector of the camera and the spherical harmonics of the illumination, are also supplied. Finally, the spherical harmonic renderer outputs the predicted sampling point color $c$.
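As a hedged illustration of this shading step, the sketch below evaluates degree-1 real spherical harmonics for the diffuse term and a Phong lobe for the specular term. The SH coefficient layout, the omission of the basic shape normal n̄, and all names are assumptions of this sketch, not the patent's renderer.

```python
import numpy as np

SH_C0, SH_C1 = 0.2820948, 0.4886025   # real SH constants for l = 0 and l = 1

def sh_irradiance(n, sh_coeffs):
    """Evaluate degree-1 spherical harmonic lighting for a unit normal n;
    sh_coeffs holds (L00, L1-1, L10, L11) in the real SH basis."""
    basis = np.array([SH_C0, SH_C1 * n[1], SH_C1 * n[2], SH_C1 * n[0]])
    return float(np.dot(sh_coeffs, basis))

def shade(n_x, k_d, k_s, gloss, sh_coeffs, view_dir, light_dir):
    """Phong-style color: SH-lit diffuse on the material normal n_x
    plus a specular lobe toward the viewer."""
    diffuse = k_d * max(sh_irradiance(n_x, sh_coeffs), 0.0)
    r = 2.0 * np.dot(n_x, light_dir) * n_x - light_dir   # reflect light about n_x
    specular = k_s * max(float(np.dot(r, view_dir)), 0.0) ** gloss
    return diffuse + specular
```

With light, view and normal all aligned, the specular term reaches its maximum k_s and the diffuse term reduces to k_d times the constant SH band.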
The color of the camera screen pixel is obtained from the predicted sampling point colors by volume rendering, so that a loss can be computed against the view's RGB color. The volume rendering can be done in the same way as NeRF: the colors of all sampling points are weighted by their volume densities and accumulated to give the final camera screen pixel color. All sampling points of every view are processed in the same way.
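The accumulation just described can be sketched as standard NeRF alpha compositing along one ray; the sample values below are illustrative.

```python
import numpy as np

def composite(sigmas, colors, deltas):
    """NeRF-style alpha compositing along one camera ray:
    C = sum_i T_i * (1 - exp(-sigma_i * delta_i)) * c_i,
    with transmittance T_i = prod_{j<i} exp(-sigma_j * delta_j)."""
    alpha = 1.0 - np.exp(-sigmas * deltas)
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alpha[:-1]]))
    weights = trans * alpha
    return (weights[:, None] * colors).sum(axis=0), weights

# One nearly opaque red sample: the pixel color should be dominated by it.
sigmas = np.array([0.0, 0.0, 50.0, 0.0])
colors = np.array([[0, 0, 1], [0, 1, 0], [1, 0, 0], [0, 0, 1]], dtype=float)
deltas = np.full(4, 0.25)
pixel, weights = composite(sigmas, colors, deltas)
```

The returned weights are also exactly what the distortion and normal losses below operate on.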
As an optional implementation of this embodiment, when training the neural network model, the total loss function is:

$$\mathcal{L} = \lambda_{rec}\,\mathcal{L}_{rec} + \lambda_{cluster}\,\mathcal{L}_{cluster} + \lambda_{dist}\,\mathcal{L}_{dist} + \lambda_{normal}\,\mathcal{L}_{normal},$$

where $\mathcal{L}_{rec}$ is the reconstruction loss, the loss between the RGB of the output and of the input material textures; $\mathcal{L}_{cluster}$ is the clustering loss, used to ensure that similar textures are represented by similar latent features; $\mathcal{L}_{dist}$ is the distortion loss, used to remove floating artifacts; and $\mathcal{L}_{normal}$ is the normal loss, used to supervise the pitch and azimuth angles with the negative gradient of the volume density.
In this alternative implementation, latent features corresponding to similar material textures have similar optimization objectives during training (including the diffuse reflection coefficient $k_d$, the specular reflection coefficient $k_s$, the glossiness $g$ and the pitch angle $\theta$).
A clustering loss is introduced so that the latent features of similar textures do not fall into different optimal solutions:

$$\mathcal{L}_{cluster} = \mathrm{KL}(P \,\|\, Q),$$

where KL denotes the KL divergence and $P$ and $Q$ are two Student's t-distributions. The probability function of the $Q$ distribution is

$$q_{ij} = \frac{\left(1 + \lVert z_i - \mu_j \rVert^2 / \alpha\right)^{-\frac{\alpha+1}{2}}}{\sum_{j'} \left(1 + \lVert z_i - \mu_{j'} \rVert^2 / \alpha\right)^{-\frac{\alpha+1}{2}}},$$

and the probability function of the $P$ distribution is

$$p_{ij} = \frac{q_{ij}^2 / f_j}{\sum_{j'} q_{ij'}^2 / f_{j'}}, \qquad f_j = \sum_i q_{ij},$$

where $\alpha$ is the degrees of freedom of the Student distribution, $z_i$ represents a latent feature, and $\mu_j$ represents a trainable cluster center.
For each resolution level hash table we use the cluster penalty to optimize the embedded features in the hash table.
The distortion loss $\mathcal{L}_{dist}$ removes floating artifacts:

$$\mathcal{L}_{dist} = \sum_{i,j} w_i w_j \left| \frac{s_i + s_{i+1}}{2} - \frac{s_j + s_{j+1}}{2} \right| + \frac{1}{3} \sum_i w_i^2 \left( s_{i+1} - s_i \right),$$

where $s$ is the normalized ray distance and $w$ is the weight of the transmittance as in NeRF.
The normal loss $\mathcal{L}_{normal}$ supervises the two angles (pitch and azimuth) with the negative gradient of the volume density:

$$\hat{n}_\sigma = -\frac{\nabla_x \sigma}{\lVert \nabla_x \sigma \rVert}, \qquad \mathcal{L}_{normal} = \sum_i w_i \left\lVert n_i - \hat{n}_{\sigma,i} \right\rVert^2,$$

where $\sigma$ represents the volume density, $n_i$ is the normal derived from the predicted pitch and azimuth angles, and $\hat{n}_\sigma$ is the density-gradient normal.
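Of these terms, the distortion loss has a simple standalone form; the sketch below follows a Mip-NeRF-360-style formulation consistent with the description of s (normalized ray distance) and w (transmittance weights) above, and checks the intended behavior: weights concentrated at one surface score lower than weights smeared along the ray. The edge/weight layout is an assumption.

```python
import numpy as np

def distortion_loss(s_edges, w):
    """Distortion loss: a pairwise term pulling weighted interval midpoints
    together, plus a per-interval self term."""
    mid = 0.5 * (s_edges[:-1] + s_edges[1:])     # interval midpoints
    delta = s_edges[1:] - s_edges[:-1]           # interval lengths
    pair = np.abs(mid[:, None] - mid[None, :])
    return float((w[:, None] * w[None, :] * pair).sum()
                 + (w ** 2 * delta).sum() / 3.0)

s = np.linspace(0.0, 1.0, 65)          # normalized ray distances (bin edges)
mid = 0.5 * (s[:-1] + s[1:])
w_tight = np.exp(-((mid - 0.5) ** 2) / (2 * 0.02 ** 2))
w_tight = w_tight / w_tight.sum()      # weights concentrated near s = 0.5
w_flat = np.full(64, 1.0 / 64)         # weights spread along the whole ray
```

Minimizing this loss pushes the density toward a single thin surface, which is why it removes floating artifacts.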
After training, not only have the weight parameters of each MLP been trained, but the learnable hash table has also been learned.
According to the embodiment, the hash table is added on the basis of NeRF, so that the prediction speed can be increased; the illumination parameters are separated from the colors of the materials, and the hierarchical real three-dimensional material structure is processed through the improved NeRF. The neural network can estimate the color of each point, but does not form texture, thus requiring texture synthesis. Texture mapping is performed after texture synthesis, so that texture can be mapped to any shape.
The application also provides a method for synthesizing the texture of the real material, referring to fig. 5, comprising the following steps:
step 501: and obtaining a three-dimensional model to be processed containing real materials, and processing the three-dimensional model to be processed to separate the basic shape and the real three-dimensional materials.
In this embodiment, the real three-dimensional material is separated from the basic shape by the method of the previous embodiment, and the structure and color of the real three-dimensional material can then be predicted from the basic shape through the neural network.
Step 502: and extracting the surface patch, processing the surface patch, and determining a sampling point on the surface patch, a projection point of the sampling point of the surface patch on the basic shape and preset parameters.
Step 503: and inputting the sampling points on the patch, the projection points of the sampling points of the patch on the basic shape and the preset parameters into the neural network model after training is completed so as to output the color of each sampling point in the patch.
In this embodiment, referring to the patch extraction schematic diagram of fig. 6, the projection points on the basic shape are obtained from the sampling points on the patch, and the latent features of the texture are then looked up from the hash table through the projection points. What is stored on the patch is not the real three-dimensional material itself but the latent features of the texture colors. The patch has numerous sampling points, and each sampling point obtains its latent feature by looking up the hash table. Sampling may be performed with a Poisson disk, so that the sampling points on the patch are uniformly distributed. The rotation matrix from the local coordinate space of the sampled patch to the world coordinate space is written as R_s, and the rotation matrix from the local coordinate space of the basic shape to the world coordinate space is denoted R_b. For texture mapping, R_s and R_b are stored for each patch.
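The Poisson disk sampling mentioned above can be sketched with naive dart throwing (the application does not specify the algorithm variant; the function name and parameters here are assumptions):

```python
import math
import random

def poisson_disk_sample(width, height, radius, max_rejects=2000, seed=0):
    """Dart-throwing Poisson disk sampling on a rectangular patch:
    accept a random candidate only if it keeps at least `radius`
    distance to every accepted point, yielding a uniform blue-noise
    distribution of sampling points."""
    rng = random.Random(seed)
    points = []
    rejects = 0
    while rejects < max_rejects:
        cand = (rng.uniform(0.0, width), rng.uniform(0.0, height))
        if all(math.dist(cand, p) >= radius for p in points):
            points.append(cand)
            rejects = 0  # reset the consecutive-rejection counter
        else:
            rejects += 1
    return points
```

Production implementations usually use Bridson's grid-accelerated variant, but the acceptance rule, and therefore the uniform spacing of the resulting sampling points, is the same.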
Step 504: and matching the predicted patches based on a patch matching algorithm to generate the texture of the material.
In this embodiment, based on a patch matching algorithm, the patches are matched and stitched together to form a high-resolution texture map that stores the latent features.
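A greatly simplified stand-in for the patch matching step can be sketched as greedy boundary matching (the data layout and function names are assumptions; real patch matching also blends or cuts the overlap region):

```python
def stitch_row(first, candidates, length):
    """Greedy patch matching: repeatedly append the candidate patch
    whose left boundary best matches (lowest sum of squared
    differences) the right boundary of the patch placed last."""
    def ssd(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))

    row = [first]
    for _ in range(length - 1):
        row.append(min(candidates,
                       key=lambda p: ssd(row[-1]["right"], p["left"])))
    return row
```

Here each patch is a dict holding its boundary feature vectors; tiling rows of such patches produces the large latent-feature texture map described above.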
The application also provides a method for realizing the texture mapping of the real material, which comprises the steps of directly constructing texture features on the surface of the new shape based on the texture of the material, and controlling the mapping of the texture of the material on the surface of the new shape based on a UV parameterization mode.
When texture mapping is implemented, a new shape with known UV coordinates is given; for example, the surface of a durian is mapped onto the shape of a pomelo. For a query point x, its projection point x̂ on the surface of the new shape is queried. Through the UV coordinates of x̂, the latent feature z is obtained from the texture of the synthesized material by bilinear interpolation. The residual transformation ΔR from a local image patch of the original basic shape of the three-dimensional model to a sampled image patch is obtained by nearest-neighbor interpolation of the synthesized rotation textures (the stored patch-to-world rotations R_s and R_b). From the latent feature z and the SDF value, the network can predict the appearance and geometry of the query point x. The normal of the new shape is predicted by transforming the locally predicted normal (determined by the pitch and azimuth angles) through ΔR into the tangent space of the new shape.
The volume density and the reflection coefficients are likewise predicted through the latent feature z of the new shape and the SDF value.
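The two interpolation modes used above (bilinear for latent features, nearest neighbour for the per-patch rotations, which cannot be blended linearly) can be sketched as a single texture query; the function name and toy data are assumptions:

```python
def query_synthesized_texture(latent_tex, rot_tex, u, v):
    """Query the synthesized texture maps at UV coordinates (u, v) in
    [0, 1]: the latent feature is bilinearly interpolated from the four
    surrounding texels, while the rotation entry is fetched by nearest-
    neighbour lookup."""
    h, w = len(latent_tex), len(latent_tex[0])
    x, y = u * (w - 1), v * (h - 1)
    x0, y0 = int(x), int(y)
    x1, y1 = min(x0 + 1, w - 1), min(y0 + 1, h - 1)
    fx, fy = x - x0, y - y0
    z = [0.0] * len(latent_tex[0][0])
    for yy, wy in ((y0, 1.0 - fy), (y1, fy)):
        for xx, wx in ((x0, 1.0 - fx), (x1, fx)):
            z = [a + wy * wx * b for a, b in zip(z, latent_tex[yy][xx])]
    rot = rot_tex[round(y)][round(x)]  # nearest neighbour
    return z, rot
```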
The colors of the sampling points can be predicted by inputting the above parameters into the network, and a new view is then obtained by volume rendering; the new view contains the new shape with the real three-dimensional material. Illustratively, the color along each camera ray is accumulated from the per-point volume densities and colors: C = Σ_i T_i·(1 − exp(−σ_i·δ_i))·c_i, where T_i = exp(−Σ_{j<i} σ_j·δ_j) and δ_i is the distance between adjacent samples.
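The volume rendering step above can be sketched with the standard NeRF quadrature (function name and toy ray are assumptions):

```python
import math

def volume_render(sigmas, colors, deltas):
    """Accumulate color along one ray:
    C = sum_i T_i * (1 - exp(-sigma_i * delta_i)) * c_i,
    where T_i is the transmittance remaining before sample i."""
    C = [0.0, 0.0, 0.0]
    T = 1.0
    for sigma, c, d in zip(sigmas, colors, deltas):
        alpha = 1.0 - math.exp(-sigma * d)  # opacity of this interval
        C = [ci + T * alpha * cc for ci, cc in zip(C, c)]
        T *= 1.0 - alpha  # remaining transmittance
    return C
```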
the present embodiment supports editing and mapping of textures by explicitly representing textures in the neural representation and using UV parameterization. Compared with the traditional 2D parameterization method, the method can better process textures with a hierarchical structure, is not limited to smooth textures mapped to a 2D parameter space, and provides more flexible texture editing and mapping capability. The present method can apply texture to arbitrarily shaped meshes by mapping the synthesized texture onto a given mesh.
The present application also provides a computer readable medium storing a computer program operable to perform the methods provided in the above embodiments.
The application also provides a schematic structural diagram of an electronic device corresponding to the above embodiments. At the hardware level, as shown in fig. 7, the electronic device includes a processor, an internal bus, a network interface, a memory and a non-volatile storage, and may of course also include hardware required by other services. The processor reads the corresponding computer program from the non-volatile storage into the memory and then runs it, so as to implement the model training method described above with respect to fig. 1. Of course, other implementations, such as logic devices or combinations of hardware and software, are not excluded from the present application; that is, the execution subject of the processing flows is not limited to logic units, but may also be hardware or logic devices.
In the 1990s, an improvement to a technology could clearly be distinguished as an improvement in hardware (for example, an improvement to a circuit structure such as a diode, a transistor or a switch) or an improvement in software (an improvement to the method flow). However, with the development of technology, many improvements to method flows today can be regarded as direct improvements to hardware circuit structures. Designers almost always obtain the corresponding hardware circuit structure by programming the improved method flow into a hardware circuit. Therefore, it cannot be said that an improvement of a method flow cannot be realized by a hardware entity module. For example, a programmable logic device (Programmable Logic Device, PLD) (for example, a field programmable gate array (Field Programmable Gate Array, FPGA)) is an integrated circuit whose logic function is determined by the user's programming of the device. A designer programs to "integrate" a digital system onto a PLD, without requiring the chip manufacturer to design and fabricate an application-specific integrated circuit chip. Moreover, nowadays, instead of manually manufacturing integrated circuit chips, such programming is mostly implemented with "logic compiler" software, which is similar to the software compiler used in program development and writing; the original code before compiling must also be written in a specific programming language, called a hardware description language (Hardware Description Language, HDL). There is not just one HDL but many kinds, such as ABEL (Advanced Boolean Expression Language), AHDL (Altera Hardware Description Language), Confluence, CUPL (Cornell University Programming Language), HDCal, JHDL (Java Hardware Description Language), Lava, Lola, MyHDL, PALASM and RHDL (Ruby Hardware Description Language); VHDL (Very-High-Speed Integrated Circuit Hardware Description Language) and Verilog are currently the most commonly used.
It will also be apparent to those skilled in the art that a hardware circuit implementing the logic method flow can be readily obtained by merely slightly programming the method flow into an integrated circuit using several of the hardware description languages described above.
The controller may be implemented in any suitable manner, for example, the controller may take the form of, for example, a microprocessor or processor and a computer readable medium storing computer readable program code (e.g., software or firmware) executable by the (micro) processor, logic gates, switches, application specific integrated circuits (Application Specific Integrated Circuit, ASIC), programmable logic controllers, and embedded microcontrollers, examples of which include, but are not limited to, the following microcontrollers: ARC 625D, atmel AT91SAM, microchip PIC18F26K20, and Silicone Labs C8051F320, the memory controller may also be implemented as part of the control logic of the memory. Those skilled in the art will also appreciate that, in addition to implementing the controller in a pure computer readable program code, it is well possible to implement the same functionality by logically programming the method steps such that the controller is in the form of logic gates, switches, application specific integrated circuits, programmable logic controllers, embedded microcontrollers, etc. Such a controller may thus be regarded as a kind of hardware component, and means for performing various functions included therein may also be regarded as structures within the hardware component. Or even means for achieving the various functions may be regarded as either software modules implementing the methods or structures within hardware components.
The system, apparatus, module or unit set forth in the above embodiments may be implemented in particular by a computer chip or entity, or by a product having a certain function. One typical implementation is a computer. In particular, the computer may be, for example, a personal computer, a laptop computer, a cellular telephone, a camera phone, a smart phone, a personal digital assistant, a media player, a navigation device, an email device, a game console, a tablet computer, a wearable device, or a combination of any of these devices.
For convenience of description, the above devices are described as being functionally divided into various units, respectively. Of course, the functions of each element may be implemented in one or more software and/or hardware elements when implemented in the present application.
It will be appreciated by those skilled in the art that embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable media (including but not limited to disk storage, CD-ROM, optical storage, etc.) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In one typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory in a computer-readable medium, such as random access memory (RAM), and/or non-volatile memory, such as read-only memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer readable media include permanent and non-permanent, removable and non-removable media, and may implement information storage by any method or technology. The information may be computer readable instructions, data structures, modules of a program, or other data. Examples of computer storage media include, but are not limited to, phase change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read only memory (ROM), electrically erasable programmable read only memory (EEPROM), flash memory or other memory technology, compact disc read only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape or magnetic disk storage or other magnetic storage devices, or any other non-transmission medium, which can be used to store information that can be accessed by a computing device. Computer-readable media, as defined herein, do not include transitory computer-readable media (transitory media), such as modulated data signals and carrier waves.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other like elements in a process, method, article or apparatus that comprises the element.
It will be appreciated by those skilled in the art that embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable media (including but not limited to disk storage, CD-ROM, optical storage, etc.) having computer-usable program code embodied therein.
The application may be described in the general context of computer-executable instructions, such as program modules, being executed by a computer. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. The application may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote computer media including memory storage devices.
All embodiments in the application are described in a progressive manner, and identical and similar parts of all embodiments are mutually referred, so that each embodiment mainly describes differences from other embodiments. In particular, for system embodiments, since they are substantially similar to method embodiments, the description is relatively simple, as relevant to see a section of the description of method embodiments.
The foregoing is merely exemplary of the present application and is not intended to limit the present application. Various modifications and changes may be made to the present application by those skilled in the art. Any modifications, equivalent substitutions, improvements, etc. which are within the spirit and principles of the present application are intended to be included within the scope of the claims of the present application.

Claims (8)

1. A model training method for texture of a real material, comprising:
obtaining a three-dimensional model, and processing the three-dimensional model to separate a basic shape and a real three-dimensional material contained in the three-dimensional model, wherein the real three-dimensional material consists of a hierarchical structure and three-dimensional textures, and the three-dimensional textures are colors of different points of the hierarchical structure; the shape below the hierarchy is a basic shape;
processing the basic shape and the real three-dimensional material to determine sampling points on the real three-dimensional material, projection points of the sampling points on the basic shape and preset parameters;
training a pre-established neural network model based on a preset loss function by utilizing the sampling points, the projection points and the preset parameters;
processing the basic shape and the real three-dimensional material to determine sampling points on the real three-dimensional material, projection points of the sampling points on the basic shape, and preset parameters including: determining sampling points on the real three-dimensional material based on camera rays; projecting the sampling points onto the basic shape along the normal line of the basic shape to obtain projection points corresponding to the sampling points; determining a signed distance field corresponding to the sampling point based on a distance between the sampling point and the corresponding projection point; determining a local tangent spatial representation based on the projection point, a normal to the base shape, the signed distance field;
the neural network model comprises a multi-layer perception network; the input layer of the multi-layer perception network is a hash mapping layer, and the projection points are processed by the hash mapping layer to obtain potential characteristics; the latent features, normals to the basic shape, the signed distance field, and the local tangent space representation are transmitted to different nodes of a hidden layer in the multi-layer perceptual network, and after the hidden layer is processed, an output layer outputs preset physical attribute parameters, wherein the preset physical attribute parameters comprise sampling point bulk density, reflection coefficient, pitch angle, azimuth angle, and normals to sampling points determined based on the pitch angle and the azimuth angle.
2. The model training method of the real texture according to claim 1, wherein after the neural network model obtains the preset attribute parameters, rendering and outputting are performed based on the preset attribute parameters to obtain colors corresponding to the sampling points;
rendering and outputting based on the preset attribute parameters, wherein obtaining the color corresponding to the sampling point comprises: and inputting the reflection coefficient, the normal line of the sampling point and the normal line of the basic shape to a spherical harmonic renderer for rendering to obtain the color corresponding to the sampling point.
3. The model training method of real texture according to claim 2, wherein the neural network model performs volume rendering based on the sampling point volume density and the color corresponding to the sampling point.
4. The method for training a model of a texture of a real material according to claim 1, wherein when training the neural network model, the total loss function is: L = λ_r·L_recon + λ_c·L_cluster + λ_d·L_dist + λ_n·L_normal, wherein L_recon is the reconstruction loss, namely the RGB loss between the output and the input; λ_r is the reconstruction hyperparameter; L_cluster is the clustering loss, used for ensuring that similar textures are represented by similar latent features; λ_c is the clustering hyperparameter; L_dist is the distortion loss, used for removing floating artifacts; λ_d is the distortion hyperparameter; L_normal is the normal loss, used for supervising the pitch angle and azimuth angle based on the negative gradient of the volume density; λ_n is the normal hyperparameter.
5. A method for synthesizing texture of a real material based on the method of any one of claims 1 to 4, comprising:
acquiring a three-dimensional model to be processed containing real materials, and processing the three-dimensional model to be processed to separate out the contained basic shape and the real three-dimensional materials;
extracting a surface patch based on a real three-dimensional material, processing the surface patch, and determining sampling points on the surface patch, projection points of the sampling points of the surface patch on the basic shape and preset parameters;
inputting the sampling points on the patch, the projection points of the sampling points of the patch on the basic shape and the preset parameters into the neural network model after training is completed, so as to output the color of each sampling point in the patch;
and matching the predicted patches based on a patch matching algorithm to generate the texture of the material.
6. A method for implementing texture mapping of real materials based on claim 5, comprising: texture features are directly constructed on the new shape surface based on texture of the material, and mapping of texture of the material on the new shape surface is controlled based on a UV parameterization mode.
7. A computer readable storage medium, characterized in that the storage medium stores a computer program which, when executed by a processor, implements the method of any of the preceding claims 1-4.
8. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that the processor implements the method of any of the preceding claims 1-4 when executing the program.
CN202311558857.0A 2023-11-22 2023-11-22 Model training method, texture synthesis and mapping method for texture of real material Active CN117274344B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311558857.0A CN117274344B (en) 2023-11-22 2023-11-22 Model training method, texture synthesis and mapping method for texture of real material


Publications (2)

Publication Number Publication Date
CN117274344A CN117274344A (en) 2023-12-22
CN117274344B true CN117274344B (en) 2024-02-06

Family

ID=89206730

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311558857.0A Active CN117274344B (en) 2023-11-22 2023-11-22 Model training method, texture synthesis and mapping method for texture of real material

Country Status (1)

Country Link
CN (1) CN117274344B (en)

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2003202216A (en) * 2002-01-08 2003-07-18 Canon Inc Method, device, system and program for three-dimensional image processing
JP2011013475A (en) * 2009-07-02 2011-01-20 Dainippon Printing Co Ltd Apparatus for generating patterned data, medium having anisotropic reflection and method for producing the medium
CN112132943A (en) * 2020-08-26 2020-12-25 山东大学 3D printing-oriented process texture synthesis system and method
CN115512073A (en) * 2022-09-19 2022-12-23 南京信息工程大学 Three-dimensional texture grid reconstruction method based on multi-stage training under differentiable rendering
CN115601511A (en) * 2022-12-14 2023-01-13 深圳思谋信息科技有限公司(Cn) Three-dimensional reconstruction method and device, computer equipment and computer readable storage medium
CN116109757A (en) * 2023-01-17 2023-05-12 中国科学技术大学 Hash coding dynamic three-dimensional human body rendering synthesis method based on inner hidden coordinates
CN116246023A (en) * 2023-03-03 2023-06-09 网易(杭州)网络有限公司 Three-dimensional model reconstruction method, apparatus, device, storage medium, and program product
CN116310215A (en) * 2023-03-03 2023-06-23 网易(杭州)网络有限公司 Data processing method, three-dimensional reconstruction method, device, equipment and storage medium thereof
CN116778063A (en) * 2023-05-31 2023-09-19 南京邮电大学 Rapid virtual viewpoint synthesis method and device based on characteristic texture grid and hash coding
CN116977522A (en) * 2023-06-02 2023-10-31 腾讯科技(深圳)有限公司 Rendering method and device of three-dimensional model, computer equipment and storage medium

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2528655B (en) * 2014-07-24 2020-10-07 Advanced Risc Mach Ltd Graphics Processing Systems
JP6426968B2 (en) * 2014-10-08 2018-11-21 キヤノン株式会社 INFORMATION PROCESSING APPARATUS AND METHOD THEREOF
WO2021042277A1 (en) * 2019-09-03 2021-03-11 浙江大学 Method for acquiring normal vector, geometry and material of three-dimensional object employing neural network
DE102020215766A1 (en) * 2020-12-11 2022-06-15 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung eingetragener Verein Additive manufacturing based on arrays of offset-related, signed distances
US20230260200A1 (en) * 2022-01-31 2023-08-17 Meta Platforms, Inc. Explicit Radiance Field Reconstruction from Scratch


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
IRON: Inverse Rendering by Optimizing Neural SDFs and Materials From Photometric Images; Kai Zhang et al.; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR); 5565-5574 *
Three-dimensional reconstruction and texture mapping of point clouds; Kang Dou; China Master's Theses Full-text Database, Information Science and Technology; full text *

Also Published As

Publication number Publication date
CN117274344A (en) 2023-12-22

Similar Documents

Publication Publication Date Title
US11328472B2 (en) Watertight ray triangle intersection
US20240013471A1 (en) Query-specific behavioral modification of tree traversal
US11200725B2 (en) Method for continued bounding volume hierarchy traversal on intersection without shader intervention
US11295514B2 (en) Inverse rendering of a scene from a single image
US11734879B2 (en) Graphics processing using directional representations of lighting at probe positions within a scene
Rosu et al. Permutosdf: Fast multi-view reconstruction with implicit surfaces using permutohedral lattices
CN105378796B (en) Scalable volume 3D reconstruct
Uchida et al. Noise-robust transparent visualization of large-scale point clouds acquired by laser scanning
US20230260200A1 (en) Explicit Radiance Field Reconstruction from Scratch
CN115439628A (en) Joint shape and appearance optimization by topological sampling
CN116664422A (en) Image highlight processing method and device, electronic equipment and readable storage medium
Mubarik et al. Hardware acceleration of neural graphics
Chen et al. Circle: Convolutional implicit reconstruction and completion for large-scale indoor scene
CN117274344B (en) Model training method, texture synthesis and mapping method for texture of real material
Chen et al. Manipulating, deforming and animating sampled object representations
CN115809696A (en) Virtual image model training method and device
CN112907733A (en) Method and device for reconstructing three-dimensional model and three-dimensional model acquisition and reconstruction system
CN116612244B (en) Image generation method and device, storage medium and electronic equipment
Liu et al. Improving RGB-D-based 3D reconstruction by combining voxels and points
CN117689822B (en) Three-dimensional model construction method and device, storage medium and electronic equipment
Li Differentiable Visual Computing: Challenges and Opportunities
CN116844212A (en) Dynamic face image alignment method, device, equipment and readable storage medium
Boubekeur et al. Simod: Making freeform deformation size-insensitive
Asthana Neural representations for object capture and rendering
CN116977465A (en) Texture acquisition and synthesis method and system based on nerve radiation field

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant