CN112330654A - Object surface material acquisition device and method based on self-supervision learning model - Google Patents

Object surface material acquisition device and method based on self-supervision learning model

Info

Publication number
CN112330654A
Authority
CN
China
Prior art keywords
detected
light source
learning model
self
dcnn
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011279839.5A
Other languages
Chinese (zh)
Inventor
刘越
毕天腾
翁冬冬
王涌天
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Institute of Technology BIT
Original Assignee
Beijing Institute of Technology BIT
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Institute of Technology BIT filed Critical Beijing Institute of Technology BIT
Priority to CN202011279839.5A priority Critical patent/CN112330654A/en
Publication of CN112330654A publication Critical patent/CN112330654A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/0002 Inspection of images, e.g. flaw detection
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G06N 3/08 Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Biomedical Technology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Quality & Reliability (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)

Abstract

The device and method for obtaining the object surface material based on self-supervised learning comprise: a camera, a spherical light source housing and a computer system. The camera is arranged at the center of the spherical light source housing and is used for photographing the object to be measured; the spherical light source housing is placed on the surface of the object to be measured and provides a constant light source while the camera shoots; and the computer trains a DCNN self-supervised learning network based on the image information, normal information and light source information of the object to be measured captured by the camera, and extracts the surface material of the object to be measured. Image data of the object surface material are collected by the designed shooting device, and the DCNN is trained in a self-supervised manner to extract the material parameters accurately. This avoids using a complex and expensive optical system for omnidirectional material measurement under strict acquisition-environment requirements, reduces the difficulty of obtaining the object surface material, lowers labor and material costs, and improves efficiency.

Description

Object surface material acquisition device and method based on self-supervision learning model
Technical Field
The invention belongs to the technical field of object surface material reconstruction, and particularly relates to an object surface material acquisition device and method based on a self-supervised learning model.
Background
Digital reconstruction of objects is widely applied in fields such as virtual reality, augmented reality and art creation. Digital reconstruction methods require acquiring the geometric information, material information and current environment illumination information of the object, and the acquisition of this information plays a decisive role in reconstructing the appearance of the object.
For acquiring the material of an object, traditional methods mainly use complex and expensive optical systems to perform dense measurements of the object in every direction under a strict experimental environment, thereby reconstructing the surface material. Such methods are highly constrained and cannot be widely applied in practice.
DCNN algorithms combined with photographing apparatus are widely used to obtain material information from image data of an object surface. For example, in the paper "Material Editing Using a Physically Based Rendering Network", the authors propose using a DCNN (deep convolutional neural network) to obtain the normal, material and illumination information of an object surface from a single image of the object; the surface material is described by a BRDF (bidirectional reflectance distribution function), where different BRDF parameters represent different materials. The DCNN model is trained on synthetic data generated by computer rendering: through supervised learning, the DCNN maps a single image of the object surface to the BRDF parameters, which are then refined with a rendering layer. In the real world, however, most object surfaces combine multiple materials with complex textures; the large amount of training data needed to train an effective DCNN is hard to obtain in practice, and even data acquired at great labor and material cost cannot cover all materials. The synthetic data of that scheme therefore can never fully match actually captured images, which greatly limits its range of application.
Disclosure of Invention
The invention overcomes the defects of the prior art and provides a device and method for obtaining the object surface material based on self-supervised learning, so as to avoid using a complex and expensive optical system for omnidirectional measurement under strict acquisition-environment requirements, reduce the difficulty of obtaining the object surface material, lower labor and material costs, and improve efficiency.
According to an aspect of the present disclosure, the invention provides a device for obtaining the object surface material based on self-supervised learning, including: a camera, a spherical light source housing and a computer, the camera and the spherical light source housing being connected with the computer through cables;
the camera is arranged at the center of the spherical light source housing and is used for photographing the object to be measured;
the spherical light source housing is placed on the surface of the object to be measured and provides a constant light source while the camera shoots;
and the computer is used for training a DCNN (deep convolutional neural network) self-supervised learning network based on the image information, normal information and light source information of the object to be measured captured by the camera, and for extracting the surface material of the object to be measured.
In a possible implementation, an object surface material obtaining method applied to the above device includes:
collecting RGB images of the surface material of the object to be measured with the camera, determining a normal map of the surface from the normal information of the surface of the object to be measured, and storing the normal map;
inputting the RGB image of the surface material into the DCNN self-supervised learning model of the computer for training, and outputting the material parameter matrix function BRDF of the surface into the rendering layer of the self-supervised learning model;
rendering the normal information of the surface, the environment map formed by calibrating the spherical light source, and the material parameter matrix function BRDF of the surface into a new RGB image of the surface material according to a rendering equation;
calculating the error between the new RGB image of the surface material and the RGB image of the surface material input into the DCNN self-supervised learning model;
and when the error is minimal, obtaining the material of the surface of the object to be measured from the material parameter matrix function BRDF output by the DCNN self-supervised learning model.
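The five steps above can be sketched as a toy training loop. This is an illustrative sketch only: the real system uses a DCNN and a physically based rendering layer, which are replaced here by scalar stand-ins, and all names (`predict_brdf`, `render_image`, `train`) are hypothetical.

```python
# Toy sketch of the self-supervised loop: predict material parameters,
# re-render under the known calibrated light, minimize reconstruction error.

def predict_brdf(image, weight):
    # Stand-in for the DCNN: predicts one "albedo" value per pixel.
    return [weight * p for p in image]

def render_image(brdf, light):
    # Stand-in for the rendering layer: pixel = albedo * light intensity.
    return [b * light for b in brdf]

def l2_loss(a, b):
    # Euclidean-distance reconstruction error between two images.
    return sum((x - y) ** 2 for x, y in zip(a, b))

def train(image, light, steps=300, lr=0.01):
    w = 0.0  # the single "network weight" of this toy model
    for _ in range(steps):
        rendered = render_image(predict_brdf(image, w), light)
        # analytic gradient of sum((w*p*light - p)**2) with respect to w
        grad = sum(2 * (r - p) * p * light for r, p in zip(rendered, image))
        w -= lr * grad
    return w

image = [0.2, 0.5, 0.8]   # captured intensities (toy, single channel)
light = 2.0               # calibrated constant light intensity
w = train(image, light)   # converges towards 1/light = 0.5
```

Because the loss compares the re-rendered image against the captured one, no labeled material data is needed, which is the point of the self-supervised formulation.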
In one possible implementation, the light sources on the spherical light source housing are uniformly distributed.
In a possible implementation, obtaining the material of the surface of the object to be measured from the material parameter matrix function BRDF of the surface includes:
the output of the material parameter matrix function BRDF of the surface of the object to be measured is the set of material parameters of the surface, and the material of the surface of the object to be measured is determined from these material parameters.
In one possible implementation, the rendering layer in the self-supervised learning model supports back propagation.
In a possible implementation, the rendering layer differentiates with respect to the material parameter matrix function BRDF of the surface of the object to be measured using the chain rule, so as to realize back propagation.
In one possible implementation, the rendering layer performs rendering and error back propagation for each pixel of the RGB image.
In one possible implementation, the DCNN self-supervised learning model includes: 4 downsampling modules, 3 upsampling modules, 1 convolution module and 1 output module.
In one possible implementation, the downsampling module includes 2 convolution units and 1 pooling layer; each convolution unit comprises 1 convolution layer and 1 activation function layer;
the up-sampling module comprises 1 cascade layer, 2 convolution units and 1 up-sampling unit, wherein the up-sampling unit comprises 1 deconvolution layer and 1 activation function layer;
the convolution module comprises 2 convolution units and 1 up-sampling unit;
the output module comprises 1 cascade layer, 2 convolution units and 1 convolution layer.
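Assuming each pooling layer halves the spatial resolution and each deconvolution doubles it (a common convention; the patent does not state strides), the module counts above imply that the output resolution matches the input resolution:

```python
# Resolution bookkeeping through the listed modules (strides are assumed).

def down(hw):   # downsampling module: 2 conv units (size-preserving) + pooling
    return (hw[0] // 2, hw[1] // 2)

def up(hw):     # deconvolution in an upsampling unit doubles the size
    return (hw[0] * 2, hw[1] * 2)

size = (256, 256)           # illustrative input resolution
skips = []
for _ in range(4):          # 4 downsampling modules
    skips.append(size)      # feature maps kept for the cascade (skip) layers
    size = down(size)       # 256 -> 128 -> 64 -> 32 -> 16
size = up(size)             # convolution module ends with 1 upsampling unit
for _ in range(3):          # 3 upsampling modules
    size = up(size)
# the output module (cascade layer + convolutions) preserves resolution
assert size == (256, 256)   # BRDF parameter map matches the input image
```

This is consistent with the later statement that the network outputs a material parameter matrix of the same resolution as the input image.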
The device for obtaining the object surface material based on self-supervised learning comprises a camera, a spherical light source housing and a computer, the camera and the spherical light source housing being connected with the computer through cables. The camera is arranged at the center of the spherical light source housing and is used for photographing the object to be measured; the spherical light source housing is placed on the surface of the object to be measured and provides a constant light source while the camera shoots; and the computer trains a DCNN (deep convolutional neural network) self-supervised learning network based on the image information, normal information and light source information of the object to be measured captured by the camera, and extracts the surface material. Image data of the object surface material are collected by the designed shooting device, and the DCNN is trained in a self-supervised manner to extract the material parameters accurately. This avoids using a complex and expensive optical system for omnidirectional material measurement under strict acquisition-environment requirements, reduces the difficulty of obtaining the object surface material, lowers labor and material costs, and improves efficiency.
Drawings
The accompanying drawings are included to provide a further understanding of the technology or prior art of the present application and are incorporated in and constitute a part of this specification. The drawings expressing the embodiments of the present application are used for explaining the technical solutions of the present application, and should not be construed as limiting the technical solutions of the present application.
Fig. 1 is a schematic structural diagram of an object surface material acquisition apparatus based on a self-supervised learning model according to an embodiment of the present disclosure;
Fig. 2 is a flowchart of a method for obtaining the object surface material based on a self-supervised learning model according to an embodiment of the disclosure;
Fig. 3 is a schematic diagram of the self-supervised learning model according to an embodiment of the present disclosure;
Fig. 4 is a schematic diagram of a downsampling module of the self-supervised learning model according to an embodiment of the present disclosure;
Fig. 5 is a schematic diagram of an upsampling module of the self-supervised learning model according to an embodiment of the present disclosure;
Fig. 6 is a schematic diagram of an output module of the self-supervised learning model according to an embodiment of the present disclosure;
Fig. 7 is a schematic diagram of a convolution module of the self-supervised learning model according to an embodiment of the present disclosure.
Detailed Description
The following describes embodiments of the present invention in detail with reference to the accompanying drawings and examples, so that the reader can fully understand and implement how the technical means are applied to solve the technical problems and achieve the corresponding technical effects. Provided there is no conflict, the embodiments and the features of the embodiments can be combined, and all resulting technical solutions fall within the scope of the present invention.
Additionally, the steps illustrated in the flowcharts of the figures may be performed in a computer system as a set of computer-executable instructions. Also, although a logical order is shown in the flowcharts, in some cases the steps shown or described may be performed in a different order.
Fig. 1 shows a schematic structural diagram of an object surface material acquisition apparatus based on a self-supervised learning model according to an embodiment of the present disclosure.
As shown in fig. 1, the apparatus may include a camera, a spherical light source housing and a computer (not shown), the camera and the spherical light source housing being connected with the computer through cables. The camera may be arranged at the central position of the spherical light source housing and is used for photographing the object to be measured; the spherical light source housing is placed on the surface of the object to be measured to provide a constant light source while the camera shoots, and the light sources on the spherical light source housing are uniformly distributed. The computer trains a DCNN (deep convolutional neural network) self-supervised learning network based on the image information, normal information and light source information of the object to be measured captured by the camera, and extracts the surface material of the object to be measured.
The camera samples images of a certain range of the surface of the object to be measured, and the spherical uniform light source provides a constant light source while the camera shoots. The light source can be calibrated in advance, stored in the system in the form of an environment map, and used by subsequent processing algorithms.
The whole device is placed on the surface of the object to be measured; the spherical light source provides illumination while the camera photographs the surface. Because the surface of the object to be measured is planar and the lens viewpoint of the camera is located at the center of the sphere, the normal information of the surface can also be determined and stored in the system in the form of a normal map for subsequent processing algorithms.
The method for obtaining the object surface material based on self-supervised learning works together with the DCNN self-supervised learning algorithm: the image collected by the above acquisition device is taken as input and, combined with the normal information and the environment map (illumination information) stored in the system, a rendering layer is used to construct the reconstruction error of the captured surface image so as to realize self-supervised learning. In this way, by training the DCNN self-supervised network without any labeled data, the material information of the surface of the object to be measured is obtained from the image data.
The object surface material acquisition device based on the self-supervised learning model of the present disclosure includes: a camera, a spherical light source housing and a computer, connected through cables. The camera is arranged at the center of the spherical light source housing and is used for photographing the object to be measured; the spherical light source housing is placed on the surface of the object to be measured and provides a constant light source while the camera shoots; and the computer trains a DCNN (deep convolutional neural network) self-supervised learning network based on the image information, normal information and light source information of the object to be measured captured by the camera, and extracts the surface material. Image data of the object surface material are collected by the designed shooting device, and the DCNN is trained in a self-supervised manner to extract the material parameters accurately. This avoids using a complex and expensive optical system for omnidirectional material measurement under strict acquisition-environment requirements, reduces the difficulty of obtaining the object surface material, lowers labor and material costs, and improves efficiency.
Fig. 2 shows a flowchart of an object surface material obtaining method based on an auto-supervised learning model according to an embodiment of the present disclosure. As shown in fig. 2, the method may include:
step S1: the method comprises the steps of collecting RGB images of the surface material of an object to be detected by a camera, determining a normal map of the surface material of the object to be detected based on normal information of the surface of the object to be detected, and storing the normal map.
As shown in fig. 1, the image acquired by the camera is an RGB image. Because the installation position of the camera fixes the relation between the surface normal of the object to be measured and the ambient light, the normal information of the surface can be determined, and a normal image matching the resolution of the captured image is stored in the system.
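Since the measured surface is planar and the viewpoint sits at the sphere center, every pixel shares one surface normal, so the stored normal map is a constant image. A minimal sketch (the axis convention (0, 0, 1) is an assumption, not stated in the patent):

```python
def make_normal_map(width, height, normal=(0.0, 0.0, 1.0)):
    # One identical unit normal per pixel of the captured RGB image.
    return [[normal for _ in range(width)] for _ in range(height)]

normal_map = make_normal_map(4, 3)   # tiny illustrative resolution
```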
Step S2: inputting the RGB image of the surface material of the object to be measured into the DCNN self-supervised learning model of the computer for training, and outputting the material parameter matrix function BRDF of the surface into the rendering layer of the self-supervised learning model.
Fig. 3 shows a schematic diagram of the self-supervised learning network according to an embodiment of the present disclosure.
In an example, as shown in fig. 3, the DCNN self-supervised learning model may include 4 downsampling modules, 3 upsampling modules, 1 convolution module and 1 output module, where the downsampling and upsampling modules may be connected to realize feature fusion between them.
Figs. 4-7 are schematic diagrams of a downsampling module, an upsampling module, an output module and a convolution module of the self-supervised learning model according to an embodiment of the present disclosure.
In one example, as shown in fig. 4, the downsampling module of the DCNN self-supervised learning model may include 2 convolution units and 1 pooling layer. Each convolution unit may include 1 convolution layer and 1 activation function layer.
As shown in fig. 5, the upsampling module of the DCNN self-supervised learning model may include 1 cascade layer, 2 convolution units and 1 upsampling unit. Each upsampling unit includes 1 deconvolution layer and 1 activation function layer.
As shown in fig. 7, the convolution module of the DCNN self-supervised learning model may include 2 convolution units and 1 upsampling unit. As shown in fig. 6, the output module of the DCNN self-supervised learning model may include 1 cascade layer, 2 convolution units and 1 convolution layer.
As shown in fig. 3, the DCNN self-supervised learning model uses a U-shaped image-to-image network: the RGB images of the surface material of the object to be measured collected by the camera of fig. 1 are taken as input, and a material parameter matrix function BRDF of the surface material, at the same resolution, is output.
The material parameter matrix function BRDF (Bidirectional Reflectance Distribution Function) is a four-variable function defining how light is reflected at an opaque surface; it characterizes how the light arriving from a given incident direction contributes to the radiance leaving in a given outgoing direction.
In an example, the rendering layer in the self-supervised learning model supports back propagation. As shown in fig. 3, network layers such as the upsampling and downsampling layers of the DCNN self-supervised learning model implement forward and backward propagation; the rendering layer can likewise implement forward and backward propagation, so that during training it both renders an image and passes the rendered-image error backwards.
In an example, the rendering layer may differentiate with respect to the material parameter matrix function BRDF of the surface of the object to be measured using the chain rule to realize back propagation, and the rendering layer performs rendering and error back propagation for each pixel of the RGB image.
For example, the material parameter matrix function BRDF is a function including the material parameters of the surface of the object to be measured, and includes 3 sub-terms of the same expression, each sub-term includes two variables, each variable has 6 coefficients, so there are 108 parameters, and the kind and material of the surface to be measured can be determined by the 108 parameters.
The expression of the material parameter matrix function BRDF is given in the original only as an equation image and is not fully recoverable here. In it, m denotes the parameter vector of the BRDF function, k = 1, 2, 3 indexes the three RGB color channels, and two groups of parameters control diffuse reflection and highlight (specular) reflection, respectively. h_x is the half-angle vector, defined as h_x = (ω_i + ω_o) / |ω_i + ω_o|.
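A minimal sketch of the half-angle vector and of a diffuse-plus-specular BRDF evaluation built on it. The Blinn-Phong-style form and all constants here are assumptions for illustration, not the patent's exact basis functions (which survive only as equation images in the source).

```python
import math

def normalize(v):
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def half_vector(wi, wo):
    # h_x = (w_i + w_o) / |w_i + w_o|
    return normalize(tuple(a + b for a, b in zip(wi, wo)))

def brdf(wi, wo, n, kd=0.7, ks=0.3, shininess=32):
    # kd: diffuse parameter; ks, shininess: highlight (specular) parameters
    h = half_vector(wi, wo)
    n_dot_h = max(0.0, sum(a * b for a, b in zip(n, h)))
    return kd / math.pi + ks * n_dot_h ** shininess

n = (0.0, 0.0, 1.0)
wi = normalize((0.0, 0.5, 1.0))
wo = normalize((0.0, -0.5, 1.0))
value = brdf(wi, wo, n)  # mirror directions: h aligns with n, peak highlight
```

With ω_i and ω_o mirrored about the normal, the half vector coincides with n and the specular term reaches its maximum, which is the defining role of h_x in highlight models.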
step S3: and rendering the normal information of the surface of the object to be measured, an environment diagram formed by calibrating the spherical light source and a material parameter matrix function BRDF of the surface of the object to be measured into a new RGB image of the surface material of the object to be measured according to a rendering equation.
The physical formation process of an image can be expressed by a rendering equation, which describes how incident light is reflected by the surface material of the object to be measured into the outgoing direction.
The expression of the rendering equation is:
L_o(x, ω_o) = L_e(x, ω_o) + L_r(x, ω_o)
namely:
L_o(x, ω_o) = L_e(x, ω_o) + ∫_Ω f(x, ω_i, ω_o) L_i(x, ω_i)(ω_i · n_x) dω_i
where L_o denotes the radiance leaving a point x on the surface of the object to be measured in direction ω_o, and n_x denotes the normal at x. L_e and L_r denote the light emitted by the surface and the light it reflects, respectively; both are functions of the light L_i incident from directions ω_i on the upper hemisphere Ω. f is the BRDF, describing the reflectance of the object surface, i.e. the material parameter matrix function to be acquired for the surface of the object to be measured.
In image-based rendering, without considering the object's own lighting, the above rendering equation can be written as:
I = Σ_{i=1}^{m} f(ω_i, ω_o) L_i Ω_i
where I represents the value of a pixel, m is the number of pixels in the environment map, and L_i represents the value of the i-th pixel in the environment map. Ω_i describes the solid angle that the i-th pixel actually contributes in a spherical coordinate system; it is computed from the width and height of the environment map using a floor (rounding-down) operation on the pixel index. The exact expressions for Ω_i appear in the original only as equation images.
Therefore, a new RGB image of the surface material of the object to be detected can be generated by combining the normal information of the surface of the object to be detected, the environment diagram formed by calibrating the spherical light source and the material parameter matrix function BRDF of the surface of the object to be detected based on the rendering equation of the image.
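The discrete rendering over the calibrated environment map can be sketched numerically. The latitude-longitude solid-angle weighting below is an assumption (the source gives the Ω_i expressions only as images); as a sanity check, with a constant BRDF value and a uniform unit-radiance map, the sum should approach f times 4π, the solid angle of the full sphere.

```python
import math

def pixel_solid_angle(i, width, height):
    # Latitude-longitude map: the row index (a floor over i) sets theta.
    row = i // width
    theta = math.pi * (row + 0.5) / height
    return (2 * math.pi / width) * (math.pi / height) * math.sin(theta)

def render_pixel(f, envmap, width, height):
    # I = sum_i f * L_i * Omega_i  (constant BRDF value f for simplicity)
    return sum(f * L * pixel_solid_angle(i, width, height)
               for i, L in enumerate(envmap))

w, h = 64, 32
I = render_pixel(1.0, [1.0] * (w * h), w, h)
# the per-pixel solid angles sum to approximately 4*pi
```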
Step S4: calculating the error between the new RGB image of the surface material of the object to be measured and the RGB image of the surface material input into the DCNN self-supervised learning model.
Step S5: when the error is minimal, obtaining the material of the surface of the object to be measured from the material parameter matrix function BRDF of the surface output by the DCNN self-supervised learning model.
Back propagation in the DCNN self-supervised learning model adjusts the weight parameters during training. For the rendering layer of the DCNN self-supervised learning model, back propagation can be realized by differentiating with respect to the parameters in the BRDF, applying the chain rule term by term:
∂I/∂m = Σ_i (∂f(ω_i, ω_o)/∂m) L_i Ω_i
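The chain rule for the rendering layer can be checked numerically. Here f(m) = m² is only an illustrative stand-in for the BRDF's dependence on a material parameter m; the point demonstrated is that the rendered pixel's gradient is the Ω_i-weighted sum of the per-term BRDF gradients.

```python
def render(m, lights, omegas):
    # I(m) = sum_i f(m) * L_i * Omega_i with stand-in f(m) = m**2
    return sum((m ** 2) * L * o for L, o in zip(lights, omegas))

def render_grad(m, lights, omegas):
    # chain rule: dI/dm = sum_i (df/dm) * L_i * Omega_i, with df/dm = 2*m
    return sum((2 * m) * L * o for L, o in zip(lights, omegas))

lights = [1.0, 0.5, 2.0]   # environment-map pixel values (illustrative)
omegas = [0.1, 0.2, 0.3]   # their solid angles (illustrative)
m, eps = 0.7, 1e-6
numeric = (render(m + eps, lights, omegas)
           - render(m - eps, lights, omegas)) / (2 * eps)
analytic = render_grad(m, lights, omegas)
# the finite-difference and chain-rule gradients agree, so the rendering
# layer can propagate errors back to the material parameters
```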
in the training of the DCNN self-supervision learning model, the rendering layer generates a new RGB image by using the normal information and the illumination information stored in the system and receiving the surface material parameters of the object to be tested generated by the front end of the DCNN self-supervision learning model. In order to enable the DCNN self-supervision learning model to generate and output material parameters in the image shot by the surface of the object to be tested, the convergence of the DCNN self-supervision learning model is constrained by calculating the error (Euclidean distance loss function) of the new RGB image of the surface material of the object to be tested and the RGB image of the surface material of the object to be tested input into the DCNN self-supervision learning model:
Figure BDA0002780405030000092
wherein the content of the first and second substances,
Figure BDA0002780405030000093
RGB image representing the surface material of the object to be detected input into the DCNN self-supervision learning model; f (m'x,nxAnd L) the rendering layer generates a new RGB image of the surface material of the object to be detected.
The DCNN self-supervised learning process targets the minimization of this Euclidean-distance loss function. When the loss is minimal, the new RGB image of the surface material output by the rendering layer is consistent with the RGB image of the surface material input into the model; this constrains the model to produce the surface material parameters of the object to be measured, which are thereby obtained.
The method for acquiring the object surface material based on the self-supervised learning model comprises: collecting RGB images of the surface material of the object to be measured with a camera, determining a normal map of the surface from the normal information of the surface of the object to be measured, and storing it; inputting the RGB image of the surface material into the DCNN self-supervised learning model of the computer for training, and outputting the material parameter matrix function BRDF of the surface into the rendering layer of the self-supervised learning model; rendering the normal information of the surface, the environment map formed by calibrating the spherical light source, and the material parameter matrix function BRDF into a new RGB image of the surface material according to the rendering equation; calculating the error between the new RGB image and the RGB image input into the DCNN self-supervised learning model; and when the error is minimal, obtaining the material of the surface of the object to be measured from the material parameter matrix function BRDF output by the model.
By adopting the DCNN self-supervised learning model, taking the image acquired by the self-supervised-learning-based surface material acquisition device as input, and combining it with the normal information and illumination information stored in the system, the rendering layer is used to construct an image reconstruction error that realizes self-supervised learning, which reduces labor and material costs and improves efficiency.
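A toy illustration of how the rendering layer's reconstruction error can drive material recovery, under the simplifying assumption of a single-parameter Lambertian BRDF in place of the full BRDF matrix (all names and values below are hypothetical, not from the patent):

```python
import numpy as np

def render(albedo, normal, light_dir, light_intensity):
    """Render one pixel: albedo * max(0, n.l) * intensity (Lambertian sketch)."""
    shading = max(0.0, float(np.dot(normal, light_dir)))
    return albedo * shading * light_intensity

normal = np.array([0.0, 0.0, 1.0])     # from the stored normal map
light_dir = np.array([0.0, 0.0, 1.0])  # from the calibrated spherical light source
intensity = 1.0
observed = 0.6                         # captured pixel value

albedo = 0.1                           # material parameter to recover
lr = 0.1
for _ in range(200):
    pred = render(albedo, normal, light_dir, intensity)
    # Chain rule through the rendering step:
    # dL/dalbedo = dL/dpred * dpred/dalbedo = 2*(pred - observed) * shading * intensity
    shading = max(0.0, float(np.dot(normal, light_dir)))
    grad = 2.0 * (pred - observed) * shading * intensity
    albedo -= lr * grad
```

Gradient descent drives the per-pixel reconstruction error toward zero, and the recovered albedo converges to the value that explains the observation; the same chain-rule mechanism underlies the back-propagation property claimed for the rendering layer.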
Although the embodiments of the present invention have been described above, the above descriptions are provided only to facilitate understanding of the present invention and are not intended to limit it. It will be understood by those skilled in the art that various changes in form and detail may be made therein without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (9)

1. An object surface material acquisition device based on a self-supervised learning model, characterized in that the device comprises: a camera, a spherical light source housing, and a computer, wherein the camera and the computer are connected through a cable;
the camera is arranged at the center of the spherical light source housing and is used for shooting the object to be detected;
the spherical light source housing is placed over the surface of the object to be detected and is used for providing a constant light source while the camera shoots;
and the computer is used for training a DCNN (deep convolutional neural network) self-supervised learning network based on the image information and normal information of the object to be detected captured by the camera together with the light source information, and for extracting the material of the surface of the object to be detected.
2. The object surface material acquisition device according to claim 1, characterized in that the light sources on the spherical light source housing are uniformly distributed.
3. An object surface material acquisition method based on a self-supervised learning model, applied to the device according to any one of claims 1 to 2, characterized in that the method comprises the following steps:
collecting an RGB image of the surface material of an object to be detected by using a camera, determining a normal map of the surface material of the object to be detected based on the normal information of the surface of the object to be detected, and storing the normal map;
inputting the RGB image of the surface material of the object to be detected into a DCNN self-supervised learning model of the computer for training, and outputting a material parameter matrix function BRDF of the surface of the object to be detected to a rendering layer in the self-supervised learning model;
rendering the surface normal information of the object to be detected, an environment map obtained by calibrating the spherical light source, and the material parameter matrix function BRDF of the surface of the object to be detected into a new RGB image of the surface material of the object to be detected according to a rendering equation;
calculating the error between the new RGB image of the surface material of the object to be detected and the RGB image of the surface material of the object to be detected input into the DCNN self-supervised learning model;
and, when the error is minimal, obtaining the material of the surface of the object to be detected from the material parameter matrix function BRDF of the surface of the object to be detected output by the DCNN self-supervised learning model.
4. The object surface material acquisition method according to claim 3, characterized in that obtaining the material of the surface of the object to be detected from the material parameter matrix function BRDF comprises:
the output of the material parameter matrix function BRDF of the surface of the object to be detected is the material parameters of the surface of the object to be detected, and the material of the surface of the object to be detected is determined according to these material parameters.
5. The object surface material acquisition method according to claim 3, characterized in that the rendering layer in the self-supervised learning model supports back-propagation.
6. The object surface material acquisition method according to claim 5, characterized in that the rendering layer differentiates with respect to the material parameter matrix function BRDF of the surface of the object to be detected according to the chain rule, so as to realize back-propagation.
7. The object surface material acquisition method according to claim 5, wherein the rendering layer performs rendering and error back-propagation for each pixel of an RGB image.
8. The object surface material acquisition method according to claim 3, characterized in that the DCNN self-supervised learning model comprises: 4 downsampling modules, 3 upsampling modules, 1 convolution module and 1 output module.
9. The object surface material acquisition method according to claim 8, characterized in that each downsampling module comprises 2 convolution units and 1 pooling layer, wherein each convolution unit comprises 1 convolution layer and 1 activation function layer;
each upsampling module comprises 1 cascade layer, 2 convolution units and 1 upsampling unit, wherein the upsampling unit comprises 1 deconvolution layer and 1 activation function layer;
the convolution module comprises 2 convolution units and 1 upsampling unit;
and the output module comprises 1 cascade layer, 2 convolution units and 1 convolution layer.
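Under an assumed 256×256 input resolution (the patent does not specify one), the resolutions implied by claims 8 and 9 can be walked through as follows: each downsampling module halves the resolution via its pooling layer, and the four upsampling units (three upsampling modules plus the one in the convolution module) each double it, so the output module restores the input resolution.

```python
# Shape walk-through of the claimed DCNN layout. The input resolution is an
# illustrative assumption; channel counts are omitted because the patent
# does not specify them.
def downsample(h, w):
    return h // 2, w // 2   # 1 pooling layer per downsampling module

def upsample(h, w):
    return h * 2, w * 2     # 1 deconvolution layer per upsampling unit

h, w = 256, 256             # assumed input RGB resolution
for _ in range(4):          # 4 downsampling modules
    h, w = downsample(h, w)
# bottleneck resolution is now 16 x 16

h, w = upsample(h, w)       # convolution module: 2 conv units + 1 upsampling unit
for _ in range(3):          # 3 upsampling modules
    h, w = upsample(h, w)
# output module (cascade layer + conv units + conv layer) keeps resolution
```

Four halvings balanced by four doublings means the output material maps align pixel-for-pixel with the input RGB image, which is what per-pixel rendering and error back-propagation require.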
CN202011279839.5A 2020-11-16 2020-11-16 Object surface material acquisition device and method based on self-supervision learning model Pending CN112330654A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011279839.5A CN112330654A (en) 2020-11-16 2020-11-16 Object surface material acquisition device and method based on self-supervision learning model


Publications (1)

Publication Number Publication Date
CN112330654A true CN112330654A (en) 2021-02-05

Family

ID=74318663


Country Status (1)

Country Link
CN (1) CN112330654A (en)


Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103115923A (en) * 2013-01-28 2013-05-22 上海新纤仪器有限公司 High-luminous-intensity light source microscope as well as image identification and analysis device and application
CN104751464A (en) * 2015-03-30 2015-07-01 山东大学 Real sense material measurement device and method based on camera light source array modes
US20180047208A1 (en) * 2016-08-15 2018-02-15 Aquifi, Inc. System and method for three-dimensional scanning and for capturing a bidirectional reflectance distribution function
CN110567977A (en) * 2019-10-11 2019-12-13 湖南讯目科技有限公司 Curved glass defect detection system and method


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
GUILIN LIU et al.: "Material Editing Using a Physically Based Rendering Network", arXiv:1708.00106v2 [cs.CV] *
TIANTENG BI et al.: "SIR-Net: Self-Supervised Transfer for Inverse Rendering via Deep Feature Fusion and Transformation From a Single Image", IEEE Access *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023088348A1 (en) * 2021-11-22 2023-05-25 北京字节跳动网络技术有限公司 Image drawing method and apparatus, and electronic device and storage medium
CN115375847A (en) * 2022-08-25 2022-11-22 北京百度网讯科技有限公司 Material recovery method, three-dimensional model generation method and model training method
CN115375847B (en) * 2022-08-25 2023-08-29 北京百度网讯科技有限公司 Material recovery method, three-dimensional model generation method and model training method

Similar Documents

Publication Publication Date Title
Gardner et al. Deep parametric indoor lighting estimation
Kang et al. Learning efficient illumination multiplexing for joint capture of reflectance and shape.
CN108764250B (en) Method for extracting essential image by using convolutional neural network
JP2022032937A (en) Computer vision method and system
CN113379698B (en) Illumination estimation method based on step-by-step joint supervision
CN112330654A (en) Object surface material acquisition device and method based on self-supervision learning model
CN112819941B (en) Method, apparatus, device and computer readable storage medium for rendering water surface
CN110033509B (en) Method for constructing three-dimensional face normal based on diffuse reflection gradient polarized light
US20230368459A1 (en) Systems and methods for rendering virtual objects using editable light-source parameter estimation
Song et al. Deep sea robotic imaging simulator
CN115797561A (en) Three-dimensional reconstruction method, device and readable storage medium
CN114998507A (en) Luminosity three-dimensional reconstruction method based on self-supervision learning
Mirbauer et al. SkyGAN: Towards Realistic Cloud Imagery for Image Based Lighting.
CN114581577A (en) Object material micro-surface model reconstruction method and system
Zhang et al. Illumination estimation for augmented reality based on a global illumination model
CN113888694A (en) SDF field micro-renderable-based transparent object reconstruction method and system
CN116091684B (en) WebGL-based image rendering method, device, equipment and storage medium
CN115656189B (en) Defect detection method and device based on luminosity stereo and deep learning algorithm
CN116524101A (en) Global illumination rendering method and device based on auxiliary buffer information and direct illumination
CN112687009B (en) Three-dimensional face representation method and parameter measurement device and method thereof
US11804007B2 (en) 3D digital model surface rendering and conversion
EP3819586B1 (en) Method for generating a three-dimensional model of an object
CN113947547A (en) Monte Carlo rendering graph noise reduction method based on multi-scale kernel prediction convolutional neural network
GB2603951A (en) Image Processing
Ni et al. Detection of real-time augmented reality scene light sources and construction of photorealis tic rendering framework

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20210205