CN117953137B - Human body relighting method based on a dynamic surface reflectance field - Google Patents

Human body relighting method based on a dynamic surface reflectance field

Info

Publication number
CN117953137B
CN117953137B (application CN202410353427.3A)
Authority
CN
China
Prior art keywords: human body, illumination, light, rendering, space
Prior art date
Legal status: Active (assumption, not a legal conclusion; Google has not performed a legal analysis)
Application number
CN202410353427.3A
Other languages: Chinese (zh)
Other versions: CN117953137A (en)
Inventors: 张盛平, 孙艺朋靖, 柳青林, 孟权令, 吕晓倩, 王晨阳
Current Assignee: Harbin Institute of Technology Weihai (listed assignee may be inaccurate)
Original Assignee: Harbin Institute of Technology Weihai
Priority date (assumption, not a legal conclusion)
Filing date
Publication date
Application filed by Harbin Institute of Technology Weihai
Priority to CN202410353427.3A
Publication of CN117953137A
Application granted
Publication of CN117953137B
Legal status: Active
Anticipated expiration (date not listed)

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T15/50 Lighting effects
    • G06T15/506 Illumination models
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T15/10 Geometric effects
    • G06T15/20 Perspective computation
    • G06T15/205 Image-based rendering
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02B CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO BUILDINGS, e.g. HOUSING, HOUSE APPLIANCES OR RELATED END-USER APPLICATIONS
    • Y02B20/00 Energy efficient lighting technologies, e.g. halogen lamps or gas discharge lamps
    • Y02B20/40 Control techniques providing energy savings, e.g. smart controller or presence detection

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Graphics (AREA)
  • General Physics & Mathematics (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Computing Systems (AREA)
  • Image Generation (AREA)

Abstract

The invention discloses a human body relighting method based on a dynamic surface reflectance field, comprising the following steps: decomposing the 4D space with multi-plane and hash representations, and encoding the multi-view dynamic human body video to obtain compact space-time position codes; obtaining the signed distance function values, geometric features and color values of the ray sampling points; obtaining the depth, normal, color and material of the corresponding pixels; modeling direct illumination, light visibility and indirect illumination; simultaneously constraining the rendered images and learning the model parameters to obtain the dynamic human relighting video. The invention designs an efficient 4D implicit representation to model the human body surface reflectance field, overcoming the large fitting errors and low freedom of motion inherent in template-based methods and achieving accurate estimation of the dynamic human surface reflectance field. In the illumination modeling, visibility and indirect light are introduced through ray tracing, accurately simulating secondary-bounce shading and yielding more accurate material estimation and relighting effects.

Description

Human body relighting method based on a dynamic surface reflectance field
Technical Field
The invention relates to the technical field of dynamic three-dimensional reconstruction and inverse rendering, in particular to a human body relighting method based on a dynamic surface reflectance field.
Background
Dynamic human relighting is an important research direction in computer vision and graphics, with applications across film production, video game development, virtual reality and other industries. Its core aim is to manipulate light and shadow so that a dynamic human body fuses naturally with a new lighting environment.
Conventional approaches rely on the controllable lighting systems and sophisticated camera arrays of a Light Stage to capture accurate human reflectance; however, the expensive equipment limits their widespread use. To address these limitations, existing approaches propose explicitly optimizing dynamic human geometry and reflectance fields under unknown constant lighting conditions. Nevertheless, achieving fine dynamic reconstruction and high-quality relighting remains a significant challenge for explicit representations. Driven by the development of implicit neural scene representations, photorealistic free-viewpoint rendering has become possible, spurring the exploration of neural inverse rendering for relighting static objects. However, constrained by the representational limits of static radiance fields, these methods are difficult to extend to dynamic scenes. To simulate time-varying geometry and reflectance fields under complex motion, the latest approaches use the deformable body template SMPL as an explicit guide for body motion. The fitting errors and limited freedom of motion inherent in template-based modeling hamper existing pipelines, making it difficult to reconstruct dynamic geometric details in more challenging scenarios involving loose clothing and character interactions.
Disclosure of Invention
The invention aims to provide a human body relighting method based on a dynamic surface reflectance field, which uses a compact space-time implicit representation to learn human motion with high degrees of freedom and achieves fine dynamic human geometric reconstruction and material estimation. To model accurate shadow effects, the method estimates direct and indirect illumination simultaneously and adopts a physically based rendering method to achieve realistic rendering.
In order to achieve the above purpose, the present invention adopts the following technical scheme:
a human body re-illumination method based on a dynamic surface reflection field comprises the following steps:
Decomposing the 4D space with multi-plane and hash representations, and encoding the input multi-view dynamic human body video with the space-time multi-plane representation to obtain a compact space-time position code. Specifically: the 4D space is decomposed into a compact multi-plane feature encoder and a time-aware hash encoder; during modeling, rays are cast from the camera center through the imaging plane, the rays in the 4D space are sampled with a fixed number of points each, and every point is space-time encoded with the two encoders above.
Inputting the space-time position codes into a geometry network to obtain the signed distance function values and geometric features of the ray sampling points. Specifically: the space-time position codes of the ray sampling points are fed into a multi-layer perceptron, and the signed distance function values and geometric features of the corresponding points are obtained by fitting the rendering loss.
Inputting the geometric features and space-time position codes of the ray sampling points into a color network to obtain their color values. Specifically: the space-time position codes of the ray sampling points are concatenated with the geometric features, fed into a multi-layer perceptron, and the color values of the corresponding points are obtained by fitting the rendering loss.
Integrating the density, normal, color and material of the sampling points on each ray with the volume rendering technique to obtain the depth, normal, color and material of the corresponding pixels, and thereby the depth map, normal map, color map and material map of the dynamic human body.
For illumination modeling, the method estimates direct and indirect illumination simultaneously. Direct illumination is modeled with spherical Gaussian functions, which compress the number of parameters to optimize and make convergence easier; indirect light relies on the properties of the neural radiance field, with visibility and indirect illumination modeled by ray tracing.
Determining the positions of the surface points from the obtained depth map, and for each surface point obtaining the final rendered image with a physically based rendering method. Specifically: the spatial positions of the surface points are obtained by sampling along the rays with the depth information, and for each surface point the final rendered image is obtained with a physically based rendering method that feeds geometry, material, visibility and illumination into a microfacet model.
Using the target video as supervision, simultaneously constraining the rendered images obtained by volume rendering and by the physically based rendering method in the above steps, and learning the model parameters by minimizing the constraints. The main constraint is the rendering loss supervised by the target video, supplemented by a material smoothness loss and geometric constraints.
At relighting time, a new environment light map replaces the direct illumination in the illumination model, and the physically based rendering method synthesizes the dynamic human relighting video under the new illumination.
The effects described in this summary are only those of the embodiments, not all effects of the invention; the above technical solution has the following advantages or beneficial effects:
The invention provides a human body relighting method based on a dynamic surface reflectance field. An efficient 4D implicit representation is designed to model the human body surface reflectance field, overcoming the large fitting errors and low freedom of motion inherent in template-based methods and achieving accurate estimation of the dynamic human surface reflectance field. In the illumination modeling, visibility and indirect light are introduced through ray tracing, accurately simulating secondary-bounce shading and yielding more accurate material estimation and relighting effects.
Drawings
FIG. 1 is a flow chart of the human body relighting method based on a dynamic surface reflectance field.
Detailed Description
As shown in FIG. 1, a human body relighting method based on a dynamic surface reflectance field comprises the following steps:
S1, decomposing the 4D space with multi-plane and hash representations, and encoding the input multi-view dynamic human body video with the space-time multi-plane representation to obtain a compact space-time position code;
S2, inputting the space-time position codes into a geometry network to obtain the signed distance function values and geometric features of the ray sampling points;
S3, inputting the geometric features and space-time position codes of the ray sampling points into a color network to obtain their color values;
S4, applying volume rendering to the ray sampling points to obtain the depth, normal, color and material of the corresponding pixels, and thereby the depth map, normal map, color map and material map of the dynamic human body;
S5, modeling direct illumination with a spherical Gaussian function, and modeling light visibility and indirect illumination with ray tracing;
S6, determining the positions of the surface points from the obtained depth map, and obtaining the final rendered image for each surface point with a physically based rendering method;
S7, using the target video as supervision, simultaneously constraining the rendered images obtained by volume rendering and by the physically based rendering method, and learning the model parameters by minimizing the constraints;
S8, at relighting time, replacing the direct illumination with a new environment light map to obtain the dynamic human relighting video.
In step S1, the 4D space is decomposed into a compact multi-plane feature encoder and a time-aware hash encoder. During modeling, rays are cast from the camera center through the imaging plane, the rays in the 4D space are sampled with a fixed number of points each, and every point is space-time encoded with the two encoders obtained above. For each sampling point $\mathbf{x} = (x, y, z, t)$ in space-time, the encoding is defined as
$$\mathbf{e}(\mathbf{x}) = \mathcal{M}(\mathbf{x}) \oplus \mathcal{H}(\mathbf{x})$$
where $\mathcal{M}$ denotes the multi-plane feature encoder, whose per-point feature is the Hadamard product $\odot$ of features interpolated from the low-dimensional tensors decomposed from the 4D tensor, $\mathcal{H}$ denotes the time-aware hash encoder, and $\oplus$ denotes the concatenation operation.
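As a concrete illustration of this decomposition, the following PyTorch sketch shows one way such a multi-plane feature encoder could be assembled; the plane resolution, feature dimension, axis pairing and initialization are illustrative assumptions, not values disclosed in the patent.

```python
import torch
import torch.nn.functional as F

class MultiPlaneEncoder(torch.nn.Module):
    """Sketch of a multi-plane feature encoder: the 4D (x, y, z, t) volume is
    decomposed into 2D feature planes; a point's feature is the Hadamard
    product of the features bilinearly interpolated from each plane."""
    def __init__(self, res=128, dim=32):
        super().__init__()
        # axis pair per plane: three spatial planes + three space-time planes
        self.pairs = [(0, 1), (0, 2), (1, 2), (0, 3), (1, 3), (2, 3)]
        self.grids = torch.nn.ParameterList(
            [torch.nn.Parameter(0.1 * torch.randn(1, dim, res, res))
             for _ in self.pairs])

    def forward(self, x):                        # x: (N, 4), coords in [-1, 1]
        feat = None
        for (a, b), grid in zip(self.pairs, self.grids):
            uv = x[:, [a, b]].view(1, -1, 1, 2)                # (1, N, 1, 2)
            f = F.grid_sample(grid, uv, align_corners=True)    # (1, dim, N, 1)
            f = f[0, :, :, 0].t()                              # (N, dim)
            feat = f if feat is None else feat * f             # Hadamard product
        return feat

# The full space-time code concatenates this with a time-aware hash feature,
# e.g. e = torch.cat([plane_enc(x), hash_enc(x)], dim=-1), where hash_enc is
# a stand-in for an Instant-NGP-style multi-resolution hash table.
```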
In step S2, the space-time position codes of the ray sampling points are fed into a small multi-layer perceptron, and the signed distance function values and geometric features of the corresponding sampling points are obtained by fitting the rendering loss. The process can be expressed as $(s, \mathbf{f}) = F_{geo}(\mathbf{e}(\mathbf{x}))$, where $F_{geo}$ is the geometry network, $s$ is the signed distance function value, and $\mathbf{f}$ is the geometric feature.
In step S3, the space-time position codes of the sampling points are concatenated with the geometric features and fed into a small multi-layer perceptron, and the color values of the corresponding ray sampling points are obtained by fitting the rendering loss. The process can be expressed as $\mathbf{c} = F_{col}(\mathbf{e}(\mathbf{x}) \oplus \mathbf{f})$, where $F_{col}$ is the color network and $\mathbf{c}$ is the color value of the sampling point.
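A minimal sketch of the geometry and color networks in steps S2 and S3 follows; the layer widths, activations and feature dimension are illustrative assumptions.

```python
import torch

class GeometryNetwork(torch.nn.Module):
    """Small MLP mapping a space-time position encoding to a signed distance
    value and a geometric feature vector."""
    def __init__(self, enc_dim=48, hidden=64, feat_dim=15):
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Linear(enc_dim, hidden), torch.nn.Softplus(beta=100),
            torch.nn.Linear(hidden, 1 + feat_dim))

    def forward(self, enc):                   # enc: (N, enc_dim)
        out = self.net(enc)
        return out[:, :1], out[:, 1:]         # SDF value s, geometric feature f

class ColorNetwork(torch.nn.Module):
    """Small MLP mapping the concatenated encoding + geometric feature to RGB."""
    def __init__(self, enc_dim=48, feat_dim=15, hidden=64):
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Linear(enc_dim + feat_dim, hidden), torch.nn.ReLU(),
            torch.nn.Linear(hidden, 3), torch.nn.Sigmoid())

    def forward(self, enc, feat):
        return self.net(torch.cat([enc, feat], dim=-1))
```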
In step S4, the density, normal, color and material of the sampling points on each ray are integrated with the volume rendering technique to obtain the depth map, normal map, color map and material map of the dynamic human body. Taking the color map as an example, the process can be expressed as
$$\hat{C}(\mathbf{r}) = \int_{t_n}^{t_f} T(t)\,\sigma(\mathbf{r}(t))\,\mathbf{c}(\mathbf{r}(t))\,dt, \qquad T(t) = \exp\!\Big(-\int_{t_n}^{t} \sigma(\mathbf{r}(u))\,du\Big)$$
where $\mathbf{r}(t) = \mathbf{o} + t\mathbf{d}$ is the ray cast from the camera center $\mathbf{o}$ in direction $\mathbf{d}$, $T$ denotes the transmittance, $\sigma$ the volume density, $\mathbf{c}$ the sampling-point color, and $\hat{C}$ the volume-rendered pixel color.
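The discrete quadrature behind this integral can be sketched as follows; it is the standard volume-rendering approximation, and the same weights integrate depth, normals and material by substituting those quantities for the color.

```python
import torch

def volume_render(sigma, values, deltas):
    """Discrete volume rendering: integrate per-sample quantities along rays.
    sigma: (R, N) densities; values: (R, N, C) per-sample color/normal/etc.;
    deltas: (R, N) distances between consecutive samples."""
    alpha = 1.0 - torch.exp(-sigma * deltas)             # per-sample opacity
    trans = torch.cumprod(
        torch.cat([torch.ones_like(alpha[:, :1]),        # T_1 = 1
                   1.0 - alpha + 1e-10], dim=-1), dim=-1)[:, :-1]
    weights = alpha * trans                              # contribution of sample i
    return (weights.unsqueeze(-1) * values).sum(dim=1)   # (R, C) pixel values
```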
In step S5, the method estimates direct and indirect illumination simultaneously. Direct illumination is modeled with spherical Gaussian functions, which compress the number of parameters to optimize and make convergence easier; indirect light relies on the properties of the neural radiance field, and visibility and indirect illumination are obtained by ray tracing.
Direct illumination $L_{dir}$ can be expressed as
$$L_{dir}(\boldsymbol{\omega}_i) = \sum_{k=1}^{K} G(\boldsymbol{\omega}_i;\, \boldsymbol{\xi}_k, \lambda_k, \boldsymbol{\mu}_k)$$
where $G$ denotes the mixture of spherical Gaussians, $(\boldsymbol{\xi}_k, \lambda_k, \boldsymbol{\mu}_k)$ are the optimized parameters of lobe $k$, $K$ is the total number of lobes, and $\boldsymbol{\omega}_i$ is the incident direction of the light.
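A sketch of evaluating such a mixture for a batch of incident directions follows, using the standard spherical Gaussian kernel $G(\boldsymbol{\omega}; \boldsymbol{\xi}, \lambda, \boldsymbol{\mu}) = \boldsymbol{\mu}\, e^{\lambda(\boldsymbol{\omega} \cdot \boldsymbol{\xi} - 1)}$; treating all lobe parameters as learnable tensors is an assumption consistent with the text.

```python
import torch
import torch.nn.functional as F

def eval_sg_light(omega_i, xi, lam, mu):
    """Evaluate a mixture of K spherical Gaussians as the direct light.
    omega_i: (N, 3) unit incident directions; xi: (K, 3) lobe axes;
    lam: (K,) lobe sharpness; mu: (K, 3) lobe amplitudes (RGB)."""
    axes = F.normalize(xi, dim=-1)                  # unit lobe axes
    cos = omega_i @ axes.t()                        # (N, K) cosine to each axis
    g = torch.exp(lam.unsqueeze(0) * (cos - 1.0))   # SG kernel per lobe
    return g @ mu                                   # (N, 3) incident radiance
```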
Indirect light relies on the properties of the neural radiance field; the visibility $V$ and the indirect illumination $L_{ind}$ are obtained by ray tracing, specifically
$$V(\mathbf{x}_s, \boldsymbol{\omega}_i) = T_N, \qquad L_{ind}(\mathbf{x}_s, \boldsymbol{\omega}_i) = \hat{C}(\mathbf{r}_s)$$
where $\mathbf{x}_s$ is the position of the surface point at time $t$, $\hat{C}$ is the pixel color obtained by volume rendering along the secondary ray, $T_i$ is the transmittance at the $i$-th sampling point, and the ray emitted from the surface point in direction $\boldsymbol{\omega}_i$ is $\mathbf{r}_s(\tau) = \mathbf{x}_s + \tau\,\boldsymbol{\omega}_i$. In actual sampling, $N$ (= 512) points are taken in a discrete sampling manner, with $\delta_i$ the sampling interval of the $i$-th sampling point.
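A sketch of this secondary-ray query follows; sigma_fn and color_fn stand in for the learned field queries, and the scene bound t_max is an assumption (the text fixes only N = 512).

```python
import torch

def trace_secondary(x_s, omega_i, sigma_fn, color_fn, n=512, t_max=2.0):
    """Trace r(tau) = x_s + tau * omega_i from a surface point and reuse the
    radiance field: terminal transmittance gives the light visibility, and the
    volume-rendered color gives the incident indirect illumination."""
    tau = torch.linspace(1e-3, t_max, n)                   # sample depths
    pts = x_s.unsqueeze(0) + tau.unsqueeze(-1) * omega_i   # (n, 3) sample points
    sigma = sigma_fn(pts)                                  # (n,) densities
    alpha = 1.0 - torch.exp(-sigma * (tau[1] - tau[0]))    # uniform interval
    after = torch.cumprod(1.0 - alpha + 1e-10, dim=0)      # T after sample i
    before = torch.cat([torch.ones(1), after[:-1]])        # T before sample i
    visibility = after[-1]                                 # light reaching x_s
    indirect = ((alpha * before).unsqueeze(-1) * color_fn(pts)).sum(0)
    return visibility, indirect                            # scalar, (3,)
```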
In step S6, the spatial positions of the surface points are obtained by sampling along the rays with the depth information; for each surface point, the final rendered image is obtained with a physically based rendering method that feeds geometry, material, visibility and illumination into a microfacet model. The physically based rendering formula is
$$L_o(\boldsymbol{\omega}_o, \mathbf{x}_s) = \int_{\Omega} L_i(\boldsymbol{\omega}_i, \mathbf{x}_s)\, f_r(\boldsymbol{\omega}_i, \boldsymbol{\omega}_o, \mathbf{x}_s)\, (\boldsymbol{\omega}_i \cdot \mathbf{n})\, d\boldsymbol{\omega}_i$$
where $\mathbf{n}$ is the normal, $L_i$ is the incident radiance received from direction $\boldsymbol{\omega}_i$, $\boldsymbol{\omega}_o$ is the outgoing direction, and $f_r$ is the surface material (BRDF).
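The following sketch integrates this rendering equation numerically at one surface point with a reduced microfacet BRDF (GGX normal distribution only; the Fresnel and shadowing terms are dropped for brevity). The patent names only "a microfacet model", so this specific BRDF is an illustrative stand-in.

```python
import torch
import torch.nn.functional as F

def pbr_shade(n, omega_o, albedo, rough, dirs, L_i, d_omega):
    """Numerical sum over sampled incident directions.
    n, omega_o: (3,) normal and outgoing direction; albedo: (3,); rough: scalar;
    dirs: (M, 3) sampled incident directions; L_i: (M, 3) incident radiance
    (visibility-weighted direct light plus indirect light); d_omega: solid-angle
    weight per sample."""
    n = F.normalize(n, dim=-1)
    omega_o = F.normalize(omega_o, dim=-1)
    h = F.normalize(dirs + omega_o, dim=-1)           # half vectors
    cos_i = (dirs @ n).clamp(min=0.0)                 # (M,) incident cosines
    a2 = (rough ** 2) ** 2
    nh2 = (h @ n).clamp(min=0.0) ** 2
    D = a2 / (torch.pi * (nh2 * (a2 - 1.0) + 1.0) ** 2)   # GGX distribution
    f_r = albedo / torch.pi + 0.04 * D.unsqueeze(-1)      # diffuse + specular sketch
    return (f_r * L_i * cos_i.unsqueeze(-1)).sum(0) * d_omega   # outgoing radiance
```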
In step S7, with the target video as supervision, the rendered images obtained by volume rendering and by the physically based rendering method in the above steps are constrained simultaneously. The main constraint is the rendering loss supervised by the target video, supplemented by a material smoothness loss and geometric constraints; the model parameters are learned by minimizing these constraints.
The principal constraint loss $\mathcal{L}_c$ is defined as
$$\mathcal{L}_c = \big\lVert \hat{C}_{vol} - C_{gt} \big\rVert + \big\lVert \hat{C}_{pbr} - C_{gt} \big\rVert$$
where $\hat{C}_{vol}$ is the color obtained by volume rendering, $\hat{C}_{pbr}$ is the color from physically based rendering, and $C_{gt}$ is the ground-truth color used for supervision.
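A sketch of the combined training objective follows; the loss weights and the concrete forms of the smoothness and geometric terms are assumptions, since the text specifies only the rendering loss explicitly.

```python
import torch

def total_loss(c_vol, c_pbr, c_gt, material, normals, w_smooth=0.01, w_geo=0.01):
    """c_vol/c_pbr/c_gt: (R, 3) volume-rendered, PBR and ground-truth colors;
    material: (H, W, C) material map; normals: (R, 3) rendered normals."""
    l_render = (c_vol - c_gt).abs().mean() + (c_pbr - c_gt).abs().mean()
    l_smooth = (material[:, 1:] - material[:, :-1]).abs().mean()  # horizontal TV
    l_geo = (normals.norm(dim=-1) - 1.0).abs().mean()             # unit-normal prior
    return l_render + w_smooth * l_smooth + w_geo * l_geo
```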
In step S8, after modeling is complete, relighting only requires replacing the direct illumination with a new environment light map to obtain the dynamic human relighting video, as sketched below.
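The sketch reuses pbr_shade from the example above; env_radiance_fn and the cached per-point buffers are hypothetical names introduced for illustration.

```python
def relight(surface, env_radiance_fn, dirs, d_omega):
    """Re-shade a surface point under a new environment map: the frozen
    geometry, material, visibility and indirect light are reused, and only the
    direct light is replaced by env_radiance_fn (a lookup into the new
    environment light map for each incident direction).
    surface: dict of cached per-point tensors (visibility: (M,), indirect: (M, 3),
    normal/omega_o/albedo: (3,), rough: scalar); dirs: (M, 3) directions."""
    L_i = (env_radiance_fn(dirs) * surface["visibility"].unsqueeze(-1)
           + surface["indirect"])                       # (M, 3) incident radiance
    return pbr_shade(surface["normal"], surface["omega_o"], surface["albedo"],
                     surface["rough"], dirs, L_i, d_omega)
```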
While the embodiments of the invention have been described above in conjunction with the drawings, this is not intended to limit the scope of the invention; all modifications or variations within the scope defined by the claims of the invention are intended to be covered.

Claims (7)

1. A human body relighting method based on a dynamic surface reflectance field, characterized by comprising the following steps:
S1, decomposing the 4D space tensor into low-dimensional tensors and space-time encoding them with a multi-plane feature encoder and a time-aware hash encoder; encoding the input multi-view dynamic human body video with the two encoders to obtain space-time position codes;
S2, inputting the space-time position codes into a geometry network to obtain the signed distance function values and geometric features of the ray sampling points;
S3, inputting the geometric features and space-time position codes of the ray sampling points into a color network to obtain the color values of the ray sampling points;
S4, applying the volume rendering technique to the ray sampling points to obtain the depth, normal, color and material of the corresponding pixels, and thereby the depth map, normal map, color map and material map of the dynamic human body;
S5, modeling direct illumination by using a spherical Gaussian function, and modeling light visibility and indirect illumination by using ray tracing;
S6, determining the positions of the surface points with the depth map obtained in S4, and obtaining a rendered image for each surface point with a physically based rendering method;
The physically based rendering formula is as follows:
$$L_o(\boldsymbol{\omega}_o, \mathbf{x}_s) = \int_{\Omega} L_i(\boldsymbol{\omega}_i, \mathbf{x}_s)\, f_r(\boldsymbol{\omega}_i, \boldsymbol{\omega}_o, \mathbf{x}_s)\, (\boldsymbol{\omega}_i \cdot \mathbf{n})\, d\boldsymbol{\omega}_i$$
wherein $\mathbf{n}$ is the normal, $L_i$ is the incident radiance received from direction $\boldsymbol{\omega}_i$, $\boldsymbol{\omega}_o$ is the outgoing direction, and $f_r$ is the surface material;
S7, using the target video as supervision, simultaneously computing the rendering losses between the target video and the images rendered by volume rendering in S4 and by the physically based rendering method in S6, wherein the total loss comprises the rendering loss, a material smoothness loss and a geometric loss, and the model parameters are obtained by minimizing the total loss;
S8, at relighting time, replacing the direct illumination with an environment light map, and synthesizing the dynamic human body video under the new illumination with the physically based rendering method.
2. The human body relighting method based on a dynamic surface reflectance field according to claim 1, wherein S1 specifically is: decomposing the 4D space tensor into low-dimensional tensors and encoding them with a multi-plane feature encoder and a time-aware hash encoder; during modeling, rays are cast from the camera center to the imaging plane, each ray in the 4D space is sampled, and each ray sampling point is space-time position encoded with the multi-plane feature encoder and the hash encoder.
3. The human body relighting method based on a dynamic surface reflectance field according to claim 1, wherein S2 specifically is: inputting the space-time position codes of the ray sampling points into a multi-layer perceptron, and obtaining the signed distance function values and geometric features of the corresponding ray sampling points by fitting the rendering loss.
4. The human body relighting method based on a dynamic surface reflectance field according to claim 1, wherein S3 specifically is: concatenating the space-time position codes of the ray sampling points with the geometric features, inputting them into a multi-layer perceptron, and obtaining the color values of the corresponding ray sampling points by fitting the rendering loss.
5. The human body relighting method based on a dynamic surface reflectance field according to claim 1, wherein S4 specifically is: integrating the density, normal, color and material of the sampling points on each ray with the volume rendering technique to obtain the depth map, normal map, color map and material map of the dynamic human body.
6. The human body relighting method based on a dynamic surface reflectance field according to claim 1, wherein S5 specifically is: modeling direct illumination with a spherical Gaussian function, which compresses the parameters to optimize and eases convergence; indirect light relies on the properties of the neural radiance field, and visibility and indirect illumination are obtained by ray tracing.
7. The human body relighting method based on a dynamic surface reflectance field according to claim 1, wherein S6 specifically is: obtaining the spatial positions of the surface points by sampling along the rays with the depth map information, and for each surface point obtaining the final rendered image with a physically based rendering method that feeds geometry, material, visibility and illumination into a microfacet model.
CN202410353427.3A 2024-03-27 2024-03-27 Human body relighting method based on a dynamic surface reflectance field Active CN117953137B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410353427.3A CN117953137B (en) 2024-03-27 2024-03-27 Human body relighting method based on a dynamic surface reflectance field

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202410353427.3A CN117953137B (en) 2024-03-27 2024-03-27 Human body relighting method based on a dynamic surface reflectance field

Publications (2)

Publication Number Publication Date
CN117953137A CN117953137A (en) 2024-04-30
CN117953137B (en) 2024-06-14

Family

ID=90796628

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410353427.3A Active CN117953137B (en) 2024-03-27 2024-03-27 Human body relighting method based on a dynamic surface reflectance field

Country Status (1)

Country Link
CN (1) CN117953137B (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112927341A (en) * 2021-04-02 2021-06-08 腾讯科技(深圳)有限公司 Illumination rendering method and device, computer equipment and storage medium
CN113240622A (en) * 2021-03-12 2021-08-10 清华大学 Human body scene image intrinsic decomposition and relighting method and device

Family Cites Families (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2021223134A1 (en) * 2020-05-07 2021-11-11 浙江大学 Micro-renderer-based method for acquiring reflection material of human face from single image
CN112183637B (en) * 2020-09-29 2024-04-09 中科方寸知微(南京)科技有限公司 Single-light-source scene illumination re-rendering method and system based on neural network
US20220335636A1 (en) * 2021-04-15 2022-10-20 Adobe Inc. Scene reconstruction using geometry and reflectance volume representation of scene
US20240212106A1 (en) * 2021-04-28 2024-06-27 Google Llc Photo Relighting and Background Replacement Based on Machine Learning Models
CN114092625B (en) * 2021-11-19 2024-05-10 山东大学 Real-time multi-scale high-frequency material rendering method and system based on normal map
CN114429538B (en) * 2022-04-02 2022-07-12 中科计算技术创新研究院 Method for interactively editing nerve radiation field geometry
CN115131492A (en) * 2022-04-12 2022-09-30 腾讯科技(深圳)有限公司 Target object relighting method and device, storage medium and background replacement method
CN114972617B (en) * 2022-06-22 2023-04-07 北京大学 Scene illumination and reflection modeling method based on conductive rendering
CN115719399A (en) * 2022-09-30 2023-02-28 中国人民解放军国防科技大学 Object illumination editing method, system and medium based on single picture
CN116310018A (en) * 2022-12-07 2023-06-23 西北大学 Model hybrid rendering method based on virtual illumination environment and light query
CN116051696B (en) * 2023-01-10 2023-12-22 之江实验室 Reconstruction method and device of human body implicit model capable of being re-illuminated
CN116485994A (en) * 2023-03-08 2023-07-25 浙江大学 Scene reverse drawing method and device based on neural implicit expression
CN116934948A (en) * 2023-06-15 2023-10-24 清华大学 Relighting three-dimensional digital person construction method and device based on multi-view video
CN116958396A (en) * 2023-07-18 2023-10-27 咪咕文化科技有限公司 Image relighting method and device and readable storage medium
CN116977536A (en) * 2023-08-14 2023-10-31 北京航空航天大学 Novel visual angle synthesis method for borderless scene based on mixed nerve radiation field
CN117237527A (en) * 2023-08-25 2023-12-15 上海人工智能创新中心 Multi-view three-dimensional reconstruction method
CN117671126A (en) * 2023-12-12 2024-03-08 四川大学 Space change indoor scene illumination estimation method based on nerve radiation field

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113240622A (en) * 2021-03-12 2021-08-10 清华大学 Human body scene image intrinsic decomposition and relighting method and device
CN112927341A (en) * 2021-04-02 2021-06-08 腾讯科技(深圳)有限公司 Illumination rendering method and device, computer equipment and storage medium

Also Published As

Publication number Publication date
CN117953137A (en) 2024-04-30

Similar Documents

Publication Publication Date Title
CN111783525B (en) Aerial photographic image target sample generation method based on style migration
US20220335636A1 (en) Scene reconstruction using geometry and reflectance volume representation of scene
CN114972617B (en) Scene illumination and reflection modeling method based on differentiable rendering
CN113572962B (en) Outdoor natural scene illumination estimation method and device
US11663775B2 (en) Generating physically-based material maps
Li et al. [Retracted] Multivisual Animation Character 3D Model Design Method Based on VR Technology
CN115115688B (en) Image processing method and electronic equipment
CN110533707A (en) Illuminant estimation
CN114255313B (en) Three-dimensional reconstruction method and device for mirror surface object, computer equipment and storage medium
KR102291162B1 (en) Apparatus and method for generating virtual data for artificial intelligence learning
CN116416375A (en) Three-dimensional reconstruction method and system based on deep learning
CN117557714A (en) Three-dimensional reconstruction method, electronic device and readable storage medium
CN117649478B (en) Model training method, image processing method and electronic equipment
CN115359163A (en) Three-dimensional model generation system, three-dimensional model generation method, and three-dimensional model generation device
CN116134491A (en) Multi-view neuro-human prediction using implicit differentiable renderers for facial expression, body posture morphology, and clothing performance capture
CN112634456B (en) Real-time high-realism drawing method of complex three-dimensional model based on deep learning
Mittal Neural radiance fields: Past, present, and future
CN117953137B (en) Human body re-illumination method based on dynamic surface reflection field
CN111311722B (en) Information processing method and device, electronic equipment and storage medium
CN117333609B (en) Image rendering method, network training method, device and medium
CN117173314B (en) Image processing method, device, equipment, medium and program product
Zhang et al. Survey on controllable image synthesis with deep learning
CN117252787B (en) Image re-illumination method, model training method, device, equipment and medium
CN117523024B (en) Binocular image generation method and system based on potential diffusion model
US20240119671A1 (en) Systems and methods for face asset creation and models from one or more images

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant