CN117934692A - SC-FEGAN depth model-based 3D scene self-adaptive mapping method - Google Patents

SC-FEGAN depth model-based 3D scene self-adaptive mapping method

Info

Publication number
CN117934692A
Authority
CN
China
Prior art keywords
mapping
scene
map
fegan
model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311839314.6A
Other languages
Chinese (zh)
Inventor
李滨
王谦
周纹纹
郭小晴
丁志彪
张珍珍
张鑫
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shandong Shunnet Media Co ltd
Original Assignee
Shandong Shunnet Media Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shandong Shunnet Media Co ltd
Priority to CN202311839314.6A
Publication of CN117934692A
Legal status: Pending

Landscapes

  • Processing Or Creating Images (AREA)

Abstract

The invention discloses a 3D scene self-adaptive mapping method based on the SC-FEGAN depth model, and relates to the technical field of image processing. The method comprises: collecting and preparing a dataset; training a deep learning model built on a convolutional neural network; extracting depth information of the scene; matching map materials; fusing the generated maps with the original scene; evaluating the generated maps in modeling software and producing a scene evaluation report; and applying and exporting the maps. Based on the artificial-intelligence depth model SC-FEGAN, the invention intelligently selects map size, map material, and map color, and realizes self-adaptive mapping according to the relevant technical parameters of the 3D scene. A deep learning algorithm performs intelligent map processing for each module in the 3D scene, including the selection of map size, material, and color, so that the texture mapping of the 3D scene is completed automatically.

Description

SC-FEGAN depth model-based 3D scene self-adaptive mapping method
Technical Field
The invention belongs to the technical field of image processing, and relates to a 3D scene self-adaptive mapping method based on an SC-FEGAN depth model.
Background
At present, in 3D scene production, map creation is complicated and combining maps with the scene is laborious, so the production cycle of the whole 3D scene is too long and labor costs are too high. To solve these problems and improve 3D scene production efficiency, the depth model SC-FEGAN is introduced: a depth algorithm performs intelligent analysis of the 3D scene, realizing automatic mapping of the scene and improving the production efficiency of the 3D scene.
Disclosure of Invention
To remedy the defects of the prior art, the invention adopts a 3D scene self-adaptive mapping technique based on the artificial-intelligence depth model SC-FEGAN, which realizes highly intelligent map processing and improves the realism and visual effect of the whole scene. The invention is realized by the following technical scheme: a 3D scene self-adaptive mapping method based on the SC-FEGAN depth model, comprising the following steps:
Step S1: collect and prepare a dataset.
Step S2: select a U-Net neural network generator as the deep learning model for the dataset of step S1. The model learns scene structures and map features and is trained on 3D scenes and their maps; the training data for maps in the 3D scene are vectorized, and comprise original scene information and the corresponding map information. A semi-supervised learning strategy takes the vectorized data as the training set, the model learns the features of the map data in the corresponding scenes, and training of the depth scene map generation model is completed, finally yielding the SC-FEGAN map generation model.
Step S3: extract the depth information of the scene.
Step S4: the SC-FEGAN map generation model completes map material matching.
Step S5: using the SC-FEGAN-based 3D scene map generation model trained in step S2, adjust parameters such as map transparency (alpha) blending and Poisson fusion to fuse the generated map with the original scene, ensuring consistency and naturalness between map and scene.
Step S6: manually refine the generated maps in modeling software, locate obvious flaws inconsistent with the 3D scene in contrast, detail, and fidelity, and repair problem maps by manually fine-tuning the alpha and texture-size parameters, guided by the problem-map information shown in the software's scene evaluation report.
Step S7: map application and export.
The beneficial effects of the invention are as follows:
Based on the artificial-intelligence depth model SC-FEGAN, the invention intelligently selects map size, map material, and map color, and realizes self-adaptive mapping according to the relevant technical parameters of the 3D scene. A deep learning algorithm performs intelligent map processing for each module in the 3D scene, including the selection of map size, material, and color, so that the texture mapping of the 3D scene is completed automatically.
3D scene self-adaptive mapping is carried out with the artificial-intelligence depth model SC-FEGAN:
(1) A deep learning algorithm analyzes and understands the various modules within the 3D scene. Based on the depth information of the scene, the map size, map material, and map color are selected intelligently.
(2) The SC-FEGAN map generation model produces high-quality maps. The color and texture details of each map are adjusted so that it coordinates with the illumination and details of the scene.
Drawings
The invention is further described below with reference to the accompanying drawings.
Fig. 1 is a flowchart of the 3D scene adaptive mapping method of the present invention.
Detailed Description
In order that the above objects, features, and advantages of the invention may be readily understood, a more particular description of the invention is given below with reference to the accompanying drawings. In the following description, numerous specific details are set forth to provide a thorough understanding of the invention. The invention may, however, be embodied in many forms other than those described herein, and those skilled in the art can make similar modifications without departing from its spirit; the invention is therefore not limited to the specific embodiments disclosed below.
The accompanying drawing illustrates a specific embodiment of the SC-FEGAN depth-model-based 3D scene self-adaptive mapping method. This embodiment comprises the following steps.
Step S1: collect and prepare a dataset.
Step S2: select a U-Net neural network generator as the deep learning model for the dataset of step S1. The model learns scene structures and map features and is trained on 3D scenes and their maps; the training data for maps in the 3D scene are vectorized, and comprise original scene information and the corresponding map information. A semi-supervised learning strategy takes the vectorized data as the training set, the model learns the features of the map data in the corresponding scenes, and training of the depth scene map generation model is completed, finally yielding the SC-FEGAN map generation model, which is capable of autonomously generating maps from scene data and of reading scene depth information.
Step S3: extract the depth information of the scene.
Step S4: the SC-FEGAN map generation model completes map material matching.
Step S5: using the SC-FEGAN-based map generation model trained in step S2, adjust parameters such as transparency (alpha) blending and Poisson fusion of the generated map, and fuse the generated map with the original scene to ensure consistency and naturalness between map and scene; the SC-FEGAN map generation model is the model that autonomously generates the 3D scene maps.
Step S6: manually refine the generated maps in modeling software (3ds Max, Maya, and the like), locate obvious flaws inconsistent with the 3D scene in contrast, detail, and fidelity, and repair problem maps by manually fine-tuning the alpha and texture-size parameters, guided by the problem-map information shown in the software's scene evaluation report.
Step S7: map application and export.
In step S1, a dataset comprising various types of 3D scenes and corresponding maps is collected; the dataset covers multiple indoor and outdoor scenes under different illumination conditions. For each scene, an original scene image and a corresponding map image are prepared, where the original scene image is the scene image before any map has been applied.
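Purely as an illustration of how the training of step S2 could look in practice, the following minimal PyTorch sketch trains a small U-Net-style generator on paired scene/map images. The architecture, the plain L1 loss, and every hyperparameter here are simplifying assumptions, not the actual SC-FEGAN configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyUNet(nn.Module):
    """Minimal U-Net-style generator: one downsampling stage, one
    upsampling stage, and a single skip connection."""
    def __init__(self, in_ch=3, out_ch=3):
        super().__init__()
        self.enc = nn.Sequential(nn.Conv2d(in_ch, 32, 3, padding=1), nn.ReLU(),
                                 nn.Conv2d(32, 32, 3, padding=1), nn.ReLU())
        self.down = nn.Conv2d(32, 64, 3, stride=2, padding=1)
        self.up = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec = nn.Sequential(nn.Conv2d(64, 32, 3, padding=1), nn.ReLU(),
                                 nn.Conv2d(32, out_ch, 3, padding=1))

    def forward(self, x):
        e = self.enc(x)                       # skip-connection features
        d = self.up(F.relu(self.down(e)))     # down- then up-sample
        return torch.sigmoid(self.dec(torch.cat([e, d], dim=1)))

def train_step(model, opt, scene, target_map):
    """One supervised step: scene image in, map out. The L1
    reconstruction loss stands in for the full SC-FEGAN losses."""
    opt.zero_grad()
    loss = F.l1_loss(model(scene), target_map)
    loss.backward()
    opt.step()
    return loss.item()

model = TinyUNet()
opt = torch.optim.Adam(model.parameters(), lr=2e-4)
scene = torch.rand(4, 3, 128, 128)    # stand-in batch of scene renders
target = torch.rand(4, 3, 128, 128)   # corresponding ground-truth maps
print(train_step(model, opt, scene, target))
```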
In step S3, the trained convolutional neural network model, namely the scene map generation model obtained in step S2, performs forward inference on the input 3D scene image to extract the depth information of the scene, and the scene depth information is corrected with a mean-squared-error loss function.
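A hedged sketch of this inference step, assuming a trained network `depth_net` and a reference depth map (for example, a renderer z-buffer) are available; the MSE term below stands in for the correction described above.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def infer_depth(depth_net, scene_img):
    """Forward inference only: scene image (1, 3, H, W) -> depth (1, 1, H, W)."""
    depth_net.eval()
    return depth_net(scene_img)

def depth_correction_loss(pred_depth, ref_depth):
    # Mean-squared-error term used to correct the estimated scene
    # depth against the available reference, as described in step S3.
    return F.mse_loss(pred_depth, ref_depth)
```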
In step S4, the scene position parameters, block sizes, and ambient-light information contained in the scene depth information obtained in step S3 are input into the SC-FEGAN map generation model, and the U-Net neural network generator inside the model automatically generates the texture maps. After the maps are generated, the noise-vector size and map layer-count parameters of the SC-FEGAN map generation model are adjusted manually to further control the quality of the generated material: the smaller the noise vector, the finer the generated material, at the cost of generation efficiency; likewise, the larger the layer count, the better the map fits the scene, again at the cost of generation efficiency. After adjustment, the SC-FEGAN map generation model completes the matching and merging of maps and scene.
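The parameter names below are illustrative assumptions rather than fields defined by the patent; they only make the stated trade-offs concrete:

```python
# Hypothetical generation controls for step S4 (names are assumptions).
gen_params = {
    "noise_dim": 64,   # smaller -> finer map material, lower throughput
    "num_layers": 5,   # larger -> closer map/scene fit, lower throughput
}
```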
Step S4-1: selecting map resolution and generating material.
According to the scene depth information, and using the correspondence between scene depth and map resolution extracted from the training set in step S2, the U-Net neural network generator determines the map resolution for each part of the scene so that the map matches the scene: foreground objects and detail regions require higher-resolution maps, while background regions can use lower-resolution ones. Similarly, using the scene depth parameters extracted from the training set in step S2, including the features of object size and material that correspond to map patterns, the U-Net neural network generator produces maps as RGB channel images. The method thus automatically matches map resolutions across regions and automatically generates map patterns.
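A simple heuristic sketch of the resolution choice: in the method itself this correspondence is learned from the training set, so the depth thresholds and resolution ladder below are placeholders.

```python
def pick_resolution(mean_depth, near=1.0, far=50.0,
                    levels=(2048, 1024, 512, 256)):
    """Nearer (foreground) scene blocks get higher-resolution maps;
    distant (background) blocks get lower-resolution ones."""
    t = min(max((mean_depth - near) / (far - near), 0.0), 1.0)
    return levels[min(int(t * len(levels)), len(levels) - 1)]

print(pick_resolution(2.0))   # foreground block -> 2048
print(pick_resolution(45.0))  # background block -> 256
```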
Step S4-2: map color processing.
According to the illumination conditions and color characteristics of the scene, and using the correspondences extracted from the training set in step S2 between the scene depth parameters, including the number of light sources and the brightness of light and shadow, and the map colors, the U-Net neural network generator in the SC-FEGAN map generation model completes the color processing of the maps generated in step S4-1, chiefly automatic adaptation and adjustment of the color balance, brightness, and contrast parameters, to ensure that the map colors harmonize with the scene.
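The named adjustments can be sketched with Pillow as below; the enhancement factors are assumed to come from the scene's lighting analysis, and a factor of 1.0 leaves the map unchanged.

```python
from PIL import Image, ImageEnhance

def harmonize_colors(map_path, brightness=1.0, contrast=1.0, color=1.0):
    """Apply the brightness / contrast / color adjustments named in
    step S4-2 to a generated map."""
    img = Image.open(map_path).convert("RGB")
    img = ImageEnhance.Brightness(img).enhance(brightness)
    img = ImageEnhance.Contrast(img).enhance(contrast)
    img = ImageEnhance.Color(img).enhance(color)  # crude color-balance proxy
    return img
```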
Step S4-3: adjusting map texture detail.
The SC-FEGAN map generation model performs texture synthesis and enhancement on the maps to meet the texture-detail requirements of the scene. SC-FEGAN synthesizes maps with a generative adversarial network, and the sharpness, detail, and related properties of the synthesized textures can be tuned through the corresponding technical parameters.
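Step S5's fusion stage can be sketched with OpenCV by combining alpha blending with Poisson blending via cv2.seamlessClone; the mask, alpha value, and paste location below are illustrative assumptions.

```python
import cv2
import numpy as np

def fuse_map(scene_bgr, map_bgr, mask, alpha=0.8):
    """Alpha-blend the generated map into the scene image, then apply
    Poisson blending so the boundaries merge seamlessly. `mask` is a
    uint8 mask (same size as the images) marking the target region."""
    blended = cv2.addWeighted(map_bgr, alpha, scene_bgr, 1.0 - alpha, 0)
    ys, xs = np.nonzero(mask)
    center = (int(xs.mean()), int(ys.mean()))  # centroid of the region
    return cv2.seamlessClone(blended, scene_bgr, mask, center, cv2.NORMAL_CLONE)
```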
In step S7, the maps optimized and adjusted in step S6 are applied to the corresponding 3D scene, and map files for practical use, including texture maps and normal maps, are exported.
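A minimal export sketch: the texture map is written out directly, and a normal map is approximated from the texture's luminance gradients, a common fallback when no true height data exists; the actual pipeline may derive its normal maps differently.

```python
import cv2
import numpy as np

def export_maps(texture_bgr, out_prefix="material"):
    """Write the texture map and a Sobel-gradient normal map."""
    cv2.imwrite(out_prefix + "_diffuse.png", texture_bgr)
    gray = cv2.cvtColor(texture_bgr, cv2.COLOR_BGR2GRAY).astype(np.float32) / 255.0
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0, ksize=3)
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1, ksize=3)
    n = np.dstack((-gx, -gy, np.ones_like(gray)))   # (x, y, z) per pixel
    n /= np.linalg.norm(n, axis=2, keepdims=True)
    packed = ((n * 0.5 + 0.5) * 255).astype(np.uint8)  # [-1,1] -> [0,255]
    cv2.imwrite(out_prefix + "_normal.png", packed[..., ::-1])  # RGB -> BGR
    return out_prefix
```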
The method uses a convolutional neural network for scene analysis and deep learning, generates map materials through the SC-FEGAN map generation model, and, according to the depth information of the scene, adaptively selects map size and color and adjusts texture detail. The concrete settings and adjustments of the technical parameters and methods in the above steps are determined flexibly according to the requirements and conditions of the practical application.

Claims (6)

1. A 3D scene self-adaptive mapping method based on the SC-FEGAN depth model, characterized by comprising the following steps:
step S1: collecting and preparing a dataset;
step S2: selecting a U-Net neural network generator as the deep learning model for the dataset of step S1, mainly completing the training of the U-Net neural network generator within the SC-FEGAN 3D scene map generation model, wherein the model learns scene structures and map features and is trained on the 3D scene and its maps; the training data for maps in the 3D scene are vectorized and comprise original scene information and corresponding map information; a semi-supervised learning strategy takes the vectorized data as the training set, the model learns the features of the map data in the corresponding scenes, training of the depth scene map generation model is completed, and the SC-FEGAN map generation model is finally obtained;
step S3: extracting depth information of the scene;
step S4: the SC-FEGAN map generation model completing map material matching;
step S5: using the SC-FEGAN 3D scene map generation model trained in step S2, adjusting the transparency blending and Poisson fusion parameters of the generated map and fusing the generated map with the original scene, so as to ensure consistency and naturalness between map and scene;
step S6: manually refining the generated maps in modeling software, locating obvious flaws inconsistent with the 3D scene, including contrast, detail, and fidelity, and repairing the problem maps by manually fine-tuning the alpha and texture-size parameters, guided by the problem-map information shown in the scene evaluation report of the modeling software;
and step S7: map application and export.
2. The SC-FEGAN depth-model-based 3D scene self-adaptive mapping method according to claim 1, wherein in step S1 a dataset comprising various types of 3D scenes and corresponding maps is collected; the dataset comprises multiple indoor and outdoor scenes under different illumination conditions; and for each scene, an original scene image and a corresponding map image are prepared.
3. The SC-FEGAN depth-model-based 3D scene self-adaptive mapping method according to claim 1, wherein in step S3 the trained convolutional neural network model, namely the model generated in step S2, performs forward inference on the input 3D scene image to extract the depth information of the scene, and the scene depth information is corrected using a mean-squared-error loss function.
4. The SC-FEGAN depth-model-based 3D scene self-adaptive mapping method according to claim 1, wherein in step S4 the scene position parameters, block sizes, and ambient-light information contained in the scene depth information obtained in step S3 are input into the SC-FEGAN map generation model, which automatically generates texture maps from these parameters using the U-Net neural network generator; and after the maps are generated, the noise-vector size and map layer-count parameters of the SC-FEGAN map generation model are adjusted manually to further control the quality of the generated material, wherein the smaller the noise vector, the finer the generated material, and the larger the layer count, the better the map fits the scene.
5. The SC-FEGAN depth-model-based 3D scene self-adaptive mapping method according to claim 4, wherein step S4 comprises:
step S4-1: selecting map resolution and generating material;
according to the scene depth information, and using the correspondence between scene depth and map resolution extracted from the training set in step S2, the U-Net neural network generator determines the map resolution within the scene to ensure that the map matches the scene; similarly, using the scene depth parameters extracted from the training set in step S2, including the features of object size and material corresponding to map patterns, the U-Net neural network generator generates maps of the RGB channel image type;
step S4-2: map color processing;
according to the illumination conditions and color characteristics of the scene, and using the correspondences extracted from the training set in step S2 between the scene depth parameters, the number of light sources, the brightness of light and shadow, and the map colors, the U-Net neural network generator in the SC-FEGAN map generation model completes the color processing of the maps generated in step S4-1, including automatic adaptation and adjustment of the color balance, brightness, and contrast parameters, to ensure that the map colors harmonize with the scene;
step S4-3: adjusting map texture detail;
the SC-FEGAN map generation model performing texture synthesis and enhancement on the maps to meet the texture-detail requirements of the scene.
6. The SC-FEGAN depth-model-based 3D scene self-adaptive mapping method according to claim 1, wherein in step S7 the maps optimized and adjusted in step S6 are applied to the corresponding 3D scene, and map files usable in practical applications, including texture maps and normal maps, are exported.
CN202311839314.6A 2023-12-29 2023-12-29 SC-FEGAN depth model-based 3D scene self-adaptive mapping method Pending CN117934692A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311839314.6A CN117934692A (en) 2023-12-29 2023-12-29 SC-FEGAN depth model-based 3D scene self-adaptive mapping method


Publications (1)

Publication Number Publication Date
CN117934692A (en) 2024-04-26

Family

ID=90753157

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311839314.6A Pending CN117934692A (en) 2023-12-29 2023-12-29 SC-FEGAN depth model-based 3D scene self-adaptive mapping method

Country Status (1)

Country Link
CN (1) CN117934692A (en)

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109410310A (en) * 2018-10-30 2019-03-01 安徽虚空位面信息科技有限公司 A kind of real-time lighting Rendering algorithms based on deep learning network
CN112633103A (en) * 2020-12-15 2021-04-09 中国人民解放军海军工程大学 Image processing method and device and electronic equipment
US20210241495A1 (en) * 2018-08-23 2021-08-05 Sony Interactive Entertainment Inc. Method and system for reconstructing colour and depth information of a scene
CN114549726A (en) * 2022-01-19 2022-05-27 广东时谛智能科技有限公司 High-quality material chartlet obtaining method based on deep learning
CN114842121A (en) * 2022-06-30 2022-08-02 北京百度网讯科技有限公司 Method, device, equipment and medium for generating mapping model training and mapping
CN115311401A (en) * 2022-06-15 2022-11-08 网易(杭州)网络有限公司 Method and device for drawing model map and electronic equipment
CN116012841A (en) * 2022-12-19 2023-04-25 浙江大华技术股份有限公司 Open set image scene matching method and device based on deep learning
CN116433484A (en) * 2023-03-14 2023-07-14 华南理工大学 BRDF map splicing method based on U-Net
CN116612210A (en) * 2023-05-18 2023-08-18 广东时谛智能科技有限公司 Deep learning-based electro-embroidery material mapping generation method, device and medium


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
YOUNGJOO JO ET AL.: "SC-FEGAN: Face Editing Generative Adversarial Network with User's Sketch and Color", Computer Vision and Pattern Recognition, 18 February 2019 (2019-02-18) *


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination