CN114820901B - Large scene free viewpoint interpolation method based on neural network - Google Patents

Large scene free viewpoint interpolation method based on neural network

Info

Publication number
CN114820901B
Authority
CN
China
Prior art keywords
neural network
viewpoint
block
blocks
colors
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202210369108.2A
Other languages
Chinese (zh)
Other versions
CN114820901A (en)
Inventor
许威威 (Weiwei Xu)
吴秀超 (Xiuchao Wu)
许佳敏 (Jiamin Xu)
鲍虎军 (Hujun Bao)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang University ZJU
Original Assignee
Zhejiang University ZJU
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang University (ZJU)
Priority to CN202210369108.2A
Publication of CN114820901A
Application granted
Publication of CN114820901B
Legal status: Active


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 15/00 3D [Three Dimensional] image rendering
    • G06T 15/005 General purpose rendering architectures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 15/00 3D [Three Dimensional] image rendering
    • G06T 15/04 Texture mapping
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 15/00 3D [Three Dimensional] image rendering
    • G06T 15/06 Ray-tracing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T 17/005 Tree description, e.g. octree, quadtree
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 19/00 Manipulating 3D models or images for computer graphics
    • G06T 19/20 Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2219/00 Indexing scheme for manipulating 3D models or images for computer graphics
    • G06T 2219/20 Indexing scheme for editing of 3D models
    • G06T 2219/2012 Colour editing, changing, or manipulating; Use of colour codes
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T 10/00 Road transport of goods or passengers
    • Y02T 10/10 Internal combustion engine [ICE] based vehicles
    • Y02T 10/40 Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Graphics (AREA)
  • Software Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Evolutionary Computation (AREA)
  • Computational Linguistics (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Biophysics (AREA)
  • Data Mining & Analysis (AREA)
  • Mathematical Physics (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Architecture (AREA)
  • Computer Hardware Design (AREA)
  • Geometry (AREA)
  • Image Generation (AREA)

Abstract

The invention discloses a neural-network-based free viewpoint interpolation method for large scenes, comprising four steps: 1. photograph a given scene to collect pictures, then reconstruct a global mesh model and compute camera parameters from the collected pictures; 2. regularly divide the reconstructed global mesh model into a number of blocks and create two neural networks for each block, used to train the view-independent colors and the view-dependent colors respectively; 3. prepare the rays to be trained for each block, and train the per-block neural networks in parallel by introducing background point sampling; 4. encode the neural network trained for view-independent colors into an octree and retain the neural network trained for view-dependent colors, achieving interactive, high-quality free viewpoint interpolation rendering of large scenes. With the presented framework, high-quality interactive rendering of large scenes with neural networks is achieved at acceptable storage cost.

Description

Large scene free viewpoint interpolation method based on neural network
Technical Field
The invention relates to the field of computer vision, in particular to a method for performing new viewpoint interpolation by using a neural network.
Background
New viewpoint interpolation has a long history in visual computing. In recent years, image-based new viewpoint interpolation has advanced greatly with deep learning, chiefly by encoding a light field with a neural network and rendering it by volume rendering (see Mildenhall B, Srinivasan P P, Tancik M, et al. NeRF: Representing scenes as neural radiance fields for view synthesis [C]//European Conference on Computer Vision. Springer, Cham, 2020: 405-421.). Follow-up work has improved sampling efficiency, generalized the network model, reduced the amount of captured data, optimized camera parameters, and so on.
Current methods that encode the light field with a neural network achieve high-quality view interpolation and rendering and handle view-dependent color changes well. However, existing algorithms are mostly suited to interpolation and rendering of smaller scenes and objects, and generally target data with small camera baselines. For larger scenes, neural light-field encoding suffers from slow training and slow rendering; moreover, the capture density of large-scene data is usually lower than for small objects, which further increases training difficulty.
Disclosure of Invention
The invention provides a neural-network-based free viewpoint interpolation method for large scenes. It can encode a large scene from sparsely sampled data and render it at interactive speed.
To achieve this, the invention adopts the following technical scheme. The neural-network-based large-scene free viewpoint interpolation method comprises the following steps:
(1) Photograph and sample a given large scene, and reconstruct a global mesh model and compute camera parameters from the acquired pictures;
(2) Regularly divide the reconstructed global mesh model into a number of blocks, and create two neural networks for each block, used to express the view-independent colors and the view-dependent colors respectively;
(3) Prepare the rays to be trained for each block, and train the per-block neural networks in parallel by introducing background point sampling, specifically:
3.1) collect the training data required for each block;
3.2) introduce background point sampling to enable parallel training;
3.3) group the blocks and share the neural network parameters within each group;
3.4) train the intra-block neural networks in two stages;
(4) Encode the neural network trained for view-independent colors into an octree and retain the neural network trained for view-dependent colors, achieving interactive, high-quality free viewpoint interpolation rendering of large scenes.
The invention has the beneficial effects that:
1. The scene is expressed block by block; compared with expressing the whole scene with a single neural network, per-block networks fit local details more easily;
2. Background point sampling removes the coupling between blocks and enables each block to be trained independently;
3. Color is split into two layers of expression, a view-independent color and a view-dependent color; the view-independent color is sampled near the geometry and the view-dependent color is sampled behind the geometry, and this two-layer expression and sampling scheme aid training and optimization when the captured pictures are sparse;
4. The view-independent colors and volume densities of each block are discretized (voxelized) and encoded into an octree, and the view-dependent color network is embedded into a CUDA program, so that free viewpoint interpolation and interactive rendering are achieved at acceptable storage cost.
Drawings
FIG. 1 is a flow chart of the method of the present invention;
FIG. 2 is a schematic representation of the present invention for a scene representation;
FIG. 3 is a schematic diagram of background point sampling;
FIG. 4 is a schematic representation of the voxelization training of the present invention.
Fig. 5 is an interactive free view interpolation rendering illustration.
FIG. 6 is a diagram of the final rendering effect of the present invention.
Fig. 7 compares the rendering of the present invention with recent large-scene free viewpoint interpolation methods: Deep Blending (see Hedman P, Philip J, Price T, et al. Deep blending for free-viewpoint image-based rendering [J]. ACM Transactions on Graphics (TOG), 2018, 37(6): 1-15.), FVS (see Riegler G, Koltun V. Free view synthesis [C]//European Conference on Computer Vision. Springer, Cham, 2020: 623-640.) and SVS (see Riegler G, Koltun V. Stable view synthesis [C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2021: 12216-12225.).
Detailed Description
As shown in Fig. 1, the neural-network-based large-scene free viewpoint interpolation method comprises four steps: 1. photograph the given scene to collect pictures, then reconstruct a global mesh model and compute camera parameters from the collected pictures; 2. regularly divide the reconstructed global mesh model into a number of blocks and create two neural networks for each block, used to train the view-independent colors and the view-dependent colors respectively; 3. prepare the rays to be trained for each block and train the per-block neural networks in parallel by introducing background point sampling; 4. encode the neural network trained for view-independent colors into an octree and retain the neural network trained for view-dependent colors, achieving interactive, high-quality free viewpoint interpolation rendering of large scenes.
The present invention will be described in detail with reference to fig. 2-3.
The four steps of the invention will now be described in detail:
(1) Photograph and sample the given scene, and reconstruct a global mesh model and compute camera parameters from the acquired pictures. Specifically, reconstruction of the global mesh model and computation of the camera intrinsic and extrinsic parameters are performed from the captured pictures with the software CapturingReality (see Capturing Reality, http://capturingreality.com).
(2) Regularly divide the reconstructed global mesh model into a number of blocks and create two neural networks for each block, used to express the view-independent colors and the view-dependent colors respectively. Specifically: given a block size, partition the global mesh starting from the minimum corner of the global mesh model, discard blocks containing no mesh, and construct two neural networks F_s^Θ and F_r^Θ for each remaining block (Θ denotes the network parameters), expressing the view-independent color and the view-dependent color respectively.
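A minimal sketch of this regular partitioning step is given below. It assumes numpy and uses vertex occupancy as a stand-in for full triangle occupancy; the function name and block-size value are illustrative, not part of the patent.

```python
import numpy as np

def partition_into_blocks(vertices, block_size):
    """Regularly partition the mesh bounding volume into cubic blocks of a given
    size, starting from the minimum corner, keeping only blocks that actually
    contain mesh geometry (approximated here by vertex occupancy)."""
    min_corner = vertices.min(axis=0)
    # Integer block index of every vertex relative to the minimum corner.
    idx = np.floor((vertices - min_corner) / block_size).astype(np.int64)
    occupied = np.unique(idx, axis=0)                 # blocks containing geometry
    origins = min_corner + occupied * block_size      # world-space block origins
    return [(tuple(i), o, o + block_size) for i, o in zip(occupied, origins)]

# Usage sketch: blocks = partition_into_blocks(mesh_vertices, block_size=2.0)
# Two small MLPs (F_s and F_r) would then be created per surviving block.
```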
The neural network F_s^Θ is a multi-layer perceptron with depth 8 and width 128; its input is the three-dimensional position x of a sampling point near the geometric surface, and its output is the view-independent color c_s and volume density σ_s of this point: (c_s, σ_s) = F_s^Θ(x).
The neural network F_r^Θ is a multi-layer perceptron with depth 6 and width 64; its input is the three-dimensional position x of a sampling point behind the geometric surface, and its output is the spherical harmonic coefficients k_r and volume density σ_r of this point: (k_r, σ_r) = F_r^Θ(x).
Here l and m are the degree and order of the spherical harmonics, l_max is set to 2, and k_l^m is the spherical harmonic coefficient of degree l and order m, so that k_r = {k_l^m : 0 ≤ l ≤ l_max, -l ≤ m ≤ l}. The view-dependent color c_r is obtained from the spherical harmonic coefficients k_r and the ray direction ω: c_r = S( Σ_{l=0}^{l_max} Σ_{m=-l}^{l} k_l^m Y_l^m(ω) ).
Y_l^m(·) is the spherical harmonic basis function and S(·) is the sigmoid activation function. Both the view-independent color and the view-dependent color of a ray are obtained by the volume rendering formula (see Max N. Optical models for direct volume rendering [J]. IEEE Transactions on Visualization and Computer Graphics, 1995, 1(2): 99-108.). Combining the view-independent color and the view-dependent color yields the final rendered color c = max(min(c_s + c_r, 1), 0).
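The following PyTorch sketch illustrates the two per-block networks and the spherical harmonic evaluation described above. The exact layer layout, activations, and the omission of a positional encoding of x are assumptions for illustration; only the depths/widths, the l_max = 2 setting, and the final color combination come from the text.

```python
import torch
import torch.nn as nn

def mlp(depth, width, in_dim, out_dim):
    layers, d = [], in_dim
    for _ in range(depth - 1):
        layers += [nn.Linear(d, width), nn.ReLU()]
        d = width
    return nn.Sequential(*layers, nn.Linear(d, out_dim))

# Per-block networks (positional encoding of x omitted here for brevity).
F_s = mlp(depth=8, width=128, in_dim=3, out_dim=4)        # -> (c_s RGB, sigma_s)
F_r = mlp(depth=6, width=64,  in_dim=3, out_dim=3*9 + 1)  # -> (9 SH coeffs per channel, sigma_r)

def sh_basis(d):
    """Real spherical harmonic basis up to degree l_max = 2 (9 terms)."""
    x, y, z = d[..., 0], d[..., 1], d[..., 2]
    return torch.stack([
        0.282095 * torch.ones_like(x),
        0.488603 * y, 0.488603 * z, 0.488603 * x,
        1.092548 * x * y, 1.092548 * y * z,
        0.315392 * (3 * z * z - 1),
        1.092548 * x * z, 0.546274 * (x * x - y * y)], dim=-1)

def view_dependent_color(k_r, view_dir):
    """c_r = S( sum_{l,m} k_l^m * Y_l^m(omega) ), applied per RGB channel."""
    Y = sh_basis(view_dir)                      # (..., 9)
    k = k_r.reshape(*k_r.shape[:-1], 3, 9)      # (..., 3, 9)
    return torch.sigmoid((k * Y.unsqueeze(-2)).sum(-1))

# Final per-ray color after volume rendering of both branches:
# c = clamp(c_s + c_r, 0, 1)
```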
(3) Prepare the rays to be trained for each block, and train the per-block neural networks in parallel by introducing background point sampling. Specifically, first collect the training data required for each block: for each pixel of a captured picture, project a ray r using the camera parameters computed in step (1), and use the Fast Voxel Traversal algorithm (see Amanatides, John, and Andrew Woo. "A fast voxel traversal algorithm for ray tracing." Eurographics. Vol. 87. No. 3. 1987.) to obtain the candidate blocks T_c through which this ray passes; compute the first intersection point p of ray r with the global mesh model, and if a candidate block contains p or lies closer to the camera than p, ray r is used as training data for this candidate block.
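A small sketch of this ray-to-block assignment rule follows. The traversal helper and its return format are assumptions standing in for a Fast-Voxel-Traversal implementation; only the selection criterion comes from the text.

```python
def assign_ray_to_blocks(ray_o, ray_d, blocks, first_hit_t, traverse_blocks):
    """Return the candidate blocks for which ray (ray_o, ray_d) is training data.

    traverse_blocks: assumed helper (Fast Voxel Traversal) yielding, for every
                     block the ray passes through, (block_id, t_enter, t_exit).
    first_hit_t:     ray distance to the first intersection p with the global mesh.
    """
    selected = []
    for block_id, t_enter, t_exit in traverse_blocks(ray_o, ray_d, blocks):
        contains_p = t_enter <= first_hit_t <= t_exit  # block contains p
        closer_than_p = t_exit <= first_hit_t          # block lies in front of p
        if contains_p or closer_than_p:
            selected.append(block_id)
    return selected
```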
Background point sampling is then introduced to enable parallel training, as follows: using the prior information of the global mesh model, uniformly sample points along the ray inside the block; then compute the first intersection of the ray with the global mesh model after it leaves the block, and uniformly sample background points within half a block size before and after this intersection (as shown in Fig. 3). The network F_s^Θ gives the view-independent colors and volume densities of all sampling points, and the view-independent color of the ray is obtained with the volume rendering formula.
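The sketch below illustrates this sampling pattern and the standard volume-rendering compositing; the sample counts and helper names are assumptions, not values from the patent.

```python
import torch

def sample_with_background(t_in, t_out, t_bg_hit, block_size, n_fg=64, n_bg=16):
    """Sample ray distances for one block.

    t_in, t_out : ray distances where the ray enters / leaves the block
    t_bg_hit    : first mesh intersection after the ray leaves the block
    Background samples cover [t_bg_hit - block_size/2, t_bg_hit + block_size/2].
    """
    t_fg = torch.linspace(t_in, t_out, n_fg)
    t_bg = torch.linspace(t_bg_hit - 0.5 * block_size,
                          t_bg_hit + 0.5 * block_size, n_bg)
    return torch.cat([t_fg, t_bg])

def composite(rgb, sigma, t):
    """Standard volume rendering of per-sample colors (N,3) and densities (N,)."""
    delta = torch.diff(t, append=t[-1:] + 1e10)
    alpha = 1.0 - torch.exp(-sigma * delta)
    trans = torch.cumprod(torch.cat([torch.ones(1), 1.0 - alpha + 1e-10])[:-1], 0)
    weights = alpha * trans
    return (weights[:, None] * rgb).sum(0)
```

Because every ray segment inside a block is closed off by its own background samples, the loss of one block no longer depends on any other block, which is what allows the per-block networks to be trained in parallel.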
Next, the blocks are grouped and the neural network parameters are shared within each group, as follows: first, the VSA algorithm (see David Cohen-Steiner, Pierre Alliez, and Mathieu Desbrun. 2004. Variational shape approximation. In ACM SIGGRAPH 2004 Papers. 905-914.) is used to cluster the patches of the global mesh model; the number of patches of each cluster intersecting each block is counted, and the cluster with the largest count is taken as the block's category. For blocks of the same category, the K-means algorithm, taking the block center positions as input, clusters them into groups. Blocks within a group share the parameters of the neural network; this avoids the shortage of view-dependent color training data that would result from blocks being too small, which would otherwise cause discontinuous color transitions between groups.
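A sketch of this grouping step is given below, assuming the per-patch VSA labels have already been computed and using scikit-learn's K-means; the function signature and the number of groups per class are illustrative assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

def group_blocks(block_centers, block_patch_labels, n_groups_per_class=4):
    """block_centers:      (B, 3) array of block center positions
    block_patch_labels:    per block, the VSA cluster labels of the mesh patches
                           intersecting it (assumed precomputed).
    Returns a group id per block; blocks in one group share a network."""
    # Dominant VSA class per block.
    block_class = np.array([np.bincount(lbls).argmax() for lbls in block_patch_labels])
    group_ids = np.empty(len(block_centers), dtype=np.int64)
    next_gid = 0
    for cls in np.unique(block_class):
        members = np.where(block_class == cls)[0]
        k = min(n_groups_per_class, len(members))
        km = KMeans(n_clusters=k, n_init=10).fit(block_centers[members])
        group_ids[members] = km.labels_ + next_gid
        next_gid += k
    return group_ids
```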
Finally, the intra-block neural networks are trained in two stages: in the first stage, the neural networks in all blocks of the scene are trained in parallel; in the second stage, the blocks of each group are expanded, the training rays of those blocks are collected, the parameters of the network expressing the view-independent colors are fixed, and only the network expressing the view-dependent colors is trained, so that the view-dependent colors transition smoothly between groups. For training of the view-independent colors, a voxelized training scheme is adopted inside each block (as shown in Fig. 4): each block is divided into 64 x 64 x 64 voxels; for a sampling point inside the block, the 8 voxels adjacent to the sampling point are found, their center points are used as input to the neural network F_s^Θ, and tri-linear interpolation of the 8 corresponding outputs gives the view-independent color and volume density of the current sampling point; the portion of the ray inside the block is then integrated and accumulated with the volume rendering formula to obtain the view-independent color of that portion of the ray.
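The following sketch shows the voxelized query: evaluating F_s at the 8 surrounding voxel centers and tri-linearly interpolating the outputs. The activation choices on the outputs and the coordinate conventions are assumptions.

```python
import torch

def voxelized_query(F_s, x, block_origin, voxel_size):
    """Evaluate the view-independent network at the 8 voxel centers surrounding
    each sample point x (N, 3) and tri-linearly interpolate the outputs.
    Returns (c_s, sigma_s) per sample; names and activations are illustrative."""
    # Continuous voxel coordinates (voxel centers sit at integer + 0.5).
    g = (x - block_origin) / voxel_size - 0.5
    g0 = torch.floor(g)
    frac = g - g0                                          # (N, 3) in [0, 1)
    out = 0.0
    for dx in (0, 1):
        for dy in (0, 1):
            for dz in (0, 1):
                offset = torch.tensor([dx, dy, dz], dtype=g.dtype)
                corner = (g0 + offset + 0.5) * voxel_size + block_origin
                w = torch.prod(torch.where(offset.bool(), frac, 1.0 - frac), dim=-1)
                out = out + w[:, None] * F_s(corner)       # (N, 4)
    c_s, sigma_s = torch.sigmoid(out[:, :3]), torch.relu(out[:, 3])
    return c_s, sigma_s
```

Because training already queries the network only at voxel centers, the later baking step (step 4) can replace the network by its voxelized outputs without changing the rendered result.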
On this basis, a convolutional neural network is additionally trained to remove the noise present in the rendering result: its input is the noisy picture and the depth map obtained from the volume rendering formula, together with the edge map of that depth map, and its output is the denoised picture.
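The patent does not specify the denoiser architecture; the sketch below is a minimal convolutional network consistent with the stated inputs and output. The layer sizes, residual formulation, and clamping are assumptions.

```python
import torch
import torch.nn as nn

class Denoiser(nn.Module):
    """Small convolutional denoiser: input is the noisy RGB render, the
    volume-rendered depth map, and its edge map (5 channels in total);
    output is the denoised RGB image."""
    def __init__(self, width=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(5, width, 3, padding=1), nn.ReLU(),
            nn.Conv2d(width, width, 3, padding=1), nn.ReLU(),
            nn.Conv2d(width, width, 3, padding=1), nn.ReLU(),
            nn.Conv2d(width, 3, 3, padding=1),
        )

    def forward(self, rgb, depth, edges):
        x = torch.cat([rgb, depth, edges], dim=1)            # (B, 5, H, W)
        return torch.clamp(rgb + self.net(x), 0.0, 1.0)      # residual prediction
```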
(4) Encode the neural network trained for view-independent colors into an octree, retain the neural network trained for view-dependent colors, and achieve high-quality interactive rendering. Specifically: using the voxelized representation of the blocks, an octree of depth 2 is built for each block, whose leaf nodes store the non-empty sub-blocks (of size 16 x 16 x 16). The neural networks expressing the view-dependent colors are embedded into a CUDA program, with each CUDA block managing a different network F_r^Θ; taking advantage of the small size of F_r^Θ, its parameters are stored as half-precision floating point numbers in the shared memory of an RTX 3090 graphics card to achieve high-speed memory access.
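A CPU-side sketch of the baking step is shown below: the 64^3 voxelized (RGB, density) grid of one block is split into the 4 x 4 x 4 = 64 candidate leaves of a depth-2 octree, keeping only non-empty ones, and the small view-dependent MLP is exported as half precision as it would be copied into GPU shared memory. The density threshold and function names are assumptions.

```python
import numpy as np

def build_block_octree(voxel_rgb_sigma, sub=16, sigma_eps=1e-3):
    """voxel_rgb_sigma: (64, 64, 64, 4) array of baked (RGB, sigma) per voxel.
    Returns the non-empty 16^3 leaf sub-blocks of a depth-2 octree, keyed by
    their (i, j, k) position in the 4 x 4 x 4 leaf grid."""
    assert voxel_rgb_sigma.shape[:3] == (64, 64, 64)
    leaves = {}
    for i in range(4):
        for j in range(4):
            for k in range(4):
                leaf = voxel_rgb_sigma[i*sub:(i+1)*sub,
                                       j*sub:(j+1)*sub,
                                       k*sub:(k+1)*sub]
                if (leaf[..., 3] > sigma_eps).any():   # keep non-empty sub-blocks
                    leaves[(i, j, k)] = leaf
    return leaves

def export_half_precision(state_dict):
    """Export the small view-dependent MLP weights (a PyTorch state_dict) as
    float16 arrays, as they would be loaded into GPU shared memory."""
    return {k: v.detach().cpu().numpy().astype(np.float16) for k, v in state_dict.items()}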
High-quality interactive rendering then proceeds as follows: given a viewpoint and camera parameters, project the ray corresponding to each pixel. The rendering of each ray's color is divided into two parts, view-independent color rendering and view-dependent color rendering:
a. View-independent color rendering. The Fast Voxel Traversal algorithm (see Amanatides, John, and Andrew Woo. "A fast voxel traversal algorithm for ray tracing." Eurographics. Vol. 87. No. 3. 1987.) gives all blocks through which a ray passes. These blocks are traversed in order; in each block the octree is searched recursively to find the non-empty leaf nodes, i.e. the sub-blocks, and samples are interpolated inside the sub-blocks. The view-independent color of the ray is finally obtained with the volume rendering formula. In this rendering step, the volume rendering formula also yields the depth value of the ray, which is used to place the samples for the subsequent view-dependent color rendering; at the same time, the block containing the three-dimensional point at this depth is found, and through it the view-dependent network F_r^Θ that each ray should use is recorded.
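The sketch below outlines this per-ray view-independent pass. The traversal, octree lookup, and block lookup are passed in as assumed helpers; only the overall flow (front-to-back traversal, compositing, recording depth and the network id) follows the text.

```python
import torch

def render_view_independent(ray_o, ray_d, traverse_blocks, octrees,
                            block_of_point, n_samples=64):
    """ray_o, ray_d: (3,) tensors. Assumed helpers:
    traverse_blocks(o, d) yields (block_id, t_enter, t_exit) front to back;
    octrees[block_id](pts) interpolates the baked (rgb, sigma) at points pts;
    block_of_point(p) returns the id of the block containing point p."""
    rgb_all, sigma_all, t_all = [], [], []
    for block_id, t_in, t_out in traverse_blocks(ray_o, ray_d):
        t = torch.linspace(t_in, t_out, n_samples)
        pts = ray_o + t[:, None] * ray_d
        rgb, sigma = octrees[block_id](pts)           # leaf lookup + interpolation
        rgb_all.append(rgb); sigma_all.append(sigma); t_all.append(t)
    rgb, sigma, t = torch.cat(rgb_all), torch.cat(sigma_all), torch.cat(t_all)
    delta = torch.diff(t, append=t[-1:] + 1e10)
    alpha = 1.0 - torch.exp(-sigma * delta)
    trans = torch.cumprod(torch.cat([torch.ones(1), 1.0 - alpha + 1e-10])[:-1], 0)
    w = alpha * trans
    c_s = (w[:, None] * rgb).sum(0)
    depth = (w * t).sum(0)                            # places the c_r samples
    net_id = block_of_point(ray_o + depth * ray_d)    # network for the c_r pass
    return c_s, depth, net_id
```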
b. View-dependent color rendering. Each CUDA block manages a different network F_r^Θ, and each CUDA thread infers the view-dependent color and volume density for a certain number of ordered sample points. The view-dependent color of each ray is finally obtained from the volume rendering formula.
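As a reference for what each CUDA thread computes, the sketch below evaluates the view-dependent branch for one ray in Python. The sampling window around the recorded depth and the sample count are assumptions; F_r and sh_basis refer to the earlier sketch.

```python
import torch

def render_view_dependent(ray_o, ray_d, depth, F_r, sh_basis, block_size, n=16):
    """Sample ordered points around/behind the recorded depth, query the group's
    small MLP F_r for SH coefficients and density, convert to colors with the
    ray direction, and composite. Returns c_r for this ray."""
    t = torch.linspace(depth - 0.5 * block_size, depth + 0.5 * block_size, n)
    pts = ray_o + t[:, None] * ray_d
    out = F_r(pts)                                    # (n, 3*9 + 1)
    k = out[:, :27].reshape(n, 3, 9)
    sigma = torch.relu(out[:, 27])
    c = torch.sigmoid((k * sh_basis(ray_d).view(1, 1, 9)).sum(-1))   # (n, 3)
    delta = torch.diff(t, append=t[-1:] + 1e10)
    alpha = 1.0 - torch.exp(-sigma * delta)
    trans = torch.cumprod(torch.cat([torch.ones(1), 1.0 - alpha + 1e-10])[:-1], 0)
    return ((alpha * trans)[:, None] * c).sum(0)
```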
The final color of each ray is obtained by adding the view-independent and view-dependent colors and clamping to [0, 1]. After the colors of all rays for the given viewpoint have been obtained, the image is passed through the convolutional neural network to remove noise. The whole rendering pipeline achieves an interactive rendering speed (20 FPS) at a resolution of 1280 x 720, as shown in Fig. 5.
The rendering quality of the large-scene free viewpoint interpolation achieved by this method is shown in Fig. 6, covering effects such as reflections, specular highlights, and high-frequency textures. Fig. 7 compares the method with other recent large-scene viewpoint interpolation methods and shows that it surpasses them in rendering quality and in the sharpness of view-dependent colors such as reflections and highlights.

Claims (9)

1. The large scene free viewpoint interpolation method based on the neural network is characterized by comprising the following steps of:
(1) Photograph and sample a given large scene, and reconstruct a global mesh model and compute camera parameters from the acquired pictures;
(2) Regularly divide the reconstructed global mesh model into a number of blocks, and create two neural networks for each block, used to express the view-independent colors and the view-dependent colors respectively;
(3) Prepare the rays to be trained for each block, and train the per-block neural networks in parallel by introducing background point sampling, specifically:
3.1) collect the training data required for each block;
3.2) introduce background point sampling to enable parallel training;
3.3) group the blocks and share the neural network parameters within each group;
3.4) train the intra-block neural networks in two stages;
(4) Encode the neural network trained for view-independent colors into an octree and retain the neural network trained for view-dependent colors, achieving interactive, high-quality free viewpoint interpolation rendering of large scenes.
2. The neural network-based large-scene free viewpoint interpolation method according to claim 1, wherein the step (1) is specifically: the reconstruction of the global mesh model and the computation of the camera intrinsic and extrinsic parameters are performed from the captured pictures with the software CapturingReality.
3. The neural network-based large-scene free viewpoint interpolation method according to claim 1, wherein the step (2) is specifically:
Given the size of one block, partition the global mesh starting from the minimum corner of the global mesh model, discard blocks containing no mesh, and construct two neural networks F_s^Θ and F_r^Θ for each remaining block, expressing the view-independent color and the view-dependent color respectively, where Θ denotes the network parameters;
The neural network F_s^Θ takes as input the three-dimensional position x of a sampling point near the geometric surface and outputs the view-independent color c_s and volume density σ_s of this point: (c_s, σ_s) = F_s^Θ(x);
The neural network F_r^Θ takes as input the three-dimensional position x of a sampling point behind the geometric surface and outputs the spherical harmonic coefficients k_r and volume density σ_r of this point: (k_r, σ_r) = F_r^Θ(x);
l and m are the degree and order of the spherical harmonics, l_max is set to 2, and k_l^m is the spherical harmonic coefficient of degree l and order m; the view-dependent color c_r is obtained from the spherical harmonic coefficients k_r and the ray direction ω: c_r = S( Σ_{l=0}^{l_max} Σ_{m=-l}^{l} k_l^m Y_l^m(ω) );
Y_l^m(·) is the spherical harmonic basis function and S(·) is the sigmoid activation function; the view-independent color and the view-dependent color of a ray are obtained with the volume rendering formula; combining the view-independent color and the view-dependent color gives the final rendered color c = max(min(c_s + c_r, 1), 0).
4. The large-scene free viewpoint interpolation method based on a neural network according to claim 3, wherein the neural network F_s^Θ is a multi-layer perceptron with depth 8 and width 128, and the neural network F_r^Θ is a multi-layer perceptron with depth 6 and width 64.
5. The large-scene free viewpoint interpolation method based on the neural network according to claim 1, wherein the step 3.1) is: for each pixel of a captured picture, project a ray r using the camera parameters computed in step (1), and use the Fast Voxel Traversal algorithm to obtain the candidate blocks T_c through which the ray passes; compute the first intersection point p of ray r with the global mesh model, and if a candidate block contains p or lies closer to the camera than p, ray r is used as training data for this candidate block.
6. The large-scene free viewpoint interpolation method based on the neural network according to claim 1, wherein the step 3.2) is: using the prior information of the global mesh model, uniformly sample points along the ray inside the block, then compute the first intersection of the ray with the global mesh model after the ray leaves the block, and uniformly sample background points within half a block size before and after this intersection; the network F_s^Θ gives the view-independent colors and volume densities of all sampling points, and the view-independent color of the ray is obtained with the volume rendering formula.
7. The large-scene free viewpoint interpolation method based on the neural network according to claim 1, wherein the step 3.3) is: first, the patches of the global mesh model are clustered with the VSA algorithm; the number of patches of each cluster intersecting each block is counted, and the cluster with the largest count is taken as the block's category; for blocks of the same category, the K-means algorithm, taking the block center positions as input, clusters them into groups, and blocks within a group share the parameters of the neural network; this avoids the shortage of view-dependent color training data caused by blocks being too small, which would otherwise cause discontinuous color transitions between groups.
8. The large-scene free viewpoint interpolation method based on the neural network according to claim 1, wherein the step 3.4) is: in the first stage, the neural networks in all blocks of the scene are trained in parallel; in the second stage, the blocks of each group are expanded, the training rays of those blocks are collected, the parameters of the network expressing the view-independent colors are fixed, and only the network expressing the view-dependent colors is trained, so that the view-dependent colors transition smoothly between groups.
9. The neural network-based large-scene free viewpoint interpolation method according to claim 1, wherein the step (4) is: an octree of depth 2 is established for each block to enable fast rendering of the view-independent color of each ray; for rendering of the view-dependent colors, the neural network expressing the view-dependent colors is embedded into a CUDA program, and its network parameters are stored in the graphics card's shared memory to achieve high-speed access.
CN202210369108.2A 2022-04-08 2022-04-08 Large scene free viewpoint interpolation method based on neural network Active CN114820901B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210369108.2A CN114820901B (en) 2022-04-08 2022-04-08 Large scene free viewpoint interpolation method based on neural network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210369108.2A CN114820901B (en) 2022-04-08 2022-04-08 Large scene free viewpoint interpolation method based on neural network

Publications (2)

Publication Number Publication Date
CN114820901A CN114820901A (en) 2022-07-29
CN114820901B (en) 2024-05-31

Family

ID=82535362

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210369108.2A Active CN114820901B (en) 2022-04-08 2022-04-08 Large scene free viewpoint interpolation method based on neural network

Country Status (1)

Country Link
CN (1) CN114820901B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115439388B (en) * 2022-11-08 2024-02-06 杭州倚澜科技有限公司 Free viewpoint image synthesis method based on multilayer nerve surface expression

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111127536A (en) * 2019-12-11 2020-05-08 清华大学 Light field multi-plane representation reconstruction method and device based on neural network
CN113327299A (en) * 2021-07-07 2021-08-31 北京邮电大学 Neural network light field method based on joint sampling structure
CN113628348A (en) * 2021-08-02 2021-11-09 聚好看科技股份有限公司 Method and equipment for determining viewpoint path in three-dimensional scene
CN114004941A (en) * 2022-01-04 2022-02-01 苏州浪潮智能科技有限公司 Indoor scene three-dimensional reconstruction system and method based on nerve radiation field
CN115516516A (en) * 2020-03-04 2022-12-23 奇跃公司 System and method for efficient floor plan generation from 3D scanning of indoor scenes
CN117197323A (en) * 2023-08-31 2023-12-08 浙江大学 Large scene free viewpoint interpolation method and device based on neural network

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10839589B2 (en) * 2018-07-31 2020-11-17 Intel Corporation Enhanced immersive media pipeline for correction of artifacts and clarity of objects in computing environments

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111127536A (en) * 2019-12-11 2020-05-08 清华大学 Light field multi-plane representation reconstruction method and device based on neural network
CN115516516A (en) * 2020-03-04 2022-12-23 奇跃公司 System and method for efficient floor plan generation from 3D scanning of indoor scenes
CN113327299A (en) * 2021-07-07 2021-08-31 北京邮电大学 Neural network light field method based on joint sampling structure
CN113628348A (en) * 2021-08-02 2021-11-09 聚好看科技股份有限公司 Method and equipment for determining viewpoint path in three-dimensional scene
CN114004941A (en) * 2022-01-04 2022-02-01 苏州浪潮智能科技有限公司 Indoor scene three-dimensional reconstruction system and method based on nerve radiation field
CN117197323A (en) * 2023-08-31 2023-12-08 浙江大学 Large scene free viewpoint interpolation method and device based on neural network

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
Scalable Image-based Indoor Scene Rendering with Reflections; JIAMIN XU et al.; ACM Transactions on Graphics; 2021-08-31; Vol. 40, No. 4; Article 60: 1-14 *
Scalable Neural Indoor Scene Rendering; Xiuchao Wu et al.; ACM Transactions on Graphics; 2022-07-01; Vol. 41, No. 4; Article 98: 1-16 *
X-Fields: Implicit Neural View-, Light- and Time-Image Interpolation; MOJTABA BEMANA et al.; ACM Transactions on Graphics; 2020-12-31; Vol. 39, No. 6; Article 257: 1-15 *
High-precision infrared-point cloud registration and its application in a microservice digital twin system; Ren Zhixiong et al.; Journal of Computer-Aided Design & Computer Graphics; 2024-03-19; pp. 1-15 *

Also Published As

Publication number Publication date
CN114820901A (en) 2022-07-29

Similar Documents

Publication Publication Date Title
Yariv et al. Bakedsdf: Meshing neural sdfs for real-time view synthesis
Suhail et al. Light field neural rendering
Huang et al. Deepmvs: Learning multi-view stereopsis
Zhang et al. Differentiable point-based radiance fields for efficient view synthesis
CN113066168B (en) Multi-view stereo network three-dimensional reconstruction method and system
Li et al. Confidence-based large-scale dense multi-view stereo
CN113345082B (en) Characteristic pyramid multi-view three-dimensional reconstruction method and system
CN111951368B (en) Deep learning method for point cloud, voxel and multi-view fusion
WO2022198684A1 (en) Methods and systems for training quantized neural radiance field
CN116664782B (en) Neural radiation field three-dimensional reconstruction method based on fusion voxels
CN115984494A (en) Deep learning-based three-dimensional terrain reconstruction method for lunar navigation image
Liu et al. High-quality textured 3D shape reconstruction with cascaded fully convolutional networks
Weilharter et al. Highres-mvsnet: A fast multi-view stereo network for dense 3d reconstruction from high-resolution images
Häne et al. Hierarchical surface prediction
Sun et al. Ssl-net: Point-cloud generation network with self-supervised learning
CN114820901B (en) Large scene free viewpoint interpolation method based on neural network
CN115205463A (en) New visual angle image generation method, device and equipment based on multi-spherical scene expression
CN115359191A (en) Object three-dimensional reconstruction system based on deep learning
Zuo et al. View synthesis with sculpted neural points
CN112489198A (en) Three-dimensional reconstruction system and method based on counterstudy
Zhu et al. Learning-based inverse rendering of complex indoor scenes with differentiable monte carlo raytracing
Gu et al. Ue4-nerf: Neural radiance field for real-time rendering of large-scale scene
WO2022217470A1 (en) Hair rendering system based on deep neural network
Franke et al. Vet: Visual error tomography for point cloud completion and high-quality neural rendering
Wolf et al. Surface Reconstruction from Gaussian Splatting via Novel Stereo Views

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant