CN117058311A - Implicit representation-based large-scale garden scene point cloud surface reconstruction method - Google Patents

Implicit representation-based large-scale garden scene point cloud surface reconstruction method

Info

Publication number: CN117058311A
Application number: CN202310904963.3A
Authority: CN (China)
Prior art keywords: normal, garden, point, point cloud, cloud
Legal status: Pending
Other languages: Chinese (zh)
Inventors: 覃池泉, 黄燚, 沈怡静, 吴超楠, 钱圣诞
Current/Original Assignee: Suzhou Art and Design Technology Institute
Application filed by Suzhou Art and Design Technology Institute
Priority to CN202310904963.3A
Publication of CN117058311A

Landscapes

  • Image Generation (AREA)

Abstract

The application provides an implicit representation-based large-scale garden scene point cloud surface reconstruction method, which comprises the following steps: 1) Point cloud normal computation based on multi-neighborhood normal blending: obtain different neighborhoods of the point cloud with two different neighborhood computation modes, estimate a point cloud normal from each neighborhood, and finally blend the two normal results to obtain the normal attribute of the garden point cloud. 2) Implicit surface fitting: fit the intrinsic structure of the garden point cloud with an implicit neural network, and train the network with a normal constraint and a geometric constraint to obtain an implicit surface. 3) Iso-surface extraction: voxelize the bounding square space, sample points, predict the signed distance value of each point with the implicit neural network, and extract the iso-surface from these signed distance values with the Marching Cubes algorithm.

Description

Implicit representation-based large-scale garden scene point cloud surface reconstruction method
Technical Field
The application relates to a point cloud surface reconstruction method, in particular to an implicit representation-based large-scale garden scene point cloud surface reconstruction method.
Background
Point cloud surface reconstruction aims to construct a continuous three-dimensional surface of a scene from point cloud data. Garden scenes are special: their point clouds contain very many points and the object geometry is complex, so recovering a correct three-dimensional surface from such point clouds remains a difficult problem.
Early point cloud surface reconstruction methods can be broadly divided into two groups. Interpolation-based methods linearly interpolate between point samples and subsets thereof; this can be achieved efficiently by triangulating points that satisfy the ball-pivoting property that no other points lie inside the pivoting ball, as in document 1: Bernardini F, Mittleman J, Rushmeier H, et al. The ball-pivoting algorithm for surface reconstruction[J]. IEEE Transactions on Visualization and Computer Graphics, 1999, 5(4): 349-359; document 2: Edelsbrunner H, Mücke E P. Three-dimensional alpha shapes[J]. ACM Transactions on Graphics (TOG), 1994, 13(1): 43-72; document 3: Kolluri R, Shewchuk J R, O'Brien J F. Spectral surface reconstruction from noisy point clouds[C]// Proceedings of the 2004 Eurographics/ACM SIGGRAPH Symposium on Geometry Processing. 2004: 11-21. Surface-deformation-based methods instead fit the target point cloud by deforming a predefined watertight mesh, as in document 4: Hanocka R, Metzer G, Giryes R, et al. Point2Mesh: a self-prior for deformable meshes[J]. arXiv preprint arXiv:2005.11084, 2020; document 5: Groueix T, Fisher M, Kim V G, et al. A papier-mâché approach to learning 3D surface generation[C]// Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2018: 216-224.
Some earlier patents also address surface reconstruction, as in document 6: Sun Lijian, Wang Jun, Liu Feixiang, et al. A high-precision surface reconstruction method and device based on mutual fusion of normal vectors and point cloud data[P]. Zhejiang Province: CN115761137A, 2023-03-07; document 7: Miao Yongwei, Liu Fuchang, Zhang Yueyao, et al. A point cloud surface implicit reconstruction method based on a slicing learning strategy[P]. Zhejiang Province: CN115830271A, 2023-03-21. However, these methods mainly target object-level point clouds; applied to large-scale scenes, they are either extremely time-consuming or fail to produce correct results.
In contrast, implicit representation-based surface reconstruction methods approximate the SDF (Signed Distance Field) by mapping an arbitrary point to the signed distance to its nearest tangent plane. In theory, an implicit representation can fit arbitrary surfaces, so it is reasonable to use it for surface reconstruction of large-scale complex scenes. Such methods can be further divided into traditional implicit representations and neural implicit representations. A typical traditional method is document 8: Kazhdan M, Bolitho M, Hoppe H. Poisson surface reconstruction[C]// Proceedings of the Fourth Eurographics Symposium on Geometry Processing. 2006, 7; applying Poisson reconstruction to large-scale complex scenes suffers from patch-orientation inconsistency. Neural implicit representations instead approximate the scene surface with a deep neural network, the basic idea being to exploit the network's inherent bias as a prior during reconstruction, as in document 9: Sitzmann V, Martel J, Bergman A, et al. Implicit neural representations with periodic activation functions[J]. Advances in Neural Information Processing Systems, 2020, 33: 7462-7473; document 10: Sitzmann V, Chan E, Tucker R, et al. MetaSDF: meta-learning signed distance functions[J]. Advances in Neural Information Processing Systems, 2020, 33: 10136-10147. Implicit representations have the following advantages. First, they can store an implicit function, potentially conditioned on the point cloud, in the weights of a neural network, which makes the representation very flexible. Second, a neural implicit representation is essentially independent of the number of points and even achieves a very high compression rate on large-scale point cloud scenes, which makes surface reconstruction of large-scale scenes feasible.
However, this approach suffers from two problems. First, since the scene geometry is encoded in the network parameters, how can the mesh representation be recovered from them? Second, most neural implicit representations require point cloud normal vectors as input, and the quality of the normals largely determines the quality of the reconstruction, while most large-scale scene point clouds lack normal vectors; how to estimate high-quality normal vectors is therefore a problem.
Disclosure of Invention
The application aims to: the application aims to solve the technical problem of providing an implicit representation-based large-scale garden scene point cloud surface reconstruction method, addressing the defects of the prior art.
To solve this technical problem, the application discloses an implicit representation-based large-scale garden scene point cloud surface reconstruction method, which comprises the following steps:
step 1, compute the normals of the large-scale garden scene point cloud with a point cloud normal computation method based on multi-neighborhood normal blending: obtain different neighborhoods of the garden scene point cloud with two different point cloud neighborhood computation modes, estimate a point cloud normal from each neighborhood, and finally blend the two to obtain the normal attribute of the garden scene point cloud;
further, computing the normals of the large-scale garden scene point cloud specifically comprises the following steps:
step 1-1, for an input garden scene point cloud P ∈ R^(N×3), where R^(N×3) denotes a three-dimensional point cloud in the real domain and N is the number of points in the cloud, first query its k nearest neighbors P_k ∈ R^(N×k×3) and its spherical neighbors, where r_i denotes the radius of the i-th neighbor sphere;
step 1-2, feed the k nearest neighbors and the spherical neighbors into a normal prediction network to obtain the k-normal N_K ∈ R^(N×3) and the ball normal N_R ∈ R^(N×3);
step 1-3, normal blending: first smooth the k-normal and the ball normal with a normal smoothing method, then blend them to obtain the final garden scene point cloud normal N_M ∈ R^(N×3);
Further, the normal smoothing method comprises the following steps:
step 1-3-1, firstly constructing a KD tree for a garden scene point cloud;
step 1-3-2, query the k nearest neighbors of the garden scene point cloud with the KD-tree to obtain the k-nearest-neighbor distances D_k and the k-nearest-neighbor indices I_k;
step 1-3-3, compute the smoothing weight W_k from the Gaussian distribution function and the k-nearest-neighbor distance D_k, specifically:
W_k = (1 / (σ√(2π))) · exp(−(D_k − μ)² / (2σ²))
where σ and μ are the variance and mean;
step 1-3-4, denote the normal obtained in step 1-2 as N_m, then weight it with the smoothing weights, specifically:
N_avg = N_m[I_k] * unsqueeze(W_k, 2)
where N_m = N_K when smoothing the k-normal and N_m = N_R when smoothing the ball normal; N_m[I_k] denotes indexing into N_m with the index I_k; N_avg is an N×k×3 tensor whose element (i_avg, j_avg) is the j_avg-th value in the second dimension of the i_avg-th entry in the first dimension;
with this method, smoothing the k-normal yields the k-smoothed normal, and smoothing the ball normal yields the ball-smoothed normal;
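The smoothing procedure of steps 1-3-1 to 1-3-4 can be sketched in Python as follows. This is a minimal NumPy/SciPy sketch; the per-point weight normalization and the final re-normalization to unit length are assumptions not spelled out in the text.

```python
import numpy as np
from scipy.spatial import cKDTree

def smooth_normals(points, normals, k=50, sigma=1.0, mu=0.0):
    """Gaussian-weighted normal smoothing over k nearest neighbours.
    Follows steps 1-3-1 to 1-3-4; sigma=1, mu=0 as in the implementation."""
    tree = cKDTree(points)                    # step 1-3-1: build a KD-tree
    D_k, I_k = tree.query(points, k=k)        # step 1-3-2: distances and indices
    # step 1-3-3: Gaussian smoothing weights from the neighbour distances
    W_k = np.exp(-((D_k - mu) ** 2) / (2.0 * sigma ** 2))
    W_k /= W_k.sum(axis=1, keepdims=True)     # assumed: weights normalised per point
    # step 1-3-4: N_avg = N_m[I_k] * unsqueeze(W_k, 2), averaged over k
    N_avg = (normals[I_k] * W_k[..., None]).sum(axis=1)
    return N_avg / np.linalg.norm(N_avg, axis=1, keepdims=True)  # unit length
```

The same routine serves both branches: pass the k-normal N_K to obtain the k-smoothed normal, and the ball normal N_R to obtain the ball-smoothed normal.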
Step 2, implicit surface fitting: fitting an intrinsic structure of the garden scene point cloud by using an implicit neural network, and training the implicit neural network through normal constraint and geometric constraint to obtain an implicit curved surface;
further, the implicit surface fitting specifically includes the following steps:
step 2-1, constructing an implicit neural network F;
step 2-2, sample the garden scene point cloud to obtain positive sample points X_pos;
step 2-3, randomly sample points in the square space S enclosing the garden scene point cloud to obtain negative sample points X_neg;
step 2-4, feed the positive and negative sample points into the implicit neural network to obtain the positive sample predictions Y_pos and the negative sample predictions Y_neg;
Step 2-5, calculating geometric losses by using the positive sample predicted value and the negative sample predicted value; calculating normal loss by using the positive sample predicted value, the negative sample predicted value and the final garden scene point cloud normal obtained in the step 1-3; calculating a gradient using the geometric loss and the normal loss; transferring the gradient obtained by calculation to an implicit neural network for parameter updating;
further, the geometric loss L_geo is computed from the positive and negative sample predictions, where |·| denotes taking the absolute value, Y_pos^(i_pos) is the i_pos-th positive sample prediction, Y_neg^(i_neg) is the i_neg-th negative sample prediction, and m is the number of positive and negative samples.
Further, the normal loss L_nor is computed from the predictions and the point cloud normals, where cos(·) denotes the cosine of the angle between two vectors and norm(x) denotes the modulus of vector x.
Further, the total loss L used for the gradient computation is:
L = L_geo + L_nor
i.e. the geometric loss and the normal loss are summed to obtain L, whose gradient is back-propagated.
Step 2-6, repeating steps 2-2 to 2-5 until the geometric loss and the normal loss converge within preset thresholds.
Step 3, iso-surface extraction: voxelize the square space, sample points, predict the signed distance value of each point in the square space with the implicit neural network, and extract the iso-surface from these signed distance values with the Marching Cubes algorithm; the iso-surface is the reconstruction result of the large-scale garden scene point cloud surface, completing the implicit representation-based reconstruction.
Further, the isosurface extraction specifically comprises the following steps:
step 3-1, voxelize the three-dimensional space S enclosing the garden scene point cloud into garden space voxels V;
further, voxelizing the three-dimensional space S enclosing the garden scene point cloud into garden space voxels V specifically comprises:
let the garden space voxels V ∈ R^(H×W×L), where H, W and L are the height, width and length of the garden space voxels, computed as:
H = int((x_ma − x_mi) / v), W = int((y_ma − y_mi) / v), L = int((z_ma − z_mi) / v)
where int(·) denotes rounding up and v is the size of a voxel cell.
Step 3-2, take the center of every voxel cell of the garden space voxels V as a query point P_q, and record the index I_q of the voxel cell each query point belongs to;
further, taking the query points and recording the indices specifically comprises: for the voxel cell with index (i_q, j_q, k_q), its center point is the query point, computed as:
p_q = (x_mi + (i_q + 0.5)·v, y_mi + (j_q + 0.5)·v, z_mi + (k_q + 0.5)·v)
stack all center points along the rows to obtain the query point matrix Q ∈ R^((H×W×L)×3), and concatenate all indices to obtain the index value I_Q ∈ R^(H×W×L).
Step 3-3, feed the query points into the implicit neural network to obtain each query point's signed distance value SDF_q;
step 3-4, assign each query point's signed distance value SDF_q to the garden space voxels according to the voxel cell index I_q, obtaining the garden space signed distance voxels V_SDF;
step 3-5, feed the garden space signed distance voxels V_SDF into the Marching Cubes algorithm to obtain the final garden mesh patch reconstruction result.
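Steps 3-1 and 3-2 can be sketched as follows. This is a NumPy sketch; the mapping of H, W, L onto the x, y, z axes and the half-cell offset for the cell center are assumptions consistent with the formulas above.

```python
import numpy as np

def voxel_query_points(points, v=0.05):
    """Voxelise the bounding box of the point cloud (step 3-1) and return
    the centre of every voxel cell as a query point together with its
    grid index (step 3-2). int(.) 'rounding up' is implemented with ceil."""
    lo, hi = points.min(axis=0), points.max(axis=0)
    H, W, L = np.ceil((hi - lo) / v).astype(int)          # grid dimensions
    i, j, k = np.meshgrid(np.arange(H), np.arange(W), np.arange(L),
                          indexing="ij")
    idx = np.stack([i, j, k], axis=-1).reshape(-1, 3)     # index I_q per point
    Q = lo + (idx + 0.5) * v                              # cell-centre queries
    return Q, idx, (H, W, L)
```

Each row of Q is then passed through the implicit network to obtain SDF_q (step 3-3).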
The beneficial effects are that:
1. The application reconstructs large-scale garden point cloud data with an implicit representation, which can flexibly fit arbitrarily large and complex surfaces while compressing point cloud scenes with huge numbers of points; this reduces the time complexity of surface reconstruction on large-scale garden scenes and improves its accuracy, so the method can be applied to large-scale garden scenes.
2. The application adopts point cloud normal computation based on multi-neighborhood normal blending to obtain high-quality normals for large-scale garden point clouds, supplementing the normal vector data that garden scenes lack and providing accurate normal vector input for the neural implicit representation, which is an important basis for the subsequent surface reconstruction.
3. Aiming at the problem that an implicit network cannot easily recover the mesh representation of a point cloud, the application trains the implicit neural network by constraining the predictions of positive and negative samples with the geometric loss and the normal loss, iteratively updating until the losses converge below a threshold, so that the scene point cloud is sufficiently and accurately encoded in the implicit neural network; the method can thus represent the three-dimensional information of large-scale garden scenes with an implicit representation.
Drawings
The foregoing and/or other advantages of the application will become more apparent from the following detailed description of the application when taken in conjunction with the accompanying drawings and detailed description.
FIG. 1 is a schematic illustration of the process flow of the present application.
Fig. 2 is the visualization of the input garden point cloud.
Fig. 3 is the visualization of k-normal coloring.
Fig. 4 is the visualization of k-smoothed-normal coloring.
Fig. 5 is the visualization of ball-normal coloring.
Fig. 6 is the visualization of ball-smoothed-normal coloring.
Fig. 7 is the visualization of garden point cloud normal coloring.
Fig. 8 is the visualization of the neural implicit representation.
Fig. 9 is the visualization of the reconstructed surface.
Detailed Description
To solve the problems in the prior art, the method first estimates the point cloud normal vectors of a large-scale scene with an existing normal vector estimation algorithm, then encodes the scene point cloud structure in the parameters of a neural network with a neural implicit representation, and finally extracts the surface representation of the scene from the network parameters with an existing iso-surface extraction algorithm.
As shown in fig. 1, the implicit representation-based large-scale garden scene point cloud surface reconstruction method disclosed by the application is implemented according to the following steps:
Step 1, point cloud normal computation based on multi-neighborhood normal blending: obtain different neighborhoods of the point cloud with two different point cloud neighborhood computation modes, estimate a point cloud normal from each, and finally blend the two normal results to obtain the normal attribute of the garden point cloud.
Step 2, implicit surface fitting: and fitting an intrinsic structure of the garden point cloud by using the implicit neural network, and training the implicit neural network through normal constraint and geometric constraint to obtain an implicit curved surface.
Step 3, iso-surface extraction: voxelize the square space, sample points, predict the signed distance value of each point with the implicit neural network, and extract the iso-surface from these signed distance values with the Marching Cubes algorithm.
Step 1 comprises the following steps
Step 1.1, for an input garden point cloud P ∈ R^(N×3), as shown in fig. 2, first query its k nearest neighbors P_k ∈ R^(N×k×3) and its spherical neighbors, where N is the number of points of the garden point cloud, k is the number of neighbors, and r_i is the radius of the neighbor sphere of the i-th point. In the actual implementation, k = 50 and r = 0.1.
Step 1.2, feed the k nearest neighbors and the spherical neighbors into a normal prediction network to obtain the k-normal N_k ∈ R^(N×3) and the ball normal N_r ∈ R^(N×3); the visualization of the k-normal coloring is shown in fig. 3 and of the ball-normal coloring in fig. 5. The normal prediction network uses the pretrained network of Wang W, Lu X, Shao D, et al. Weighted Point Cloud Normal Estimation[J]. arXiv preprint arXiv:2305.04007, 2023.
Step 1.3, normal blending. First smooth the k-normal and the ball normal to obtain the k-smoothed normal and the ball-smoothed normal, as shown in figs. 4 and 6. Then blend the two to obtain the final garden point cloud normal N ∈ R^(N×3), as shown in fig. 7; the blend is normalized with the function normal.
The normalization function normal scales each row of its argument to unit length:
normal(x)_i = x_i / sqrt(x_{i,1}² + x_{i,2}² + x_{i,3}²)
where x_i denotes the i-th row of matrix x and x_{i,1}, x_{i,2}, x_{i,3} its first, second and third columns.
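The row-wise normalization and the blending of step 1.3 can be sketched as follows. The exact blending formula is not reproduced in the text, so the normalized average below is an assumption.

```python
import numpy as np

def normal(x):
    """Row-wise normalisation: x_i / sqrt(x_i1^2 + x_i2^2 + x_i3^2)."""
    return x / np.linalg.norm(x, axis=1, keepdims=True)

def blend_normals(n_k_smoothed, n_r_smoothed):
    """Blend the k-smoothed and ball-smoothed normals into the final
    garden point cloud normal N (assumed: a normalised average)."""
    return normal(normal(n_k_smoothed) + normal(n_r_smoothed))
```

The result is the per-point normal attribute N ∈ R^(N×3) used by the implicit surface fitting of step 2.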
The normal smoothing in step 1.3 comprises the following steps
Step 1.3.1, first build a KD-tree over the garden point cloud. The construction follows Buitinck L, Louppe G, Blondel M, et al. API design for machine learning software: experiences from the scikit-learn project[J]. arXiv preprint arXiv:1309.0238, 2013.
Step 1.3.2, query the k nearest neighbors of the garden point cloud with the KD-tree to obtain the k-nearest-neighbor distances D_k and indices I_k. In the actual implementation, k = 50.
Step 1.3.3, compute the smoothing weight W_k from the Gaussian distribution function and the k-nearest-neighbor distances:
W_k = (1 / (σ√(2π))) · exp(−(D_k − μ)² / (2σ²))
where σ and μ are the variance and mean. In the actual implementation, σ = 1 and μ = 0.
Step 1.3.4, denote the normal retrieved with the k-nearest-neighbor or sphere-neighbor method of step 1.2 as N_m, then weight it with the smoothing weights:
N_avg = N_m[I_k] * unsqueeze(W_k, 2)
where N_m = N_k when smoothing the k-normal and N_m = N_r when smoothing the ball normal; N_m[I_k] means indexing into N_m with the index I_k; N_avg is an N×k×3 tensor whose element (i, j) is the j-th value in the second dimension of the i-th entry in the first dimension. Smoothing the k-normal yields the k-smoothed normal, and smoothing the ball normal yields the ball-smoothed normal.
Step 2 comprises the following steps
Step 2.1, build the implicit neural network F. The implicit neural network uses the network described in Sitzmann V, Martel J, Bergman A, et al. Implicit neural representations with periodic activation functions[J]. Advances in Neural Information Processing Systems, 2020, 33: 7462-7473.
Step 2.2, sample the garden point cloud to obtain positive sample points X_pos ∈ R^(m×3), where m is the number of positive sample points. In the actual implementation, m = 100000.
Step 2.3, randomly sample points in the square space S enclosing the garden point cloud to obtain negative sample points X_neg ∈ R^(m×3); the numbers of positive and negative sample points are equal. The square space S = {x_mi, x_ma, y_mi, y_ma, z_mi, z_ma} is computed as:
x_mi = min P[:,1], x_ma = max P[:,1]
y_mi = min P[:,2], y_ma = max P[:,2]
z_mi = min P[:,3], z_ma = max P[:,3]
where min and max take the minimum and maximum of a set of values, and P[:,1] denotes taking the entire first column of matrix P; P[:,2] and P[:,3] are analogous.
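Step 2.3 can be sketched as follows (column order x, y, z assumed; the `seed` parameter is only for reproducibility):

```python
import numpy as np

def square_space(P):
    """Bounding 'square space' S = {x_mi, x_ma, y_mi, y_ma, z_mi, z_ma}
    enclosing the garden point cloud (columns of P are x, y, z)."""
    mins, maxs = P.min(axis=0), P.max(axis=0)
    return mins[0], maxs[0], mins[1], maxs[1], mins[2], maxs[2]

def sample_negative(P, m, seed=0):
    """Draw m uniformly random negative sample points X_neg inside S."""
    x_mi, x_ma, y_mi, y_ma, z_mi, z_ma = square_space(P)
    rng = np.random.default_rng(seed)
    lo = np.array([x_mi, y_mi, z_mi])
    hi = np.array([x_ma, y_ma, z_ma])
    return lo + rng.random((m, 3)) * (hi - lo)
```

The positive samples are drawn from the point cloud itself, so only the negative samples need this bounding space.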
Step 2.4, feed the positive and negative sample points into the neural implicit network to obtain the positive sample predictions Y_pos = F(X_pos) ∈ R^m and the negative sample predictions Y_neg = F(X_neg) ∈ R^m.
Step 2.5, feed the positive and negative sample predictions into the geometric constraint module to compute the geometric loss, where |·| takes the absolute value of a value, Y_pos^(i) denotes the i-th positive sample prediction and Y_neg^(i) the i-th negative sample prediction. Feed the positive and negative sample predictions together with the garden point cloud normals into the normal constraint module to compute the normal loss.
the geometrical loss and the normal loss are superimposed and input into a back propagation module to calculate the gradient, and the calculation mode is as follows:
L=L geo +L nor
the gradient is passed to the implicit neural network for parameter updating, performed according to Paszke A, Gross S, Massa F, et al. PyTorch: an imperative style, high-performance deep learning library[J]. Advances in Neural Information Processing Systems, 2019, 32.
Step 2.6, repeat steps 2.2 to 2.5 until the geometric loss and the normal loss converge below a threshold; a visualization of part of the resulting implicit network layers is shown in fig. 8.
Step 3 comprises the following steps
Step 3.1, voxelize the three-dimensional space S enclosing the garden point cloud into garden space voxels V ∈ R^(H×W×L), where H, W and L are the height, width and length of the garden space voxels, computed as:
H = int((x_ma − x_mi) / v), W = int((y_ma − y_mi) / v), L = int((z_ma − z_mi) / v)
where int(·) denotes rounding up and v is the size of a voxel cell. In the actual implementation, v = 0.05.
Step 3.2, take the center of every voxel cell of the garden space voxels V as a query point P_q and record the index I_q of the voxel cell each query point belongs to. For the cell with index (i_q, j_q, k_q), the center point is computed as:
p_q = (x_mi + (i_q + 0.5)·v, y_mi + (j_q + 0.5)·v, z_mi + (k_q + 0.5)·v)
Finally, stack all center points along the rows to obtain the query point matrix Q ∈ R^((H×W×L)×3) and concatenate all indices to obtain the index value I_Q ∈ R^(H×W×L).
Step 3.3, feed the query point matrix into the implicit neural network to obtain the signed distance values SDF_q ∈ R^(H×W×L).
Step 3.4, assign each query point's signed distance value to the garden space voxels according to the index value to obtain the garden space signed distance voxels V_SDF = SDF_q[I_Q].
Step 3.5, feed the garden space signed distance voxels into the Marching Cubes algorithm to obtain the final garden mesh patch reconstruction result, as shown in fig. 9. The Marching Cubes algorithm follows Lorensen W E, Cline H E. Marching cubes: a high resolution 3D surface construction algorithm[J]. ACM SIGGRAPH Computer Graphics, 1987, 21(4): 163-169.
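Steps 3.4 and 3.5 can be sketched as follows; the scatter into V_SDF is plain NumPy indexing, and the final mesh extraction is shown only as a comment since it assumes scikit-image is installed.

```python
import numpy as np

def assemble_sdf_voxels(sdf_q, I_Q, shape):
    """Scatter per-query-point signed distance values back into the garden
    space voxel grid, i.e. V_SDF = SDF_q[I_Q] (step 3.4), ready for
    Marching Cubes."""
    V = np.empty(shape, dtype=np.float32)
    V[I_Q[:, 0], I_Q[:, 1], I_Q[:, 2]] = sdf_q
    return V

# Step 3.5 would then extract the zero iso-surface, e.g. with scikit-image
# (assuming it is available):
#   from skimage.measure import marching_cubes
#   verts, faces, _, _ = marching_cubes(V_SDF, level=0.0)
```

The resulting vertex and face arrays form the garden mesh patch reconstruction.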
In a specific implementation, the application provides a computer storage medium and a corresponding data processing unit. The computer storage medium can store a computer program which, when executed by the data processing unit, can run the implicit representation-based large-scale garden scene point cloud surface reconstruction method of the application and part or all of the steps of each embodiment. The storage medium may be a magnetic disk, an optical disk, a read-only memory (ROM), a random access memory (RAM), or the like.
It will be apparent to those skilled in the art that the technical solutions in the embodiments of the present application may be implemented by means of a computer program and its corresponding general hardware platform. Based on such understanding, the technical solutions in the embodiments may be embodied essentially in the form of a computer program, i.e. a software product, which may be stored in a storage medium and includes several instructions to cause a device comprising a data processing unit (which may be a personal computer, a server, a single-chip microcomputer, an MCU, a network device, etc.) to perform the methods described in the embodiments or some parts thereof.
The application provides the concept of an implicit representation-based large-scale garden scene point cloud surface reconstruction method, together with methods and several ways of realizing the technical scheme. Components not explicitly described in this embodiment can be implemented with the prior art.

Claims (10)

1. An implicit representation-based large-scale garden scene point cloud surface reconstruction method, characterized by comprising the following steps:
step 1, compute the normals of the large-scale garden scene point cloud with a point cloud normal computation method based on multi-neighborhood normal blending: obtain different neighborhoods of the garden scene point cloud with two different point cloud neighborhood computation modes, estimate a point cloud normal from each neighborhood, and finally blend the two to obtain the normal attribute of the garden scene point cloud;
step 2, implicit surface fitting: fitting an intrinsic structure of the garden scene point cloud by using an implicit neural network, and training the implicit neural network through normal constraint and geometric constraint to obtain an implicit curved surface;
step 3, iso-surface extraction: voxelize the square space, sample points, predict the signed distance value of each point in the square space with the implicit neural network, and extract the iso-surface from these signed distance values with the Marching Cubes algorithm; the iso-surface is the reconstruction result of the large-scale garden scene point cloud surface, completing the implicit representation-based reconstruction.
2. The implicit representation-based large-scale garden scene point cloud surface reconstruction method according to claim 1, characterized in that computing the normals of the large-scale garden scene point cloud in step 1 specifically comprises the following steps:
step 1-1, for an input garden scene point cloud P ∈ R^(N×3), where R^(N×3) denotes a three-dimensional point cloud in the real domain and N is the number of points in the cloud, first query its k nearest neighbors P_k ∈ R^(N×k×3) and its spherical neighbors, where r_i denotes the radius of the i-th neighbor sphere;
step 1-2, feed the k nearest neighbors and the spherical neighbors into a normal prediction network to obtain the k-normal N_K ∈ R^(N×3) and the ball normal N_R ∈ R^(N×3);
step 1-3, normal blending: first smooth the k-normal and the ball normal with a normal smoothing method, then blend them to obtain the final garden scene point cloud normal N_M ∈ R^(N×3);
3. The implicit representation-based large-scale garden scene point cloud surface reconstruction method according to claim 2, characterized in that the normal smoothing method in step 1-3 comprises the following steps:
step 1-3-1, firstly constructing a KD tree for a garden scene point cloud;
step 1-3-2, query the k nearest neighbors of the garden scene point cloud with the KD-tree to obtain the k-nearest-neighbor distances D_k and the k-nearest-neighbor indices I_k;
step 1-3-3, compute the smoothing weight W_k from the Gaussian distribution function and the k-nearest-neighbor distance D_k, specifically:
W_k = (1 / (σ√(2π))) · exp(−(D_k − μ)² / (2σ²))
where σ and μ are the variance and mean;
step 1-3-4, denote the normal obtained in step 1-2 as N_m and weight it with the smoothing weights, specifically:
N avg =N m [I k ]*unsqueeze(W k ,2)
wherein, when smoothing the k-neighborhood normal, N_m = N_K, and when smoothing the ball normal, N_m = N_R; N_m[I_k] denotes taking the values of N_m at the indices I_k; N_avg is an N×k×3 tensor, and N_avg^(i_avg, j_avg) denotes its value at index i_avg in the first dimension and j_avg in the second dimension;
by the above method, the smoothed k-neighborhood normal is obtained by smoothing the k-neighborhood normal, and the smoothed ball normal is obtained by smoothing the ball normal.
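Steps 1-3-1 through 1-3-4 can be sketched as below. The Gaussian weight formula and the final aggregation appear only as images in the source, so the exact form used here, with normalized weights and the stated σ/μ defaults, is an assumption borrowed from standard Gaussian neighborhood filtering.

```python
import numpy as np
from scipy.spatial import cKDTree

def smooth_normals(P, N_m, k=8, sigma=0.1, mu=0.0):
    """Gaussian-weighted neighborhood averaging of per-point normals
    (steps 1-3-1 .. 1-3-4); sigma and mu are illustrative values."""
    tree = cKDTree(P)                      # step 1-3-1: KD-tree
    D_k, I_k = tree.query(P, k=k)          # step 1-3-2: kNN distances/indices
    # step 1-3-3: Gaussian weights from the kNN distances
    W_k = np.exp(-(D_k - mu) ** 2 / (2 * sigma ** 2))
    W_k /= W_k.sum(axis=1, keepdims=True)  # normalization is an assumption
    # step 1-3-4: N_avg = N_m[I_k] * unsqueeze(W_k, 2), then sum over k
    N_avg = (N_m[I_k] * W_k[..., None]).sum(axis=1)
    # re-normalize the smoothed normals to unit length
    return N_avg / np.linalg.norm(N_avg, axis=1, keepdims=True)
```

Applied once with N_m = N_K and once with N_m = N_R, this produces the smoothed k-neighborhood normal and the smoothed ball normal that step 1-3 then blends.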
4. The hidden-representation-based large-scale garden scene point cloud surface reconstruction method according to claim 3, wherein the implicit surface fitting in step 2 specifically comprises the following steps:
step 2-1, construct an implicit neural network F;
step 2-2, sample the garden scene point cloud to obtain the positive sample points X_pos;
step 2-3, randomly sample points in the cube-shaped space S enclosing the garden scene point cloud to obtain the negative sample points X_neg;
step 2-4, input the positive and negative sample points into the implicit neural network respectively, obtaining the positive sample predictions Y_pos and the negative sample predictions Y_neg;
step 2-5, compute the geometric loss from the positive and negative sample predictions; compute the normal loss from the positive sample predictions, the negative sample predictions, and the final garden scene point cloud normal obtained in step 1-3; compute the gradient from the geometric loss and the normal loss; and propagate the computed gradient into the implicit neural network to update its parameters;
step 2-6, repeating steps 2-2 to 2-5 until the geometric loss and the normal loss converge within preset thresholds.
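The sampling in steps 2-2 and 2-3 can be sketched as follows; the sample counts and the random generator are illustrative, and the enclosing cube is taken as the axis-aligned bounding cube of the cloud, which the claims imply but do not spell out.

```python
import numpy as np

def sample_training_points(P, n_pos=1024, n_neg=1024, rng=None):
    """Steps 2-2/2-3: draw on-surface (positive) samples from the point
    cloud and uniform (negative) samples from its enclosing cube S."""
    rng = np.random.default_rng(rng)
    # step 2-2: positive samples are points of the cloud itself
    X_pos = P[rng.integers(0, len(P), n_pos)]
    # step 2-3: negative samples fill the cube enclosing the cloud
    lo, hi = P.min(axis=0), P.max(axis=0)
    side = (hi - lo).max()                 # edge of the enclosing cube
    center = (lo + hi) / 2
    X_neg = rng.uniform(center - side / 2, center + side / 2,
                        size=(n_neg, 3))
    return X_pos, X_neg
```

Each training iteration (steps 2-4/2-5) would feed X_pos and X_neg through the network F and update its parameters from the combined loss until convergence (step 2-6).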
5. The hidden-representation-based large-scale garden scene point cloud surface reconstruction method according to claim 4, wherein the iso-surface extraction in step 3 specifically comprises the following steps:
step 3-1, voxelize the three-dimensional space S enclosing the garden scene point cloud into the garden space voxels V;
step 3-2, collect a query point P_q at the center of each voxel cell of the garden space voxels V, and record the index I_q of the voxel cell to which each query point belongs;
step 3-3, input the query points into the implicit neural network to obtain the signed distance value SDF_q of each query point;
step 3-4, assign the signed distance value SDF_q of each query point to the garden space voxels according to the voxel cell indices I_q, obtaining the garden space signed distance voxels V_SDF;
step 3-5, input the garden space signed distance voxels V_SDF into the Marching Cubes algorithm to obtain the final garden mesh patch reconstruction result.
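Steps 3-1 through 3-4 can be sketched as below, with an analytic sphere SDF standing in for the trained implicit network (the function and its arguments are illustrative). The resulting V_SDF grid would then be handed to a Marching Cubes implementation, e.g. skimage.measure.marching_cubes, for step 3-5.

```python
import numpy as np

def extract_sdf_voxels(P, v, sdf_fn):
    """Steps 3-1 .. 3-4: voxelize the bounding box of P at cell size v,
    query sdf_fn at every voxel center, and assemble V_SDF."""
    lo, hi = P.min(axis=0), P.max(axis=0)
    # step 3-1: grid resolution H, W, L, rounding up to cover the cloud
    H, W, L = np.ceil((hi - lo) / v).astype(int)
    # step 3-2: voxel-center query points P_q; the flat row order of P_q
    # doubles as the cell index I_q after the final reshape
    xs = lo[0] + (np.arange(H) + 0.5) * v
    ys = lo[1] + (np.arange(W) + 0.5) * v
    zs = lo[2] + (np.arange(L) + 0.5) * v
    gx, gy, gz = np.meshgrid(xs, ys, zs, indexing="ij")
    P_q = np.stack([gx, gy, gz], axis=-1).reshape(-1, 3)
    # steps 3-3/3-4: predict signed distances and scatter them into V_SDF
    V_SDF = sdf_fn(P_q).reshape(H, W, L)
    return V_SDF
```

With a unit sphere SDF, interior voxel centers come out negative and exterior ones positive, which is exactly the sign pattern Marching Cubes needs to locate the zero level set.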
6. The hidden-representation-based large-scale garden scene point cloud surface reconstruction method according to claim 5, wherein computing the geometric loss L_geo in step 2-5 specifically comprises:
wherein |·| denotes the absolute value, Y_pos^(i_pos) denotes the i_pos-th positive sample prediction, Y_neg^(i_neg) denotes the i_neg-th negative sample prediction, and m denotes the number of positive and negative samples.
7. The hidden-representation-based large-scale garden scene point cloud surface reconstruction method according to claim 6, wherein computing the normal loss L_nor in step 2-5 specifically comprises:
wherein cos(·) denotes the cosine of the angle between two vectors, and norm(x) denotes the modulus of a vector.
8. The hidden-representation-based large-scale garden scene point cloud surface reconstruction method according to claim 7, wherein the gradient L in step 2-5 is computed as follows:
L=L geo +L nor
that is, the geometric loss and the normal loss are summed to obtain the loss L from which the gradient is computed.
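The loss formulas of claims 6 and 7 appear only as images in the source, so the sketch below shows one plausible combined loss. The particular geometric-loss form (zero at the surface, penalizing near-zero predictions in free space) and the 1 − cos normal term are assumptions borrowed from common SDF-fitting practice, not the patent's exact formulas.

```python
import numpy as np

def total_loss(Y_pos, Y_neg, grad_pos, N_M):
    """One plausible form of L = L_geo + L_nor from steps 2-5 / claims
    6-8. Y_pos/Y_neg are predicted signed distances; grad_pos is the
    network gradient at the positive samples, compared against the
    blended normals N_M from step 1-3."""
    m = len(Y_pos) + len(Y_neg)
    # geometric loss: surface samples should lie on the zero level set,
    # free-space samples should be pushed away from it (assumed form)
    L_geo = (np.abs(Y_pos).sum() + np.exp(-np.abs(Y_neg)).sum()) / m
    # normal loss: 1 - cos(angle) between SDF gradient and point normal
    cos = (grad_pos * N_M).sum(axis=1) / (
        np.linalg.norm(grad_pos, axis=1) * np.linalg.norm(N_M, axis=1))
    L_nor = (1.0 - cos).mean()
    return L_geo + L_nor
```

When the positive predictions are exactly zero and the gradients align with the normals, only the free-space term remains, matching the intuition that both constraints are satisfied at the true surface.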
9. The hidden-representation-based large-scale garden scene point cloud surface reconstruction method according to claim 8, wherein voxelizing the three-dimensional space S enclosing the garden scene point cloud into the garden space voxels V in step 3-1 comprises the following steps:
let the garden space voxels be V ∈ R^(H×W×L), where H, W, and L are the height, width, and length of the garden space voxel grid, computed as follows:
wherein int(·) denotes rounding up; v is the size of a voxel cell.
10. The hidden-representation-based large-scale garden scene point cloud surface reconstruction method according to claim 9, wherein collecting a query point P_q at the center of each voxel cell of the garden space voxels V and recording the index I_q of the voxel cell to which each query point belongs in step 3-2 specifically comprises:
(i) q ,j q ,k q ) The central point of each voxel grid is the query pointThe calculation method is as follows:
all the center points are concatenated row-wise to obtain the query point matrix Q ∈ R^((H×W×L)×3), and all the indices are concatenated to obtain the index values I_Q ∈ R^(H×W×L).
CN202310904963.3A 2023-07-24 2023-07-24 Hidden representation-based large-scale cloud curved surface reconstruction method for scenic spots of forest scene Pending CN117058311A (en)


Publications (1)

Publication Number Publication Date
CN117058311A true CN117058311A (en) 2023-11-14


Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117689747A (en) * 2023-11-23 2024-03-12 杭州图科智能信息科技有限公司 Multi-view nerve implicit surface reconstruction method based on point cloud guidance



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination