CN107610221A - Three-dimensional model generation method based on isomorphic model representation - Google Patents

Three-dimensional model generation method based on isomorphic model representation

Info

Publication number
CN107610221A
Authority
CN
China
Prior art keywords
model
vector
dimensional
representation
coordinates
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201710810698.7A
Other languages
Chinese (zh)
Other versions
CN107610221B (en)
Inventor
孙正兴
武蕴杰
宋有成
宋沫飞
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing University
Original Assignee
Nanjing University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing University filed Critical Nanjing University
Priority to CN201710810698.7A priority Critical patent/CN107610221B/en
Publication of CN107610221A publication Critical patent/CN107610221A/en
Application granted granted Critical
Publication of CN107610221B publication Critical patent/CN107610221B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Landscapes

  • Processing Or Creating Images (AREA)

Abstract

The invention discloses a three-dimensional model generation method based on isomorphic model representation, comprising: building a unified structural representation of a model set from the part correspondences among its models; establishing the homogeneous structural representation of each model by subgraph encoding; establishing the part representation of each model with bounding boxes and generalized cylinders; constructing the unified representation of each model from its structural and part representations; training a neural-network-based autoencoder to establish the mapping between the isomorphic representation space and a two-dimensional numerical space; sampling the two-dimensional space and decoding the samples with the autoencoder to obtain model isomorphic representations; reconstructing three-dimensional models from the decoded representations and judging their validity; estimating and visualizing the distribution of the valid space from the validity of the sampled data; and, for a valid two-dimensional point chosen by the user, decoding the corresponding model representation and reconstructing a new three-dimensional model.

Description

Three-dimensional model generation method based on isomorphic model representation
Technical Field
The invention belongs to the technical field of computer graphics, and particularly relates to a three-dimensional model generation method based on isomorphic model representation.
Background
Three-dimensional modeling is an important task in computer graphics and the basis for many follow-up studies and applications. Three-dimensional modeling usually requires user interaction and consumes a lot of time, while many applications demand a certain number of three-dimensional models, so how to construct large-scale sets of three-dimensional models quickly and effectively is an important research direction of three-dimensional modeling.
In fact, three-dimensional modeling has produced a number of related techniques and methods. In the traditional CAD modeling method, a modeler needs to perform a large amount of interaction on the low-level vertices and patches, such as inputting vertex coordinates and moving patches, to complete the modeling process.
Since the traditional method is heavily burdened with interaction, methods that reconstruct a digital model from real objects have emerged, as in document 1: Bao S Y, Savarese S. Semantic structure from motion [C]// Computer Vision and Pattern Recognition (CVPR), 2011 IEEE Conference on. IEEE, 2011: 2025-2032. That method uses a video sequence: camera parameters are estimated from adjacent video frames to reconstruct a three-dimensional point cloud for each frame, and the point clouds reconstructed from multiple frames are then fused to obtain the reconstructed three-dimensional model. The method can generate a three-dimensional model fully automatically without interaction, but it can only reconstruct real objects that already exist, and it is difficult to meet the requirement of constructing large-scale three-dimensional model data.
A new three-dimensional model can also be constructed by editing an existing three-dimensional model, as in document 2: Sorkine O, Alexa M. As-rigid-as-possible surface modeling [C]// Symposium on Geometry Processing. 2007, 4. For an existing three-dimensional model, surface deformation points and fixed points are defined; the positions of the fixed points are kept unchanged while the deformation points are moved, and the positions of all vertices of the model are then recomputed, thereby constructing a new model. This method requires relatively simple interaction to construct a new three-dimensional model by editing an existing one. Although it can construct new models different from the original, the number of new models that can be constructed is small, and the differences between the constructed models are small.
As the size of three-dimensional model sets grows, some researchers have proposed model-set-driven model generation methods, such as document 3: Chaudhuri S, Kalogerakis E, Guibas L, et al. Probabilistic reasoning for assembly-based 3D modeling [C]// ACM Transactions on Graphics (TOG). ACM, 2011, 30(4): 35. The model set is first segmented and analyzed to obtain the component set and the relationships between components; the user then selects different components and combines them to construct a new three-dimensional model. This method can generate models with large differences, but user interaction during modeling is very frequent and constructing one model takes a large amount of time, so it is difficult to generate large numbers of three-dimensional models. In summary, the prior art has three main defects: first, the modeling process takes a long time, making it difficult to construct a model set of a certain size; second, the differences between modeling results are small, so the generated result model set is monotonous; third, a lot of interaction is required, which burdens the user.
Disclosure of Invention
The invention aims to: aiming at the defects of the prior art, the technical problem to be solved by the invention is to provide a method that builds a unified representation of a model set based on an isomorphic model and automatically generates new three-dimensional models through simple interaction, so as to carry out computer three-dimensional modeling.
In order to solve the technical problem, the invention discloses a three-dimensional model generation method based on an isomorphic model, which comprises the following steps:
step 1, isomorphic model representation generation: analyzing the structural relationship of three-dimensional models in a given three-dimensional model set of the same category, performing shape fitting on a single three-dimensional model, and performing isomorphic unified representation on all three-dimensional models in the model set to obtain a unified representation vector of the single three-dimensional model;
step 2, model representation coding: converting the unified expression vector of each three-dimensional model into low-dimensional codes through a fully connected neural network to form a coding space;
step 3, constructing a visual space: on the basis of calculating the low-dimensional code represented by the model, judging the effectiveness of the model corresponding to each data point in the coding space, and visualizing the effectiveness distribution condition of the coding space.
Step 4, interactively generating a three-dimensional model: and (4) clicking and interacting the visual coding space in the step (3) by a user, selecting a data instance in the coding space, decoding according to the selected data instance, and generating a three-dimensional model.
The step 1 comprises the following steps:
Step 1-1, generating a unified structural representation of the model set: the unified structure of the model set is represented as an undirected graph G = {V, E}, where V is the set of all nodes and each node represents one component type; assuming all models of the model set have n types of components in total, V = {v_1, v_2, ..., v_n}, where v_i represents the i-th component type and i ranges from 1 to n; E is the set of all edges and each edge represents an adjacency relationship between two component types; if the model set has m kinds of adjacency in total, E = {e_1, e_2, ..., e_m}, where e_j represents the j-th adjacency and j ranges from 1 to m;
Step 1-2, generating a single three-dimensional model structure representation: a vector l of length n + m is used as the structural representation of a single three-dimensional model; the existence and adjacency of the components of the single three-dimensional model are extracted and represented as an undirected graph g = {v, e}; according to the undirected graph G = {V, E}, it is judged whether each member of V and each member of E belongs to g: if the i-th member of V belongs to g, the i-th component l_i of the vector l is set to 1, otherwise l_i is 0; if the j-th member of E belongs to g, the (j+n)-th component l_{j+n} of the vector l is set to 1, otherwise l_{j+n} is 0;
step 1-3, fitting the shape of a single model: for a single three-dimensional model, consider each part A it contains, its shape fitting vector is Geo A
Geo_A = [Box_A, Gc_A],
where Box_A is the bounding box parameter of A and Gc_A is the generalized cylinder parameter of A. The bounding box parameter Box_A is as follows:
Box_A = [c_A, l_1, l_2, l_3, d_1, d_2, d_3],
where c_A is the bounding box center coordinate, l_1, l_2, l_3 are the lengths of the three principal axes of the bounding box from largest to smallest, and d_1, d_2, d_3 are the direction vectors of the three principal axes from largest to smallest length. The generalized cylinder is a three-dimensional body of arbitrary shape fitted by a deformed cylinder whose axis and radius adapt to the part; points are sampled on the surface of the generalized cylinder, and the coordinate set {P} of all sampling points is used as the parameter of the generalized cylinder. To compute the generalized cylinder of part A, the skeleton line of the part is first extracted. The skeleton line is a thin curve consistent with the connectivity and topological structure of the original part shape. After the skeleton line is extracted, it is sampled uniformly to obtain skeleton line sampling points. Then, for every two adjacent skeleton line sampling points Ske_1, Ske_2, the normal direction of the normal plane at the sampling point Ske_2, taken along the skeleton segment as (Sx_1 - Sx_2, Sy_1 - Sy_2, Sz_1 - Sz_2), and the intersection points of this normal plane with all triangular patches are computed,
where (Sx_1, Sy_1, Sz_1) and (Sx_2, Sy_2, Sz_2) are the three-dimensional coordinates of Ske_1 and Ske_2, respectively;
the normal plane equation is obtained from the normal direction and the sampling point Ske_2 as:
(Sx_1 - Sx_2)*(x - Sx_2) + (Sy_1 - Sy_2)*(y - Sy_2) + (Sz_1 - Sz_2)*(z - Sz_2) = 0.
For a triangular patch f_i, the line equations of its three edges are computed; for example, the edge connecting vertices (x_1, y_1, z_1) and (x_2, y_2, z_2) can be written in parametric form as (x, y, z) = (x_1, y_1, z_1) + t*(x_2 - x_1, y_2 - y_1, z_2 - z_1), and similarly for the other two edges,
where (x_1, y_1, z_1), (x_2, y_2, z_2), (x_3, y_3, z_3) are the three-dimensional coordinates of the three vertices of the triangular patch f_i.
The normal plane equation and any one of the edge line equations are solved jointly; assume the solution is (X, Y, Z), taking the first edge as an example:
if the equations have no solution, the normal plane is parallel to the line, meaning that there is no intersection point on the first edge of f_i;
if the equations have a solution, the following conditions are checked:
(X - x_1)*(X - x_2) < 0,
(Y - y_1)*(Y - y_2) < 0,
(Z - z_1)*(Z - z_2) < 0;
if all three inequalities are satisfied, the intersection point (X, Y, Z) of the normal plane and the line is judged to lie within the range of the first edge of f_i and is kept as an intersection point; otherwise (X, Y, Z) lies outside the range of the first edge of f_i and is judged not to be an intersection point.
All intersection points (X, Y, Z) judged to lie within the range of some edge of a triangular patch are collected as the generalized cylinder sampling points of part A, namely:
Gc_A = {X_i, Y_i, Z_i},
where (X_i, Y_i, Z_i) denotes the coordinates of the i-th intersection point judged to lie within the range of an edge of some triangular patch, and Gc_A is the set of coordinates of all eligible intersection points.
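The intersection test of step 1-3 can be sketched as follows in Python with numpy; the helper name, the data layout of the triangles and the tolerance are illustrative assumptions:

import numpy as np

def normal_plane_edge_intersections(ske1, ske2, triangles, eps=1e-9):
    # Intersect the normal plane at Ske_2 (normal along Ske_1 - Ske_2) with all triangle edges.
    n = np.asarray(ske1, float) - np.asarray(ske2, float)     # plane normal (Sx1-Sx2, Sy1-Sy2, Sz1-Sz2)
    p0 = np.asarray(ske2, float)                              # point on the plane
    points = []
    for tri in triangles:                                     # tri: (3, 3) array of vertex coordinates
        v = np.asarray(tri, float)
        for a, b in ((0, 1), (1, 2), (2, 0)):                 # the three edges of the patch
            d = v[b] - v[a]
            denom = np.dot(n, d)
            if abs(denom) < eps:                              # edge parallel to the normal plane: no intersection
                continue
            t = np.dot(n, p0 - v[a]) / denom                  # plane/line intersection parameter
            if 0.0 < t < 1.0:                                 # strictly inside the edge, as in the sign tests above
                points.append(v[a] + t * d)
    return np.array(points)                                   # contributes the sampling points of Gc_A for this slice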
Step 1-4, generating the unified representation vector: the unified representation vector L of the single model is constructed from the single three-dimensional model structural representation g of step 1-2 and the single three-dimensional model shape fit of step 1-3. The single three-dimensional model structural representation g is first added to the vector L; then, according to the undirected graph G = {V, E} of step 1-1, for each member v_i of V it is judged whether v_i belongs to g: if it belongs, the shape fitting vector Geo_A of the part A corresponding to v_i is added to the vector L, otherwise an all-0 vector of the same length is added to the vector L. Next, for any two adjacent parts A and B of the single three-dimensional model, the local coordinates Con_{A,B} of their connection point are computed:
Con_{A,B} = [Con_{A,B}^A, Con_{A,B}^B],
where Con_{A,B} is the coordinate vector of the connection point of parts A and B in their respective local coordinate systems, Con_{A,B}^A are the coordinates of the connection point in the local coordinate system of part A, and Con_{A,B}^B are the coordinates of the connection point in the local coordinate system of part B;
for each member e_j of E it is judged whether e_j belongs to g: if it belongs, the local connection point coordinates Con_{A,B} corresponding to e_j are added to the unified representation vector L, otherwise an all-0 vector of the same length is added to the unified representation vector L;
step 1-5, analyzing the symmetry relation of model parts: judging symmetry of any two parts A and B in the model set according to the shape fitting vectors, and calculating the difference degree D (A and B) of the shape fitting vectors:
D(A, B) = ‖G_A - G_B‖_2 / max(‖G_A‖_2, ‖G_B‖_2),
where G_A and G_B are the shape fitting vector of part A and the shape fitting vector of part B; if D(A, B) < 20%, the two parts are judged to be symmetric, and the result is represented by a matrix S: if component type A is symmetric to component type B, the element S(A, B) corresponding to A and B in the matrix S is 1, otherwise S(A, B) = 0.
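A small numpy sketch of this symmetry test; the 20% threshold follows the text, and the dictionary of shape fitting vectors is an assumed input format:

import numpy as np

def symmetry_matrix(geo_vectors, threshold=0.2):
    # geo_vectors: dict mapping component type -> shape fitting vector Geo
    types = sorted(geo_vectors)
    S = {(a, b): 0 for a in types for b in types}
    for a in types:
        for b in types:
            Ga = np.asarray(geo_vectors[a], float)
            Gb = np.asarray(geo_vectors[b], float)
            denom = max(np.linalg.norm(Ga), np.linalg.norm(Gb))
            d = np.linalg.norm(Ga - Gb) / denom if denom > 0 else 0.0   # difference degree D(A, B)
            if d < threshold:                                           # D(A, B) < 20% -> symmetric
                S[(a, b)] = 1
    return S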
In steps 1-4, Con_{A,B}^A and Con_{A,B}^B are computed from the bounding box parameters of parts A and B by expressing the global connection point in the local coordinate frame of each part,
where c_A, l_1^A, l_2^A, l_3^A, d_1^A, d_2^A, d_3^A are the bounding box parameters of part A, c_B, l_1^B, l_2^B, l_3^B, d_1^B, d_2^B, d_3^B are the bounding box parameters of part B, both calculated by the method of steps 1-3, and C is the coordinate of the connection point of parts A and B in the global coordinate system.
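The exact formula is not reproduced in this text, so the following numpy sketch is only one plausible realization, under the assumption that the connection point is projected onto the bounding box axes and normalized by the corresponding axis lengths:

import numpy as np

def local_connection_coords(C, box):
    # box: dict with center "c" (3,), axis lengths "l" (3,), axis directions "d" (3, 3) with rows d1, d2, d3
    offset = np.asarray(C, float) - np.asarray(box["c"], float)
    d = np.asarray(box["d"], float)
    l = np.asarray(box["l"], float)
    return (d @ offset) / l        # assumed form of Con^A: projection onto A's axes, scaled by axis lengths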
The step 2 comprises the following steps:
Step 2-1, constructing the neural network model: a neural network model with a depth of 5 is constructed, comprising an input layer, a first hidden layer, a second hidden layer, a third hidden layer and an output layer, with every two adjacent layers connected in a fully connected manner; let the length of the unified representation vector L obtained in step 1-4 be s; then the number of input-layer fully connected neurons is s, the number of fully connected neurons of hidden layer one is 1.2*s, of hidden layer two 0.5*s, of hidden layer three 0.25*s, the number of output-layer fully connected neurons is 2, and the number of offset neurons of each layer in the neural network model is 1;
Step 2-2, training the neural network model by back propagation and stochastic gradient descent: the parameters to be trained for each hidden layer are W_i and b_i; in the forward pass the input is a vector x and the output is the vector y = x*W_i + b_i; in back propagation the input is the vector y and the output is the reconstruction x̂ = (y - b_i)*W_i^T; during training the three hidden layers are trained one at a time, and the loss function is defined as the reconstruction error E = ‖x - x̂‖^2, where x̂ is the reconstructed vector computed from the input vector x through the backward pass; for each hidden layer, stochastic gradient descent is used to minimize E and obtain the parameters W_i and b_i;
Step 2-3, the representation vector is encoded with the neural network model trained in step 2-2, and the encoding result is a two-dimensional vector; let the representation vector be L, then through the three hidden layers it is encoded as l = ((L*W_1 + b_1)*W_2 + b_2)*W_3 + b_3, where W_1, W_2, W_3 are the weight matrices of the first, second and third hidden layers respectively, b_1, b_2, b_3 are the offsets of the first, second and third hidden layers respectively, and l is the final output two-dimensional code.
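A compact numpy sketch of the encoder and decoder of steps 2-2 and 2-3; the layer sizes follow step 2-1, the tied-weight decoding follows the formulas of steps 2-2 and 3-2, while the weight initialization is an illustrative assumption and the stochastic-gradient-descent training loop is omitted:

import numpy as np

def build_layers(s, rng=np.random.default_rng(0)):
    sizes = [s, int(1.2 * s), int(0.5 * s), int(0.25 * s), 2]   # input, hidden layers 1-3, output
    Ws = [rng.normal(0.0, 0.01, (sizes[i], sizes[i + 1])) for i in range(len(sizes) - 1)]
    bs = [np.zeros(sizes[i + 1]) for i in range(len(sizes) - 1)]
    return Ws, bs

def encode(L, Ws, bs):
    x = np.asarray(L, float)
    for W, b in zip(Ws, bs):
        x = x @ W + b                  # forward pass y = x*W_i + b_i
    return x                           # two-dimensional code l

def decode(code, Ws, bs):
    x = np.asarray(code, float)
    for W, b in zip(reversed(Ws), reversed(bs)):
        x = (x - b) @ W.T              # backward pass x = (y - b_i)*W_i^T
    return x                           # reconstructed representation vector X'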
The step 3 comprises the following steps:
Step 3-1: the range of the two-dimensional codes output in step 2-3 is multiplied by a scaling coefficient, generally set between 1.2 and 1.5, to obtain a two-dimensional numerical space; the two-dimensional numerical space is sampled uniformly, and the sampled data is denoted l';
Step 3-2: the sampled data is decoded with the trained neural network model, and the decoding result is X' = (((l' - b_3)*W_3^T - b_2)*W_2^T - b_1)*W_1^T;
Step 3-3: a mesh model is reconstructed from the decoding result of step 3-2: from the decoding result X' it is determined whether the shape fitting vector Geo of each part is all 0 and whether the connection point coordinates Con of every two parts are all 0, which gives the existence and connection conditions of the parts of the model; the mesh of each existing part is reconstructed from its shape fitting vector, and the meshes of the parts are combined to obtain the reconstructed model;
Step 3-4: the validity of the reconstructed model of step 3-3 is judged; the validity criterion is divided into two parts: if any two parts of the reconstructed model have a connection relation in step 1-1 but are not connected in the reconstructed model, connectivity is judged invalid; if any two parts of the reconstructed model are symmetric according to step 1-5 but are not symmetric in the reconstructed model, symmetry is judged invalid; if either criterion fails, the sampled data of the reconstructed model is not valid, otherwise the sampled data of the reconstructed model is valid;
Step 3-5: constructing the visualization space: to visualize the validity of the sampled data, a visualization space is constructed; first a two-dimensional visualization plane is constructed and all pixels on the plane are set to white; then, for the two-dimensional data points judged valid in step 3-4, the pixels at the corresponding coordinates on the two-dimensional plane are set to a light color, and for the data points judged invalid, the pixels at the corresponding coordinates on the two-dimensional plane are set to a dark color.
Step 4 comprises the following steps:
step 4-1, when a user clicks a visual space, acquiring two-dimensional data corresponding to a user click coordinate;
step 4-2, inputting the two-dimensional data obtained in the step 4-1 into a self-encoder for decoding to obtain a model expression vector;
Step 4-3, the mesh model is reconstructed from the model representation vector obtained in step 4-2.
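A short sketch of the interaction of step 4, assuming a click handler that receives pixel coordinates and reuses the decoding and reconstruction routines above; pixel_to_code, decode_fn and reconstruct_fn are hypothetical names:

def on_click(px, py, pixel_to_code, decode_fn, reconstruct_fn):
    code = pixel_to_code(px, py)       # step 4-1: clicked pixel -> two-dimensional data
    X = decode_fn(code)                # step 4-2: decode to a model representation vector
    return reconstruct_fn(X)           # step 4-3: reconstruct the mesh model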
Based on an existing model set, the method analyzes the component composition, connection relations and component shapes of the model set; after constructing the unified representation of all models, it encodes the representations with a neural network, samples the coding space, and judges the validity of the sampled data. On the basis of the validity judgment, the validity distribution of the coding space is visualized, and, according to the user's click interaction in the visualization space, the corresponding two-dimensional data is decoded and a model is reconstructed, generating a new three-dimensional model.
Beneficial effects: the invention has the following advantages: first, the invention allows the user to complete the three-dimensional modeling process with only simple mouse click operations; second, the method can automatically ensure the validity of the modeling result; third, the technology of the invention realizes visualization of the model set and allows a user to synthesize a new model through a simple click operation.
Drawings
The above and other advantages of the present invention will become more apparent from the following detailed description of the invention when taken in conjunction with the accompanying drawings.
FIG. 1 is a schematic process flow diagram of the present invention.
FIG. 2 is a schematic diagram of an isomorphic model representation.
FIG. 3a is a rendering of a single model.
FIG. 3b is a schematic diagram of shape fitting of a single model.
FIG. 4 is a schematic diagram of the mapping of an isomorphic representation to two dimensions.
FIG. 5a is a schematic view of the effective space after the effectiveness test.
FIG. 5b is a new model generated from user interaction.
Detailed Description
The invention is further explained by the following embodiments in conjunction with the drawings.
As shown in fig. 1, the invention discloses a method for generating a three-dimensional model based on isomorphic model representation, which specifically comprises the following steps:
the method comprises the following steps: constructing an isomorphic model representation: and carrying out unified isomorphic structure representation on the three-dimensional models with structural difference and shape difference by analyzing the structural relationship and shape fitting of the models in the model set.
Step two, representing encoding: on the basis of unified model representation, high-dimensional model representation data are converted into low-dimensional codes through a full-connection neural network.
Step three, constructing a visual space: on the basis of calculating the low-dimensional code represented by the model, judging the effectiveness of the model corresponding to each data point in the coding space, and visualizing the effectiveness distribution condition of the coding space.
Step four, generating a three-dimensional model interactively: and clicking and interacting the visual coding space in the third step by the user, selecting the data instance in the coding space, and decoding according to the selected data instance to generate the three-dimensional model.
The main flow of each step is specifically described as follows:
Step one comprises the following steps:
Step 11, generating the unified structural representation of the model set: the unified structural representation of the model set is extracted from the component construction information and component correspondence information of all models in the model set. The unified structure of the model set is represented as an undirected graph G = {V, E}, where V is the set of all nodes; assuming all models in the model set have n types of components, V = {v_1, v_2, ..., v_n} (v_i indicating the i-th component type). E is the set of all edges; assuming there are m kinds of adjacency between two different components in the model set, E = {e_1, e_2, ..., e_m} (e_j indicating the j-th adjacency).
Step 12, generating a single three-dimensional model structure representation: the existence and adjacency of the components of the single model are extracted and represented as an undirected graph g = {v, e}. Then, for the undirected graph G = {V, E} extracted in step 11, it is determined whether each member of V and each member of E belongs to g. A vector l of length n + m is used as the single three-dimensional model structure representation: if the i-th member of V belongs to g, l_i is 1, otherwise l_i is 0; if the j-th member of E belongs to g, l_{j+n} is 1, otherwise l_{j+n} is 0.
Step 13, fitting the shape of the single three-dimensional model: for each individual three-dimensional model, the bounding box and generalized cylinder of each part are computed.
The bounding box of each component is represented by the bounding box center c, the three principal axis lengths l_1, l_2, l_3 of the bounding box, and the three principal axis directions d_1, d_2, d_3 of the bounding box. The present application uses the method of document 4: Tagliasacchi A, Alhashim I, Olson M, et al. Mean curvature skeletons [C]// Computer Graphics Forum. Blackwell Publishing Ltd, 2012, 31(5): 1735-1744, to extract the skeleton line; points are sampled on the skeleton line, and the slice at each sampling point is computed. The generalized cylinder is represented by the set P of its surface slice points. The connection point of two adjacent parts is represented by its coordinates (x_1, y_1, z_1) and (x_2, y_2, z_2) in the respective local coordinate systems.
Step 14, generating the unified representation vector: from the single three-dimensional model structural representation g and the single three-dimensional model shape fit of steps 12 and 13, the unified representation vector L of the single three-dimensional model can be constructed. The single three-dimensional model structural representation g is first added to the vector L. According to the undirected graph G = {V, E} of step 11, for each member v_i of V it is judged whether v_i belongs to g; if so, the shape fit of the part corresponding to v_i is added to the vector L, otherwise an all-0 vector of the same length is added to the vector L. For each member e_j of E it is judged whether e_j belongs to g; if so, the connection point representation corresponding to e_j is added to the vector L, otherwise an all-0 vector of the same length is added to the vector L.
Step 15, analyzing the symmetry relations of the model parts: any two component types in the model set are judged for symmetry from their shape fitting vectors; if the difference degree of the shape fitting vectors is not more than 20%, they are judged symmetric, which is represented by a matrix S. If component type A is symmetric to component type B, S(A, B) = 1, otherwise S(A, B) = 0.
The second step comprises the following steps:
Step 21, constructing the neural network model: a neural network model with a depth of 5 is constructed, and every two adjacent layers of the network are connected in a fully connected manner. Let the length of the vector L of step 14 be s; then the number of input-layer fully connected neurons is s, the number of fully connected neurons of hidden layer one is 1.2*s, of hidden layer two 0.5*s, of hidden layer three 0.25*s, and the number of output-layer fully connected neurons is 2. The number of offset neurons per layer is 1.
Step 22, training the neural network model by back propagation and stochastic gradient descent. The present application uses the method of document 5: Hinton G E, Salakhutdinov R R. Reducing the dimensionality of data with neural networks [J]. Science, 2006, 313(5786): 504-507. The parameters to be trained for each hidden layer are W_i and b_i; the input is a vector x and the output is the vector y = x*W_i + b_i. In back propagation the input is the vector y and the output is the reconstruction x̂ = (y - b_i)*W_i^T. During training the layers are trained one at a time, and the loss function is defined as the reconstruction error E = ‖x - x̂‖^2. For each hidden layer, stochastic gradient descent is used to minimize E and obtain the parameters W_i and b_i.
Step 23, the representation vector is encoded with the trained neural network, and the encoding result is a two-dimensional vector. Let the vector be X; the final code is l = ((X*W_1 + b_1)*W_2 + b_2)*W_3 + b_3.
The third step comprises the following steps:
Step 31: sampling the data: the two-dimensional numerical space is sampled uniformly.
Step 32: decoding the sampled data: decoding is performed with the trained autoencoder; if the sampled data is l', the decoding result is X' = (((l' - b_3)*W_3^T - b_2)*W_2^T - b_1)*W_1^T.
Step 33: model reconstruction: the model representation vector obtained by decoding in step 32 is used to reconstruct the mesh model. During reconstruction, the existence and connection conditions of the model's components are obtained from the composition of the representation; a mesh for each part is then reconstructed from the shape fit of the individual parts, and the meshes of the components are combined to obtain the reconstructed model.
Step 34: judging the validity of the reconstructed model of step 33. The present application uses the criterion of document 6: Averkiou M, Kim V G, Zheng Y, et al. ShapeSynth: Parameterizing model collections for coupled shape exploration and synthesis [C]// Computer Graphics Forum. 2014, 33(2): 125-134. The criterion is divided into two parts: a connectivity criterion and a symmetry criterion. If two component types in the reconstructed model are connected according to step 11 but are not connected in the reconstructed model, connectivity is judged invalid. If two component types in the reconstructed model are symmetric according to step 15 but are not symmetric in the reconstructed model, symmetry is judged invalid. If either criterion fails, the reconstructed model is not valid; otherwise the reconstructed model is valid.
Step 35: visualizing the validity distribution: the validity distribution over the two-dimensional numerical space is visualized according to the validity of all sampled data obtained in step 34. For sampled data that is valid, the corresponding pixels in the visualization space are set to a light color; for sampled data that is not valid, the corresponding pixels in the visualization space are set to a dark color.
The fourth step comprises the following steps:
Step 41, the corresponding two-dimensional data is acquired according to the user's click operation in the visualization space.
Step 42: the two-dimensional data acquired in step 41 is input into the autoencoder for decoding to obtain a model representation vector.
Step 43: the mesh model is reconstructed from the model representation vector obtained in step 42.
Examples
In this embodiment, three-dimensional models of 13 chairs are used as the system's input model library. The 13 chairs are given a unified isomorphic representation and their respective representation vectors are computed; the representation vectors are encoded to two dimensions through a fully connected neural network with three hidden layers; the two-dimensional coding space is sampled and validity is detected; the valid space is visualized and rendered; and a new model is generated according to the user's click operation.
The specific implementation process is as follows:
in step one, a user imports a model set, and then the unified architectural diagram generation in step 11 is executed. First, the structure diagram of each model is extracted, as shown in the second column from left to right in fig. 2. And then, extracting the unified structure diagram, wherein the extracted unified structure diagram is shown in the third column from left to right in fig. 2. After the unified structure diagram is extracted, the structure of each model is isomorphic, and the isomorphic structure diagram of each model is the fourth column from left to right in fig. 2. The execution step then performs a single three-dimensional model shape fit in step 13, fitting each model using bounding boxes and generalized cylinders. The individual models and shape fitting results are shown in fig. 3a and 3 b.
In the second step, a fully connected neural network is trained on the model representation vectors obtained in step one, and the representation vector of each model is mapped through the neural network to obtain its two-dimensional code; FIG. 4 is a schematic diagram of the distribution of the two-dimensional code vectors of the 13 chair models in the space.
In the third step, the maximum and minimum values of the X and Y coordinates are obtained from all the two-dimensional coding results {x, y} of the second step and denoted x_max, x_min, y_max, y_min. The range of the two-dimensional coding space is then set to {x, y | 1.2*x_min - x_max < x < 1.2*x_max - x_min, 1.2*y_min - y_max < y < 1.2*y_max - y_min}. The two-dimensional coding space is mapped to a region of resolution 500 x 500 on the screen, and the two-dimensional code corresponding to each pixel in the region is computed as uniformly sampled data. All the uniform sample points are decoded through the trained neural network to obtain model representation vectors; models are reconstructed from the model representation vectors, and the validity of the reconstructed models is judged. The validity criteria are connection validity and symmetry validity. If the validity criteria are satisfied, the corresponding pixels are set to a dark color, otherwise to a light color, and all pixels are visualized. FIG. 5a is a rendering of the valid space, with dark areas being valid regions and light areas being invalid regions.
In step four, the user clicks any point in the visualization space, and the white point in fig. 5a is the user click point. And solving the corresponding two-dimensional code according to the click position of the user, and decoding a model expression vector through a neural network. And reconstructing a model according to the model representation vector and displaying the model. The reconstruction model results are shown in fig. 5 b.
The present invention provides a method for generating a three-dimensional model based on isomorphic model representation, and a plurality of methods and approaches for implementing the technical solution, and the above description is only a preferred embodiment of the present invention, and it should be noted that, for those skilled in the art, a plurality of improvements and modifications can be made without departing from the principle of the present invention, and these improvements and modifications should also be regarded as the protection scope of the present invention. All the components not specified in the present embodiment can be realized by the prior art.

Claims (6)

1. A three-dimensional model generation method based on isomorphic model representation is characterized by comprising the following steps:
step 1, isomorphic model representation generation: giving a three-dimensional model set of the same category, and performing isomorphic unified representation on all three-dimensional models in the model set to obtain a unified representation vector of a single three-dimensional model by analyzing the structural relationship of the three-dimensional models in the set and performing shape fitting on the single three-dimensional model;
step 2, model representation coding: converting the unified expression vector of each three-dimensional model into low-dimensional codes through a fully connected neural network to form a coding space;
step 3, constructing a visual space: judging the effectiveness of each data point in the coding space corresponding to the three-dimensional model, and visualizing the effectiveness distribution condition of the coding space;
step 4, interactively generating a three-dimensional model: and selecting a data example in the coding space, and decoding according to the selected data example to generate a three-dimensional model.
2. The method of claim 1, wherein step 1 comprises the steps of:
step 1-1, generating a unified structural representation of the model set: the unified structure of the model set is represented as an undirected graph G = {V, E}, where V is the set of all nodes and each node represents one component type; assuming all models of the model set have n types of components in common, V = {v_1, v_2, ..., v_n}, where v_i represents the i-th component type and i ranges from 1 to n; E is the set of all edges and each edge represents an adjacency relationship between two component types; if the model set has m kinds of adjacency in total, E = {e_1, e_2, ..., e_m}, where e_j represents the j-th adjacency and j ranges from 1 to m;
step 1-2, generating a single three-dimensional model structure representation: a vector l of length n + m is used as the structural representation of the single three-dimensional model; the existence and adjacency of the components of the single three-dimensional model are extracted and represented as an undirected graph g = {v, e}; according to the undirected graph G = {V, E}, it is judged whether each member of V and each member of E belongs to g: if the i-th member of V belongs to g, the i-th component l_i of the vector l is set to 1, otherwise l_i is 0; if the j-th member of E belongs to g, the (j+n)-th component l_{j+n} of the vector l is set to 1, otherwise l_{j+n} is 0;
step 1-3, fitting the shape of the single three-dimensional model: for the single three-dimensional model, consider each part A it contains; its shape fitting vector is Geo_A:
Geo_A = [Box_A, Gc_A],
where Box_A is the bounding box parameter of A and Gc_A is the generalized cylinder parameter of A; the bounding box parameter Box_A is as follows:
Box_A = [c_A, l_1, l_2, l_3, d_1, d_2, d_3],
where c_A is the bounding box center coordinate, l_1, l_2, l_3 are the lengths of the three principal axes of the bounding box from largest to smallest, and d_1, d_2, d_3 are the direction vectors of the three principal axes from largest to smallest length; to compute the generalized cylinder of part A, the skeleton line of the part is first extracted, the skeleton line being a thin curve consistent with the connectivity and topological structure of the original part shape; after the skeleton line is extracted, it is sampled uniformly to obtain skeleton line sampling points; then, for every two adjacent skeleton line sampling points Ske_1, Ske_2, the normal direction of the normal plane at the sampling point Ske_2, taken along the skeleton segment as (Sx_1 - Sx_2, Sy_1 - Sy_2, Sz_1 - Sz_2), and the intersection points of this normal plane with all triangular patches are computed,
where (Sx_1, Sy_1, Sz_1) and (Sx_2, Sy_2, Sz_2) are the three-dimensional coordinates of Ske_1 and Ske_2, respectively;
the normal plane equation is obtained from the normal direction and the sampling point Ske_2 as:
(Sx_1 - Sx_2)*(x - Sx_2) + (Sy_1 - Sy_2)*(y - Sy_2) + (Sz_1 - Sz_2)*(z - Sz_2) = 0; for a triangular patch f_i, the line equations of its three edges are computed; for example, the edge connecting vertices (x_1, y_1, z_1) and (x_2, y_2, z_2) can be written in parametric form as (x, y, z) = (x_1, y_1, z_1) + t*(x_2 - x_1, y_2 - y_1, z_2 - z_1), and similarly for the other two edges,
where (x_1, y_1, z_1), (x_2, y_2, z_2), (x_3, y_3, z_3) are the three-dimensional coordinates of the three vertices of the triangular patch f_i;
the normal plane equation and any one of the edge line equations are solved jointly; assume the solution is (X, Y, Z), taking the first edge as an example:
if the equations have no solution, the normal plane is parallel to the line, meaning that there is no intersection point on the first edge of f_i;
if the equations have a solution, the following conditions are checked:
(X - x_1)*(X - x_2) < 0,
(Y - y_1)*(Y - y_2) < 0,
(Z - z_1)*(Z - z_2) < 0;
if all three inequalities are satisfied, the intersection point (X, Y, Z) of the normal plane and the line is judged to lie within the range of the first edge of f_i and is kept as an intersection point; otherwise (X, Y, Z) lies outside the range of the first edge of f_i and is judged not to be an intersection point;
all intersection points (X, Y, Z) judged to lie within the range of some edge of a triangular patch are collected as the generalized cylinder sampling points of part A, namely:
Gc_A = {X_i, Y_i, Z_i},
where (X_i, Y_i, Z_i) denotes the coordinates of an intersection point judged to lie within the range of an edge of some triangular patch, and Gc_A is the set of coordinates of all eligible intersection points;
step 1-4, generating the unified representation vector: the unified representation vector L of the single model is constructed from the single three-dimensional model structural representation g of step 1-2 and the single three-dimensional model shape fit of step 1-3: the single three-dimensional model structural representation g is first added to the vector L; according to the undirected graph G = {V, E} of step 1-1, for each member v_i of V it is judged whether v_i belongs to g: if it belongs, the shape fitting vector Geo_A of the part A corresponding to v_i is added to the vector L, otherwise an all-0 vector of the same length is added to the vector L; then, for any two adjacent parts A and B of the single three-dimensional model, the local coordinates Con_{A,B} of their connection point are computed:
Con_{A,B} = [Con_{A,B}^A, Con_{A,B}^B],
where Con_{A,B} is the coordinate vector of the connection point of parts A and B in their respective local coordinate systems, Con_{A,B}^A are the coordinates of the connection point in the local coordinate system of part A, and Con_{A,B}^B are the coordinates of the connection point in the local coordinate system of part B;
for each member e_j of E it is judged whether e_j belongs to g: if it belongs, the local connection point coordinates Con_{A,B} corresponding to e_j are added to the unified representation vector L, otherwise an all-0 vector of the same length is added to the unified representation vector L;
step 1-5, analyzing the symmetry relations of model parts: for any two parts A and B in the model set, symmetry is judged from the shape fitting vectors by computing the difference degree D(A, B) of the shape fitting vectors:
D(A, B) = ‖G_A - G_B‖_2 / max(‖G_A‖_2, ‖G_B‖_2),
where G_A and G_B are the shape fitting vector of part A and the shape fitting vector of part B; if D(A, B) < 20%, the two parts are judged to be symmetric, and the result is represented by a matrix S: if component type A is symmetric to component type B, the element S(A, B) corresponding to A and B in the matrix S is 1, otherwise S(A, B) = 0.
3. The method according to claim 2, characterized in that in step 1-4, Con_{A,B}^A and Con_{A,B}^B are respectively computed from the bounding box parameters of parts A and B by expressing the connection point in the local coordinate frame of each part,
where c_A, l_1^A, l_2^A, l_3^A, d_1^A, d_2^A, d_3^A are the bounding box parameters of part A, c_B, l_1^B, l_2^B, l_3^B, d_1^B, d_2^B, d_3^B are the bounding box parameters of part B, and C is the coordinate of the connection point of parts A and B in the global coordinate system.
4. A method according to claim 3, characterized in that step 2 comprises the steps of:
step 2-1, constructing the neural network model: a neural network model with a depth of 5 is constructed, comprising an input layer, a first hidden layer, a second hidden layer, a third hidden layer and an output layer, with every two adjacent layers connected in a fully connected manner; let the length of the unified representation vector L obtained in step 1-4 be s; then the number of input-layer fully connected neurons is s, the number of fully connected neurons of hidden layer one is 1.2*s, of hidden layer two 0.5*s, of hidden layer three 0.25*s, the number of output-layer fully connected neurons is 2, and the number of offset neurons of each layer in the neural network model is 1;
step 2-2, training the neural network model by back propagation and stochastic gradient descent: the parameters to be trained for each hidden layer are W_i and b_i; in the forward pass the input is a vector x and the output is the vector y = x*W_i + b_i; in back propagation the input is the vector y and the output is the reconstruction x̂ = (y - b_i)*W_i^T; during training the three hidden layers are trained one at a time, and the loss function is defined as the reconstruction error E = ‖x - x̂‖^2, where x̂ is the reconstructed vector computed from the input vector x through the backward pass; for each hidden layer, stochastic gradient descent is used to minimize E and obtain the parameters W_i and b_i;
step 2-3, the representation vector is encoded with the neural network model trained in step 2-2, and the encoding result is a two-dimensional vector; let the representation vector be L, then through the three hidden layers it is encoded as l = ((L*W_1 + b_1)*W_2 + b_2)*W_3 + b_3, where W_1, W_2, W_3 are the weight coefficients of the first, second and third hidden layers respectively, b_1, b_2, b_3 are the offsets of the first, second and third hidden layers respectively, and l is the final output two-dimensional code.
5. The method of claim 4, wherein step 3 comprises the steps of:
step 3-1: the range of the two-dimensional codes output in step 2-3 is multiplied by a scaling coefficient to obtain a two-dimensional numerical sampling space; the two-dimensional numerical sampling space is sampled uniformly, and the sampled data is denoted l';
step 3-2: the sampled data is decoded with the trained neural network model, and the decoding result is X' = (((l' - b_3)*W_3^T - b_2)*W_2^T - b_1)*W_1^T;
step 3-3: a mesh model is reconstructed from the decoding result of step 3-2: from the decoding result X' it is determined whether the shape fitting vector Geo of each part is all 0 and whether the connection point coordinates Con of every two parts are all 0, which gives the existence and connection conditions of the parts of the model; the mesh of each existing part is reconstructed from its shape fitting vector, and the meshes of the parts are combined to obtain the reconstructed model;
step 3-4: the validity of the reconstructed model of step 3-3 is judged; the validity criterion is divided into two parts: if any two types of components of the reconstructed model have a connection relation in step 1-1 but are not connected in the reconstructed model, connectivity is judged invalid; if any two types of components of the reconstructed model are symmetric according to step 1-5 but are not symmetric in the reconstructed model, symmetry is judged invalid; if either criterion fails, the sampled data of the reconstructed model is not valid, otherwise the sampled data of the reconstructed model is valid;
step 3-5: constructing the visualization space: a two-dimensional visualization plane is constructed and all pixels on the plane are set to white; for the two-dimensional data points judged valid in step 3-4, the pixels at the corresponding coordinates on the two-dimensional plane are set to a light color, and for the data points judged invalid, the pixels at the corresponding coordinates on the two-dimensional plane are set to a dark color.
6. The method of claim 5, wherein step 4 comprises the steps of:
step 4-1, when the user clicks in the visualization space, the two-dimensional data corresponding to the clicked coordinate is acquired;
step 4-2, the two-dimensional data obtained in step 4-1 is input into the autoencoder for decoding to obtain a model representation vector;
step 4-3, the mesh model is reconstructed from the model representation vector obtained in step 4-2.
CN201710810698.7A 2017-09-11 2017-09-11 Three-dimensional model generation method based on isomorphic model representation Active CN107610221B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710810698.7A CN107610221B (en) 2017-09-11 2017-09-11 Three-dimensional model generation method based on isomorphic model representation

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710810698.7A CN107610221B (en) 2017-09-11 2017-09-11 Three-dimensional model generation method based on isomorphic model representation

Publications (2)

Publication Number Publication Date
CN107610221A true CN107610221A (en) 2018-01-19
CN107610221B CN107610221B (en) 2020-06-05

Family

ID=61062493

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710810698.7A Active CN107610221B (en) 2017-09-11 2017-09-11 Three-dimensional model generation method based on isomorphic model representation

Country Status (1)

Country Link
CN (1) CN107610221B (en)



Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101976359A (en) * 2010-09-26 2011-02-16 浙江大学 Method for automatically positioning characteristic points of three-dimensional face
CN103236058A (en) * 2013-04-25 2013-08-07 内蒙古科技大学 Method for obtaining volume of interest of four-dimensional heart image

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
MAN ZHANG et al.: "Perception-based shape retrieval for 3D building models", ISPRS Journal of Photogrammetry and Remote Sensing *
SONG Mofei (宋沫飞): "Research on interactive digital geometry modeling and its key technologies" (交互式数字几何建模及其关键技术研究), China Doctoral Dissertations Full-text Database, Information Science and Technology Series *

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109389671A (en) * 2018-09-25 2019-02-26 南京大学 A kind of single image three-dimensional rebuilding method based on multistage neural network
CN109389671B (en) * 2018-09-25 2020-09-22 南京大学 Single-image three-dimensional reconstruction method based on multi-stage neural network
CN110059073A (en) * 2019-03-18 2019-07-26 浙江工业大学 Web data automatic visual method based on Subgraph Isomorphism
CN110059073B (en) * 2019-03-18 2021-04-06 浙江工业大学 Web data automatic visualization method based on subgraph isomorphism
CN110633628A (en) * 2019-08-02 2019-12-31 杭州电子科技大学 RGB image scene three-dimensional model reconstruction method based on artificial neural network
CN110633628B (en) * 2019-08-02 2022-05-06 杭州电子科技大学 RGB image scene three-dimensional model reconstruction method based on artificial neural network
CN110889893A (en) * 2019-10-25 2020-03-17 中国科学院计算技术研究所 Three-dimensional model representation method and system for expressing geometric details and complex topology
WO2023103415A1 (en) * 2021-12-09 2023-06-15 上海望友信息科技有限公司 Component modeling and parameterization method and system, electronic device, and storage medium

Also Published As

Publication number Publication date
CN107610221B (en) 2020-06-05

Similar Documents

Publication Publication Date Title
CN107610221B (en) Three-dimensional model generation method based on isomorphic model representation
Jiang et al. Shapeflow: Learnable deformation flows among 3d shapes
CN109147048B (en) Three-dimensional mesh reconstruction method by utilizing single-sheet colorful image
Zhou et al. Edge bundling in information visualization
Gao et al. Mps-nerf: Generalizable 3d human rendering from multiview images
Hu et al. Structure‐aware 3D reconstruction for cable‐stayed bridges: A learning‐based method
Li et al. Generalized polycube trivariate splines
CN109544666A (en) A kind of full automatic model deformation transmission method and system
Ben Makhlouf et al. Reconstruction of a CAD model from the deformed mesh using B-spline surfaces
CN108986221A (en) A kind of three-dimensional face grid texture method lack of standardization approached based on template face
Li et al. HSurf-Net: Normal estimation for 3D point clouds by learning hyper surfaces
CN102279981A (en) Three-dimensional image gridding method
CN110378047A (en) A kind of Longspan Bridge topology ambiguity three-dimensional rebuilding method based on computer vision
CN108520513A (en) A kind of threedimensional model local deformation component extraction method and system
CN115659445A (en) Method for rendering and displaying CAD model on webpage in lightweight mode based on Open Cascade
Huang et al. Meshode: A robust and scalable framework for mesh deformation
Yang et al. Connectivity-aware Graph: A planar topology for 3D building surface reconstruction
Zhang et al. Learning geometric transformation for point cloud completion
CN112785684B (en) Three-dimensional model reconstruction method based on local information weighting mechanism
Wang et al. SparseFormer: Sparse transformer network for point cloud classification
Zhang et al. Fast Mesh Reconstruction from Single View Based on GCN and Topology Modification.
Huang et al. Phrit: Parametric hand representation with implicit template
Zhang et al. MeshLink: a surface structured mesh generation framework to facilitate automated data linkage
Lin et al. 3D mesh reconstruction of indoor scenes from a single image in-the-wild
CN113436335B (en) Incremental multi-view three-dimensional reconstruction method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant