CN105243137A - Draft-based three-dimensional model retrieval viewpoint selection method - Google Patents

Draft-based three-dimensional model retrieval viewpoint selection method

Info

Publication number
CN105243137A
Authority
CN
China
Prior art keywords
viewpoint
model
point
coordinate
plane
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201510645547.1A
Other languages
Chinese (zh)
Other versions
CN105243137B (en)
Inventor
金龙存
翁耿森
聂四品
彭新一
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
South China University of Technology SCUT
Original Assignee
South China University of Technology SCUT
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by South China University of Technology SCUT filed Critical South China University of Technology SCUT
Priority to CN201510645547.1A priority Critical patent/CN105243137B/en
Publication of CN105243137A publication Critical patent/CN105243137A/en
Application granted granted Critical
Publication of CN105243137B publication Critical patent/CN105243137B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/50 Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F 16/58 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F 16/583 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/22 Matching criteria, e.g. proximity measures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/24 Classification techniques

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Library & Information Science (AREA)
  • Artificial Intelligence (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Databases & Information Systems (AREA)
  • Image Generation (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The present invention discloses a sketch-based viewpoint selection method for three-dimensional model retrieval, comprising the following steps: step 1, manually classifying the models in a database; step 2, determining a universal set of viewpoints by performing triangle subdivision on a regular icosahedron; step 3, calculating the entropy of each model at each viewpoint; step 4, determining the number of viewpoints for each class of models according to the calculation result of step 3; step 5, performing a clustering operation on the universal set of viewpoints according to the calculation result of step 4 so as to determine the selected viewpoints; and step 6, generating two-dimensional projection views from the viewpoints produced in step 5. The method yields good matching results and effectively improves the operating efficiency of the retrieval system.

Description

Sketch-based viewpoint selection method for three-dimensional model retrieval
Technical field
The present invention relates to sketch-based three-dimensional model retrieval in the field of computer image processing, and in particular to a sketch-based viewpoint selection method for three-dimensional model retrieval, mainly used for viewpoint selection in sketch-based three-dimensional model retrieval.
Background technology
At present, in the field of viewpoint selection for sketch-based three-dimensional model retrieval, there are mainly two viewpoint selection strategies. The first uses a preset, fixed set of viewpoints for all models. Shao et al. propose, in "Discriminative sketch-based 3D model retrieval via robust shape matching", a method that matches three-dimensional models against 7 fixed viewpoints. The advantage of this approach is obvious: whatever the shape of the model, it can be matched with the predefined viewpoints, which reduces the amount of computation spent on viewpoint selection during matching and helps to improve the operating efficiency of the system. Its drawback is equally clear: it ignores that different models have different demands on the number and position of viewpoints, so not every viewpoint reflects the model's shape well, and the fixed viewpoints affect the matching result to some extent.
The other viewpoint selection strategy matches the sketch against viewpoints generated by viewpoint clustering. So far there has been little research work based on this strategy.
Summary of the invention
The object of the present invention is to overcome the shortcomings and deficiencies of the prior art by providing a sketch-based viewpoint selection method for three-dimensional model retrieval. The method addresses the problem that preset fixed viewpoints degrade the matching result by selecting the viewpoints through clustering.
The object of the present invention is achieved through the following technical solution: a sketch-based viewpoint selection method for three-dimensional model retrieval, comprising the following steps:
Step 1: manually classify the models in the database;
Step 2: determine the universal set of viewpoints by triangle subdivision of a regular icosahedron;
Step 3: calculate the entropy of each model at each viewpoint;
Step 4: determine the number of viewpoints for each class of models from the calculation result of step 3;
Step 5: cluster the viewpoint universal set according to the calculation result of step 4 to determine the selected viewpoints;
Step 6: generate two-dimensional projection views from the viewpoints produced in step 5.
Compared with the prior art, the present invention has the following advantages and effects:
1. The present invention classifies the models in the model database, determines the number of viewpoints from the complexity of each model within each class, and finally determines the selected viewpoints by clustering. Because it takes into account the different demands of different models on the number and position of viewpoints, each viewpoint reflects the model's shape well, so the selected viewpoints are well determined, the matching result is good, and the operating efficiency of the system is effectively improved.
2. In the viewpoint selection part of the existing three-dimensional model retrieval field, there has been no research on viewpoint selection based on viewpoint clustering. The present invention adopts a viewpoint selection method based on viewpoint clustering and fills this technical gap.
Brief description of the drawings
Fig. 1 is the flow chart of the present invention.
Fig. 2 is the Loop subdivision analysis diagram of the present invention.
Embodiment
The present invention is described in further detail below with reference to the embodiments and the accompanying drawings, but the embodiments of the present invention are not limited thereto.
Embodiment
As shown in Fig. 1, a sketch-based viewpoint selection method for three-dimensional model retrieval comprises the following steps:
Step 1: manually classify the models in the database;
Step 2: determine the universal set of viewpoints by triangle subdivision of a regular icosahedron.
Step 2 specifically comprises the following steps:
Step 2-1: draw a regular icosahedron whose circumsphere is centered at the origin of the three-dimensional Cartesian coordinate system and has radius 2;
Step 2-2: subdivide each face by splitting one triangular face into four smaller triangular faces.
According to how they are generated, the points obtained after subdivision fall into two classes:
(1) First-class points: points computed from an edge of an original triangle, such as points $V_q$, $V_p$ and $V_r$ in Fig. 2;
(2) Second-class points: points computed from a vertex of an original triangle, such as points $V_1'$, $V_2'$ and $V_3'$ in Fig. 2.
For these two classes of points, the coordinates of a newly generated point are computed in two cases, depending on whether the edge it belongs to is a boundary edge:
For a first-class point:
(1) If the edge of the original triangle is a boundary edge, the coordinate of the newly generated point is computed as
$V = \frac{V_1 + V_2}{2}$,    (1)
where $V_1$ and $V_2$ are the two endpoints of the boundary edge. From formula (1):
$V_q = \frac{V_1 + V_2}{2}$;
(2) If the edge of the original triangle is not a boundary edge, the coordinate of the newly generated point is computed as
$V = \frac{3}{8}(V_2 + V_3) + \frac{1}{8}(V_1 + V_4)$,    (2)
where $V_2$ and $V_3$ are the two endpoints of this non-boundary edge, and $V_1$ and $V_4$ are the two vertices on either side of the edge that are connected to both endpoints and closest to the edge (the opposite vertices of the two adjacent triangles). From formula (2):
$V_r = \frac{3}{8}(V_2 + V_3) + \frac{1}{8}(V_1 + V_4)$.
For a second-class point:
(1) If the edge of the original triangle is a boundary edge, the coordinate of the newly generated point is computed as
$V = \frac{6}{8}V_1 + \frac{1}{8}V_2 + \frac{1}{8}V_3$,    (3)
where $V_1$ is the corresponding vertex of the original triangle, and $V_2$ and $V_3$ are the two vertices adjacent to $V_1$ on the original triangle. From formula (3):
$V_1' = \frac{6}{8}V_1 + \frac{1}{8}V_2 + \frac{1}{8}V_3$;
(2) If the edge of the original triangle is not a boundary edge, the coordinate of the newly generated point is computed as
$V' = (1 - n\beta_n)V + \beta_n \sum_{i=0}^{n-1} V_i$,    (4)
where
$\beta_n = \frac{1}{n}\left[\frac{5}{8} - \left(\frac{3}{8} + \frac{1}{4}\cos\frac{2\pi}{n}\right)^2\right]$.    (5)
When $n = 3$, $\beta_3 = \frac{3}{16}$; when $n > 3$, $\beta_n$ is given by formula (5). Here $V_i$ denotes the vertices of the original mesh connected to $V$ by an edge, and $n$ is their number. From formulas (4) and (5):
$V_4' = \left(1 - 3 \cdot \frac{3}{16}\right)V + \frac{3}{16}\sum_{i=0}^{2} V_i \quad (V_i = V_2, V_3, V_5)$.
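As an illustrative sketch only (not the patented implementation), the following Python code applies one such subdivision to a regular icosahedron; since the icosahedron mesh is closed, only the non-boundary rules (2), (4) and (5) apply, and the 12 repositioned vertices plus the 30 edge points give the 42 candidate viewpoints. Using scipy's convex hull to recover the 20 faces and re-projecting the result onto the radius-2 circumsphere are assumptions of this sketch.

```python
import numpy as np
from scipy.spatial import ConvexHull

def icosahedron_vertices(radius=2.0):
    """The 12 vertices of a regular icosahedron, scaled so the circumsphere has the given radius."""
    phi = (1.0 + np.sqrt(5.0)) / 2.0
    v = []
    for s1 in (-1.0, 1.0):
        for s2 in (-1.0, 1.0):
            v += [(0.0, s1, s2 * phi), (s1, s2 * phi, 0.0), (s1 * phi, 0.0, s2)]
    v = np.array(v)
    return radius * v / np.linalg.norm(v[0])

def loop_subdivide_points(verts, faces):
    """One Loop subdivision step on a closed triangle mesh: returns the repositioned
    original vertices (second-class points, eqs. (4)/(5)) plus one new point per edge
    (first-class points, eq. (2))."""
    edge_opposite = {}
    for f in faces:
        for k in range(3):
            a, b, c = f[k], f[(k + 1) % 3], f[(k + 2) % 3]
            edge_opposite.setdefault(tuple(sorted((a, b))), []).append(c)
    neighbours = {i: set() for i in range(len(verts))}
    for a, b in edge_opposite:
        neighbours[a].add(b)
        neighbours[b].add(a)
    points = []
    for i, V in enumerate(verts):                       # second-class points
        n = len(neighbours[i])
        beta = (5.0 / 8.0 - (3.0 / 8.0 + 0.25 * np.cos(2.0 * np.pi / n)) ** 2) / n
        points.append((1.0 - n * beta) * V + beta * sum(verts[j] for j in neighbours[i]))
    for (a, b), opp in edge_opposite.items():           # first-class points
        c, d = opp                                      # opposite vertices of the two adjacent faces
        points.append(3.0 / 8.0 * (verts[a] + verts[b]) + 1.0 / 8.0 * (verts[c] + verts[d]))
    return np.array(points)

verts = icosahedron_vertices(radius=2.0)
faces = ConvexHull(verts).simplices                     # the 20 triangular faces
viewpoints = loop_subdivide_points(verts, faces)
viewpoints = 2.0 * viewpoints / np.linalg.norm(viewpoints, axis=1, keepdims=True)  # back onto the sphere (assumption)
print(viewpoints.shape)                                 # (42, 3): the viewpoint universal set
```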
Step 3: calculate the entropy of each model at each viewpoint. Step 3 specifically comprises the following steps:
Step 3-1: the entropy of model x at viewpoint $p_j$ is computed as follows:
$E = -\sum_{i=0}^{m} \frac{A_i}{S}\log_2\frac{A_i}{S}$,
where E is the entropy of the model at this viewpoint, m is the number of faces of the model, $A_i$ is the effective (projected) area of the i-th face under this viewpoint, and S is the total area of the rendering region (since the model is scaled into the unit sphere, S can equivalently be taken as the area of the unit circle); $A_0$ is the area of the background part, i.e. the rendering-region area S minus the total projected area of the model on the plane. The larger the value of E, the more complex the model and, in general, the more viewpoints it needs.
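A minimal sketch of this entropy computation, assuming the projected face areas $A_1, \dots, A_m$ have already been obtained as described in steps 3-2 to 3-4 (the function name and the example areas are illustrative):

```python
import numpy as np

def viewpoint_entropy(face_areas, total_area):
    """Viewpoint entropy E = -sum_i (A_i / S) * log2(A_i / S), where A_0 is the
    background area and A_1..A_m are the projected face areas; `total_area` is S
    (the area of the unit circle when the model is scaled into the unit sphere)."""
    face_areas = np.asarray(face_areas, dtype=float)
    a0 = total_area - face_areas.sum()          # background area A_0
    areas = np.concatenate(([a0], face_areas))  # A_0, A_1, ..., A_m
    p = areas / total_area
    p = p[p > 0]                                # 0 * log 0 is taken as 0
    return float(-(p * np.log2(p)).sum())

# Example with made-up projected areas and S = area of the unit circle:
S = np.pi
print(viewpoint_entropy([0.4, 0.8, 1.1], S))
```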
Step 3-2: the projection plane is described by the point-normal equation
$A(x - x_0) + B(y - y_0) + C(z - z_0) = 0$,    (9)
where A, B and C equal $x_n$, $y_n$ and $z_n$ respectively, i.e. the three components of the normal vector of the plane, and $(x_0, y_0, z_0)$ are the coordinates of a known point on the plane; this equation therefore represents the projection plane.
For each vertex $v_i(x_i, y_i, z_i)$ of the model, the line through this point whose direction equals the view normal is computed. Since a parallel projection is used, the line from a vertex to its image on the projection plane has, as direction vector, the vector formed by the viewpoint $p_i(x_p, y_p, z_p)$ and the center $p_i'(x_0, y_0, z_0)$ of the projection plane, so the line can be written as
$\frac{x - x_i}{m} = \frac{y - y_i}{n} = \frac{z - z_i}{r}$,    (10)
where $x_i$, $y_i$, $z_i$ are the coordinates of vertex $v_i$ and $(m, n, r)$ is the direction (normal) vector of the line. Solving the system of equations formed by (9) and (10) yields the projection $v_i'(x_i', y_i', z_i')$ of the vertex on the projection plane, from which the projected face $f'(v_1', v_2', v_3')$ is rebuilt on the plane.
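A short Python sketch of this projection step, under the stated assumption of a parallel projection along the plane normal (the function name and the example coordinates are illustrative):

```python
import numpy as np

def project_vertices(vertices, viewpoint, plane_center):
    """Project model vertices onto the plane through `plane_center` whose normal is
    the vector from `plane_center` to `viewpoint`, by intersecting the line of
    equation (10) with the plane of equation (9)."""
    vertices = np.asarray(vertices, dtype=float)
    viewpoint = np.asarray(viewpoint, dtype=float)
    plane_center = np.asarray(plane_center, dtype=float)
    normal = viewpoint - plane_center                    # (A, B, C) = (m, n, r)
    # Line through vertex v along `normal`: x = v + t * normal.
    # Plane: normal . (x - plane_center) = 0  =>  t = normal . (plane_center - v) / (normal . normal)
    t = (plane_center - vertices) @ normal / (normal @ normal)
    return vertices + t[:, None] * normal

# Illustrative example: a plane through the origin, viewed from (0, 0, 2).
verts = np.array([[0.3, 0.1, 0.5], [-0.2, 0.4, -0.1], [0.0, -0.3, 0.2]])
print(project_vertices(verts, viewpoint=[0.0, 0.0, 2.0], plane_center=[0.0, 0.0, 0.0]))
```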
Step 3-3: find in the plane a pair of orthogonal basis vectors $e_1$, $e_2$ through $p'$. For a point $q(x_q, y_q, z_q)$ on the projection plane, its two-dimensional coordinates $q'(x_q', y_q')$ on the plane can be computed with the formulas below:
$\overrightarrow{P'Q} = \overrightarrow{P'O} + \overrightarrow{OQ}$,    (12)
where $\overrightarrow{P'Q}$, $\overrightarrow{P'O}$ and $\overrightarrow{OQ}$ are the vectors from $p'$ to q, from $p'$ to the origin o, and from o to q, respectively.
That is:
$x_q'\,e_1 + y_q'\,e_2 = \overrightarrow{P'O} + \overrightarrow{OQ}$,    (13)
where $e_1$ is the basis vector of the x-axis direction on the plane and $e_2$ is the basis vector of the y-axis direction on the plane.
The right-hand side of formula (13) is entirely known, and $e_1$, $e_2$ can also be computed, so the formula can be regarded as a system of linear equations in the two unknowns $x_q'$ and $y_q'$. Taking the dot product of both sides with $e_1$ and with $e_2$ gives:
$x_q'\,e_1 \cdot e_1 + y_q'\,e_2 \cdot e_1 = (\overrightarrow{P'O} + \overrightarrow{OQ}) \cdot e_1$,
$x_q'\,e_1 \cdot e_2 + y_q'\,e_2 \cdot e_2 = (\overrightarrow{P'O} + \overrightarrow{OQ}) \cdot e_2$.
Since $e_1$ and $e_2$ are orthogonal, $e_2 \cdot e_1 = 0$, so $x_q'$ and $y_q'$ can each be computed directly; these are the coordinates of Q on the projection plane with $P'$ as the origin.
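The decoupling by dot products can be sketched as follows; the particular construction of the orthonormal basis vectors $e_1$, $e_2$ is an assumption of this sketch, since the patent only requires two orthogonal vectors lying in the plane:

```python
import numpy as np

def plane_basis(normal):
    """Two orthonormal vectors e1, e2 spanning the plane with the given normal
    (the particular choice of e1 is an arbitrary but valid assumption)."""
    normal = np.asarray(normal, dtype=float)
    helper = np.array([1.0, 0.0, 0.0])
    if abs(np.dot(helper, normal)) > 0.9 * np.linalg.norm(normal):
        helper = np.array([0.0, 1.0, 0.0])      # avoid a helper nearly parallel to the normal
    e1 = np.cross(normal, helper)
    e1 /= np.linalg.norm(e1)
    e2 = np.cross(normal, e1)
    e2 /= np.linalg.norm(e2)
    return e1, e2

def to_plane_coords(points3d, origin, e1, e2):
    """2D coordinates (x', y') of in-plane points with `origin` = P' as the 2D origin:
    x' = (Q - P') . e1 and y' = (Q - P') . e2, using e1 . e2 = 0 and |e1| = |e2| = 1."""
    d = np.asarray(points3d, dtype=float) - np.asarray(origin, dtype=float)
    return np.stack([d @ e1, d @ e2], axis=1)

# Illustrative usage on a plane through the origin with normal (0, 0, 1):
e1, e2 = plane_basis([0.0, 0.0, 1.0])
print(to_plane_coords([[0.3, 0.1, 0.0], [-0.2, 0.4, 0.0]], [0.0, 0.0, 0.0], e1, e2))
```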
Step 3-4: the union of two polygons is computed with the Matlab built-in function polybool, defined as follows:
[x, y] = polybool(operation, x1, y1, x2, y2);
where x and y are the return values of the function: x holds the x-coordinates of the clockwise-ordered vertices of the polygon obtained as the union of the two polygons (x1, y1) and (x2, y2), and y holds the corresponding y-coordinates. The parameter operation specifies the operation performed on the two polygons; when the input operation is 'union', the union of the two polygons is computed.
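The patent uses MATLAB's polybool for this union. As an illustrative alternative only (an assumption, not the patent's implementation), the same polygon union can be computed in Python with the shapely library, which is convenient for merging projected triangles into a silhouette whose area then enters the entropy of step 3-1:

```python
# Illustrative alternative to MATLAB's polybool 'union' operation using shapely
# (an assumption; the patent itself relies on the MATLAB built-in).
from shapely.geometry import Polygon
from shapely.ops import unary_union

# Two projected triangles given as 2D vertex lists (made-up coordinates):
tri_a = Polygon([(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)])
tri_b = Polygon([(0.5, 0.5), (1.5, 0.5), (0.5, 1.5)])

silhouette = unary_union([tri_a, tri_b])      # union of the projected faces
print(silhouette.area)                        # effective projected area, cf. A_i in step 3-1
```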
Step 4: determine the number of viewpoints for each class of models from the calculation result of step 3. Step 4 specifically comprises the following steps:
Step 4-1: first compute the average entropy $E_m$ over all viewpoints of each model, and then compute the standard deviation $S_d$ of each model's viewpoint entropies relative to the average entropy $E_m$.
Step 4-2: compute the Euclidean distance $C = \sqrt{S_d^2 + E_m^2}$, where $S_d$ and $E_m$ denote the values after normalization by the respective maxima within the model class.
Step 4-3: $N_c = a \cdot C \cdot N_0$,    (14)
where $N_0$ is the size of the universal set from which the feature viewpoints are extracted; here $N_0$ is half of the 42 viewpoints, i.e. 21. The constant a is set to 0.5 because only half of the viewing region is considered. C is obtained from step 4-2 and represents the model complexity. $N_c$ is the final number of viewpoints.
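A sketch of this step, assuming one model's entropies over the 42 candidate viewpoints are given and that the class-wide maxima used for the normalization are passed in; rounding $N_c$ to an integer count is an added assumption:

```python
import numpy as np

def viewpoint_count(entropies, max_mean_in_class, max_std_in_class, n0=21, a=0.5):
    """Number of viewpoints N_c = a * C * N_0 for one model (step 4).
    `entropies` are the model's entropies over the 42 candidate viewpoints;
    the class-wide maxima used for normalization are passed in."""
    e_m = np.mean(entropies)                        # average entropy E_m
    s_d = np.std(entropies)                         # standard deviation S_d
    e_norm = e_m / max_mean_in_class                # normalize by the class maxima
    s_norm = s_d / max_std_in_class
    c = np.sqrt(s_norm ** 2 + e_norm ** 2)          # model complexity C
    return max(1, int(round(a * c * n0)))           # final viewpoint count N_c (rounding is an assumption)

# Example with made-up entropies for one model:
ent = np.random.default_rng(0).uniform(1.0, 4.0, size=42)
print(viewpoint_count(ent, max_mean_in_class=4.0, max_std_in_class=1.0))
```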
Step 5: cluster the viewpoint universal set according to the calculation result of step 4 to determine the selected viewpoints. Step 5 specifically comprises the following steps (a minimal code sketch follows this list):
(1) Input: the number of clusters k and an m×n matrix p; randomly select k initial cluster centers, i.e. form a k×n matrix q with q(i,:) = p(i,:) for the selected rows;
(2) for each object p(i,:) in p, compare its distance to each cluster center q(j,:), assign it to the nearest one, and record its index in another k-row structure r as r(i, j);
(3) for each row of r, recompute the centroid of the rows of p whose indices are stored in that row, and replace the corresponding q(i,:) with this centroid;
(4) repeat (2) and (3) until the change of every q(i,:) is smaller than a given threshold.
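The minimal k-means sketch announced above, corresponding to items (1) to (4); the use of Euclidean distance, the concrete stopping tolerance, and returning the cluster centres themselves as the selected viewpoints are assumptions not spelled out in the patent:

```python
import numpy as np

def kmeans_viewpoints(p, k, tol=1e-6, max_iter=100, seed=0):
    """Items (1)-(4) of step 5: cluster the m x n viewpoint matrix p into k centers q."""
    rng = np.random.default_rng(seed)
    q = p[rng.choice(len(p), size=k, replace=False)].copy()        # (1) random initial centers
    for _ in range(max_iter):
        # (2) assign each row of p to its nearest center
        labels = np.argmin(np.linalg.norm(p[:, None, :] - q[None, :, :], axis=2), axis=1)
        # (3) recompute each center as the centroid of its assigned rows
        new_q = np.array([p[labels == j].mean(axis=0) if np.any(labels == j) else q[j]
                          for j in range(k)])
        # (4) stop when the centers barely move
        if np.max(np.linalg.norm(new_q - q, axis=1)) < tol:
            q = new_q
            break
        q = new_q
    return q

# Example: reduce the 42 candidate viewpoints to N_c = 7 selected viewpoints.
viewpoints = np.random.default_rng(1).normal(size=(42, 3))
centres = kmeans_viewpoints(viewpoints, k=7)
print(centres.shape)      # (7, 3)
```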
Step 6: from the viewpoints produced in step 5, generate the two-dimensional projection views of the model at these viewpoints.
The above embodiment is a preferred embodiment of the present invention, but the embodiments of the present invention are not limited thereto; any change, modification, substitution, combination or simplification made without departing from the spirit and principle of the present invention shall be regarded as an equivalent replacement and fall within the protection scope of the present invention.

Claims (5)

1. A sketch-based viewpoint selection method for three-dimensional model retrieval, characterized by comprising the following steps:
Step 1: manually classify the models in the database;
Step 2: determine the universal set of viewpoints by triangle subdivision of a regular icosahedron;
Step 3: calculate the entropy of each model at each viewpoint;
Step 4: determine the number of viewpoints for each class of models from the calculation result of step 3;
Step 5: cluster the viewpoint universal set according to the calculation result of step 4 to determine the selected viewpoints;
Step 6: generate two-dimensional projection views from the viewpoints produced in step 5.
2. The sketch-based viewpoint selection method for three-dimensional model retrieval according to claim 1, characterized in that step 2 comprises the following steps:
Step 2-1: draw a regular icosahedron whose circumsphere is centered at the origin of the three-dimensional Cartesian coordinate system and has radius 2;
Step 2-2: subdivide each face by splitting one triangular face into four smaller triangular faces;
according to how they are generated, the points obtained after subdivision fall into two classes:
(1) first-class points: points computed from an edge of an original triangle;
(2) second-class points: points computed from a vertex of an original triangle;
for the first-class and second-class points, the coordinates of a newly generated point are computed in two cases, depending on whether the edge it belongs to is a boundary edge:
for a first-class point:
(1) if the edge of the original triangle is a boundary edge, the coordinate of the newly generated point is computed as
$V = \frac{V_1 + V_2}{2}$,    (1)
where $V_1$ and $V_2$ are the two endpoints of the boundary edge; from formula (1),
$V_q = \frac{V_1 + V_2}{2}$;
(2) if the edge of the original triangle is not a boundary edge, the coordinate of the newly generated point is computed as
$V = \frac{3}{8}(V_2 + V_3) + \frac{1}{8}(V_1 + V_4)$,    (2)
where $V_2$ and $V_3$ are the two endpoints of this non-boundary edge, and $V_1$ and $V_4$ are the two vertices on either side of the edge that are connected to both endpoints and closest to the edge; from formula (2),
$V_r = \frac{3}{8}(V_2 + V_3) + \frac{1}{8}(V_1 + V_4)$;
for a second-class point:
(1) if the edge of the original triangle is a boundary edge, the coordinate of the newly generated point is computed as
$V = \frac{6}{8}V_1 + \frac{1}{8}V_2 + \frac{1}{8}V_3$,    (3)
where $V_1$ is the corresponding vertex of the original triangle, and $V_2$ and $V_3$ are the two vertices adjacent to $V_1$ on the original triangle; from formula (3),
$V_1' = \frac{6}{8}V_1 + \frac{1}{8}V_2 + \frac{1}{8}V_3$;
(2) if the edge of the original triangle is not a boundary edge, the coordinate of the newly generated point is computed as
$V' = (1 - n\beta_n)V + \beta_n \sum_{i=0}^{n-1} V_i$,    (4)
where
$\beta_n = \frac{1}{n}\left[\frac{5}{8} - \left(\frac{3}{8} + \frac{1}{4}\cos\frac{2\pi}{n}\right)^2\right]$;    (5)
when $n = 3$, $\beta_3 = \frac{3}{16}$; when $n > 3$, $\beta_n$ is given by formula (5); $V_i$ denotes the vertices of the original mesh connected to $V$ by an edge, and $n$ is their number; from formulas (4) and (5),
$V_4' = \left(1 - 3 \cdot \frac{3}{16}\right)V + \frac{3}{16}\sum_{i=0}^{2} V_i$,
where $V_i = V_2$, $V_3$ or $V_5$.
3. The sketch-based viewpoint selection method for three-dimensional model retrieval according to claim 1, characterized in that step 3 comprises the following steps:
Step 3-1: the entropy of model x at viewpoint $p_j$ is computed as follows:
$E = -\sum_{i=0}^{m} \frac{A_i}{S}\log_2\frac{A_i}{S}$,
where E is the entropy of the model at this viewpoint, m is the number of faces of the model, $A_i$ is the effective (projected) area of the i-th face under this viewpoint, and S is the total area of the rendering region; since the model is scaled into the unit sphere, S is equivalently taken as the area of the unit circle; $A_0$ is the area of the background part, i.e. the rendering-region area S minus the total projected area of the model on the plane; the larger the value of E, the more complex the model and the more viewpoints it needs;
Step 3-2: the projection plane is described by
$A(x - x_0) + B(y - y_0) + C(z - z_0) = 0$,    (9)
where A, B and C equal $x_n$, $y_n$ and $z_n$ respectively, i.e. the three components of the normal vector of the plane, and $(x_0, y_0, z_0)$ are the coordinates of a known point on the plane, so this equation represents the projection plane;
for each vertex $v_i(x_i, y_i, z_i)$ of the model, the line through this point whose direction equals the view normal is computed; since a parallel projection is used, the line from a vertex to its image on the projection plane has, as direction vector, the vector formed by the viewpoint $p_i(x_p, y_p, z_p)$ and the center $p_i'(x_0, y_0, z_0)$ of the projection plane, so the line can be written as
$\frac{x - x_i}{m} = \frac{y - y_i}{n} = \frac{z - z_i}{r}$,    (10)
where $x_i$, $y_i$, $z_i$ are the coordinates of vertex $v_i$ and $(m, n, r)$ is the direction (normal) vector of the line; solving the system of equations formed by (9) and (10) yields the projection $v_i'(x_i', y_i', z_i')$ of the vertex on the projection plane, from which the projected face $f'(v_1', v_2', v_3')$ is rebuilt on the plane;
Step 3-3: find in the plane a pair of orthogonal basis vectors $e_1$, $e_2$ through $p'$; for a point $q(x_q, y_q, z_q)$ on the projection plane, its two-dimensional coordinates $q'(x_q', y_q')$ on the plane are computed with the formulas below:
$\overrightarrow{P'Q} = \overrightarrow{P'O} + \overrightarrow{OQ}$,    (12)
where $\overrightarrow{P'Q}$, $\overrightarrow{P'O}$ and $\overrightarrow{OQ}$ are the vectors from $p'$ to q, from $p'$ to the origin o, and from o to q, respectively;
that is:
$x_q'\,e_1 + y_q'\,e_2 = \overrightarrow{P'O} + \overrightarrow{OQ}$,    (13)
where $e_1$ is the basis vector of the x-axis direction on the plane and $e_2$ is the basis vector of the y-axis direction on the plane;
the right-hand side of formula (13) is entirely known, and $e_1$, $e_2$ can also be computed, so the formula can be regarded as a system of linear equations in the two unknowns $x_q'$ and $y_q'$; taking the dot product of both sides with $e_1$ and with $e_2$ gives:
$x_q'\,e_1 \cdot e_1 + y_q'\,e_2 \cdot e_1 = (\overrightarrow{P'O} + \overrightarrow{OQ}) \cdot e_1$,
$x_q'\,e_1 \cdot e_2 + y_q'\,e_2 \cdot e_2 = (\overrightarrow{P'O} + \overrightarrow{OQ}) \cdot e_2$,
where, since $e_1$ and $e_2$ are orthogonal, $e_2 \cdot e_1 = 0$, $x_q'$ and $y_q'$ can each be computed directly, i.e. the coordinates of Q on the projection plane with $P'$ as the origin;
Step 3-4: the union of two polygons is computed with the Matlab built-in function polybool, which is defined as follows:
[x, y] = polybool(operation, x1, y1, x2, y2);
where x and y are the return values of the function: x holds the x-coordinates of the clockwise-ordered vertices of the polygon obtained as the union of the two polygons (x1, y1) and (x2, y2), and y holds the corresponding y-coordinates; the parameter operation specifies the operation performed on the two polygons, and when the input operation is 'union', the union of the two polygons is computed.
4. The sketch-based viewpoint selection method for three-dimensional model retrieval according to claim 1, characterized in that step 4 comprises the following steps:
Step 4-1: first compute the average entropy $E_m$ over all viewpoints of each model, and then compute the standard deviation $S_d$ of each model's viewpoint entropies relative to the average entropy $E_m$;
Step 4-2: compute the Euclidean distance $C = \sqrt{S_d^2 + E_m^2}$, where $S_d$ and $E_m$ denote the values after normalization by the respective maxima within the model class;
Step 4-3: $N_c = a \cdot C \cdot N_0$,    (14)
where $N_0$ is the size of the universal set from which the feature viewpoints are extracted, taken as half of the 42 viewpoints, i.e. $N_0 = 21$; a is a constant, set to 0.5 because only half of the viewing region is considered; C is obtained from step 4-2 and represents the model complexity; $N_c$ is the final number of viewpoints.
5. The sketch-based viewpoint selection method for three-dimensional model retrieval according to claim 1, characterized in that step 5 comprises the following steps:
(1) input: the number of clusters k and an m×n matrix p; randomly select k initial cluster centers, i.e. form a k×n matrix q with q(i,:) = p(i,:) for the selected rows;
(2) for each object p(i,:) in p, compare its distance to each cluster center q(j,:), assign it to the nearest one, and record its index in another k-row structure r as r(i, j);
(3) for each row of r, recompute the centroid of the rows of p whose indices are stored in that row, and replace the corresponding q(i,:) with this centroid;
(4) repeat (2) and (3) until the change of every q(i,:) is smaller than a given threshold.
CN201510645547.1A 2015-09-30 2015-09-30 A kind of three-dimensional model search viewpoint selection method based on sketch Expired - Fee Related CN105243137B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510645547.1A CN105243137B (en) 2015-09-30 2015-09-30 A kind of three-dimensional model search viewpoint selection method based on sketch

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201510645547.1A CN105243137B (en) 2015-09-30 2015-09-30 A kind of three-dimensional model search viewpoint selection method based on sketch

Publications (2)

Publication Number Publication Date
CN105243137A true CN105243137A (en) 2016-01-13
CN105243137B CN105243137B (en) 2018-12-11

Family

ID=55040785

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510645547.1A Expired - Fee Related CN105243137B (en) 2015-09-30 2015-09-30 A kind of three-dimensional model search viewpoint selection method based on sketch

Country Status (1)

Country Link
CN (1) CN105243137B (en)

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101004748A (en) * 2006-10-27 2007-07-25 北京航空航天大学 Method for searching 3D model based on 2D sketch
CN101281545A (en) * 2008-05-30 2008-10-08 清华大学 Three-dimensional model search method based on multiple characteristic related feedback
CN104850633A (en) * 2015-05-22 2015-08-19 中山大学 Three-dimensional model retrieval system and method based on parts division of hand-drawn draft

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
BO LI ET AL.: "A comparison of methods for sketch-based 3D shape retrieval", COMPUTER VISION AND IMAGE UNDERSTANDING *
BO LI ET AL.: "Sketch-Based 3D Model Retrieval by Viewpoint Entropy-Based Adaptive View Clustering", EUROGRAPHICS WORKSHOP ON 3D OBJECT RETRIEVAL, EUROGRAPHICS ASSOCIATION *

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110115026A (en) * 2016-12-19 2019-08-09 三星电子株式会社 The method and system of 360 degree of contents is generated on rectangular projection in an electronic
CN110115026B (en) * 2016-12-19 2021-07-13 三星电子株式会社 Method and system for generating 360-degree content on rectangular projection in electronic device
CN108171794A (en) * 2018-01-19 2018-06-15 东莞市燕秀信息技术有限公司 Plan view projecting method, device, equipment and medium based on threedimensional model
CN108537887A (en) * 2018-04-18 2018-09-14 北京航空航天大学 Sketch based on 3D printing and model library 3-D view matching process
CN109213884A (en) * 2018-11-26 2019-01-15 北方民族大学 A kind of cross-module state search method based on Sketch Searching threedimensional model
CN109213884B (en) * 2018-11-26 2021-10-19 北方民族大学 Cross-modal retrieval method based on sketch retrieval three-dimensional model
CN113032613A (en) * 2021-03-12 2021-06-25 哈尔滨理工大学 Three-dimensional model retrieval method based on interactive attention convolution neural network

Also Published As

Publication number Publication date
CN105243137B (en) 2018-12-11

Similar Documents

Publication Publication Date Title
CN103985155B (en) Scattered point cloud Delaunay triangulation curved surface reconstruction method based on mapping method
CN105243137A (en) Draft-based three-dimensional model retrieval viewpoint selection method
CN100559398C (en) Automatic deepness image registration method
CN112257597B (en) Semantic segmentation method for point cloud data
CN110782524A (en) Indoor three-dimensional reconstruction method based on panoramic image
CN102254343B (en) Convex hull and OBB (Oriented Bounding Box)-based three-dimensional grid model framework extracting method
CN103246884B (en) Real-time body's action identification method based on range image sequence and device
CN104966317B (en) A kind of three-dimensional method for automatic modeling based on ore body contour line
CN102915564B (en) Oriented bounding box and axial bounding box-based shoe last matching method
CN101877146B (en) Method for extending three-dimensional face database
CN103500467A (en) Constructive method of image-based three-dimensional model
CN101853523A (en) Method for adopting rough drawings to establish three-dimensional human face molds
CN102682477A (en) Regular scene three-dimensional information extracting method based on structure prior
TW201333881A (en) Method of establishing 3D building model with multi-level details
CN107767453A (en) A kind of building LIDAR point cloud reconstruction and optimization methods of rule-based constraint
CN110889901B (en) Large-scene sparse point cloud BA optimization method based on distributed system
CN104123747A (en) Method and system for multimode touch three-dimensional modeling
CN105678235A (en) Three dimensional facial expression recognition method based on multiple dimensional characteristics of representative regions
CN104751511A (en) 3D scene construction method and device
CN107729806A (en) Single-view Pose-varied face recognition method based on three-dimensional facial reconstruction
CN107146287A (en) Two-dimensional projection image to threedimensional model mapping method
CN103839081B (en) A kind of across visual angle gait recognition method reached based on topology table
CN102063719B (en) Local matching method of three-dimensional model
CN109920050A (en) A kind of single-view three-dimensional flame method for reconstructing based on deep learning and thin plate spline
CN104318552A (en) Convex hull projection graph matching based model registration method

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20181211

Termination date: 20210930