CN100341031C - Curve image processor and its processing method - Google Patents

Curve image processor and its processing method

Info

Publication number
CN100341031C
Authority
CN
China
Prior art keywords
control point
patch
curved surface
unit
data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CNB2003101143697A
Other languages
Chinese (zh)
Other versions
CN1499447A (en)
Inventor
上崎亮
西村明夫
小林忠司
望月义幸
濑川香寿
山仓诚
西尾一孝
荒木均
西村健二
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Panasonic Holdings Corp
Original Assignee
Matsushita Electric Industrial Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Matsushita Electric Industrial Co Ltd
Publication of CN1499447A
Application granted
Publication of CN100341031C
Anticipated expiration
Status: Expired - Fee Related

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/30 Polynomial surface description

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Pure & Applied Mathematics (AREA)
  • Mathematical Optimization (AREA)
  • Mathematical Physics (AREA)
  • Mathematical Analysis (AREA)
  • Algebra (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Image Generation (AREA)

Abstract

A curved surface image processing apparatus 100 according to the present invention, which can render an object at higher speed and in higher quality by performing image processing using NURBS data, includes: a data input unit 101 for receiving NURBS data; a coordinate transformation unit 102 for performing coordinate transformation on the NURBS data; an animation control unit 103 for controlling the animation data of each frame to be rendered; a data transformation unit 104 for transforming the NURBS data into rational Bezier data; a patch division unit 105 for subdividing rational Bezier surface patches; a normal determination unit 106 for calculating the normals at the control points of the divided surface patches; a perspective transformation unit 107 for performing perspective transformation on the divided surface patches; and a rendering unit 108 for rendering the surface patches.

Description

Curved image processing apparatus and curved image processing method
Technical Field
The present invention relates to a curved image processing apparatus and a method thereof for rendering a 3-dimensional object (object) using image information described by a Non Uniform Rational B-Spline (NURBS) function in the field of 3-dimensional computer graphics.
Background
In the field of image processing in recent years, with the remarkable improvement in computer performance, CAGD (Computer Aided Geometric Design) systems and geometric modeling systems that process free-form surfaces capable of expressing complex shapes have become increasingly common. Among the several representation methods for free-form surfaces, NURBS curves and NURBS surfaces have the advantage that smooth surfaces can be represented with a small number of control points. In addition, because the shape can be changed locally through a rich set of shape parameters such as the weights and the knot vector, because shapes can be expressed in a uniform way, and because arcs, straight lines, parabolas, and the like can be expressed exactly, a rendering technique for image models created from NURBS data is required.
Next, the problems in rendering techniques using NURBS data are divided into four items, (A) to (D), and each is explained in order.
First, as (A), the background art for the overall process of generating a curved surface image from NURBS data is described.
Fig. 3(a) and (b) show examples of a NURBS curve and a NURBS surface. The NURBS curve 31 is a parametric curve with a parameter u as its intermediate variable; its shape is controlled by a sequence of control points 32, a weight w for each control point, and a knot vector, a sequence of knots that describes how the influence of each control point changes as the parameter u varies. Note that the control points 32 of NURBS data do not generally lie on the NURBS curve 31.
The NURBS surface 33 is a parametric surface with parameters u and v as intermediate variables, and its shape is controlled, like that of the NURBS curve 31, by the sequence of control points indicated by 34, the weights, and the knot vectors.
In general, a NURBS surface S(u, v) is expressed by (Formula 1).
Formula 1
S(u, v) = [ ∑_{i=0..m-1} ∑_{j=0..n-1} B_{i,m}(u) B_{j,n}(v) w_ij Q_ij ] / [ ∑_{i=0..m-1} ∑_{j=0..n-1} B_{i,m}(u) B_{j,n}(v) w_ij ]    (1)
In (Formula 1), w denotes a weight and Q a control point. The function B in (Formula 1) is called a B-spline basis function; expressed by the de Boor-Cox recurrence, it is given by (Formula 2) and (Formula 3).
Formula 2
B_{i,1}(t) = 1 (t_i <= t < t_{i+1}); 0 (t < t_i or t >= t_{i+1})    (2)
Formula 3
B_{i,k}(t) = [(t - t_i) / (t_{i+k-1} - t_i)] B_{i,k-1}(t) + [(t_{i+k} - t) / (t_{i+k} - t_{i+1})] B_{i+1,k-1}(t)    (3)
In (Formula 2) and (Formula 3), k represents the order (the degree plus 1), t represents a parameter, and t_i represents a knot.
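As a concrete illustration, the de Boor-Cox recurrence can be sketched in Python as follows. The function below is indexed by degree (base case k = 0), which matches the degree-indexed form used later in this description and is equivalent to (Formula 2) and (Formula 3) up to the shift between degree and order; the names are illustrative, and fractions with a zero denominator are treated as 0, as the text specifies.

```python
def basis(i, k, t, knots):
    """B-spline basis function of degree k over the given knot vector.

    Degree k = 0 is the piecewise-constant base case; higher degrees
    follow the de Boor-Cox recurrence. Fractions whose denominator is 0
    are defined as 0.
    """
    if k == 0:
        return 1.0 if knots[i] <= t < knots[i + 1] else 0.0
    out = 0.0
    d = knots[i + k] - knots[i]
    if d:
        out += (t - knots[i]) / d * basis(i, k - 1, t, knots)
    d = knots[i + k + 1] - knots[i + 1]
    if d:
        out += (knots[i + k + 1] - t) / d * basis(i + 1, k - 1, t, knots)
    return out
```

Within the valid knot range, the basis functions form a partition of unity, which gives a quick sanity check of the recursion.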
Conventionally, in image processing of NURBS data, the amount of arithmetic required by these equations is very large, so there is the problem that the amount of NURBS data an image processing system requiring real-time performance can express is limited. Further, when image processing using NURBS data is implemented in hardware, the circuit scale increases, which hinders miniaturization.
Therefore, for 3-dimensional NURBS surfaces, a curved surface image processing apparatus has been disclosed that performs the following processing in a preprocessing stage in order to reduce the amount of computation (see, for example, Japanese Patent Application Laid-Open No. 2001-218977 (pages 7-23)).
In this curved surface image processing apparatus, the recurrences (Formula 2) and (Formula 3) are not evaluated recursively; instead, they are expanded into general cubic polynomial form, and 4 x 4 coefficient matrices for obtaining the B-spline basis functions are calculated by substituting the knot vector. These coefficient matrices are calculated for all the rows of control points defining the NURBS surface. In the real-time processing, each point on the NURBS surface is then calculated from the control point data and the coefficient matrices while varying the parameters u and v.
In order to further speed up the arithmetic processing, the apparatus introduces difference matrices obtained by multiplying the coefficient matrices by matrices whose elements are the increments Δu and Δv of the parameters u and v, and in the real-time processing it calculates each point on the NURBS surface recursively from the control point data, the difference matrices, and the knot vector while varying the parameters u and v.
Next, as (B), the background art for dividing a NURBS surface into polygons is described.
First, a general definition is given for parametric surfaces such as Bezier surfaces and B-spline surfaces. Free-form surfaces include several types, among them Bezier surfaces and B-spline surfaces, but NURBS surfaces are widely used as a more general representation of free-form surfaces. A parametric surface continuously defines points (x, y, z) on the surface in 3-dimensional space by means of two intermediate variables (u, v).
That is, one 3-dimensional coordinate (x, y, z) is obtained for each pair of intermediate variables (u, v). To describe this relationship mathematically, control points with weights and basis functions are used. A control point is a 3-dimensional coordinate needed to determine the position and shape frame of the parametric surface; its weight is a parameter indicating how strongly the control point affects the surface, or put another way, how strongly the surface is pulled toward the control point. The 3-dimensional coordinates of the control points and their weights are described by 2-dimensional arrays with discrete indices i, j corresponding to the respective directions of the intermediate variables (u, v). A basis function is a function that maps the sequence of control points to the specific shape of the parametric surface.
Let the 3-dimensional coordinates of the control points be <Q[i][j]> = (qx[i][j], qy[i][j], qz[i][j]), the weight of each control point qw[i][j], the basis functions in the u direction B[n][i](u), and the basis functions in the v direction B[m][j](v). Then the coordinates <P> = (px, py, pz) of the point on the surface for a given pair of intermediate variables (u, v) are expressed as follows. In what follows, the symbol < > denotes a vector.
<P>=(∑∑B[n][i](u)*B[m][j](v)*qw[i][j]*<Q[i][j]>)/(∑∑B[n][i](u)*B[m][j](v)*qw[i][j])
That is,
px=(∑∑B[n][i](u)*B[m][j](v)*qw[i][j]*qx[i][j])/(∑∑B[n][i](u)*B[m][j](v)*qw[i][j])
py=(∑∑B[n][i](u)*B[m][j](v)*qw[i][j]*qy[i][j])/(∑∑B[n][i](u)*B[m][j](v)*qw[i][j])
pz=(∑∑B[n][i](u)*B[m][j](v)*qw[i][j]*qz[i][j])/(∑∑B[n][i](u)*B[m][j](v)*qw[i][j])
where i = 0, 1, 2, ..., (I-1) and j = 0, 1, 2, ..., (J-1), and the symbol ∑ denotes the sum over these ranges of i and j. Here, I is the number of control points in the u direction, and J is the number of control points in the v direction. In addition, n and m are the degrees of the basis functions in the u direction and the v direction.
When a NURBS surface is used as the parametric surface, the basis functions are defined using knot vectors in addition to the intermediate variables and the degree (or order).
A knot vector is formed by arranging values of the intermediate variable in non-decreasing order, with intervals chosen to give the surface its characteristic shape. Different basis functions can be defined by using different degrees and different knot vectors for each of the intermediate variables u and v. The basis functions B[n][i](u) and B[m][j](v) of a NURBS surface are expressed, using the knot vector (u[0], u[1], ..., u[I+n]) in the u direction and the knot vector (v[0], v[1], ..., v[J+m]) in the v direction, by the following Cox-de Boor recurrence. In the u direction:
B[n][i](u)=[(u-u[i])/(u[i+n]-u[i])]*B[n-1][i](u)
          +[(u[i+n+1]-u)/(u[i+n+1]-u[i+1])]*B[n-1][i+1](u)
In the above formula, the degree n is not 0. Since the formula is a recurrence, a basis function of degree n = 3 is obtained from basis functions of degree n = 2, for example. Repeating this eventually requires the basis functions of degree n = 0: B[0][i](u) is defined to take the value 1 only when u lies in the range (u[i], u[i+1]), and 0 otherwise. The elements of the knot vector are equal or monotonically increasing as the index increases, and any fraction in the recurrence whose denominator is 0 is defined as 0. Incidentally, the recurrence may also be written in terms of the order instead of the degree; the order is the degree n plus 1. The basis function in the v direction is defined analogously as follows.
B[m][j](v)=[(v-v[j])/(v[j+m]-v[j])]*B[m-1][j](v)
          +[(v[j+m+1]-v)/(v[j+m+1]-v[j+1])]*B[m-1][j+1](v)
Then, when a NURBS surface is divided into polygons, the necessary parameters are substituted into the above recurrences to obtain the 3-dimensional coordinates <P> = (px, py, pz) of points on the surface.
In the following, for the sake of simplicity, a NURBS curve is treated first instead of a NURBS surface. In Fig. 19, the degree of the NURBS curve 1901 is n = 3, and the curve is defined by a control point sequence consisting of 4 control points (Q[0], Q[1], Q[2], Q[3]) and a knot vector (u[0], u[1], ..., u[7]) with 8 elements. The only intermediate variable needed to describe this NURBS curve 1901 is u, and the 2-dimensional coordinates <P> = (px, py) are expressed as follows.
<P>=(∑B[n][i](u)*qw[i]*<Q[i]>)/(∑B[n][i](u)*qw[i])
px=(∑B[n][i](u)*qw[i]*qx[i])/(∑B[n][i](u)*qw[i])
py=(∑B[n][i](u)*qw[i]*qy[i])/(∑B[n][i](u)*qw[i])
Fig. 19 and the above equations define a NURBS curve in a visually convenient 2-dimensional space, but adding a z coordinate pz defines a NURBS curve in 3-dimensional space in the same way. Incidentally, when the degree is n and the number of control points is I, the number of elements of the knot vector is (I + n + 1); in the case of Fig. 19, 4 + 3 + 1 = 8. In Fig. 19, the valid range of the knot vector of the NURBS curve 1901 is (u[3], u[4]). Thus a minimum of 4 control points is required to draw a NURBS curve of degree n = 3. If, with degree n = 3, one control point is added, the number of elements of the knot vector also increases by 1, and the valid range of the knot vector describing the NURBS curve expands to (u[3], u[5]). This state is shown in Fig. 20. As Figs. 19 and 20 show, a NURBS curve does not generally pass through its control points. However, as described later, when the arrangement of the knot vector expresses a rational Bezier curve, the end points coincide with control points.
Next, consider approximating the NURBS curve of Fig. 19 by two line segments. As shown in Fig. 21, 3 points, the two ends and a middle point, are taken on the NURBS curve and connected by straight lines. Since the valid range of the knot vector is (u[3], u[4]), the position coordinates of the points on the NURBS curve are obtained by substituting, for example, u[3], u[4], and their intermediate value (u[3] + u[4])/2 into the above expression as values of the intermediate variable u. When the NURBS curve 2001 shown in Fig. 20 is divided into two segments, the valid range of the knot vector is (u[3], u[5]), so that, for example, u[3], u[4], and u[5] can be used as values of the intermediate variable u. Needless to say, the number of control points of the NURBS curve and the number of division segments are not limited to this example and can be chosen freely.
On this basis, consider dividing a NURBS surface in 3-dimensional space into planar polygons. The simplest NURBS surface 2201 is shown in Fig. 22. In Fig. 22, the degrees in the u and v directions are n = m = 3, and the number of control points is I = J = 4 in each direction, i.e., 4 x 4 = 16 in total. The knot vectors are (u[0], u[1], ..., u[7]) and (v[0], v[1], ..., v[7]) in the u and v directions respectively, and the number of elements is (I + n + 1) = (J + m + 1) = 8 in both directions. The valid ranges of the knot vectors describing the NURBS surface 2201 are (u[3], u[4]) and (v[3], v[4]). Therefore, if, for example, u[3], u[4], and the intermediate value (u[3] + u[4])/2 are used as u values, and v[3], v[4], and (v[3] + v[4])/2 as v values, a total of 9 points on the surface are obtained. These 9 surface points can be used to divide the patch into 4 quadrilateral polygons or, needless to say, into 8 triangular polygons. The number of control points of the NURBS surface and the number of divided polygons are not limited to this example and may be set freely (see, for example, Japanese Unexamined Patent Application Publication No. Hei 3-201073).
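The tessellation just described can be sketched end to end: a bicubic patch (degrees n = m = 3, 4 x 4 control points, 8-element knot vectors) is sampled at the 3 x 3 grid of (u, v) values taken from the valid knot range, giving 9 surface points joinable into 4 quadrilaterals or 8 triangles. The planar control net and unit weights below are illustrative assumptions, not data from the patent.

```python
def basis(i, k, t, knots):
    # de Boor-Cox recurrence, degree-indexed; 0/0 fractions are defined as 0.
    if k == 0:
        return 1.0 if knots[i] <= t < knots[i + 1] else 0.0
    out = 0.0
    d = knots[i + k] - knots[i]
    if d:
        out += (t - knots[i]) / d * basis(i, k - 1, t, knots)
    d = knots[i + k + 1] - knots[i + 1]
    if d:
        out += (knots[i + k + 1] - t) / d * basis(i + 1, k - 1, t, knots)
    return out

def nurbs_surface_point(u, v, ctrl, w, uk, vk, n=3, m=3):
    """Rational evaluation: weighted sum of control points divided by
    the weighted sum of the basis products."""
    num = [0.0, 0.0, 0.0]
    den = 0.0
    for i in range(len(ctrl)):
        for j in range(len(ctrl[0])):
            b = basis(i, n, u, uk) * basis(j, m, v, vk) * w[i][j]
            den += b
            for c in range(3):
                num[c] += b * ctrl[i][j][c]
    return tuple(x / den for x in num)

# 4 x 4 planar control net with unit weights; the valid range is (u[3], u[4]).
knots = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0]
ctrl = [[(float(i), float(j), 0.0) for j in range(4)] for i in range(4)]
w = [[1.0] * 4 for _ in range(4)]
samples = [3.0, 3.5, 4.0]
grid = [[nurbs_surface_point(u, v, ctrl, w, knots, knots) for v in samples]
        for u in samples]
```

Because the control net is planar, every sampled point stays in the z = 0 plane, and the 9 points of `grid` can be stitched into 4 quadrilaterals or 8 triangles exactly as the text describes.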
Next, as (C), the background art for tessellating a parametric surface such as a Bezier surface into minute polygons is described.
As a method of displaying a 3-dimensional object including a parametric surface on a 2-dimensional image display device, a method of approximating an object to an aggregate of a plurality of minute planar polygons and then rendering the object is generally used.
Further, an object expressed smoothly in the form of parametric surfaces such as NURBS surfaces and Bezier surfaces has a smaller data amount than one expressed as a set of planar polygons such as triangles, and thus has a higher affinity for transmission over networks, which have developed rapidly in recent years.
As a general method for dividing a parametric surface into polygons, the values of the intermediate variables are changed discretely at regular intervals in the equations defining the surface, points on the surface are obtained directly, and adjacent points are connected to form planar polygons. This processing is generally called tessellation.
An example of an order-4 (degree-3) rational Bezier surface, a representative parametric surface, is shown in Fig. 30. An order-4 (degree-3) rational Bezier surface is expressed in the form of (Formula 4) below.
(Formula 4)
B(u, v) = U M Q M^T V    (4)
where Q is the 4 x 4 matrix of control points [Qij] (i, j = 0, ..., 3),
M = | -1  3 -3  1 |
    |  3 -6  3  0 |
    | -3  3  0  0 |
    |  1  0  0  0 |
U = [u^3  u^2  u  1], and V = [v^3  v^2  v  1]^T.
The parametric surface is expressed using the independent parameters u and v, where 0 <= u <= 1 and 0 <= v <= 1. Qij (i = 0, ..., 3; j = 0, ..., 3) are the control points defining the Bezier surface shape; in the order-4 (degree-3) case there are 4 x 4 = 16 of them. Among the control points, Q00, Q30, Q03, and Q33 lie on the surface, but the others generally do not. Each control point has a W component, corresponding to the weight, in addition to its X, Y, Z components. An arbitrary point P on the surface is therefore represented as P = (X(u, v)/W(u, v), Y(u, v)/W(u, v), Z(u, v)/W(u, v)), and the Bezier surface is known to have the convex hull property: the surface is entirely contained in the polyhedron spanned by its control points.
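The matrix form of (Formula 4) can be sketched as follows: each homogeneous component (X, Y, Z, W) of the control net is combined as U·M·Q·M^T·V, and the affine point is recovered by the division P = (X/W, Y/W, Z/W). The control data and function names are illustrative assumptions, not taken from the patent.

```python
M = [[-1, 3, -3, 1],
     [3, -6, 3, 0],
     [-3, 3, 0, 0],
     [1, 0, 0, 0]]

def umqmv(q, u, v):
    """Evaluate the scalar U * M * q * M^T * V for one 4x4 component matrix q."""
    U = [u ** 3, u ** 2, u, 1.0]
    V = [v ** 3, v ** 2, v, 1.0]
    A = [sum(U[a] * M[a][b] for a in range(4)) for b in range(4)]  # U*M
    C = [sum(M[a][b] * V[a] for a in range(4)) for b in range(4)]  # M^T*V
    return sum(A[b] * q[b][c] * C[c] for b in range(4) for c in range(4))

def rational_bezier_point(ctrl, u, v):
    """ctrl[i][j] = (x, y, z, w); internally uses homogeneous (wx, wy, wz, w)."""
    homo = [[(x * w, y * w, z * w, w) for (x, y, z, w) in row] for row in ctrl]
    X, Y, Z, W = (umqmv([[homo[i][j][c] for j in range(4)] for i in range(4)],
                        u, v)
                  for c in range(4))
    return (X / W, Y / W, Z / W)

# Illustrative control net lying in the z = 0 plane with non-uniform weights.
ctrl = [[(float(i), float(j), 0.0, 1.0 + 0.5 * i * j) for j in range(4)]
        for i in range(4)]
```

Evaluating at the parameter corners reproduces the property stated above that Q00 and Q33 lie on the surface.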
An object is represented by a set of the parametric surfaces described above; hereinafter, a parametric surface constituting an object is called a patch. To render an object with high accuracy, each patch can be approximated by many fine polygons. However, as the number of polygons to be processed increases, the computational load becomes very large. A method is therefore needed that draws with high precision while suppressing the number of generated polygons as much as possible.
Therefore, in the conventional technique, each time a surface patch is divided into a left patch and a right patch, the flatness of each divided patch is calculated; when the flatness exceeds an allowable value, the patch is divided again. A method has been proposed that repeats this process until the flatness of every subdivided patch is within the allowable value (see, for example, Japanese Patent Application Laid-Open No. 11-7544 (pages 11-14)).
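A hedged sketch of this adaptive scheme, reduced from surface patches to a cubic Bezier curve for brevity: the curve is split by de Casteljau midpoint subdivision, the flatness of each half is measured, and subdivision recurses until every segment is within tolerance. The flatness measure used here (maximum distance of the inner control points from the chord) is one common choice, not necessarily the patent's.

```python
import math

def split(p):
    """de Casteljau midpoint subdivision of a cubic segment (4 control points)."""
    mid = lambda a, b: ((a[0] + b[0]) / 2.0, (a[1] + b[1]) / 2.0)
    p01, p12, p23 = mid(p[0], p[1]), mid(p[1], p[2]), mid(p[2], p[3])
    p012, p123 = mid(p01, p12), mid(p12, p23)
    c = mid(p012, p123)  # the point on the curve at the parameter midpoint
    return [p[0], p01, p012, c], [c, p123, p23, p[3]]

def flatness(p):
    """Max distance of the two inner control points from the chord p[0]-p[3]."""
    dx, dy = p[3][0] - p[0][0], p[3][1] - p[0][1]
    length = math.hypot(dx, dy) or 1.0
    dist = lambda q: abs(dx * (q[1] - p[0][1]) - dy * (q[0] - p[0][0])) / length
    return max(dist(p[1]), dist(p[2]))

def tessellate(p, tol, out):
    """Recurse until every emitted segment is within the flatness tolerance."""
    if flatness(p) <= tol:
        out.append((p[0], p[3]))  # emit one chord of the approximation
    else:
        left, right = split(p)
        tessellate(left, tol, out)
        tessellate(right, tol, out)

segments = []
tessellate([(0.0, 0.0), (1.0, 2.0), (2.0, 2.0), (3.0, 0.0)], 0.05, segments)
```

The result is a connected polyline from the first to the last control point, with more segments wherever the curve bends more sharply.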
As shown in Fig. 49, there is also a method in which edges are generated by connecting the corner control points, the vector from the midpoint of each edge to the corresponding midpoint on the curved surface is calculated and defined as the chord deviation vector, and the fineness of division is determined according to the on-screen length of the perspective-transformed chord deviation vector (see, for example, Japanese Patent Laid-Open No. 2001-52194 (page 5)).
Next, as (D), a background technique of calculating the normal of each control point forming the finely divided surface patch will be described.
A Bezier surface obtained by applying knot insertion and parameter transformation to a NURBS surface forms a bi-degree-n Bezier surface when it has (n+1) x (n+1) control points; it is referred to here simply as a degree-n Bezier surface. In the field of 3-dimensional computer graphics, bicubic (degree-3) Bezier surfaces are often used because their shape is easy to control (see, for example, Computer Graphics, 2nd edition (Japanese translation)).
A bicubic Bezier surface is generally expressed by (Formula 5) below. Pij in (Formula 5) are the coordinates of the control points, and Ji and Kj are the Bernstein functions given by (Formula 6) and (Formula 7). P(u, v) denotes the vertex coordinates on the free-form surface for given u, v (0 <= u, v <= 1).
(Formula 5)
P(u, v) = ∑_{i=0..3} ∑_{j=0..3} Pij Ji Kj    (5)
(Formula 6)
Ji = [3! / (i! (3 - i)!)] u^i (1 - u)^(3 - i)    (6)
(Formula 7)
Kj = [3! / (j! (3 - j)!)] v^j (1 - v)^(3 - j)    (7)
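The Bernstein evaluation of (Formula 5) to (Formula 7) can be sketched directly as follows; the control net below is an illustrative assumption, not data from the patent.

```python
from math import comb

def bern(i, t):
    """Cubic Bernstein weight: 3!/(i!(3-i)!) * t^i * (1-t)^(3-i)."""
    return comb(3, i) * t ** i * (1 - t) ** (3 - i)

def bezier_point(P, u, v):
    """Evaluate (Formula 5) over a 4 x 4 control net P of 3-D points."""
    return tuple(sum(bern(i, u) * bern(j, v) * P[i][j][c]
                     for i in range(4) for j in range(4))
                 for c in range(3))

# Illustrative control net: P[i][j] = (i, j, i*j).
P = [[(float(i), float(j), float(i * j)) for j in range(4)] for i in range(4)]
```

At the parameter corners the surface interpolates the corner control points, matching the property of Bezier surfaces noted earlier.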
A Bezier surface can be drawn directly from (Formula 5) to (Formula 7). In that case, an iterative method or the like is used to calculate the edges where the surface intersects another surface or the projection plane, and the calculation time is long.
On the other hand, to draw faster than the direct method, a method called tessellation is used, in which points on the free-form surface are calculated by evaluating P(u, v) for specific parameter pairs (u, v), and drawing is approximated by a set of polygons connecting these points (see, for example, Japanese Patent Application Laid-Open No. 2001-331812).
Alternatively, as another drawing method, polygon-approximation drawing may be performed by a method called subdivision, in which the process of generating new control points by averaging the coordinates of adjacent control points is repeated (see, for example, Japanese Unexamined Patent Application Publication No. Hei 11-7544).
In the field of 3-dimensional computer graphics, the processing of color and shading, as well as the shape quality of an object, is an important factor determining image quality. Shading on each surface uses the normals of the object, so it is important to calculate the normals correctly.
However, regarding the background art of (A), the computation procedure of the curved surface image processing apparatus as a whole: since the conventional apparatus calculates all the coefficient matrices in the preprocessing stage, there is the problem that the amount of data other than the NURBS data increases.
In addition, there is the problem that the input data is limited to 3-dimensional NURBS surfaces. Further, when the difference matrices are used, the increments of the parameters u and v are limited to the constant values Δu and Δv. Moreover, when a NURBS surface is to be expressed exactly, a rational expression is required, so a division must be performed for each computed point on the NURBS surface, which increases the amount of computation.
Next, when the NURBS surface of (B) is drawn, points on the NURBS surface are obtained directly and polygon division is performed. However, calculating the coordinates of points on a NURBS surface by this method requires evaluating the basis functions expressed by the Cox-de Boor recurrence, which requires a very large amount of computation.
In addition, direct polygon division of the parametric surface described above, i.e., tessellation, has conventionally been performed by the CPU. However, to express a 3-dimensional object more precisely and faithfully, the number of polygons approximating the curved surface must be increased, and the load on the CPU grows accordingly. Furthermore, even when the circuit that divides a parametric surface into polygons (a tessellator) is implemented in hardwired logic, there is the problem that the arithmetic circuit for obtaining points on the surface becomes large.
Therefore, in order to solve these problems, in the present invention the points on the NURBS surface are not obtained directly; instead, the NURBS surface is equivalently transformed into simpler rational Bezier surfaces, and points on the rational Bezier surfaces are obtained directly for polygon division. This is because points on a rational Bezier surface can easily be found from its control points using the subdivision method. As methods of knot insertion for B-spline curves, the Oslo algorithm and Boehm's algorithm are known (see, for example, [Prautzsch, H., "A Short Proof of the Oslo Algorithm," Comp. Aided Geom. Des., vol. 1, pp. 95-96, 1984] and [Boehm, W., "Inserting New Knots into B-spline Curves," Comp. Aided Des., vol. 12, pp. 199-201, 1980]).
First, the problem that occurs when a NURBS surface is equivalently transformed into rational Bezier surfaces is described. To transform a NURBS surface equivalently into rational Bezier surfaces, a technique called knot insertion can be used. The surface transformation method using the knot insertion algorithm is described in detail below.
First, to simplify the explanation, the method of converting a NURBS curve into rational Bezier curves is described. The following algorithm is known for knot insertion into a NURBS curve. Let the degree be n, the control points (Q[0], Q[1], ..., Q[I-1]) (I control points), and the initial knot vector (u[0], u[1], ..., u[I+n]) (I + n + 1 elements). When one new knot ~u (insertion position k) is inserted between the knots u[k] and u[k+1], the new control point sequence (Q'[0], Q'[1], ..., Q'[I]) is given by the following formula.
<Q′[i]>=(1-a[i])*<Q[i-1]>+a[i]*<Q[i]>
In the above formula, i is not 0. The case i = 0 is as follows.
<Q′[0]>=a[0]*<Q[0]>
Here, the coefficient array a[i] in the above formula is given as follows.
a[i] = 1 (when i <= k - n)
a[i] = 0 (when i >= k + 1)
a[i] = (~u - u[i]) / (u[i+n] - u[i]) (otherwise)
For example, let the initial control point sequence be (Q[0], Q[1], Q[2], Q[3]) and the initial knot vector (u[0], u[1], ..., u[7]). When a new knot ~u equal in value to u[3] is inserted between the knots u[3] and u[4], the new control point sequence (Q'[0], Q'[1], ..., Q'[4]) is obtained as follows. Since the knot insertion position is k = 3, the coefficient array is
a[0]=1
a[1]=(~u-u[1])/(u[4]-u[1])=(u[3]-u[1])/(u[4]-u[1])
a[2]=(~u-u[2])/(u[5]-u[2])=(u[3]-u[2])/(u[5]-u[2])
a[3]=(~u-u[3])/(u[6]-u[3])=0
a[4]=0
and the new control point sequence becomes
<Q′[0]>=a[0]*<Q[0]>=<Q[0]>
<Q′[1]>=(1-a[1])*<Q[0]>+a[1]*<Q[1]>
<Q′[2]>=(1-a[2])*<Q[1]>+a[2]*<Q[2]>
<Q′[3]>=(1-a[3])*<Q[2]>+a[3]*<Q[3]>=<Q[2]>
<Q′[4]>=(1-a[4])*<Q[3]>+a[4]*<Q[4]>=<Q[3]>
This means that the initial control point <Q[1]> is eliminated and control points are generated at the new positions <Q′[1]> and <Q′[2]>.
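The worked example above can be reproduced with a short sketch of this single-knot-insertion step (Boehm's algorithm). The control point values and the uniform knot vector (0, 1, ..., 7) are illustrative assumptions; homogeneous weights are omitted for brevity. Inserting ~u = u[3] at position k = 3 should give Q'[0] = Q[0], Q'[3] = Q[2], and Q'[4] = Q[3], as derived above.

```python
def insert_knot(ctrl, knots, ubar, k, n):
    """Insert knot ubar between knots[k] and knots[k+1] for a degree-n curve."""
    def a(i):
        # coefficient array from the text: 1 below the affected band,
        # 0 above it, and the blending ratio in between
        if i <= k - n:
            return 1.0
        if i >= k + 1:
            return 0.0
        return (ubar - knots[i]) / (knots[i + n] - knots[i])
    new_ctrl = [ctrl[0]]  # Q'[0] = a[0]*Q[0] with a[0] = 1
    for i in range(1, len(ctrl) + 1):
        ai = a(i)
        prev = ctrl[i - 1]
        cur = ctrl[i] if i < len(ctrl) else ctrl[-1]  # a(i) = 0 here, term vanishes
        new_ctrl.append(tuple((1 - ai) * p + ai * c for p, c in zip(prev, cur)))
    new_knots = knots[:k + 1] + [ubar] + knots[k + 1:]
    return new_ctrl, new_knots

Q = [(0.0, 0.0), (1.0, 2.0), (3.0, 2.0), (4.0, 0.0)]
U = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0]
Q2, U2 = insert_knot(Q, U, 3.0, 3, 3)
```

With these values, a[1] = 2/3 and a[2] = 1/3, so the eliminated point Q[1] is replaced by two blended points, exactly as in the derivation.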
In practice, however, the control points have weights. In that case, the position coordinates of the control points must be converted into homogeneous coordinates before being substituted into the above formula. That is, when a control point is expressed as a 2-dimensional coordinate <Q[i]> = (qx[i], qy[i]) with weight qw[i], its homogeneous coordinate is obtained by multiplying the position coordinates by the weight: <Q[i]> = (qw[i]*qx[i], qw[i]*qy[i], qw[i]). When a control point is expressed as a 3-dimensional coordinate <Q[i]> = (qx[i], qy[i], qz[i]), the homogeneous coordinate is (qw[i]*qx[i], qw[i]*qy[i], qw[i]*qz[i], qw[i]). When knot insertion is performed on control points converted to homogeneous coordinates in this way, the final control point sequence is also in homogeneous coordinates, so a division by the weight is needed to return to ordinary coordinates.
However, when a NURBS curve is equivalently transformed into rational Bezier curves using the knot insertion algorithm, useless control points that should be discarded are generated, and the number of discarded control points itself varies depending on the arrangement of the initial knot vector. Therefore, if the NURBS curve is converted into rational Bezier form by knot insertion alone, the positions and number of control points to be discarded are unclear, and the subsequent subdivision processing becomes difficult.
Specific examples in which useless control points are generated when the NURBS curve of Fig. 19 is converted into rational Bezier form are described below.
As a first example of the generation of useless control points, the NURBS curve 1901 in Fig. 19 has degree n = 3 and is defined by the control point sequence (Q[0], Q[1], Q[2], Q[3]) and the knot vector (u[0], u[1], ..., u[7]). That is, the number of control points is 4 and the number of knot vector elements is 8. Here u[i] < u[j] is assumed for all pairs of different indices i, j (i < j). As shown in Fig. 23, knots are inserted one at a time into the initial knot vector (u[0], u[1], ..., u[7]), and the final knot vector is (u'[0], u'[1], u'[2], ..., u'[11]). If the following relations are satisfied, the finally generated NURBS curve is equivalently converted, with its shape unchanged, into one rational Bezier curve.
u[0]=u′[0]
u[1]=u′[1]
u[2]=u′[2]
u[3]=u′[3]=u′[4]=u′[5]
u[4]=u′[6]=u′[7]=u′[8]
u[5]=u′[9]
u[6]=u′[10]
u[7]=u′[11]
That is, since the valid range of the initial knot vector of the NURBS curve is (u[3], u[4]), the knots u[3] and u[4] in that range are inserted until their multiplicity is 3 (equal to the degree n = 3), giving the final knot vector (u'[0], u'[1], u'[2], ..., u'[11]). In this example, 4 new knots are inserted, so the final number of control points also increases by 4 to 8, and the final control point sequence is (Q'[0], Q'[1], ..., Q'[7]). A NURBS curve defined by this knot arrangement is known to be equivalent to a rational Bezier curve, and its shape is completely identical to the original NURBS curve. Here the number of control points after knot insertion is 8, but since the number of control points defining the rational Bezier curve is 4, 4 control points in the final control point sequence are useless.
In addition, as a second example of generating unnecessary control points, consider the case where the initial node vector has one more element: (u[0], u[1], ..., u[8]). Here, u[i] < u[j] is assumed for all distinct indices i, j (i < j). To obtain rational bezier curves, the final node vector (u′[0], u′[1], u′[2], ..., u′[14]) must satisfy the following relationships.
u[0]=u′[0]
u[1]=u′[1]
u[2]=u′[2]
u[3]=u′[3]=u′[4]=u′[5]
u[4]=u′[6]=u′[7]=u′[8]
u[5]=u′[9]=u′[10]=u′[11]
u[6]=u′[12]
u[7]=u′[13]
u[8]=u′[14]
That is, since the valid range of the initial node vector of the NURBS curve is (u[3], u[5]), the nodes u[3], u[4], and u[5] in this range are inserted until each reaches multiplicity 3 (equal to the degree n = 3), yielding the final node vector (u′[0], u′[1], u′[2], ..., u′[14]). The final number of control points after node insertion is 11, which is 4 less than the 15 elements of the node vector. At this point, the original NURBS curve is split into two consecutive rational bezier curves. Since the two rational bezier curves share one control point at the joint, the number of control points defining the rational bezier curves is 4 × 2 − 1 = 7. Therefore, 4 control points are again useless.
However, depending on the arrangement of the initial node vector values, the number of useless control points to be discarded is not always 4. That is, the number of useless control points varies with the initial node vector. This occurs when the elements of the initial node vector contain nodes of equal value.
That is, as a third example of generating unnecessary control points, consider the case where the node vector contains u[i] = u[j] for some distinct indices i and j (i < j). For example, if the relationship u[2] = u[3] holds in the initial node vector (u[0], u[1], ..., u[8]) of the example above, the following relationships must be satisfied in order to equivalently transform the NURBS curve generated after node insertion into two rational bezier curves.
u[0]=u′[0]
u[1]=u′[1]
u[2]=u[3]=u′[2]=u′[3]=u′[4]
u[4]=u′[5]=u′[6]=u′[7]
u[5]=u′[8]=u′[9]=u′[10]
u[6]=u′[11]
u[7]=u′[12]
u[8]=u′[13]
That is, since the valid range of the initial node vector of the NURBS curve is (u[3], u[5]) and u[2] = u[3], nodes are inserted so that u[3] (which coincides with u[2]) reaches multiplicity 3, and u[4] and u[5] each also reach multiplicity 3, yielding the final node vector (u′[0], u′[1], u′[2], ..., u′[13]). Here, the final number of control points after node insertion is 14 − 4 = 10, but since the number of control points defining the rational bezier curves is 4 × 2 − 1 = 7, it follows that 3 control points are useless in this example.
The same generation of useless control points occurs when NURBS surfaces are equivalently transformed into rational bezier surfaces.
A NURBS surface is used as a fourth example of generating useless control points. In the case of a NURBS surface, the control points are defined in a 2-dimensional array, and the degree and the node vector serving as basis function parameters are defined in the u direction and the v direction, respectively. Therefore, a NURBS surface can be equivalently transformed into rational bezier surfaces by performing node insertion in the u direction and the v direction separately.
For example, assume the NURBS surface has 5 × 5 = 25 control points, degrees m = n = 3 in the u direction and the v direction, a node vector (u[0], u[1], ..., u[8]) in the u direction, and a node vector (v[0], v[1], ..., v[8]) in the v direction. Further, for the node vector in the u direction, u[i] < u[j] holds for all distinct indices i, j (i < j); for the node vector in the v direction, v[2] = v[3] holds, and otherwise v[i] < v[j]. If the final node vectors in the u direction and the v direction after node insertion are (u′[0], u′[1], u′[2], ..., u′[14]) and (v′[0], v′[1], v′[2], ..., v′[13]), respectively, the following relationships must be satisfied, by analogy with the above, for equivalent transformation into rational bezier surfaces.
u[0]=u′[0]
u[1]=u′[1]
u[2]=u′[2]
u[3]=u′[3]=u′[4]=u′[5]
u[4]=u′[6]=u′[7]=u′[8]
u[5]=u′[9]=u′[10]=u′[11]
u[6]=u′[12]
u[7]=u′[13]
u[8]=u′[14]
v[0]=v′[0]
v[1]=v′[1]
v[2]=v[3]=v′[2]=v′[3]=v′[4]
v[4]=v′[5]=v′[6]=v′[7]
v[5]=v′[8]=v′[9]=v′[10]
v[6]=v′[11]
v[7]=v′[12]
v[8]=v′[13]
Therefore, the final number of control points after node insertion is 11 in the u direction and 10 in the v direction, giving 11 × 10 = 110 in total. On the other hand, the number of control points defining the rational bezier surfaces is 7 in the u direction and 7 in the v direction, giving 7 × 7 = 49. Therefore, the number of useless control points to be discarded is 110 − 49 = 61.
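The counts in the four examples above follow from two facts: after node insertion the number of control points equals (number of nodes) − (degree) − 1, and an interval rational bezier curve of s degree-3 segments needs 3s + 1 control points (segment joints are shared). The following sketch (Python; a hypothetical helper for the degree-3 setting of the examples, not part of the apparatus) reproduces all four counts:

```python
from collections import Counter

def bezier_counts(knots, p=3):
    # returns (control points after node insertion,
    #          control points that actually define the bezier segments)
    mult = Counter(knots)
    lo, hi = knots[p], knots[-(p + 1)]        # valid range of the node vector
    in_range = [v for v in sorted(mult) if lo <= v <= hi]
    inserted = sum(max(0, p - mult[v]) for v in in_range)
    final_cp = len(knots) + inserted - p - 1  # node count - degree - 1
    segments = len(in_range) - 1
    bezier_cp = p * segments + 1              # joints counted once
    return final_cp, bezier_cp
```

For the surface case the counts multiply per direction, so the useless control points are 11 × 10 − 7 × 7 = 61, as above.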
As described above, when a NURBS surface is transformed into rational bezier surfaces, useless control points are generated during the node insertion process.
Further, since the number of unnecessary control points varies depending on the arrangement of the initial node vector, in particular on the multiplicity of the nodes in the valid range of the initial node vector, these rules need to be generalized.
Further, the control point data finally converted into rational bezier form by node insertion is transferred to the polygon division processing block of the subsequent subdivision stage. However, the only control points necessary for the subdivision processing are those defining the rational bezier curves; control points that do not define a rational bezier curve are not required. Therefore, it is necessary to remove in advance, from the final control points after node insertion, the control points that do not define a rational bezier curve, thereby reducing the data amount.
Next, in the subdivision processing shown in (C), for example in Japanese patent application laid-open No. H11-007544 (pages 11-14), the flatness must be calculated every time a patch is subdivided, so there is a problem that the computational load of the flatness calculation in the curved surface image processing apparatus is very large.
In the above-mentioned Japanese patent application laid-open No. 2001-52194 (page 5), the string offset vector cannot be used as an index for detecting a patch that forms a contour edge (hereinafter referred to as a contour edge forming patch).
Next, regarding the normal calculation of each control point constituting the bezier surface in (D): in the method described in [Computer Graphics, 2nd edition (Japanese translation)], the normal of a point on the generated surface is calculated by directly supplying parameters such as (u, v), computing the vertex on the surface, and using the intermediate points obtained along the way. In addition, in the normal calculation of each control point describing a surface patch, when the points at the four corners or adjacent control points coincide, the calculation is avoided by detecting the coincidence.
In the method of Japanese unexamined patent publication No. H11-7544, the calculation time is long because a general expression is used to calculate the normal on the curved surface. In addition, when Coons patches are used, the normal is only an approximate calculation.
Further, in the methods of the above-mentioned Japanese patent laid-open Nos. 2001-331812 and 11-7544, when only the control points on the curved surface are used in generating the final image, there is a problem that the normal calculation is also performed for the unnecessary control points, which increases the amount of computation.
Disclosure of Invention
In view of the above problems, a first object of the present invention is to provide a curved surface image processing apparatus capable of rendering at higher speed and with high quality in image processing using NURBS data as graphics information.
It is a second object of the present invention to provide a curved surface image processing apparatus that can more effectively reduce the amount of computation in image processing, even in the computation step of polygon division by a subdivision method after converting NURBS data into rational bezier data by node insertion.
A third object of the present invention is to provide a curved surface image processing apparatus capable of performing curved surface segmentation processing more efficiently and reducing arithmetic processing even when polygonal segmentation of a curved surface is performed by a subdivision method.
A fourth object of the present invention is to provide a curved surface image processing apparatus capable of efficiently calculating accurate normals, by a method suited to the normal calculation of the control points at the four corners of a curved surface patch (which are vertices on the surface), even when normal calculation uses control point information located on a surface such as a bezier surface.
In order to solve the above problems, a curved surface image processing apparatus according to the present invention is a curved surface image processing apparatus for rendering a 3-dimensional object on a screen from NURBS data serving as shape data of the 3-dimensional object, comprising: a data conversion unit for converting NURBS data parameters formed of a NURBS curve and a NURBS surface into rational bezier control point data formed of a rational bezier curve and a rational bezier surface; a curved surface division unit that divides a rational bezier surface patch formed by the rational bezier control point data converted by the data conversion unit into a plurality of curved surface patches; and a rendering unit that renders the 3-dimensional object with the curved surface patches.
Therefore, although NURBS data cannot be subdivided directly, providing the data conversion unit that converts the NURBS data parameters into bezier data effectively reduces the amount of computation in the 3-dimensional object rendering process of the curved surface image processing apparatus, and a highly accurate rendering process can be performed in a short time.
In the curved surface image processing apparatus according to the present invention, the NURBS data includes a control point sequence and a node vector, and the data conversion means includes a node insertion unit configured to insert nodes into the node vector by a node insertion algorithm, and a control point modification unit configured to delete unnecessary control points from the control points included in the control point sequence generated by the node insertion unit. Preferably, the node insertion unit searches for the indices of nodes located at specific positions of the final node vector in the process of converting the initial node vector and the initial control point sequence included in the NURBS data into the final node vector and the final control point sequence representing the rational bezier control point data, and the control point modification unit deletes specific control points of the final control point sequence using the searched indices.
Therefore, since the unnecessary control points generated in the parameter conversion from NURBS data into bezier data are appropriately deleted before the subdivision processing is performed, useless computation is reduced and efficient 3-dimensional object rendering can be performed.
Further, in the curved surface image processing apparatus of the present invention, the curved surface division means further includes an area calculation unit for calculating the signed areas of the 2-dimensional figures obtained by perspective transformation of the rational bezier control point data defining the shape of each curved surface patch constituting the object, and a detection unit that detects, based on the values of the signed areas, whether or not a curved surface patch forms a contour edge of the object. The curved surface division means may further include a subdivision level determination unit that determines the subdivision level of a curved surface patch in accordance with the result of the detection unit's determination of whether the patch forms a contour edge, and with the on-screen signed area values of the patch calculated by the area calculation unit.
Therefore, the contour edges constituting the contour portion of the rational bezier surface to be subdivided are found, the subdivision level of curved surface patches forming a contour edge is raised, and the subdivision level of patches not forming a contour edge is lowered; previously unnecessary subdivision operations are thus omitted while the edge portions are subdivided to higher levels, enabling more accurate rendering. In addition, by using the signed area values in determining the subdivision level, patches can be subdivided more efficiently.
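The signed-area test can be sketched as follows (Python; a minimal illustration of the idea only, not the apparatus's actual detection unit): after perspective transformation, the signed area of a triangle of projected control points flips sign where the control polygon folds over on screen, which marks a contour edge forming patch (compare fig. 32(a) and (b)).

```python
def signed_area(a, b, c):
    # half the 2D cross product; positive for a counter-clockwise triangle
    return 0.5 * ((b[0] - a[0]) * (c[1] - a[1]) - (c[0] - a[0]) * (b[1] - a[1]))

def forms_contour_edge(triangles):
    # a patch is a contour-edge candidate when its projected control
    # triangles do not all share the same orientation (sign)
    signs = [signed_area(*t) > 0 for t in triangles]
    return any(signs) and not all(signs)
```

The magnitude of the signed area can then be looked up in a table such as those of fig. 35 to choose the subdivision level.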
Further, the curved surface image processing apparatus of the present invention includes a normal calculation unit that calculates the normal of each control point using the rational bezier control point data of the rational bezier surface. The normal calculation unit includes a selection unit that, when calculating the normal of a first control point located at one of the four corners of a curved surface patch, selects two control points adjacent to the first control point, and a calculation unit that calculates the vectors from the first control point to each of the two adjacent control points and calculates their outer (cross) product as the normal of the first control point.
Therefore, in the normal calculation for the control points of a patch after subdivision processing, a control point that is retracted to (coincides with) another control point is not selected in the calculation of the normal vector, so the normal of each control point constituting the bezier surface can be calculated more accurately, and rendering processing such as the luminance of the 3-dimensional object can be performed with higher accuracy.
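The corner-normal rule can be sketched as follows (Python; the point layout is hypothetical — p00 is the corner control point, p01 and p10 its neighbours along the two parameter directions — and the handling of retracted neighbours is simplified to a single check):

```python
import math

def sub(a, b):
    return tuple(x - y for x, y in zip(a, b))

def cross(u, v):
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

def corner_normal(p00, p01, p10):
    # vectors from the corner control point to its two adjacent control
    # points; their cross product gives the (unnormalized) corner normal
    n = cross(sub(p01, p00), sub(p10, p00))
    length = math.sqrt(sum(c * c for c in n))
    if length == 0.0:
        # a neighbour is retracted onto the corner: another adjacent
        # control point would have to be selected instead
        raise ValueError("adjacent control point is retracted")
    return tuple(c / length for c in n)
```

In a full implementation the selection unit would walk further along the row or column (as in fig. 54) whenever the nearest neighbour coincides with the corner.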
The present invention can be realized not only as the curved surface image processing apparatus described above, but also as a curved surface image processing method having, as steps, the units provided in the curved surface image processing apparatus, and as a program that realizes the curved surface image processing method on a computer or the like; the program can be distributed via a recording medium such as a DVD or CD-ROM, or via a transmission medium such as a communication network.
Drawings
Fig. 1 is a functional block diagram of a curved image processing apparatus according to embodiment 1.
Fig. 2 is a flowchart showing the overall processing of the curved surface image processing apparatus according to embodiment 1.
Fig. 3(a) is a diagram showing an example of a NURBS curve.
Fig. 3(b) is a diagram showing an example of a NURBS curved surface.
Fig. 4(a) is a diagram showing an example of 3 rational bezier curves.
Fig. 4(b) is a diagram showing an example of 3 rational bezier surfaces.
Fig. 5 is a diagram showing an example of a segment of a 3-degree rational bezier curve in 3-dimensional Euclidean space.
Fig. 6 is a diagram showing an example of segment division using a de Casteljau algorithm in a shadow space for segments of 3-degree rational bezier curves.
FIG. 7 is a reference diagram illustrating a 3-degree rational bezier patch with 16 control point data.
Fig. 8 is a reference diagram of the rational bezier patches resulting from dividing the section of the rational bezier patch in the parameter u direction by the processing of step 205.
Fig. 9 is a reference diagram of the rational bezier patches resulting from further dividing the section in the parameter v direction by the processing of step 205.
Fig. 10 is a diagram showing an example of converting 4 surface patches into polygon data.
Fig. 11 is a flowchart showing a specific process of the curved surface image processing apparatus according to embodiment 2.
Fig. 12 is a block diagram showing the configuration of the curved patch division processing unit according to embodiment 3.
Fig. 13 is a flowchart showing a specific process of the segment assigning section.
Fig. 14 is a functional block diagram of a data conversion unit according to embodiment 4.
Fig. 15 is a diagram illustrating the use of a subdivision method for rational bezier curves.
Fig. 16 is a diagram schematically showing control points defining a rational bezier surface.
Fig. 17 is a diagram illustrating the use of a subdivision method for rational bezier surfaces.
Fig. 18 is a diagram illustrating the use of a subdivision method for rational bezier surfaces.
Fig. 19 is a diagram illustrating the use of a subdivision method for rational bezier curves.
Fig. 20 is a diagram showing an example of a NURBS curve.
Fig. 21 is a diagram showing an example of a NURBS curve.
Fig. 22 is a diagram illustrating segment approximation of a conventional NURBS surface.
Fig. 23 is a diagram illustrating polygon segmentation of a conventional NURBS curved surface.
FIG. 24 is a schematic diagram illustrating a surface transformation based on node insertion.
Fig. 25 is a diagram for explaining the specification of the unnecessary control points according to embodiment 4.
Fig. 26 is a diagram for explaining the specification of the unnecessary control points according to embodiment 4.
Fig. 27 is a diagram for explaining the specification of the unnecessary control points according to embodiment 4.
Fig. 28 is a diagram illustrating a control point trimming method according to embodiment 4.
Fig. 29 is a functional block diagram showing a curved patch division processing unit according to embodiment 5.
Fig. 30 is a diagram showing an example of an order-4 (degree-3) rational bezier surface as a parametric surface.
Fig. 31 is a flowchart showing a process in the contour edge detection unit according to embodiment 5.
Fig. 32(a) is a diagram showing an example of a case where two triangular signed areas formed by perspective-transformed control points are the same sign.
Fig. 32(b) is a diagram showing an example of a case where two triangular signed areas are different signs.
Fig. 33(a) is a diagram showing an example of a patch before subdivision is performed.
Fig. 33(b) is a diagram showing an example of a patch subjected to level 1 subdivision.
Fig. 33(c) shows an example of a patch subjected to level 2 subdivision.
Fig. 34 is a flowchart showing the processing of the segmentation level decision unit according to embodiment 5.
Fig. 35(a) is a diagram showing an example of a table showing a correspondence relationship between the maximum value of the signed area and the level of subdivision of the contour edge forming patch.
Fig. 35(b) is a diagram showing an example of a table showing a correspondence relationship between the signed area and the subdivision level.
Fig. 36(a) is a diagram showing an example of an object before the subdivision is performed.
Fig. 36(b) is a diagram showing an example of an object after each patch constituting the object is subdivided in accordance with the subdivision levels.
Fig. 37 is a flowchart showing a process of the contour edge detection unit according to embodiment 6.
Fig. 38(a) is a diagram showing an example of a case where all control polygons face outward.
Fig. 38(b) is a diagram showing an example of the case where all the surfaces are facing inward.
Fig. 38(c) is a diagram showing an example of a case where a polygon having an outward direction and an inward direction is mixed in a control polygon.
Fig. 38(d) is a diagram showing an example of a case where a polygon having an outward direction and an inward direction is mixed in a control polygon.
Fig. 39 is a flowchart showing the processing of the segmentation level decision unit according to embodiment 6.
Fig. 40 is a diagram showing an example of a table showing the correspondence relationship between the signed area and the segmentation level of the contour edge forming patch in embodiment 6.
Fig. 41 is a flowchart showing processing in the fine division level determination unit.
Fig. 42(a) is a diagram showing an example of a patch that is not necessarily divided in any of the u and v axis directions.
Fig. 42(b) is a diagram showing an example of a patch that needs to be divided finely in the u-axis direction.
Fig. 42(c) is a diagram showing an example of a patch that needs to be divided finely in the v-axis direction.
Fig. 42(d) is a diagram showing an example of a patch that needs to be divided into fine pieces in the u-and v-axis directions.
Fig. 43 is a table in which the bending parameter C is associated with a fine division level.
Fig. 44 is a diagram showing an example of the configuration of the curved surface image processing apparatus according to embodiment 7.
Fig. 45(a) is a diagram illustrating a method of determining the maximum segmentation level by the method 1.
Fig. 45(b) is a diagram illustrating a method of determining the maximum segmentation level by the method 2.
Fig. 46 is a diagram showing an example of a table showing a correspondence relationship between a curvature parameter and a maximum segmentation level.
Fig. 47 is a diagram showing an example of a table showing a correspondence relationship between a signed area of a patch and a subdivision level.
Fig. 48 is a diagram showing an example of the configuration of the curved surface image processing apparatus according to embodiment 8.
Fig. 49 is a diagram for explaining a curved surface segmentation method in the related art.
Fig. 50 is a functional block diagram showing the configuration of the normal line calculation unit according to embodiment 9.
Fig. 51 is a block diagram showing another configuration of the curved surface image processing apparatus according to embodiment 9.
Fig. 52 is a flowchart showing a processing procedure of the normal line calculating unit according to embodiment 9.
Fig. 53 is a diagram showing an example of the normal vector when the control point adjacent to the control point to be subjected to normal calculation is not retracted.
Fig. 54(a) is a reference diagram for explaining the case where the control point adjacent to the control point P00 to be subjected to normal calculation is retracted.
Fig. 54(b) is a reference diagram for explaining the case where the control point adjacent to the control point P00 to be subjected to normal calculation is retracted.
Fig. 54(c) is a reference diagram for explaining the case where the control point adjacent to the control point P00 to be subjected to normal calculation is retracted.
Fig. 55(a) is a diagram showing an example of a list of control points and vertex coordinates stored in the memory.
Fig. 55(b) is a diagram showing an example of a list describing control points and normal line data stored in the memory.
Detailed Description
Hereinafter, a curved surface image processing apparatus according to the present invention will be described with reference to the drawings. In addition, a curved surface image processing apparatus 100 having the overall characteristics of the process of generating a curved surface image using NURBS data will be described in embodiments 1 to 3 below.
(embodiment 1)
Fig. 1 is a functional block diagram of a curved surface image processing apparatus 100 according to embodiment 1.
The curved surface image processing apparatus 100 includes a data input unit 101 for inputting NURBS data, a coordinate conversion unit 102 for performing coordinate conversion on the NURBS data, an animation control unit 103 for controlling animation data of each rendering frame, a data conversion unit 104 for converting the NURBS data into rational bezier data, a curved surface patch division processing unit 105 for subdividing the rational bezier curved surface patch, a normal calculation unit 106 for calculating a normal vector for a control point of the divided curved surface patch, a perspective conversion unit 107 for performing perspective conversion on the divided curved surface patch, and a rendering processing unit 108 for performing rendering processing on the curved surface patch.
The curved surface image processing apparatus 100 according to the present invention is not limited to the configuration shown in fig. 1; it suffices to include the coordinate conversion unit 102, the data conversion unit 104, and the curved surface patch division processing unit 105, and the other processing units are optional constituent elements.
First, the NURBS data and rational bezier control point data in the curved surface image processing apparatus 100 according to embodiment 1 of the present invention, and the method of processing them, will be described.
NURBS data forming NURBS curves and NURBS surfaces is composed of 3 elements: the NURBS control points, the weight of each control point, and the node vectors. The rational bezier control point data forming rational bezier curves and rational bezier surfaces is composed of 2 elements: the rational bezier control points and the weight of each control point.
In general, in an ordinary coordinate system in 3-dimensional Euclidean space, an arbitrary control point and weight of NURBS data or rational bezier control point data are represented by the combination of P(x, y, z) and w.
On the other hand, a coordinate system that treats the weight component w as one coordinate is referred to as a homogeneous coordinate system, and a point may be represented as P(X, Y, Z, w). The space represented by the homogeneous coordinate system is called a projective space. When a point P(x, y, z) in 3-dimensional Euclidean space is expressed as P(X, Y, Z, w) in projective space, the following relationship (equation 8) holds between the two points.
Formula 8
P(X, Y, Z, w) = P(wx, wy, wz, w) = wP(x, y, z, 1) (8)
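Equation 8 amounts to the following pair of conversions between Euclidean and homogeneous (projective) representations (Python sketch with illustrative names):

```python
def to_projective(p, w):
    # P(x, y, z) with weight w -> P(X, Y, Z, w) = (wx, wy, wz, w)
    x, y, z = p
    return (w * x, w * y, w * z, w)

def to_euclidean(P):
    # divide through by the weight to return to 3-dimensional Euclidean space
    X, Y, Z, w = P
    return (X / w, Y / w, Z / w)
```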
In embodiment 1, for both NURBS data and rational bezier control point data, the control points and weights are hereinafter collectively referred to as control point data, and are processed as P(X, Y, Z, w) in homogeneous coordinates in projective space.
The operation of the curved image processing apparatus 100 configured as described above will be described with reference to fig. 2. Fig. 2 is a flowchart showing the overall processing of the curved surface image processing apparatus 100 according to embodiment 1.
First, the data input unit 101 inputs NURBS data to the coordinate conversion unit 102 (S201).
Next, the animation control unit 103 calculates the animation data for the current frame. Here, the animation data is composed of, for example, time information indicating the elapsed time of the current frame, a viewpoint including camera position information, a line of sight including camera direction information, and light source information including the light source type, position, and intensity (S202).
Then, the coordinate conversion unit 102 performs model conversion, field of view conversion, and clipping (clipping) processing in a 3-dimensional space on the NURBS data using the information on the viewpoint and the line of sight of the animation data input from the animation control unit 103, and calculates NURBS data in a field of view coordinate system (S203).
Next, the data conversion unit 104 converts each NURBS curve forming the NURBS data into an interval rational bezier curve composed of segments by inserting nodes (S204). The handling of rational bezier surfaces in this step will be described later. Methods of transforming a B-spline into a rational bezier curve by node insertion include the Oslo algorithm and Boehm's method, as described in [Prautzsch, H., "A Short Proof of the Oslo Algorithm," Comp. Aid. Geom. Des., Vol. 1, pp. 95-96, 1984] and [Boehm, W., "Inserting New Knots into B-spline Curves," Comp. Aid. Des., Vol. 12, pp. 199-201, 1980].
The curved surface patch division processing unit 105 calculates a plurality of rational bezier surface patches from the NURBS data and performs subdivision processing of the interval rational bezier surfaces (S205). In embodiment 1, this division processing is performed by the de Casteljau algorithm.
Next, the curved surface patch division processing unit 105 determines, using the distance from the current viewpoint to each rational bezier surface patch, whether the subdivision result and each curved surface patch have sufficient flatness. If further subdivision is necessary, the division processing of the rational bezier patches is performed again (no in S206). When all rational bezier surface patches have been subdivided sufficiently (yes in S206), the perspective conversion unit 107 approximately converts each rational bezier surface patch into polygon data having the control point data as vertices (S207).
Then, the normal calculation unit 106 calculates normal vectors of control points of each polygon data after the division (S208), the perspective conversion unit 107 performs perspective conversion for converting 3-dimensional coordinates into 2-dimensional coordinates on the screen (S209), and the rendering processing unit 108 performs arrangement and rendering of each polygon data, thereby completing rendering processing of the 3-dimensional object (S210).
The processing of all steps is repeatedly executed for each drawing frame, and once the drawing of all the drawing frames is completed, the series of processing is terminated.
A NURBS surface is formed from a set of Non-Uniform Rational B-spline curves. For example, in fig. 3(b), the NURBS surface 33 is formed by a set of bidirectional NURBS curves with the parameters u and v as intermediate variables.
Fig. 4(a) and (b) are diagrams showing examples of a 3-degree rational bezier curve and a 3-degree rational bezier surface. In fig. 4(a), the shape of the 3-degree rational bezier curve 41 is manipulated by a plurality of control point data 42. A rational bezier curve with the smallest constituent elements is generally referred to as a segment; a degree-n segment is formed of (n + 1) control point data, and in particular the 1st and (n + 1)th control point data are points on the curve and are referred to as end points. For example, a 3-degree segment is composed of 4 control point data, of which the 1st and 4th are end points. A curve connecting such segments is referred to as an interval rational bezier curve.
In FIG. 4(a), P0, P1, and P2 are end points, and the intervals P0P1 and P1P2 are segments. By sharing end points in this way, a smooth interval rational bezier curve can be expressed. Like the NURBS curve, the 3-degree rational bezier curve 41 is a parametric curve with the parameter u as an intermediate variable, and is given by (equation 9).
Formula 9
P(u) = (1-u)^3 P0 + 3u(1-u)^2 P1 + 3u^2(1-u) P2 + u^3 P3 (9)
In equation 9, P0, P1, P2, and P3 represent control point data. Operations using rational bezier curves are simpler than NURBS operations and can be implemented in hardware with a small-scale circuit.
In fig. 4(b), the rational bezier surface patch 43 is a parametric surface with parameters u and v as intermediate variables, and its shape is manipulated by the control point data indicated by 44, as with the rational bezier curve 41. In general, a rational bezier surface patch can be defined by a bidirectional set of segments in the parameters u and v. A degree-n rational bezier surface patch has (n + 1) × (n + 1) control point data, and the points at the 4 corners of the patch are end points, that is, points on the surface. In fig. 4, R0, R1, R2, and R3 are end points and are points on the 3-degree rational bezier surface patch 43.
As described above, in S204 a plurality of rational Bézier surface patches are calculated from the NURBS data. The parameters u and v defining the NURBS data do not necessarily match the parameters u and v defining the degree-3 rational Bézier surface patches calculated by the conversion in S204.
Next, a method by which the surface patch division processing unit 105 divides a rational Bézier surface patch using the recursive (de Casteljau) algorithm will be described.
Fig. 5 shows an example of a segment of a degree-3 rational Bézier curve 51 in 3-dimensional Euclidean space. In Fig. 5, B1, B2, B3, and B4 are the control points forming the rational Bézier segment 51, and B1 and B4 are end points. Here, the weights of control points B1, B2, B3, and B4 are w1, w2, w3, and w4, respectively.
In general, when the rational recursive (de Casteljau) algorithm is applied to 1 segment 51 in 3-dimensional Euclidean space, let C1, C2, and C3 be the points that internally divide the straight lines B1B2, B2B3, and B3B4 connecting the control points in the ratio w(i+1)*t : w(i)*(1-t), where 0 < t < 1 and i = 1, 2, 3, with the respective weights w'(i) = w(i)*(1-t) + w(i+1)*t.
Next, let D1 and D2 be the points that internally divide the straight lines C1C2 and C2C3 in the ratio w'(i+1)*t : w'(i)*(1-t), where i = 1, 2, with the respective weights w''(i) = w'(i)*(1-t) + w'(i+1)*t.
Finally, let B5 be the point that internally divides the straight line D1D2 in the ratio w''(2)*t : w''(1)*(1-t), with the weight w5 = w''(1)*(1-t) + w''(2)*t.
Then B5 is a point on the segment 51, and the calculated w5 is the weight of B5. Here, B1, C1, D1, and B5 are the control points of the segment B1B5, and B5, D2, C3, and B4 are the control points of the segment B5B4.
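The level-by-level weighted construction above can be sketched as follows (a minimal Python sketch; the function name and sample segment are illustrative). Each level blends neighbouring points in the ratio w(i+1)*t : w(i)*(1-t) and carries the new weight w(i)*(1-t) + w(i+1)*t:

```python
def rational_de_casteljau(points, weights, t):
    """Split one rational degree-3 segment at parameter t.  At each level a
    pair of neighbours is blended in the ratio w[i+1]*t : w[i]*(1-t), and
    the new weight is w[i]*(1-t) + w[i+1]*t, as described in the text."""
    levels = [(list(points), list(weights))]
    pts, wts = list(points), list(weights)
    while len(pts) > 1:
        nxt_p, nxt_w = [], []
        for i in range(len(pts) - 1):
            w = wts[i] * (1 - t) + wts[i + 1] * t
            p = tuple((wts[i] * (1 - t) * a + wts[i + 1] * t * b) / w
                      for a, b in zip(pts[i], pts[i + 1]))
            nxt_p.append(p)
            nxt_w.append(w)
        pts, wts = nxt_p, nxt_w
        levels.append((pts, wts))
    # B1,C1,D1,B5 = first point of each level; B5,D2,C3,B4 = last of each level
    left = [(lv[0][0], lv[1][0]) for lv in levels]
    right = [(lv[0][-1], lv[1][-1]) for lv in reversed(levels)]
    return left, right
```

The first points of the successive levels give the control points and weights of segment B1B5, and the last points give those of segment B5B4.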
In the present embodiment, since the control point data is processed in homogeneous coordinates, the recursive (de Casteljau) algorithm in projective space is used, as shown, for example, in Gerald E. Farin, "NURBS: From Projective Geometry to Practical Use", pp. 119-122. In the present embodiment, t = 1/2.
Fig. 6 shows an example of applying the recursive (de Casteljau) algorithm to a degree-3 segment in projective space. In Fig. 6, B1, B2, B3, and B4 are the control point data forming the segment 61, and B1 and B4 are end points.
In step S205, for the segment 61, when the points dividing the straight lines B1B2, B2B3, and B3B4 internally in the ratio 1/2 : (1 - 1/2) (i.e. the midpoints) are C1, C2, and C3, the midpoints of the straight lines C1C2 and C2C3 are D1 and D2, and the midpoint of the straight line D1D2 is B5, then B5 is a point on the segment 61.
As a result, the segment 61 is divided into two segments, B1B5 and B5B4. The control points of the newly formed segment B1B5 are B1, C1, D1, and B5, and the control points of the segment B5B4 are B5, D2, C3, and B4.
As described above, in the surface patch division processing unit 105 according to embodiment 1, by using t = 1/2 in the recursive (de Casteljau) algorithm in projective space with homogeneous coordinates, the division processing requires only shift operations and additions, with no multiplication or division, so the division processing can be significantly speeded up.
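As a sketch of why only shifts and adds suffice at t = 1/2: on homogeneous integer (fixed-point) coordinates, each de Casteljau step reduces to (A + B) >> 1. The pre-scaling of the inputs by 2^3 = 8, which keeps the three halvings of one cubic split exact, is an illustrative convention and not the patent's fixed-point format:

```python
def split_homogeneous_midpoint(ctrl):
    """Subdivide one degree-3 segment at t = 1/2 directly on homogeneous
    integer coordinates (wx, wy, w): every de Casteljau step is just
    (A + B) >> 1, i.e. one addition and one shift per component."""
    def mid(a, b):
        return tuple((x + y) >> 1 for x, y in zip(a, b))
    b1, b2, b3, b4 = ctrl
    c1, c2, c3 = mid(b1, b2), mid(b2, b3), mid(b3, b4)
    d1, d2 = mid(c1, c2), mid(c2, c3)
    b5 = mid(d1, d2)
    return (b1, c1, d1, b5), (b5, d2, c3, b4)
```

For the same illustrative segment as before, scaled by 8 with all weights 1, the split point B5 comes out as (16, 12, 8), i.e. (2, 1.5) after dehomogenization, which agrees with direct evaluation at u = 1/2.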
Fig. 7 is a reference diagram showing a degree-3 rational Bézier surface patch 71 with 16 pieces of control point data. In Fig. 7, P1, P2, P3, and P4 are end points, and each end point is a point on the patch.
Fig. 8 is a reference diagram of the rational Bézier surface patches obtained by dividing, through the processing of step S205, the segments in the parameter u direction that form the rational Bézier surface patch 71 of Fig. 7.
In step S205, in particular, the point P5 obtained by dividing the segment P1P2 and the point P6 obtained by dividing the segment P3P4 become points on the rational Bézier surface patch 71. That is, the rational Bézier surface patch 71 is divided into two patches: a rational Bézier surface patch 81 having P1, P5, P6, and P3 as end points, and a rational Bézier surface patch 82 having P5, P2, P4, and P6 as end points.
Fig. 9 is a reference diagram of the result of dividing the segments in the parameter v direction of the rational Bézier surface patches 81 and 82 of Fig. 8 using the processing of step S205.
Here, the points P7, P8, and P9 on the rational Bézier surface patches are newly calculated. The rational Bézier surface patch 81 is divided into a rational Bézier surface patch 91 with end points P7, P9, P6, and P3 and a rational Bézier surface patch 92 with end points P1, P5, P9, and P7, and the rational Bézier surface patch 82 is divided into a rational Bézier surface patch 93 with end points P9, P8, P4, and P6 and a rational Bézier surface patch 94 with end points P5, P2, P8, and P9.
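The two-stage subdivision of Figs. 8 and 9 (split every u-direction row, then every v-direction column of both halves) can be sketched as follows. This sketch uses t = 1/2 in floating point for clarity rather than the shift-only form, and the planar control net with all weights 1 is purely illustrative:

```python
def split_curve(ctrl, t=0.5):
    # one de Casteljau split of 4 homogeneous control points into two halves
    def lerp(a, b):
        return tuple((1 - t) * x + t * y for x, y in zip(a, b))
    b1, b2, b3, b4 = ctrl
    c1, c2, c3 = lerp(b1, b2), lerp(b2, b3), lerp(b3, b4)
    d1, d2 = lerp(c1, c2), lerp(c2, c3)
    b5 = lerp(d1, d2)
    return [b1, c1, d1, b5], [b5, d2, c3, b4]

def split_patch(net):
    """Split a 4x4 homogeneous control net into 4 sub-patches: first every
    row (u direction), then every column of both halves (v direction)."""
    rows_l, rows_r = zip(*(split_curve(row) for row in net))
    def split_cols(rows):
        cols = [list(c) for c in zip(*rows)]           # transpose to columns
        top, bottom = zip(*(split_curve(c) for c in cols))
        back = lambda cs: [list(r) for r in zip(*cs)]  # transpose back to rows
        return back(top), back(bottom)
    p_a, p_b = split_cols(rows_l)
    p_c, p_d = split_cols(rows_r)
    return p_a, p_b, p_c, p_d
```

The four returned 4x4 nets correspond to the four sub-patches of Fig. 9, each sharing its corner control points with its neighbours.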
Fig. 10 shows an example of converting the 4 surface patches in Fig. 9 into polygon data. The surface patches 91, 92, 93, and 94 are converted into a total of 8 pieces of polygon data with the control point data on the surface as vertices.
Since the vertices of these polygon data are points in projective space, in order to perform rendering they must be transformed into points in 3-dimensional Euclidean space.
Generally, the transformation from coordinates defined in projective space to coordinates defined in 3-dimensional Euclidean space is referred to as a projective transformation. In addition, the transformation from coordinates defined in 3-dimensional Euclidean space to 2-dimensional screen coordinates is referred to as a perspective transformation. The expression for projectively transforming the homogeneous coordinates ~P(X, Y, Z, w) = (wx, wy, wz, w) in projective space, in accordance with (equation 4), into the normal coordinates P(x, y, z) in 3-dimensional Euclidean space is given by (equation 10).
Formula 10
P(x, y, z, 1) = ~P(X, Y, Z, w) / w   (10)
On the other hand, since the vertices of the polygon data have been converted into visual field coordinates by the coordinate transformation unit 102, the viewpoint is the origin and the line of sight is the Z axis. The expression for perspective-transforming a vertex of polygon data in 3-dimensional Euclidean space into the screen coordinate system is therefore given by (equation 11).
Formula 11
(xs, ys) = R*(x/z, y/z) + (xo, yo)   (11)
In (equation 11), P = (x, y, z) is an arbitrary vertex of the polygon data, R is the distance from the viewpoint to the screen, So = (xo, yo) denotes the origin of the screen coordinates, and Ps = (xs, ys) denotes the vertex of the polygon data in screen coordinates after the perspective transformation.
In step S209, as described above, the projective transformation and the perspective transformation are performed at once on each vertex of the polygon data obtained in step S208, using (equation 12) (S209).
Formula 12
(xs, ys) = R*(wx/wz, wy/wz) + (xo, yo)
         = R*(x/z, y/z) + (xo, yo)   (12)
In this way, in the present embodiment, the division by the weight accompanying the projective transformation can be omitted. In step S210, after rendering processing such as shading processing and texture mapping processing is performed on the polygon data using the current light source information, the process returns to step S202 for the rendering processing of the next frame.
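The cancellation expressed by (equation 12) can be sketched as follows: a homogeneous view-space vertex (wx, wy, wz, w) goes straight to screen coordinates with one division per axis by wz, never dividing by the weight w. The function name and sample values are illustrative:

```python
def to_screen(vertex, R, origin):
    """(Equation 12): map a homogeneous view-space vertex (wx, wy, wz, w)
    straight to screen coordinates.  The weight w cancels between the
    projective transform (divide by w) and the perspective transform
    (divide by z), leaving a single division by wz per axis."""
    wx, wy, wz, w = vertex
    xo, yo = origin
    return (R * wx / wz + xo, R * wy / wz + yo)
```

For example, the homogeneous vertex (4, 6, 2, 2), i.e. (x, y, z) = (2, 3, 1), with R = 100 and screen origin (320, 240), maps to the same point as first dehomogenizing with (equation 10) and then applying (equation 11).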
As described above, according to embodiment 1 of the present invention, by providing the data conversion unit 104 and the surface patch division processing unit 105, converting the NURBS data into rational Bézier control point data in step S204, and then dividing the rational Bézier surface patches in step S205, the NURBS surface can be drawn with a smaller amount of computation, and without increasing the data other than the control point data, compared with directly computing and drawing the NURBS surface.
Further, since the coordinate transformation unit 102 converts the NURBS data into the visual field coordinate system in advance, it is not necessary to perform visual field transformation on the rational Bézier control point data obtained by subdividing the rational Bézier surface patches, and the overall amount of computation for coordinate transformation can be reduced.
Further, since the control points and weights of the NURBS data and the rational Bézier control point data are processed in homogeneous coordinates, the projective transformation and the perspective transformation can be performed together on the control point data in the perspective transformation unit 107 before the rendering processing, so the division by the weight involved in the projective transformation can be omitted and the NURBS surface can be rendered at high speed.
In addition, even when the subdivision of the Bézier surface patches is repeated in order to obtain a sufficiently good polygon approximation, the surface patch division processing unit 105 can realize the operations necessary for the subdivision with only shift operations and additions, so the processing load is reduced, and the curved surface image processing apparatus 100 according to the present invention can obtain a smooth, high-quality rendering result of the NURBS surface at high speed.
In embodiment 1, the coordinate transformation unit 102 performs the visual field coordinate transformation on the NURBS data input from the data input unit 101, but it may instead perform the visual field coordinate transformation on the rational Bézier control point data forming the divided rational Bézier surface patches obtained by the surface patch division processing unit 105, without performing the visual field coordinate transformation in advance.
(embodiment 2)
Embodiment 2 of the present invention will be described below with reference to the drawings.
The functional block diagram of the curved surface image processing apparatus 100 according to embodiment 2 is the same as that of embodiment 1, and is characterized in that the animation data calculated by the animation control unit 103 is input to both the coordinate transformation unit 102 and the data conversion unit 104.
First, the operation of the curved surface image processing apparatus 100 according to embodiment 2 will be described with reference to Fig. 11. In embodiment 2, as in embodiment 1 of the present invention, NURBS data and rational Bézier control point data are collectively referred to as control point data, and processing is performed in homogeneous coordinates in projective space.
Fig. 11 is a flowchart showing a specific process of the curved surface image processing apparatus 100 according to embodiment 2.
First, the data input unit 101 inputs NURBS data composed of NURBS control points, the weight of each control point, and a node vector to the data conversion unit 104 (S1101).
Next, the data conversion unit 104 converts each NURBS curve forming the NURBS data input from the data input unit 101 into a piecewise rational Bézier curve composed of segments by inserting nodes (S1102).
Next, the animation control unit 103 calculates the animation data for the current frame (S1103).
The coordinate transformation unit 102 performs modeling transformation, visual field transformation, and clipping processing in 3-dimensional space on the rational Bézier control point data forming each segment calculated by the data conversion unit 104, using the information on the viewpoint and line of sight in the animation data of the current frame obtained from the animation control unit 103, and calculates the rational Bézier control point data forming each segment in the visual field coordinate system (S1104).
Next, the surface patch division processing unit 105 calculates the rational Bézier surface patches formed from the segments in the visual field coordinate system obtained in step S1104, and divides the rational Bézier surface patches by using t = 1/2 in the recursive (de Casteljau) algorithm in projective space (S1105).
The surface patch division processing unit 105 determines whether or not the current division result is sufficient by using the distance from the current viewpoint to the rational Bézier surface patch, and returns to step S1105 when the division processing is necessary again (S1106).
When all the rational Bézier surface patches have been sufficiently subdivided (yes in S1106), the perspective transformation unit 107 converts the control point data of each subdivided rational Bézier surface patch into polygon data with the control point data as vertices (S1107), and the normal calculation unit 106 calculates the normal vectors at the control points of each polygon data (S1108).
Then, the perspective transformation unit 107 performs the projective transformation and the perspective transformation on each vertex of the obtained polygon data at once (S1109), and the rendering processing unit 108 performs rendering processing such as shading processing and texture mapping processing on the polygon data using the current light source information (S1110), after which the process returns to step S1103 for the rendering processing of the next frame. The processing up to step S1102 is executed only once as preprocessing, and the processing from step S1103 to step S1110 is repeatedly executed for each drawing frame.
As described above, according to the curved surface image processing apparatus 100 of embodiment 2 of the present invention, the data conversion unit 104, the coordinate transformation unit 102, and the surface patch division processing unit 105 are provided, and the data conversion unit 104 converts the NURBS data into rational Bézier control point data as preprocessing. When the shape of a NURBS object does not change over time, that is, when no shape-deforming animation is performed, only the processing from step S1103 onward is executed in the drawing processing for each frame, so the amount of computation performed for each drawing frame can be significantly reduced. That is, a high-performance curved surface image processing apparatus 100 can be configured that converts high-quality NURBS data into Bézier surfaces and finely divides them so as to render the surfaces smoothly in real time.
In addition, although it is assumed that no shape-deforming animation is performed in embodiment 2, even when shape-deforming animation is performed, the same effect can be obtained by performing the conversion into rational Bézier control point data for all the key frame data (the NURBS data at all times) in step S1102 as preprocessing.
In embodiment 2, the coordinate transformation unit 102 performs the visual field coordinate transformation on the rational Bézier control point data obtained by the data conversion unit 104, but it may instead perform the visual field coordinate transformation on the rational Bézier control point data forming the divided rational Bézier surface patches obtained by the surface patch division processing unit 105, without performing the visual field coordinate transformation in advance.
(embodiment 3)
Next, the curved surface image processing apparatus 100 according to embodiment 3 will be described. The functional block diagram of the curved surface image processing apparatus 100 according to embodiment 3 is the same as that of embodiment 1, and therefore, detailed description thereof is omitted.
Fig. 12 is a block diagram showing the configuration of the surface patch division processing unit 105 according to embodiment 3. The surface patch division processing unit 105 includes a segment allocation unit 1201 and at least 1 segment division unit 1202.
Next, the configuration of the surface patch division processing unit 105 of the curved surface image processing apparatus 100 and its operation will be described.
The segment division unit 1202 receives the 4 pieces of control point data forming one degree-3 segment, with the parameter u or v as the intermediate variable, and outputs the 7 pieces of control point data forming the 2 divided segments by applying the recursive (de Casteljau) algorithm in projective space with t = 1/2 to the 4 pieces of control point data. For example, when the 4 pieces of control point data B1, B2, B3, and B4 forming the segment 51 in Fig. 5 are input to the segment division unit 1202, the 7 pieces of control point data B1, C1, D1, B5, D2, C3, and B4 forming the two segments B1B5 and B5B4 are output.
Here, for the rational Bézier surface patch 71 obtained by the data conversion unit 104, the 4 segments in the parameter u direction may be processed in any order, or simultaneously. In the rational Bézier surface patches 81 and 82 obtained after the division processing in the parameter u direction is completed, the 7 segments in the parameter v direction may likewise be processed in any order or simultaneously. In addition, the processing of segments in the parameter u and v directions may be performed in any order or simultaneously between different rational Bézier surface patches, of which the rational Bézier surface patches 81 and 82 are examples.
Next, the processing of the segment allocation unit 1201 will be described with reference to Fig. 13. Fig. 13 is a flowchart showing the specific processing of the segment allocation unit 1201.
First, the segment allocation unit 1201 selects 1 segment to be subjected to the division processing from among the segments forming the rational Bézier surface patches obtained by the data conversion unit 104 (S1301).
Next, the segment allocation unit 1201 determines whether or not another segment in the same patch as the selected segment is being processed by any of the segment division units 1202 (S1302). If none is being processed (no in S1302), the segment allocation unit 1201 inputs the segment to any segment division unit 1202 in the processing wait state (S1305).
On the other hand, if one is being processed (yes in S1302), the segment allocation unit 1201 determines whether or not the selected segment is in the v direction (S1303). If it is not in the v direction (no in S1303), the segment is input to any segment division unit 1202 in the processing wait state (S1305).
When the segment is in the v direction (yes in S1303), the segment allocation unit 1201 determines whether all the division processing at the same level in the u direction in the same patch as the selected segment has been completed (S1304). When it has been completed (yes in S1304), the segment allocation unit 1201 inputs the segment to any of the segment division units 1202 in the processing wait state (S1305).
On the other hand, if the division processing has not been completed (no in S1304), the selected segment is kept in the division processing wait state, and the process returns to step S1301. The segment allocation unit 1201 repeats the above processing until no segment remains in the division processing wait state.
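The dispatch conditions S1302 to S1304 can be sketched as a single predicate. The data layout (dicts with "patch", "dir", and "level" keys) and the u_done callback are illustrative assumptions, not the patent's data structures:

```python
def can_dispatch(section, busy, u_done):
    """Dispatch predicate for S1302-S1304.  `section` describes the selected
    segment, `busy` lists the sections currently being processed, and
    `u_done(patch, level)` reports whether all u-direction divisions of that
    level in the patch have finished."""
    if not any(s["patch"] == section["patch"] for s in busy):
        return True          # S1302: nothing else running in this patch
    if section["dir"] != "v":
        return True          # S1303: u-direction segments never wait
    return u_done(section["patch"], section["level"])   # S1304
```

This captures the dependency that makes the u-direction splits of Fig. 8 a prerequisite for the v-direction splits of Fig. 9 within one patch, while segments of different patches stay independent.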
As described above, according to embodiment 3 of the present invention, by providing the surface patch division processing unit 105 with the segment allocation unit 1201 and at least 1 segment division unit 1202 that calculates, from the rational Bézier control point data forming 1 degree-3 rational Bézier segment, the rational Bézier control point data forming the 2 divided degree-3 rational Bézier segments, and by executing the division processing of concurrently processable segments in parallel, the division processing of the surface patches can be realized at high speed, and a high-performance curved surface image processing apparatus 100 capable of smoothly rendering a high-quality NURBS surface in real time can be configured.
In embodiments 1 to 3, the NURBS data input by the data input unit 101 may be data stored in a storage medium or data transmitted via a network.
In embodiments 1 to 3 described above, the surface patch division processing unit 105 uses t = 1/2 in the recursive (de Casteljau) algorithm, but t = 1/(2 to the nth power) (where n is a positive integer) may also be used. The surface patch division processing unit 105 divides in the parameter v direction after dividing in the parameter u direction.
In addition, in embodiments 1 to 3 described above, the perspective transformation unit 107 converts each rational Bézier surface patch into polygon data having the control point data on the surface as vertices, but it may instead convert the patch into polygon data having control point data not on the surface as vertices (for example, in the division of the surface patch 91, into polygon data having control point data other than P3, P7, P9, and P6 as vertices).
In addition, not all vertices of the polygon data need to be control point data. For example, in the division of the surface patch 91, the patch may be converted into 4 pieces of polygon data having as vertices the control point data of the 4 corners and the intersection of the straight lines P3P9 and P6P7.
Further, although the conversion into polygon data is performed in units of rational Bézier surface patches, the conversion into polygon data may also be performed across patches. For example, the surface patches 91, 92, 93, and 94 may together be converted into two pieces of polygon data, P4P3P1 and P4P1P2.
(embodiment 4)
Next, a curved surface image processing apparatus 100 according to embodiment 4 will be described.
In the curved surface image processing apparatus 100 according to embodiment 4, when the surface patch division processing unit 105 finely divides the surface patches, the data conversion unit 104 first equivalently converts the NURBS surface into rational Bézier surfaces and then calculates points on the surfaces by subdividing the surface patches, instead of directly calculating points on the NURBS surface as in the prior art. This, however, raises the problem that the data conversion unit 104 generates useless control points at the time of node insertion; the curved surface image processing apparatus 100 according to embodiment 4 solves this problem.
Fig. 14 is a functional block diagram of the data conversion unit 104 according to embodiment 4.
In embodiment 4, the data conversion unit 104 includes a node insertion unit 1401, a control point clipping unit 1402, and a rational Bézier control point data determination unit 1403.
The NURBS model data input to the data conversion unit 104 is model data describing a NURBS curved surface. Here, the NURBS model data does not include the position coordinates of the points on the NURBS surface, and can be said to be the minimum amount of information representing the NURBS surface. Therefore, the load imposed on the data transmission system that transmits NURBS model data is small.
The node insertion unit 1401 performs node insertion on the node vectors defining the u direction and v direction of the NURBS surface, finally converting the NURBS surface into 1 or more rational Bézier surfaces.
The node insertion unit 1401 updates the control point sequence while inserting nodes, and its final output is a control point sequence defining rational Bézier surfaces that completely match the NURBS surface shape defined by the input NURBS model data.
However, the control point sequence output from the node insertion unit 1401 includes useless control points that do not define any rational Bézier surface. The control point clipping unit 1402 is a block that deletes these useless control points and transfers only the necessary control points to the surface patch division processing unit 105 at the subsequent stage. Therefore, the rational Bézier control point data output from the control point clipping unit 1402 is data not including useless control points. Here, the rational Bézier control point data is a control point sequence defining rational Bézier surfaces; when the NURBS model data is NURBS curve data, the rational Bézier control point data is a control point sequence defining rational Bézier curves. The rational Bézier control point data is transmitted to the surface patch division processing unit 105 at the subsequent stage.
The surface patch division processing unit 105 sequentially obtains points on the rational Bézier surfaces using the input rational Bézier control point data. In this way, the rational Bézier surface is approximated by the surface patch division processing unit 105 as an aggregate of planar polygons.
Further, although not shown, the display unit of the curved surface image processing apparatus 100 displays the 3-dimensional polygons on a 2-dimensional display. When the rational Bézier control point data defines a rational Bézier curve, the surface patch division processing unit 105 approximates the rational Bézier curve as an aggregate of a plurality of line segments.
Next, a method for deleting useless control points from the control point sequence after node insertion by the control point clipping unit 1402 according to embodiment 4 will be described.
Let the initial node vectors describing the NURBS surface be (u[0], u[1], ..., u[I+n]) and (v[0], v[1], ..., v[J+m]). Here, n and m are the degrees of the basis functions defined for the intermediate variables u and v, and I and J are the numbers of control points in the u and v directions. The NURBS surface defined by these node vectors is equivalently transformed into rational Bézier surfaces by performing node insertion, and the finally obtained node vectors are (u'[0], u'[1], ..., u'[I'+n]) and (v'[0], v'[1], ..., v'[J'+m]). Here, the final numbers of nodes in the u and v directions are I'+n+1 and J'+m+1, and the final numbers of control points are I' and J'. These control points include useless control points that do not define the rational Bézier surfaces.
For the initial node vectors, the valid ranges describing the NURBS surface are (u[3], u[I+n-3]) and (v[3], v[J+m-3]). The nodes in these ranges are multiplied by node insertion, and when their multiplicity becomes equal to the degree, the original NURBS surface has been transformed into rational Bézier surfaces.
Fig. 24 shows how node insertion changes the node vector in the u direction of the NURBS surface and the accompanying control point sequence. In the example of Fig. 24, the degree in the u direction is n = 3, and the start node u[3] of the valid range of the node vector is raised to full multiplicity. Here, assuming that the values in the initial portion (u[0], u[1], u[2], ...) of the node vector in the u direction are all different and monotonically increasing, the finally generated node vector (u'[0], u'[1], u'[2], ...) satisfies the following relationship.
u[0]=u′[0]
u[1]=u′[1]
u[2]=u′[2]
u[3]=u′[3]=u′[4]=u′[5]
First, when a node u'[4] with a value ~u equal to u[3] is inserted, the node insertion position is k = 3, so the coefficient array is as follows.
a[0]=1
a[1]=(~u-u[1])/(u[4]-u[1])=(u[3]-u[1])/(u[4]-u[1])
a[2]=(~u-u[2])/(u[5]-u[2])=(u[3]-u[2])/(u[5]-u[2])
a[3]=(~u-u[3])/(u[6]-u[3])=0
a[4]=0
Thus, the generated control point column is listed as
<Q′[0]>=a[0]*<Q[0]>=<Q[0]>
<Q′[1]>=(1-a[1])*<Q[0]>+a[1]*<Q[1]>
<Q′[2]>=(1-a[2])*<Q[1]>+a[2]*<Q[2]>
<Q′[3]>=(1-a[3])*<Q[2]>+a[3]*<Q[3]>=<Q[2]>
<Q′[4]>=(1-a[4])*<Q[3]>+a[4]*<Q[4]>=<Q[3]>
Accordingly, the control point <Q[1]> is removed, and the new control points <Q′[1]> and <Q′[2]> are generated. Here, the control points defining the NURBS surface should be expressed by a 2-dimensional array with indices i and j, but for simplicity of explanation they are expressed as a 1-dimensional array in the u direction only; even this simplification does not lose generality. Next, when a node u'[5] with a value equal to u[3] is inserted, the node insertion position is k = 4, so the coefficient array is as follows.
a[0]=1
a[1]=1
a[2]=(u[3]-u[2])/(u[4]-u[2])
a[3]=0
a[4]=0
Using this arrangement, the control point column generated is
<Q″[0]>=a[0]*<Q′[0]>=<Q′[0]>=<Q[0]>
<Q″[1]>=(1-a[1])*<Q′[0]>+a[1]*<Q′[1]>=<Q′[1]>
<Q″[2]>=(1-a[2])*<Q′[1]>+a[2]*<Q′[2]>
<Q″[3]>=(1-a[3])*<Q′[2]>+a[3]*<Q′[3]>=<Q′[2]>
<Q″[4]>=(1-a[4])*<Q′[3]>+a[4]*<Q′[4]>=<Q′[3]>=<Q[2]>
This means that a new control point <Q″[2]> is generated.
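A single node (knot) insertion of the kind performed twice above can be sketched numerically. The coefficient layout follows the a[i] arrays in the text (a[i] = 1 below the affected span, (~u - u[i])/(u[i+n] - u[i]) inside it, 0 above it), while the degree-3 curve, the 1-dimensional control points, and the uniform node vector are illustrative:

```python
def insert_knot(knots, ctrl, ubar, n=3):
    """One node insertion for a degree-n curve (ubar inside the valid range),
    returning the new node vector and the new control point sequence
    Q'[i] = (1 - a[i])*Q[i-1] + a[i]*Q[i]."""
    k = max(i for i in range(len(knots)) if knots[i] <= ubar)   # insertion position
    new_ctrl = []
    for i in range(len(ctrl) + 1):
        if i <= k - n:
            a = 1.0                                    # below the affected span
        elif i > k:
            a = 0.0                                    # above the affected span
        else:
            a = (ubar - knots[i]) / (knots[i + n] - knots[i])
        prev = ctrl[i - 1] if i > 0 else ctrl[0]
        cur = ctrl[i] if i < len(ctrl) else ctrl[-1]
        new_ctrl.append(tuple((1 - a) * p + a * c for p, c in zip(prev, cur)))
    return sorted(knots + [ubar]), new_ctrl
```

Inserting ubar = u[3] twice more raises its multiplicity to 3, completing the conversion of this span into rational Bézier form, as in Fig. 24.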
Next, it is shown that the new control point <Q″[2]> is the starting point on the original NURBS surface. Substituting u = u[3] into the Cox-de Boor recursion formula, the basis functions of the NURBS surface are calculated recursively. First, for n = 0,
B[0][3](u[3])=1
B[0][i](u[3])=0 (i ≠ 3).
Using the above formula, for n = 1,
B[1][2](u[3])=1
B[1][i](u[3])=0 (i ≠ 2).
Further using the above formula, for n = 2,
B[2][1](u[3])=(u[4]-u[3])/(u[4]-u[2])
B[2][2](u[3])=(u[3]-u[2])/(u[4]-u[2])
B[2][i](u[3])=0 (i ≠ 1, 2).
Further using the above formula, for n = 3,
B[3][0](u[3])=(u[4]-u[3])/(u[4]-u[1])*B[2][1](u[3])
B[3][1](u[3])=(u[3]-u[1])/(u[4]-u[1])*B[2][1](u[3])
        +(u[5]-u[3])/(u[5]-u[2])*B[2][2](u[3])
B[3][2](u[3])=(u[3]-u[2])/(u[5]-u[2])*B[2][2](u[3])
B[3][i](u[3])=0 (i ≥ 3).
Thus, writing ~a[i] = 1 - a[i] and ~a′[i] = 1 - a′[i] for the coefficient arrays of the first and second insertions, the starting point of the NURBS surface is
<P(u[3])>=B[3][0](u[3])*<Q[0]>+B[3][1](u[3])*<Q[1]>+B[3][2](u[3])*<Q[2]>
=~a′[2]*~a[1]*<Q[0]>
+(~a′[2]*a[1]+a′[2]*~a[2])*<Q[1]>
+a′[2]*a[2]*<Q[2]>
=<Q″[2]>.
Therefore, since the starting point of the original NURBS surface coincides with the control point <Q″[2]> after conversion into the rational Bézier surface, the two control points <Q″[0]> and <Q″[1]> are useless.
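The Cox-de Boor recursion used in the verification above can be sketched directly. The half-open convention B[0][i](u) = 1 for u[i] <= u < u[i+1] matches the values B[0][3](u[3]) = 1 and B[1][2](u[3]) = 1 used in the text; the uniform sample node vector is illustrative:

```python
def basis(knots, i, n, u):
    """Cox-de Boor recursion for B[n][i](u); 0/0 terms are taken as 0 and
    degree-0 functions use the half-open convention knots[i] <= u < knots[i+1]."""
    if n == 0:
        return 1.0 if knots[i] <= u < knots[i + 1] else 0.0
    total = 0.0
    if knots[i + n] != knots[i]:
        total += (u - knots[i]) / (knots[i + n] - knots[i]) * basis(knots, i, n - 1, u)
    if knots[i + n + 1] != knots[i + 1]:
        total += (knots[i + n + 1] - u) / (knots[i + n + 1] - knots[i + 1]) * basis(knots, i + 1, n - 1, u)
    return total
```

On the valid range the degree-3 basis functions sum to 1 (partition of unity), which is what lets the starting point be written as a convex combination of at most n control points as above.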
As another example, a case where the element arrangement of the initial node vector shown in Fig. 25 includes multiplicity will be described. When u[2] = u[3], the finally generated node vector (u'[0], u'[1], u'[2], ...) satisfies the following relationship.
u[0]=u′[0]
u[1]=u′[1]
u[2]=u[3]=u′[2]=u′[3]=u′[4]
When a node u'[4] whose inserted value is equal to u[2] (= u[3]) is inserted, the node insertion position is k = 3, so the coefficient array is as follows.
a[0]=1
a[1]=(~u-u[1])/(u[4]-u[1])=(u[2]-u[1])/(u[4]-u[1])
a[2]=(~u-u[2])/(u[5]-u[2])=0
a[3]=(~u-u[3])/(u[6]-u[3])=0
a[4]=0
Using this arrangement, the control point column generated is
<Q′[0]>=a[0]*<Q[0]>=<Q[0]>
<Q′[1]>=(1-a[1])*<Q[0]>+a[1]*<Q[1]>
<Q′[2]>=(1-a[2])*<Q[1]>+a[2]*<Q[2]>=<Q[1]>
<Q′[3]>=(1-a[3])*<Q[2]>+a[3]*<Q[3]>=<Q[2]>
<Q′[4]>=(1-a[4])*<Q[3]>+a[4]*<Q[4]>=<Q[3]>
On the other hand, calculating the basis functions defining the original NURBS surface gives
B[2][1](u[3])=(u[4]-u[3])/(u[4]-u[2])=1
B[3][0](u[3])=(u[4]-u[3])/(u[4]-u[1])*B[2][1](u[3])
B[3][1](u[3])=(u[3]-u[1])/(u[4]-u[1])*B[2][1](u[3])
so the starting point of the NURBS surface becomes
<P(u[3])>=B[3][0](u[3])*<Q[0]>+B[3][1](u[3])*<Q[1]>
=(1-a[1])*<Q[0]>+a[1]*<Q[1]>
=<Q′[1]>
which coincides with the control point <Q′[1]>. In this case, only the 1 control point <Q′[0]> is useless.
As shown in Fig. 26, as another example in which the element arrangement of the initial node vector includes multiplicity, consider a case where u[3] = u[4] in the initial portion (u[0], u[1], u[2], ...). In this case, the finally generated node vector (u'[0], u'[1], u'[2], ...) satisfies the following relationship.
u[0]=u′[0]
u[1]=u′[1]
u[2]=u′[2]
u[3]=u[4]=u′[3]=u′[4]=u′[5]
When a node u'[5] whose inserted value is equal to u[3] (= u[4]) is inserted, the node insertion position is k = 4, so the coefficient array is as follows.
a[0]=1
a[1]=1
a[2]=(~u-u[2])/(u[5]-u[2])=(u[3]-u[2])/(u[5]-u[2])
a[3]=(~u-u[3])/(u[6]-u[3])=0
a[4]=(~u-u[4])/(u[7]-u[4])=0
Using this arrangement, the control point column generated is
<Q′[0]>=a[0]*<Q[0]>=<Q[0]>
<Q′[1]>=(1-a[1])*<Q[0]>+a[1]*<Q[1]>=<Q[1]>
<Q′[2]>=(1-a[2])*<Q[1]>+a[2]*<Q[2]>
<Q′[3]>=(1-a[3])*<Q[2]>+a[3]*<Q[3]>=<Q[2]>
<Q′[4]>=(1-a[4])*<Q[3]>+a[4]*<Q[4]>=<Q[3]>
On the other hand, evaluating the starting point of the NURBS surface gives
<P(u[3])>=B[3][1](u[3])*<Q[1]>+B[3][2](u[3])*<Q[2]>
=(1-a[2])*<Q[1]>+a[2]*<Q[2]>
=<Q′[2]>
which agrees with the control point <Q′[2]>. In this case, the two control points <Q′[0]> and <Q′[1]> are useless.
Generalizing the above examples for degree n = 3, the following relationship holds for the control points that become useless after node insertion. That is, let the finally generated control points be (Q′[0], Q′[1], ..., Q′[I′-1]) and the finally generated node vector be (u′[0], u′[1], ..., u′[I′+3]). As shown in fig. 27, when a total of (k-j+1) nodes (u′[j], ..., u′[3], ..., u′[k]) have values equal to the node u′[3] at which drawing of the NURBS surface starts, that is, when the multiplicity is 3 or more, the (k-3) control points (Q′[0], Q′[1], ..., Q′[k-4]) are useless.
In addition, useless control points are generated not only at the starting point but also at the ending point of the NURBS surface. In this case, useless control points can be deleted in the same manner by considering the relationship between the control point sequence and the coefficients of the node vector in reverse. That is, let the finally generated control points be (Q′[0], ..., Q′[I′-2], Q′[I′-1]) and the finally generated node vector be (u′[0], ..., u′[I′+2], u′[I′+3]). As shown in fig. 28, when a total of (k-j+1) nodes (u′[j], ..., u′[I′], ..., u′[k]) have values equal to the node u′[I′] at which drawing of the NURBS surface ends, that is, when the multiplicity is 3 or more, the (I′-j) control points (Q′[j], ..., Q′[I′-2], Q′[I′-1]) are useless.
In the above description, the method of deleting useless control points was described for the u-direction control point sequence, but the same method also applies to the v-direction control point sequence. In addition, although each control point actually has a weight, the above deletion method can still be used if the homogeneous coordinates obtained by multiplying the position coordinates by the weight are used.
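The coefficient pattern used in the examples above (a[i] = 1 below the insertion span, (~u - u[i])/(u[i+3] - u[i]) inside it, 0 above it) can be sketched as a small Python routine. The function name is illustrative, and for brevity the control points are scalars; in practice each would be a homogeneous 4-vector (w*x, w*y, w*z, w) as noted in the text.

```python
def insert_knot_cubic(u, Q, u_new, k):
    """Insert the value u_new at position k (u[k] <= u_new < u[k+1]) into a
    cubic B-spline with node vector u and control points Q.
    Returns the new control points Q'[i] = (1 - a[i])*Q[i-1] + a[i]*Q[i]."""
    new_Q = []
    for i in range(len(Q) + 1):
        if i <= k - 3:          # below the affected span
            a = 1.0
        elif i > k:             # above the affected span
            a = 0.0
        else:                   # a[i] = (~u - u[i]) / (u[i+3] - u[i])
            a = (u_new - u[i]) / (u[i + 3] - u[i])
        prev = Q[i - 1] if i > 0 else 0.0
        cur = Q[i] if i < len(Q) else 0.0
        new_Q.append((1.0 - a) * prev + a * cur)
    return new_Q

# First example of the text: u[2] = u[3] (= 2 here), insert ~u = 2 at k = 3.
u = [0.0, 1.0, 2.0, 2.0, 3.0, 4.0, 5.0, 6.0]
Q = [0.0, 1.0, 2.0, 3.0]
print(insert_knot_cubic(u, Q, 2.0, 3))  # [0.0, 0.5, 1.0, 2.0, 3.0]
```

As in the text, Q′[0] = Q[0] and Q′[2..4] reuse Q[1..3] unchanged; only Q′[1] is newly blended with a[1] = (u[2]-u[1])/(u[4]-u[1]).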
The following describes polygon division of a rational Bezier surface by the subdivision method. First, for ease of understanding, a rational Bezier curve is described. As shown in fig. 15, consider approximating a rational Bezier curve by a series of line segments using the subdivision method. In the subdivision method of the present embodiment, new points (R[0], R[1], R[2]) are first taken at the midpoints between adjacent control points, and their coordinates are calculated as follows using the homogeneous coordinates obtained by multiplying the position coordinates by the weights.
rw[0]*<R[0]>=(qw[0]*<Q[0]>+qw[1]*<Q[1]>)/2
rw[1]*<R[1]>=(qw[1]*<Q[1]>+qw[2]*<Q[2]>)/2
rw[2]*<R[2]>=(qw[2]*<Q[2]>+qw[3]*<Q[3]>)/2
where
rw[0]=(qw[0]+qw[1])/2
rw[1]=(qw[1]+qw[2])/2
rw[2]=(qw[2]+qw[3])/2
When new points (S[0], S[1]) are taken at the midpoints of these points, their coordinates are
sw[0]*<S[0]>=(rw[0]*<R[0]>+rw[1]*<R[1]>)/2
sw[1]*<S[1]>=(rw[1]*<R[1]>+rw[2]*<R[2]>)/2
where
sw[0]=(rw[0]+rw[1])/2
sw[1]=(rw[1]+rw[2])/2
Then, when a new point T[0] is taken at the midpoint of these, its coordinates are
tw[0]*<T[0]>=(sw[0]*<S[0]>+sw[1]*<S[1]>)/2
where
tw[0]=(sw[0]+sw[1])/2
When the calculation is performed as described above, the initial rational Bezier curve is divided into two consecutive rational Bezier curve segments: the rational Bezier curve 1501 defined by the control points (Q[0], R[0], S[0], T[0]) and the rational Bezier curve 1502 defined by the control points (T[0], S[1], R[2], Q[3]), where the final point T[0] is a point on the initial rational Bezier curve. Thus, the original rational Bezier curve can be approximated by two segments, (Q[0], T[0]) and (T[0], Q[3]). When the approximation accuracy is to be improved by dividing the curve into finer line segments, the divided rational Bezier curves 1501 and 1502 may be subdivided repeatedly by applying the subdivision method again. This subdivision process is very simple compared to obtaining the NURBS basis functions by repeated multiplication, addition, and division.
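The midpoint construction of (R, S, T) above can be sketched in Python. Function names are illustrative; points are handled as homogeneous (w*x, w*y, w) vectors exactly as in the formulas of the text.

```python
def mid(p, q):
    """Midpoint of two homogeneous points: (p + q) / 2 component-wise."""
    return tuple((a + b) / 2 for a, b in zip(p, q))

def subdivide_cubic_rational(points, weights):
    """Split a cubic rational Bezier curve at its parametric midpoint.
    Returns the two halves (as homogeneous control points) and the new
    on-curve point T[0] in ordinary coordinates."""
    # homogeneous control points qw[i]*<Q[i]> with the weight appended
    h = [tuple(w * c for c in p) + (w,) for p, w in zip(points, weights)]
    r = [mid(h[i], h[i + 1]) for i in range(3)]   # R[0], R[1], R[2]
    s = [mid(r[i], r[i + 1]) for i in range(2)]   # S[0], S[1]
    t = mid(s[0], s[1])                           # T[0], lies on the curve
    left = [h[0], r[0], s[0], t]                  # curve 1501
    right = [t, s[1], r[2], h[3]]                 # curve 1502
    on_curve = tuple(c / t[-1] for c in t[:-1])   # divide out the weight
    return left, right, on_curve

# Unit-weight example: the curve through (0,0)..(3,0) splits at (1.5, 0).
pts = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0), (3.0, 0.0)]
_, _, t0 = subdivide_cubic_rational(pts, [1.0, 1.0, 1.0, 1.0])
print(t0)  # (1.5, 0.0)
```

Repeating the call on `left` and `right` mirrors the recursive refinement described in the text.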
By analogy with the subdivision method for the rational Bezier curve described above, polygon division can be performed by the subdivision method for a rational Bezier surface. In the case of a surface, the control points form a 2-dimensional array with indices i, j corresponding to the parameters u, v; that is, a control point is denoted Q[i][j]. A method of dividing a rational Bezier surface into polygons by the subdivision method will be described with reference to fig. 16.
Although the rational Bezier surface itself is not shown in fig. 16, the control points of the rational Bezier surface are shown in a simplified manner; the degree in both the u and v directions is n = m = 3, and the number of control points is 4 × 4 = 16. In fig. 16, the 4 corner control points (Q[0][0], Q[3][0], Q[0][3], Q[3][3]) are points on the rational Bezier surface.
In the subdivision method for the rational Bezier surface, the index j of the control points is first fixed to 0, and the subdivision method is applied to the 4 control points (Q[0][0], Q[1][0], Q[2][0], Q[3][0]). Thus, a rational Bezier curve 1601 defined by the control points (Q[0][0], R[0][0], S[0][0], T[0][0]) and a rational Bezier curve 1602 defined by the control points (T[0][0], S[1][0], R[2][0], Q[3][0]) are generated, and a new point T[0][0] on the original rational Bezier surface is obtained. Only this point T[0][0] is shown in fig. 16.
Subsequently, the subdivision method is similarly applied to the 4 control points (Q[0][1], Q[1][1], Q[2][1], Q[3][1]) obtained by increasing the index j by 1. Thus, the control points (Q[0][1], R[0][1], S[0][1], T[0][1]) and (T[0][1], S[1][1], R[2][1], Q[3][1]) are obtained. These points are intermediate data generated during the calculation; since Q[0][1] and Q[3][1] are not points on the original rational Bezier surface, the generated control point T[0][1] is not a point on the original rational Bezier surface either. The same process is repeated until the index j becomes 3. Fig. 17 shows the 28 control points generated so far, where T[0][0] and T[0][3] are new points on the original rational Bezier surface. In fig. 17, the control points on the rational Bezier surface are marked with O.
Then, the control points generated by the subdivision method in the u direction are divided into the 7 groups of control points shown below, and the subdivision method is applied again in the v direction for each group.
(Q[0][0],Q[0][1],Q[0][2],Q[0][3])
(R[0][0],R[0][1],R[0][2],R[0][3])
(S[0][0],S[0][1],S[0][2],S[0][3])
(T[0][0],T[0][1],T[0][2],T[0][3])
(S[1][0],S[1][1],S[1][2],S[1][3])
(R[2][0],R[2][1],R[2][2],R[2][3])
(Q[3][0],Q[3][1],Q[3][2],Q[3][3])
As shown in fig. 18, when the subdivision method is applied to the first group (Q[0][0], Q[0][1], Q[0][2], Q[0][3]), 7 control points including Q′[0][1] are obtained; here, Q′[0][1] is a point on the original rational Bezier surface. Similarly, the other groups are also subdivided to finally obtain 7 × 7 = 49 control points, and the original rational Bezier surface is divided into 4 small rational Bezier surfaces, each defined by 4 × 4 = 16 control points. Among the control points of each divided small rational Bezier surface, the 4 control points located at the corners are points on the original rational Bezier surface. That is, 9 points on the rational Bezier surface are obtained. In fig. 18, O marks are placed on these control points on the rational Bezier surface. By joining adjacent points on the rational Bezier surface to each other, planar polygons can be constructed. When the approximation accuracy is to be improved by dividing the surface into finer polygons, the subdivision method may be applied again to the divided rational Bezier surfaces.
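The two-pass scheme just described (subdivide each group of 4 in one direction, regroup, subdivide in the other direction) can be sketched as follows. The grid layout and function names are illustrative; points are homogeneous tuples as elsewhere in the text.

```python
def split_row(p):
    """de Casteljau midpoint split of 4 homogeneous points into 7
    (Q, R[0], S[0], T[0], S[1], R[2], Q[3] in the text's naming)."""
    mid = lambda a, b: tuple((x + y) / 2 for x, y in zip(a, b))
    r = [mid(p[i], p[i + 1]) for i in range(3)]
    s = [mid(r[i], r[i + 1]) for i in range(2)]
    t = mid(s[0], s[1])
    return [p[0], r[0], s[0], t, s[1], r[2], p[3]]

def subdivide_patch(Q):
    """Q: 4x4 grid of homogeneous control points Q[i][j].
    Split along one parameter, then along the other, giving the
    7x7 grid that defines four sub-patches (cf. fig. 18)."""
    rows = [split_row(row) for row in Q]                 # 4 rows -> 4x7
    grid = [split_row(list(col)) for col in zip(*rows)]  # 7 cols -> 7x7
    return [list(r) for r in zip(*grid)]                 # back to row-major

# Flat example: control net Q[i][j] = (i, j) with weight 1.
Q = [[(float(i), float(j), 1.0) for j in range(4)] for i in range(4)]
g = subdivide_patch(Q)
print(g[3][3])  # the center on-surface point: (1.5, 1.5, 1.0)
```

The four 4×4 sub-grids of the 7×7 result (rows 0-3/3-6 crossed with columns 0-3/3-6) are the control nets of the four sub-patches.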
As described above, in the preprocessing performed by the curved surface patch division processing unit 105 of the curved surface image processing apparatus 100 according to embodiment 4, the control point clipping unit 1402 deletes unnecessary control points, so the amount of computation is smaller than that required to obtain points on the NURBS surface directly. Therefore, by using the method described in embodiment 4 for deleting useless control points that do not define the rational Bezier surface, a given control point sequence of a NURBS surface can be efficiently converted into a control point sequence of a rational Bezier surface that can be used for the subdivision processing.
(embodiment 5)
Next, the processing in the curved patch division processing unit 105 of the curved image processing apparatus 100 according to the present invention will be described. The processing of the curved patch division processing unit 105 will be described with reference to embodiments 5 to 8.
Next, the curved surface patch division processing unit 105 of the curved surface image processing apparatus 100 according to embodiment 5 of the present invention will be described with reference to the drawings.
Fig. 29 is a functional block diagram showing the curved patch division processing unit 105 according to embodiment 5.
In the present embodiment 5, the curved patch division processing unit 105 includes a shape input receiving unit 2901, a contour edge detecting unit 2902, a subdivision level determining unit 2903, and a subdivision unit 2904. The respective functions will be described in detail below.
The shape input receiving unit 2901 receives, from the data transformation unit 104, input of viewpoint information and object shape information, the latter being information on the shape of the object to be drawn. The viewpoint information includes a field-of-view transformation matrix for transforming the representation in the spherical coordinate system into the viewpoint coordinate system, which is the coordinate system defined with respect to the viewpoint, and a perspective transformation matrix for transforming the representation in the viewpoint coordinate system into a coordinate system defined on the 2-dimensional screen.
The object shape information is input from the data conversion unit 104 and includes the coordinates (expressed in homogeneous coordinates) of the control points defining the shape of each patch constituting the object, and adjacent patch information, i.e., information on the patches adjacent to each patch. The method of expressing the adjacent patch information is not particularly limited. For example, an index is assigned to each patch, and as the adjacent patch information the indices of the patches adjacent at v = 0, u = 1, v = 1, and u = 0 in the parameter space are arranged in order. When no adjacent patch is present, a special index such as -1 may be given. The object shape information may also include attribute information of a patch used for drawing, for example, vertex normal vector information and texture coordinate information.
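One possible in-memory layout for this object shape information is sketched below; the field names are hypothetical, not taken from the patent.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class PatchInfo:
    index: int
    # 4x4 control points in homogeneous coordinates (w*x, w*y, w*z, w)
    control_points: List[List[Tuple[float, float, float, float]]]
    # indices of the patches adjacent at v=0, u=1, v=1, u=0; -1 = no neighbor
    neighbors: Tuple[int, int, int, int]

# A patch with neighbors across its u=1 and v=1 edges only.
p = PatchInfo(index=0, control_points=[], neighbors=(-1, 1, 2, -1))
print(p.neighbors[3])  # -1: no neighbor across the u=0 edge
```

The fixed edge order (v=0, u=1, v=1, u=0) lets the contour edge detection step look up the neighbor's signed area without searching.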
The contour edge detection unit 2902 determines whether or not each patch constituting the object acquired by the shape input reception unit 2901 is a patch forming a contour edge. In order to retain the determination result, an identifier (hereinafter referred to as an edge identifier) indicating whether or not the patch forms a contour edge is defined for each patch; for example, it is initialized to 0, where 0 indicates that the patch does not form a contour edge.
Fig. 31 is a flowchart showing the processing in the contour edge detection unit 2902 according to embodiment 5. Next, a flow of processing in the contour edge detection unit 2902 will be described.
First, for each patch, the perspective transformation unit 2902a of the contour edge detection unit 2902 transforms, among the control points, the 4 vertices Q00, Q30, Q03, and Q33 that lie on the curved surface into coordinates on the screen, using the field-of-view transformation matrix and the perspective transformation matrix included in the viewpoint information (S3102).
Next, the signed area calculation unit 2902b calculates the signed area of the 2-dimensional figure formed by the 4 vertices transformed onto the screen (S3103). In general, the signed area S of a triangle formed by 3 vertices A(ax, ay), B(bx, by), and C(cx, cy) on a 2-dimensional plane is obtained by the following expression 13. The triangle faces outward when the signed area is positive and inward when it is negative.
Formula 13
S = (1/2) * | ax ay 1 |
            | bx by 1 |
            | cx cy 1 |    (13)
That is, S = (ax*by + cx*ay + bx*cy - cx*by - bx*ay - ax*cy)/2.
Let the coordinates on the screen after perspective transformation of the 4 vertices Q00, Q30, Q03, and Q33 be R00(r00x, r00y), R30(r30x, r30y), R03(r03x, r03y), and R33(r33x, r33y). The figure formed by the 4 vertices is divided into the two triangles (R00, R30, R03) and (R30, R33, R03), and their signed areas S0 and S1 are calculated by the following formulas.
S0=(r00x*r30y+r03x*r00y+r30x*r03y-r03x*r30y-r30x*r00y-r00x*r03y)/2
S1=(r30x*r33y+r03x*r30y+r33x*r03y-r03x*r33y-r33x*r30y-r30x*r03y)/2
Here, * denotes multiplication. When the 4 vertices on the screen are in the positional relationship of fig. 32(a), the signed areas S0 and S1 have the same sign, but in the case of fig. 32(b) they have different signs.
Therefore, the contour edge detection unit 2902 holds the signed areas in separate storage areas (not shown) for each patch so that positive and negative values can be distinguished; when S0 and S1 have the same sign, their sum is held in the storage area. In embodiment 5, the 4 vertices are divided into the two triangles (R00, R30, R03) and (R30, R33, R03), but other groupings, for example (R00, R33, R03) and (R00, R30, R33), can be processed in the same manner.
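The two-triangle signed area test can be sketched directly from expression 13; the function names are illustrative.

```python
def signed_area(a, b, c):
    """Signed area of the screen-space triangle (a, b, c), expression 13:
    positive when the triangle faces outward, negative when inward."""
    return (a[0] * b[1] + c[0] * a[1] + b[0] * c[1]
            - c[0] * b[1] - b[0] * a[1] - a[0] * c[1]) / 2

def patch_signed_areas(r00, r30, r03, r33):
    """Split the projected quad into the triangles (R00, R30, R03) and
    (R30, R33, R03) and return their signed areas S0, S1."""
    return (signed_area(r00, r30, r03), signed_area(r30, r33, r03))

# A convex screen-space quad: both triangles get the same sign, the
# situation of fig. 32(a), so this test alone reports no contour edge.
s0, s1 = patch_signed_areas((0, 0), (1, 0), (0, 1), (1, 1))
print(s0, s1, s0 * s1 < 0)  # 0.5 0.5 False
```

A sign difference between S0 and S1 (the fig. 32(b) case) would mark the patch as contour edge forming in step S3106.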
Next, the contour edge detection unit 2902 compares the signed area calculated in S3103 with the value of a storage region (not shown) that holds the maximum signed area, secured separately from the storage regions storing the signed area values of each patch (S3104). When the new signed area is larger, the maximum value is updated and written to the storage area. Here, the sum of the absolute values of the signed areas S0 and S1 is used for comparison with the maximum signed area. That is, when the positive signed area calculated in S3103 is Ap, the negative area is Am, and the maximum signed area stored in the storage area is MAXA, the following processing is performed.
(1) The absolute-value sum Ap + |Am| of the signed areas is calculated.
(2) If Ap + |Am| > MAXA, then MAXA = Ap + |Am|.
When the above processing is completed for all patches constituting the object, the process proceeds to S3105 (yes in S3101).
Then, the contour edge detection unit 2902 refers to the calculated signed areas of each patch; when both the positive and negative values are non-zero (the case of fig. 32(b)), it determines that the patch forms a contour edge, sets the edge identifier to 1, and moves to the determination of the next patch (S3106).
Otherwise (when the positive and negative values are not both non-zero in S3106), the contour edge detection unit 2902 moves to S3107.
Next, the contour edge detection unit 2902 compares the sign of the signed area of the patch with the sign of the signed area of the adjacent patch referred to using the adjacent patch information, and determines whether or not the patch is a contour edge forming patch (S3107).
When the sign of the signed area of the patch differs from that of an adjacent patch (no in S3107), the contour edge detection unit 2902 recognizes that a boundary between the front and back surfaces exists between the two patches and determines that the patch forms a contour edge. That is, if the product of the signed areas of the patch and any of its 4 adjacent patches is negative, the patch is determined to be a contour edge forming patch and the edge identifier is set to 1 (S3108).
On the other hand, if all the adjacent patches have the same sign (yes in S3107), the patch is determined not to be a contour edge forming patch (S3109). In addition, when there is no adjacent patch (in this embodiment, the adjacent patch information is set to -1), the edge identifier is set to 1, treating the patch as a contour edge forming patch. When the contour edge detection unit 2902 has completed the above processing for all patches, the process proceeds to the subdivision level determination processing of the subdivision level determination unit 2903. The signed area and the edge identifier of each patch calculated by the contour edge detection unit 2902 are sent to the subdivision level determination unit 2903.
The fine division level determination unit 2903 determines the fine division level using the signed area and the edge identifier of each patch calculated by the contour edge detection unit 2902.
The following two methods are roughly used to approximate a patch with a polygon set.
In the 1 st method, the uv parameter space is first divided according to the stride of the divided patch (determined in advance by any method), and lattice points are generated. Then, the coordinates of the generated lattice points in the 3-dimensional space are calculated, and the vertices are connected to generate a polygon.
The 2nd method generates control points that divide the patch and recursively repeats this operation to generate polygons. The former is called the tessellation algorithm and the latter the subdivision algorithm.
In embodiment 5, the operation of dividing a patch into 2 in each of the u and v directions by the subdivision algorithm is counted as 1 level, and this count is defined as the subdivision level. Alternatively, a table associating levels with division counts may be prepared, for example dividing the parameter space into 10 in the u- and v-axis directions for level 1 and into 20 for level 2. The de Casteljau algorithm, a representative subdivision algorithm for subdividing a Bezier curve, is as described above.
Fig. 33(a) shows an example of a patch before the subdivision method is used, and fig. 33(b) and (c) show examples of cases where the subdivision method is used for the patch at level 1 and level 2, respectively. In fig. 33(b), 4 sub-patches are formed, and in fig. 33(c), 16 sub-patches are formed.
Fig. 34 is a flowchart showing the processing of the segmentation level determiner 2903. Next, each step will be described in detail with reference to fig. 34.
Since a contour edge forming patch forms the contour of the object when drawn, it is desirable to divide it more finely than other patches. However, since the signed area of a contour edge patch tends to be small, an appropriate subdivision level may not be obtainable from its own area. Therefore, in embodiment 5, the subdivision level (a fixed value) of contour edge forming patches is determined using the maximum signed area calculated by the contour edge detection unit 2902. By using the maximum signed area, an appropriate subdivision level can be ensured both when the object is displayed large and when the object is far from the viewpoint and displayed very small.
Specifically, the subdivision level determination unit 2903 first prepares, as shown in fig. 35(a), a table 3501 in the table storage unit 2903a that describes the correspondence between the maximum signed area and the subdivision level of contour edge forming patches, compares the maximum signed area calculated by the contour edge detection unit 2902 against table 3501, and determines the subdivision level of the contour edge forming patches (S3401). In fig. 35(a), MAi (i = 0, ..., 4) are threshold values of the maximum signed area.
Next, the subdivision-level determination unit 2903 first refers to the edge identifier and determines whether or not each patch constituting the object is a contour-edge-forming patch (S3403). If the edge identifier is 1 (yes in S3403), the contour edge forms a patch, and therefore the segmentation level is determined immediately (S3404).
On the other hand, when the edge identifier is 0 and the patch is not a contour edge forming patch (no in S3403), the subdivision level determination unit 2903 determines the subdivision level with reference to the positive signed area of the patch. The reason is that when the negative signed area is large, a large part of the patch faces away from the viewpoint and is not visible, and need not be divided finely. Specifically, the table 3502 shown in fig. 35(b) is recorded in the table storage unit 2903a, and the subdivision level is determined by referring to table 3502 (S3405). In table 3502 of fig. 35(b), Ai (i = 0, ..., 4) are threshold values of the signed area. The above processing is repeated until it has been completed for all patches (S3402).
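A threshold table in the spirit of fig. 35(b) and the lookup of step S3405 might look like this; the threshold values Ai and the levels are invented for illustration, since the patent does not give concrete numbers.

```python
# (signed-area threshold Ai, subdivision level): a larger projected area
# on the screen warrants deeper subdivision. Values are illustrative only.
NORMAL_TABLE = [(100.0, 1), (400.0, 2), (1600.0, 3), (6400.0, 4)]

def subdivision_level(positive_area, table=NORMAL_TABLE):
    """Return the subdivision level of a non-contour-edge patch from its
    positive signed area (step S3405): the level of the largest
    threshold that the area reaches, or 0 below all thresholds."""
    level = 0
    for threshold, lvl in table:
        if positive_area >= threshold:
            level = lvl
    return level

print(subdivision_level(50.0), subdivision_level(500.0))  # 0 2
```

A second table with lower thresholds would play the role of the contour edge table 3501 of fig. 35(a).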
In embodiment 5, the subdivision level of contour edge forming patches is determined as a fixed value according to the maximum signed area, but a table such as table 3502 of fig. 35(b) may instead be prepared so that the level is determined according to the signed area, as for other patches. In this case, it is desirable to record a table for contour edge forming patches in the table storage unit 2903a in addition to the table used for normal patches, and to set the signed area thresholds low. As noted above, attention must be paid to the fact that the signed area of such patches may be very small.
The subdivision unit 2904 subdivides each patch by the subdivision algorithm according to the subdivision level determined by the subdivision level determination unit 2903. After that, if the subdivision level differs between adjacent patches, a gap is generated between the patches, so the subdivision unit 2904 performs processing to compensate for the gap; the method for this is not particularly limited.
For example, there is a method of newly generating a polygon in the generated gap. Fig. 36(a) shows an example object before subdivision, and fig. 36(b) shows the object after subdivision. In the example of fig. 36(b), the contour edge forming patches that generate the contour of the object are divided at level 2, and the other patches are divided at level 0 or 1 according to their area on the screen. Thus, by detecting contour edges, the curved surface image can be rendered more precisely.
As described above, according to the curved surface image processing apparatus 100 of embodiment 5, the 4 vertices of each patch's control points that lie on the patch are perspective-transformed, and the signed area of the figure formed by the transformed vertices is calculated. Next, referring to the sign of the calculated signed area, the contour edge detection unit 2902 determines whether or not each patch is a patch forming a contour edge. Then, the subdivision level determination unit 2903 determines the subdivision level of each patch based on the result of the determination and the signed area.
Through the above processing, it is possible to appropriately control the fine division level corresponding to the area of the patch on the screen, and at the same time, generate an object whose edge portion is smooth. In addition, since the determination of the fine division level is performed only once before the fine division processing is performed, the amount of calculation is small compared to the related art in which the flatness is calculated and whether or not to divide is determined every time of fine division. Further, by using the signed area also for the determination of the patch formation at the contour edge, the calculation load can be minimized.
The curved surface image processing apparatus 100 according to embodiment 5 is particularly effective when polygon approximation and rendering are performed using only control points existing on patches.
(embodiment 6)
Next, the curved surface image processing apparatus 100 according to embodiment 6 will be described with reference to the drawings. The functional configuration of the curved surface image processing apparatus 100 according to embodiment 6 is the same as that of embodiment 5, but the processing of the contour edge detection unit 2902 and the subdivision level determination unit 2903 differs. The respective functions are explained in detail below.
The curved surface image processing apparatus 100 according to embodiment 6 is particularly effective when polygon approximation and rendering are performed using all control points defining the shape of a patch. Further, the shape input receiving unit 2901 receives inputs of viewpoint information including a field of view transformation matrix and a perspective transformation matrix and object shape information including information on an object shape and adjacent patch information, as in embodiment 5.
Fig. 37 is a flowchart showing a process performed by the contour edge detection unit 2902 according to embodiment 6.
First, unlike embodiment 5, in which only the control points existing on the patch are perspective-transformed, all control points (16 points in the case of a 4th-order (degree 3) rational Bezier surface) are perspective-transformed (S3702) onto the 2-dimensional screen. When adjacent control points are connected, 3 × 3 = 9 figures are generated on the 2-dimensional screen, as shown in fig. 30. Hereinafter, each generated figure is referred to as a control polygon.
Next, for each patch, the signed areas of all the generated control polygons are calculated, with separate memory areas secured so that positive and negative signed areas can be distinguished (S3703). The calculated value is added to the memory area storing the positive area when it is positive, and to the memory area storing the negative area when it is negative. When the processing is completed for the 9 control polygons, the process proceeds to S3704.
Since the shape of the patch is defined by the control points and a Bezier patch has the convex hull property, whether the patch is a contour edge forming patch can be determined using the control polygons. For example, since all control polygons face outward in fig. 38(a) and all face inward in fig. 38(b), it can be determined that these patches do not form contour edges. On the other hand, in fig. 38(c) and (d), outward and inward control polygons coexist, so it can be determined that these patches form contour edges.
If the signed areas of the control polygons calculated by the signed area calculation unit 2902b of the contour edge detection unit 2902 include only positive values or only negative values, the patch is not a contour edge forming patch; if they include both positive and negative values, the patch can be determined to be a contour edge forming patch.
Therefore, the contour edge detection unit 2902 acquires the accumulated positive and negative signed area values from the memory areas, calculates their product, and determines whether the product is 0 (S3704). If the product is not 0 (no in S3704), the patch is determined to form a contour edge (S3705), and the edge identifier is set to 1.
When the product of the accumulated positive and negative signed areas is 0 (yes in S3704), the contour edge detection unit 2902 determines that the patch is not a contour edge forming patch (S3706). The above processing is applied to all patches (S3701), and the processing ends when all patches have been processed.
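Steps S3702-S3706 for one patch can be put together as follows for a 4×4 grid of already-projected control points. Function names are illustrative; the signed-area formula is expression 13, and each control polygon quad is split into two triangles as in embodiment 5.

```python
def signed_area(a, b, c):
    """Expression 13: signed area of the screen-space triangle (a, b, c)."""
    return (a[0] * b[1] + c[0] * a[1] + b[0] * c[1]
            - c[0] * b[1] - b[0] * a[1] - a[0] * c[1]) / 2

def detect_contour_edge(grid):
    """grid: 4x4 control points perspective-transformed to the screen.
    Accumulate positive and negative signed areas of the 3x3 = 9 control
    polygons; the patch forms a contour edge when both signs occur,
    i.e. when the product of the two sums is non-zero (S3704/S3705)."""
    pos = neg = 0.0
    for i in range(3):
        for j in range(3):
            q00, q10 = grid[i][j], grid[i + 1][j]
            q01, q11 = grid[i][j + 1], grid[i + 1][j + 1]
            for s in (signed_area(q00, q10, q01),
                      signed_area(q10, q11, q01)):
                if s >= 0:
                    pos += s
                else:
                    neg += s
    return pos * neg != 0, pos, neg

# A flat, front-facing control net: every control polygon has the same
# orientation, so no contour edge is detected (the fig. 38(a) case).
flat = [[(float(i), float(j)) for j in range(4)] for i in range(4)]
print(detect_contour_edge(flat))  # (False, 9.0, 0.0)
```

A net with mixed orientations (figs. 38(c), (d)) would accumulate both sums and return True.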
Fig. 39 is a flowchart showing the processing of the sub-division level determining unit 2903 according to embodiment 6.
First, the segmentation level determiner 2903 refers to the edge identifier provided by the contour edge detector 2902, and checks whether or not the patch is a contour edge forming patch (S3902). In embodiment 5, the subdivision level of the contour edge forming patch is constant, but in embodiment 6, the subdivision level is determined with reference to the signed area. In this case, in order to alleviate the problem of the decrease in the signed area, the sum of the absolute values of the positive and negative signed areas is used as an index.
Next, the subdivision level determination unit 2903 holds, in the table storage unit 2903a, a table 4000 shown in fig. 40 that associates the signed area of a contour edge forming patch with a subdivision level, and determines the subdivision level by comparing the signed area of the patch against table 4000 (S3903). Although not described here, as in embodiment 5, the subdivision level of contour edge forming patches may instead be determined with reference to the maximum signed area of the patches constituting the object. Alternatively, instead of preparing table 4000 of fig. 40, the subdivision level determination unit 2903 may determine the level from a table associating the positive signed area with the subdivision level, as for patches other than contour edge forming patches. When the patch is not a contour edge forming patch (no in S3902), the subdivision level determination unit 2903 determines the subdivision level of the patch (edge identifier 0) by referring to the normal table, as in embodiment 5 (S3904).
It is sometimes desirable to be able to set the subdivision levels in the u- and v-axis directions independently, so that an object such as a cylinder need not be divided as finely in the rotation axis direction. However, in the algorithm described so far, the same subdivision level is set for the u- and v-axis directions. Therefore, the determination of separate subdivision levels in the u- and v-axis directions using the calculated signed areas of the control polygons is described below.
The contour edge detection unit 2902 perspective-transforms all control points defining the patch shape, generates the control polygons, and calculates the signed area of each control polygon, as in the method described above. Each calculated signed area is added to the memory area corresponding to its sign, and whether the patch forms a contour edge is determined from the values of the storage areas when all control polygons have been processed. In the present method, not only the sums of the signed areas but also the signed area of each individual control polygon is held in a storage area (not shown). For a 4th-order (degree 3) Bezier patch (see fig. 30), the signed areas of the control polygons are held in 9 storage areas. These values are sent to the subdivision level determination unit 2903.
Fig. 41 is a flowchart showing the processing in the subdivision level determination unit 2903 of this method.
First, the subdivision level determination unit 2903 refers to the edge identifier and checks whether each patch is a contour edge forming patch (S4102). When the patch is a contour edge forming patch (yes in S4102), the subdivision level for both the u- and v-axis directions is determined by referring to the contour edge table based on the sum of the absolute values of the signed areas (S4103). Therefore, for contour edge forming patches, the u- and v-axis directions receive the same subdivision level. The method described below may instead be used to determine the subdivision levels in the u- and v-axis directions independently for contour edge forming patches; in that case, attention must be paid to the possibility that the contour edge portion becomes unsmooth.
The processing of the subdivision level determination unit 2903 when the edge identifier is 0 will be described with reference to fig. 42. Figs. 42(a)-(d) show curved surfaces together with their control polygons. The control polygons of fig. 42(a) are all similar in shape and nearly equal in area, so the patch is considered to bend little in both the u-axis and v-axis directions.
In fig. 42(b), on the other hand, it can be seen that the control points Q10, Q11, Q13, Q20, Q21, Q22 and Q23 deviate from the control points Q30, Q31, Q32 and Q33, and the surface bends in the u-axis direction. This means that the patch of fig. 42(b) must be subdivided in the u-axis direction.
Similarly, the patch of fig. 42(c) must be subdivided in the v-axis direction, and that of fig. 42(d) in both the u- and v-axis directions. Using these properties, the present method determines the subdivision level using the area ratios of control polygons lying along the u-axis and v-axis directions as an index of the degree of patch curvature. This index is hereinafter referred to as the bending parameter.
First, the subdivision level determination unit 2903 checks whether all signed areas of the control polygons forming the patch are negative (S4104). If so (yes in S4104), it sets the subdivision level of the patch to 0 (S4105) and moves on to the next patch.
If not all signed areas are negative (no in S4104), the subdivision level determination unit 2903 calculates the area ratios of the control polygons lying along the u-axis direction (S4106), specifically as follows.
(1) Obtain the signed area values of the 3 control polygons formed from the control points Qj0 and Qj1 (j = 0, ..., 3).
(2) Let the signed area values obtained in (1) be A0, A1 and A2, and find their maximum AMAX and minimum AMIN.
(3) Calculate the bending parameter Cu0 by the following equation.
Cu0 = AMAX / AMIN
(4) Perform the same processing on the control polygons formed from Qj1 and Qj2 (j = 0, ..., 3) and on those formed from Qj2 and Qj3 (j = 0, ..., 3), obtaining Cu1 and Cu2, respectively.
(5) Average the values of (3) and (4) to calculate the bending parameter Cu used for the subdivision level determination.
Cu = (Cu0 + Cu1 + Cu2) / 3
Here, the area ratios are obtained using all control polygons and their average is used as the bending parameter, but other choices are possible. For example, the bending parameter may be calculated using only the control polygons adjacent to the boundary lines v = 0 and v = 1, and their average taken. Conversely, the calculation may use only control polygons that do not adjoin a boundary line parallel to the u axis.
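Steps (1)-(5) above can be sketched as follows. The 3x3 indexing of the area grid and the use of absolute values are assumptions made for the illustration.

```python
def bending_parameter_u(areas):
    """areas[j][k]: signed area of the control polygon between rows j, j+1
    and columns k, k+1 of the projected 4x4 control grid.
    Strip k collects the 3 polygons formed from Q_jk and Q_j(k+1):
    Cu_k = AMAX / AMIN of their absolute areas; Cu is the average."""
    ratios = []
    for k in range(3):
        strip = [abs(areas[j][k]) for j in range(3)]
        ratios.append(max(strip) / min(strip))   # Cu0, Cu1, Cu2
    return sum(ratios) / 3.0                     # Cu = (Cu0 + Cu1 + Cu2) / 3
```

A flat, evenly weighted patch gives Cu = 1; the more the polygon areas differ along u, the larger Cu grows. Swapping the roles of the two indices gives the v-direction parameter.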
The subdivision level determination unit 2903 determines the subdivision level in the u-axis direction from the bending parameter calculated in S4106 (S4107). For this purpose it holds, in the table storage unit 2903a, a table 4301 shown in fig. 43 that associates the bending parameter C with a subdivision level, and determines the level by comparing the calculated value against the table. In fig. 43, Ci (i = 0, ..., 4) are threshold values of the bending parameter.
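The comparison against table 4301 amounts to counting how many thresholds Ci the bending parameter exceeds. A minimal sketch follows; the concrete threshold values are invented for illustration, since the patent gives none.

```python
import bisect

# Assumed thresholds C0 < C1 < ... < C4 (illustrative values only)
THRESHOLDS = [1.05, 1.2, 1.5, 2.0, 3.0]

def subdivision_level(bending_parameter):
    """Map a bending parameter to a subdivision level via the threshold
    table: the level is the number of thresholds the parameter exceeds."""
    return bisect.bisect_right(THRESHOLDS, bending_parameter)
```

A nearly flat patch (parameter close to 1) thus gets level 0, and increasingly bent patches get progressively higher levels.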
Thereafter, the subdivision level determination unit 2903 performs the same processing as in S4106 and S4107 on the 3 control polygons adjacent in the v-axis direction (S4108 and S4109), thereby determining the subdivision level in the v-axis direction. This is repeated until all patches have been processed (S4101). By this method, the curved surface image processing apparatus 100 according to the present invention can determine the subdivision levels in the u-axis and v-axis directions independently.
The subdivision section 2904 subdivides each patch according to the subdivision level determined by the subdivision level determination section 2903. After that, when a gap occurs between patches, a process of compensating for the gap is also performed.
As described above, according to the curved surface image processing apparatus 100 of embodiment 6, all the control points constituting each patch are transformed into the screen coordinate system by perspective transformation, and the signed areas of all the control polygons formed thereby are calculated. The contour edge detection unit 2902 determines whether or not a contour edge patch is formed based on the calculated signed area, and the subdivision level determination unit 2903 determines a subdivision level according to the determination result and the signed area.
Therefore, even in the method of performing polygon approximation using all control points defining the patch shape, an object with smooth edge portions can be generated while suppressing an increase in the number of polygons. Further, the processing of the subdivision level determination unit 2903 need be performed only once before subdivision, so the amount of calculation is small compared with the conventional technique. Furthermore, whether a patch forms a contour edge can be determined with only a small amount of additional processing, which is very effective in terms of calculation load.
(embodiment 7)
Next, a curved surface image processing apparatus 100 according to embodiment 7 will be described with reference to the drawings. Fig. 44 is a diagram showing an example of the configuration of the curved surface image processing apparatus according to embodiment 7. The curved surface image processing apparatus 100 according to embodiment 7 is characterized by further including the maximum segmentation level determination unit 4401 in addition to the units described in embodiment 5.
Further, by providing the maximum subdivision level determination unit 4401, patches that do not need subdivision are not divided even if their area on the screen is large, so the number of polygons can be further suppressed. Each function is described in detail below, but units given the same reference numerals as in fig. 1 are not described again. In embodiment 7, the case is described where polygon approximation is performed using only those control points lying on each patch among the control points defining the shapes of the patches constituting the object, but the same applies when approximation is performed using all control points. In fig. 44, the shape input receiving unit 2901 and the contour edge detecting unit 2902 send the signed areas of each patch and the result of determining whether the patch forms a contour edge to the subdivision level determination unit 2903, as in embodiment 5.
For example, even if its area on the screen is very large, a flat patch does not need to be subdivided finely. Therefore, the maximum subdivision level determination unit 4401 acquires the object shape information, calculates an index indicating how strongly each patch constituting the object is curved, and determines a maximum subdivision level. This index is hereinafter referred to as the curvature parameter.
Two methods by which the maximum subdivision level determination unit 4401 determines the curvature parameter are described below. Either index roughly approximates the patch shape by the polyhedron spanned by the control points. Further, since the following processing is performed in the object's own coordinate system, neither field-of-view transformation nor perspective transformation is necessary.
In method 1, referring to fig. 45(a), the maximum subdivision level determination unit 4401 determines the curvature parameter by calculating the distances between a plane spanned by control points lying on the patch and the other control points, specifically as follows.
(1) Find the equation of the plane α0 spanned by the control points Q00, Q30 and Q03.
(2) In general, the distance L between a plane α: ax + by + cz + d = 0 and a point P(x0, y0, z0) in 3-dimensional space is obtained by the following equation. Here, · denotes multiplication.
Formula 14
L = |a·x0 + b·y0 + c·z0 + d| / √(a² + b² + c²)    (14)
Using this equation, calculate the distances I01, I02, I10, I11, I12, I20 and I21 between the plane α0 obtained in (1) and the control points Q01, Q02, Q10, Q11, Q12, Q20 and Q21.
(3) Find the equation of the plane α1 spanned by the control points Q30, Q33 and Q03.
(4) Calculate the distances I12′, I13′, I21′, I22′, I23′, I31′ and I32′ between the plane α1 obtained in (3) and the control points Q12, Q13, Q21, Q22, Q23, Q31 and Q32.
(5) Calculate the lengths d0 and d1 of the segments connecting the control points Q03 and Q30, and Q00 and Q33, respectively.
(6) Calculate the curvature parameter C by solving the following equations.
I0 = I01 + I02 + I10 + I11 + I12 + I20 + I21
I1 = I12′ + I13′ + I21′ + I22′ + I23′ + I31′ + I32′
C = (I0 + I1) / (d0 + d1)
Here, the control points whose distances to the spanned planes are calculated are divided into two groups, but the method is not limited to this. For example, the distances from each plane to all control points not on that plane may be obtained to determine the curvature parameter. Conversely, only representative points (e.g., the central control points Q11, Q12, Q21 and Q22) may be used.
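Steps (1) and (2) of method 1 reduce to constructing a plane from three control points and evaluating equation (14). A sketch under the reconstruction above; the function names are illustrative, not the patent's.

```python
import math

def plane_from_points(p0, p1, p2):
    """Plane ax + by + cz + d = 0 through three 3-D points:
    (a, b, c) is the cross product of two edge vectors."""
    u = [p1[i] - p0[i] for i in range(3)]
    v = [p2[i] - p0[i] for i in range(3)]
    a = u[1] * v[2] - u[2] * v[1]
    b = u[2] * v[0] - u[0] * v[2]
    c = u[0] * v[1] - u[1] * v[0]
    d = -(a * p0[0] + b * p0[1] + c * p0[2])
    return a, b, c, d

def point_plane_distance(plane, p):
    """Equation (14): L = |a*x0 + b*y0 + c*z0 + d| / sqrt(a^2 + b^2 + c^2)."""
    a, b, c, d = plane
    return abs(a * p[0] + b * p[1] + c * p[2] + d) / math.sqrt(a * a + b * b + c * c)
```

Summing such distances for the two point groups and dividing by the diagonal lengths d0 + d1 yields the curvature parameter C of step (6).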
In method 2, referring to fig. 45(b), the maximum subdivision level determination unit 4401 determines the curvature parameter from the ratio between the sum of the distances between adjacent control points and the length of the segment connecting the end control points, specifically as follows.
(1) Calculate the length di of the segment connecting the control points Q0i and Q3i.
(2) Calculate the lengths Iij (j = 0, ..., 2) of the segments connecting adjacent control points Qji and Q(j+1)i in the u direction.
(3) Calculate Ci by solving the following equation.
Ci = (Ii0 + Ii1 + Ii2) / di
(4) Repeat steps (1) to (3) to obtain C0, C1, C2 and C3.
(5) Calculate the length di′ of the segment connecting the control points Qi0 and Qi3.
(6) Calculate the lengths Iij′ (j = 0, ..., 2) of the segments connecting adjacent control points Qij and Qi(j+1) in the v direction.
(7) Calculate Ci′ by solving the following equation.
Ci′ = (Ii0′ + Ii1′ + Ii2′) / di′
(8) Repeat steps (5) to (7) to obtain C0′, C1′, C2′ and C3′.
(9) Average the values obtained in (4) and (8) and use the result as the curvature parameter.
C = (C0 + C1 + C2 + C3 + C0′ + C1′ + C2′ + C3′) / 8
Here, the curvature parameter is determined by processing all line segments formed by the control points, but the method is not limited to this; the curvature parameter may also be determined by processing only the boundary lines (u = 0, u = 1, v = 0, v = 1).
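Method 2 can be sketched as follows for a 4x4 grid Q of 3-D control points; the indexing conventions and function names are assumptions for the illustration.

```python
import math

def dist(p, q):
    """Euclidean distance between two 3-D points."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

def curvature_parameter(Q):
    """Steps (1)-(9): for each row and each column of the control grid,
    divide the sum of the 3 chord lengths between adjacent control points
    by the end-to-end segment length, then average the 8 ratios."""
    ratios = []
    for i in range(4):                       # u direction: Ci
        chord = sum(dist(Q[j][i], Q[j + 1][i]) for j in range(3))
        ratios.append(chord / dist(Q[0][i], Q[3][i]))
    for i in range(4):                       # v direction: Ci'
        chord = sum(dist(Q[i][j], Q[i][j + 1]) for j in range(3))
        ratios.append(chord / dist(Q[i][0], Q[i][3]))
    return sum(ratios) / 8.0
```

For a flat, evenly spaced grid the chord sums equal the end-to-end lengths and C = 1; any curvature makes the chords longer than the straight segment, so C grows above 1.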
Next, the maximum subdivision level determination unit 4401 prepares a table 4601, shown in fig. 46, giving the correspondence between the curvature parameter and the maximum subdivision level, compares the calculated C with the table, and determines the maximum subdivision level of the patch. In fig. 46, εi (i = 0, ..., 4) are threshold values of the curvature parameter. All patches are processed in this way, and the calculated maximum subdivision levels are sent to the subdivision level determination unit 2903.
The subdivision level determination unit 2903 determines the subdivision level using the signed areas and edge identifier of each patch calculated by the contour edge detection unit 2902, but here the maximum subdivision level determined by the maximum subdivision level determination unit 4401 is also taken into account. For this purpose, the table of fig. 35(b) is corrected into the table 4701 of fig. 47, whose subdivision level column is updated according to the maximum subdivision level sent from the maximum subdivision level determination unit 4401. The subdivision level determination unit 2903 then determines the subdivision level using the updated table, in the same manner as in embodiment 5.
In fig. 47, MAXL is the maximum subdivision level sent from the maximum subdivision level determination unit 4401, and clip(x) denotes x if x is greater than 0, and 0 otherwise.
The subdivision unit 2904 subdivides each patch in accordance with the subdivision level determined by the subdivision level determination unit 2903, and performs processing for compensating for the gap.
In this way, according to the curved surface image processing device 100 of embodiment 7, the maximum subdivision level determination unit 4401 determines the maximum subdivision level of each patch in advance. This avoids, for example, needless division of a nearly flat patch that does not require subdivision, so that polygons approximating the object shape can be generated more efficiently. The processing of the maximum subdivision level determination unit 4401 need be performed only once, when the object shape information is input, and can therefore be realized with minimal calculation load.
(embodiment 8)
Next, a curved surface image processing apparatus 100 according to embodiment 8 will be described with reference to the drawings.
Fig. 48 is a diagram showing an example of the configuration of the curved surface image processing apparatus according to embodiment 8. The curved surface patch division processing unit 105 of the curved surface image processing apparatus 100 shown in fig. 48 is characterized by being provided with an initial subdivision unit 4801 in addition to the processing units described in embodiment 5. By providing the initial subdivision unit 4801, the subdivision level can be determined in units of smaller patches, and the polygon approximation of the object can be performed more flexibly. Each function is described in detail below, but units given the same reference numerals as in fig. 1 are not described again.
The shape input receiving unit 2901 receives input of viewpoint information and object shape information. The acquired data is sent to the initial segmentation unit 4801.
When the object is composed of very few patches or of only very large patches, setting subdivision levels on them directly increases the possibility that the whole object is divided either too coarsely or too finely, and flexible level control becomes difficult. Therefore, the initial subdivision unit 4801 performs several levels of subdivision in advance, before the subdivision level is determined, and the subdivision level determination unit 2903 then determines a level for each of the resulting patches. The method by which the initial subdivision unit 4801 determines the number of initial subdivision steps is not particularly limited: the number may be fixed in advance, determined according to the number of patches constituting the object, or determined from the minimum signed area obtained by perspective-transforming the initial patches of the object.
As the subdivision method of the initial subdivision unit 4801, a tessellation or subdivision algorithm may be used when the object is polygon-approximated using only those control points lying on each patch. When all control points are used, the control points of the divided patches themselves must be generated, so subdivision processing using the above-mentioned de Casteljau algorithm is necessary.
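For reference, one split step of the de Casteljau algorithm on a cubic Bézier segment looks like the sketch below; subdividing a patch applies such splits to each row and then each column of its control grid. This is an illustration, not the patent's code.

```python
def de_casteljau_split(p0, p1, p2, p3, t=0.5):
    """Split a cubic Bezier segment at parameter t, returning the control
    points of the left and right sub-segments (de Casteljau's algorithm)."""
    lerp = lambda a, b: tuple((1 - t) * x + t * y for x, y in zip(a, b))
    q0, q1, q2 = lerp(p0, p1), lerp(p1, p2), lerp(p2, p3)   # first level
    r0, r1 = lerp(q0, q1), lerp(q1, q2)                     # second level
    s = lerp(r0, r1)                                        # point on curve
    return (p0, q0, r0, s), (s, r1, q2, p3)
```

Both sub-segments are again cubic Bézier segments, so the split can be applied recursively until the desired subdivision level is reached.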
According to the subdivision level determined as above, all patches constituting the object are subdivided by the same number of levels. The subdivided patch data is sent to the contour edge detection unit 2902.
The contour edge detection unit 2902, the segmentation level specification unit 2903, and the segmentation unit 2904 process all the patches segmented by the initial segmentation unit 4801 to generate a polygon-approximated object. The curved surface image processing device 100 according to embodiment 8 may further include a maximum segmentation level determination unit 4401.
In this way, according to the curved surface image processing apparatus 100 of embodiment 8, the initial subdivision unit 4801 subdivides each patch constituting the object by several levels before the subdivision level is determined, and the contour edge detection unit 2902 and the subdivision level determination unit 2903 then determine the subdivision level for the patches generated by this initial subdivision. Therefore, even for an object composed of a very small number of patches, or of only very large patches, the subdivision level can be set flexibly, and an object with smooth edge portions can be generated while suppressing an increase in the number of polygons.
(embodiment 9)
Next, the processing by which the normal calculation unit 106 of the curved surface image processing apparatus 100 according to the present invention calculates the normal at each control point of a rational Bézier surface will be described as embodiment 9. The normals calculated here are used to determine the shading, brightness and the like of the Bézier surface.
Fig. 50 is a functional block diagram showing the configuration of the normal line calculation unit 106 according to embodiment 9.
The normal calculation unit 106 includes a control vertex input unit 5001 that inputs the coordinates of each control vertex of the bezier surface converted into the bezier data by the data conversion unit 105, a normal calculation unit 5002 that calculates the normal vector of each control vertex, and a control point output unit 5003 that outputs a control point to the perspective conversion unit 107.
The normal calculation unit 5002 includes a determination unit 5002a for determining whether or not the control point is a control point to be subjected to normal vector calculation.
Fig. 51 is a block diagram showing another configuration of the curved surface image processing apparatus 100, and includes a CPU5101 that actually performs calculation processing of a normal vector, an I/O5102, and a memory 5103 that stores control point information of a bezier curved surface. The memory 5103 also holds a table storage 5103a that stores table information shown in fig. 55.
Fig. 52 is a flowchart showing the processing procedure of the normal line calculation unit 5002 according to embodiment 9.
First, the control vertex input unit 5001 inputs control points Pij (0 ≦ i, j ≦ 3) constituting the bezier surface from the surface patch division processing unit 105 (S5201). The information of the control point is recorded in the memory 5103 through the I/O5102. The information of the control point may be input from the keyboard as described above, or may be input from a reading unit of data recorded on the recording medium.
Fig. 55(a) and (b) show an example of a list 5501 describing control points and vertex coordinates and a list 5502 describing control points and normals stored in the memory 5103. The control points described in the list may be input with one patch of the bezier surface or with a plurality of patches. Moreover, a plurality of bezier curved surfaces may be input.
The normal calculation unit 5002 calculates the normal vector of each control point input from the control vertex input unit 5001. First, the determination unit 5002a determines whether the input control point P00 has degenerated onto an adjacent control point, that is, whether P00 matches P01 or P00 matches P10 (S5202).
When neither adjacent control point is degenerate and no match occurs (no in S5202), the normal calculation unit 5002 calculates the difference vectors between adjacent control points, that is, the difference vector (P10-P00) and the difference vector (P01-P00) (S5203).
Next, the normal calculation unit 5002 calculates the outer product (P10-P00) × (P01-P00) of the difference vectors, normalizes it, and obtains the normal of the control point P00 (S5204, S5205). The equation used by the normal calculation unit 5002 is shown in the following (equation 15). The calculated normal is stored in the memory 5103 as the normal vector of the control point.
Formula 15
(P10 - P00) × (P01 - P00) / |(P10 - P00) × (P01 - P00)|    (15)
Thereafter, the normal calculation unit 5002 checks whether the normals of the vertices at the 4 corners of the Bézier surface have all been calculated (S5206). If not (no in S5206), the processing from S5202 is repeated; if the normals of all 4 corner vertices have been calculated (yes in S5206), the series of processing ends.
Fig. 53 shows an example of the difference vector 5301 (P10-P00), the difference vector 5302 (P01-P00), and their outer product 5303 (P10-P00) × (P01-P00) in the case where no control point adjacent to the normal calculation target is degenerate.
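The non-degenerate case of equation (15) is a normalized cross product of the two difference vectors. A minimal sketch; the function name is an assumption.

```python
import math

def corner_normal(p00, p10, p01):
    """Normal at corner control point P00 (equation 15): normalized outer
    product of the difference vectors (P10 - P00) and (P01 - P00)."""
    u = [p10[i] - p00[i] for i in range(3)]   # difference vector P10 - P00
    v = [p01[i] - p00[i] for i in range(3)]   # difference vector P01 - P00
    n = [u[1] * v[2] - u[2] * v[1],
         u[2] * v[0] - u[0] * v[2],
         u[0] * v[1] - u[1] * v[0]]           # outer (cross) product
    norm = math.sqrt(sum(c * c for c in n))
    return [c / norm for c in n]
```

This fails (division by zero) exactly when the two difference vectors are parallel or one is zero, which is why the degenerate cases below require substitute control points.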
On the other hand, when one or both of the pairs P00 and P01, and P00 and P10, are degenerate and coincide (yes in S5202), the normal calculation unit 5002 calculates the normal using nearby control points. The cases in which adjacent control points are degenerate are described below with reference to figs. 54(a), (b) and (c).
When the determination unit 5002a of the normal calculation unit 5002 according to embodiment 9 determines that P00 matches P01 but does not match P10, the difference vector (P10-P00) and the difference vector (P11-P00) are calculated. If P11 also matches P00, points are searched in the order P12, P13, P21, P22, P23, P31, P32, P33 for one that does not match P00, and the difference vector between the found point and P00 is calculated.
Fig. 54 is a reference diagram illustrating the cases where a control point adjacent to the normal calculation target P00 is degenerate.
Fig. 54(a) shows an example in which P00 coincides with P01. The difference vector 5401 is (P10-P00), the difference vector 5402 is (P11-P00), and the normal vector 5403 is their outer product (P10-P00) × (P11-P00).
Fig. 54(b) shows an example in which the normal vector of P00 is calculated using other difference vectors when P00 matches P01. To calculate the normal vector more accurately, it is also conceivable to calculate the normal vector 5406 from the difference vectors 5404 and 5405, as shown in fig. 54(b), when the angle formed by the two difference vectors used for the normal calculation is smaller than a predetermined angle, or when the distance between the control point P00 and a control point used for a difference vector is shorter than a predetermined distance.
Alternatively, as shown in fig. 54(c), when P00 does not match P01 but matches P10, the normal calculation unit 5002 calculates the difference vector 5408 (P01-P00) and the difference vector 5407 (P11-P00), and the normal vector 5409 is their outer product (P11-P00) × (P01-P00). If P11 matches P00, the determination unit 5002a searches in the order P21, P31, P12, P22, P32, P13, P23, P33 for a point that does not match P00, and the difference vector between the found point and P00 is calculated, as described above.
When P00, P01 and P10 all coincide, the determination unit 5002a has the difference vector (P20-P00) and the difference vector (P02-P00) calculated from P20 and P02. If P02 matches P00, the search proceeds in the order P03, P21, P31, P32, P33; if P20 matches P00, it proceeds in the order P30, P12, P13, P23, P33. When a non-coincident point is found, the difference vector between the found point and P00 is calculated.
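The fallback searches described above amount to walking a fixed candidate order until a control point distinct from P00 is found. A sketch; the names and data layout are illustrative, not the patent's implementation.

```python
# One of the search orders described above (used when P01 and P11
# coincide with P00); the other cases use analogous orders.
SEARCH_ORDER = ["P12", "P13", "P21", "P22", "P23", "P31", "P32", "P33"]

def first_distinct(p00, candidates, points):
    """Return the name of the first candidate control point that does not
    coincide with P00, or None if every candidate is degenerate."""
    for name in candidates:
        if points[name] != p00:
            return name
    return None   # fully degenerate: the patch may be dropped from drawing
```

The returned point supplies the second difference vector for the cross product; a None result corresponds to the fully degenerate case in which the patch is removed from the drawing targets.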
In addition, when all the control points lie on one curve or have degenerated to a single point, the patch is no longer a Bézier surface, so such control points may be removed from the drawing targets.
The control point output unit 5003 receives the normal information calculated by the normal calculation unit 106, or the information is stored in the memory 203. Fig. 55(b) shows an example of the normal data stored in the memory 203. In figs. 55(a) and (b), the vertex coordinates and the normals of the control points are managed independently, but they may of course be managed together.
The data of the control points calculated by the normal line calculation unit 106 according to embodiment 9 is used in the rendering process of the 3-dimensional image in the perspective conversion unit 107 and the rendering processing unit 108.
As described above, according to the normal calculation unit 106 of embodiment 9, even when a control point adjacent to the normal calculation target is degenerate, the normal calculation unit 5002 can calculate the normal vectors of the control points of the Bézier surface accurately and efficiently. In addition, when the drawing processing of the 3-dimensional image uses only the control points located on the Bézier surface, no normal calculation is needed for the control points inside the surface.
Industrial applicability
The curved image processing apparatus according to the present invention is applicable as a curved image processing apparatus for rendering a 3-dimensional object using a free-form surface such as a NURBS curved surface or a bezier curved surface in the field of 3-dimensional computer graphics, and is applicable to, for example, a portable terminal, a car navigation apparatus, a portable game machine, a television, or other entertainment apparatus that displays a 3-dimensional object on a screen.

Claims (29)

1. A curved surface image processing apparatus that draws a 3-dimensional object on a screen using NURBS data, which is shape data of the 3-dimensional object, characterized by comprising:
a data conversion unit that performs parameter conversion on NURBS data composed of NURBS curves and NURBS surfaces and converts the NURBS data into rational Bézier control point data composed of rational Bézier curves and rational Bézier surfaces;
a curved surface dividing unit that divides a rational Bézier curved surface patch formed by the rational Bézier control point data converted by the data conversion unit into a plurality of curved surface patches; and
a rendering unit that renders the 3-dimensional object with the surface patch;
the NURBS data consists of columns of control points and node vectors,
the data conversion unit includes a node insertion unit configured to perform an operation of inserting the node vector into the control point sequence by using a node insertion algorithm; and
a control point adjusting unit configured to delete unnecessary control points from the control points included in the control point sequence generated by the calculation of the node inserting unit;
the node insertion unit searches for an index of a node located at a specific position of the final node vector in a process of converting an initial node vector and an initial control point sequence included in the NURBS data into a final node vector and a final control point sequence representing the rational bessel control point data,
the control point adjusting unit deletes the specific control points of the final control point sequence using the searched index.
2. The curved surface image processing apparatus according to claim 1, wherein:
when the degree of the NURBS data is 3, the final control point sequence is (Q[0], Q[1], ..., Q[I-1]) where I is an integer, the final node vector is (u[0], u[1], ..., u[I+3]), and the (k-j+1) node values (u[j], ..., u[3], ..., u[k]) equal to the node u[3] at which drawing of the NURBS data starts give u[3] a multiplicity of 3 or more, the control point adjusting unit deletes the (k-3) control points (Q[0], Q[1], ..., Q[k-4]) from the final control point sequence.
3. The curved surface image processing apparatus according to claim 1, wherein:
when the number of times of NURBS data is3, the final control point is (Q [0],. once, Q [ I-2], Q [ I-1]), the final node vector is (u [0],. once, u [ I +2], u [ I +3]), and (k-j +1) node values of (u [ j ],. once, u [ I ],. once, u [ k ]) of the node u [ I ], (u [ j ],. once, u [ I ],. once, u [ k ]) at which the depiction of the NURBS data is completed are equal to u [ I ] and are multiplexed by 3 or more degrees of multiplexing, the control point adjusting unit deletes (I-j) control points of (Q [ j ],. once, Q [ I-2], Q [ I-1]) in the final control point column.
4. A curved surface image processing apparatus that draws a 3-dimensional object on a screen using NURBS data that is shape data of the 3-dimensional object, characterized in that: is provided with
a data conversion unit that performs parameter conversion on NURBS data composed of NURBS curves and NURBS surfaces and converts the NURBS data into rational Bézier control point data composed of rational Bézier curves and rational Bézier surfaces;
a curved surface dividing unit that divides a rational Bézier curved surface patch formed by the rational Bézier control point data converted by the data conversion unit into a plurality of curved surface patches; and
a rendering unit that renders the 3-dimensional object with the surface patch;
the curved surface dividing unit is further provided with: an area calculation unit that calculates a signed area of a 2-dimensional graph formed by perspective transformation using the rational b e zier control point data defining each curved patch shape constituting the object; and
a detection unit that detects, based on the value of the signed area, whether or not the surface patch is a surface patch forming a contour edge of the contour portion of the object.
5. The curved surface image processing apparatus according to claim 4, wherein:
the curved surface dividing unit further includes a fine division level determining unit configured to determine a fine division level of the curved surface patch in accordance with a result of whether the detection unit detects a curved surface patch forming the contour edge and a value of the signed area of the curved surface patch on the screen calculated by the area calculating unit.
6. The curved surface image processing apparatus according to claim 4, wherein:
the area calculating unit calculates a signed area of a 2-dimensional figure composed of rational B zier control points existing in a patch by perspective transformation among the rational B zier control point data,
the detection unit detects whether or not the patch is a curved patch in which a contour edge is formed, using the signed area.
7. The curved surface image processing apparatus according to claim 4, wherein:
the detection unit further compares the sign of the signed area of the 2-dimensional pattern calculated for the first surface patch with the sign of the signed area of the 2-dimensional pattern calculated for the surface patch adjacent to the first surface patch, and detects that the surface patch is a surface patch having a contour edge if the signs are different from each other.
8. The curved surface image processing apparatus according to claim 5, wherein:
the fine division level determination unit further determines a maximum value of the signed area calculated by the area calculation unit, and determines the fine division level of the curved patch forming the contour edge based on the determined maximum value of the signed area.
9. The curved surface image processing apparatus according to claim 4, wherein:
the detection unit uses the signed area, calculated by the area calculation unit, of the 2-dimensional figure formed by perspective-transforming all control points defining the shape of each curved surface patch constituting the object as an index for determining whether the patch is a curved surface patch forming a contour edge.
10. The curved surface image processing apparatus according to claim 4, wherein:
the area calculating unit further calculates the signed area of each 2-dimensional figure formed by perspective-transforming all control points defining the shape of each curved surface patch constituting the object, and then calculates the sums of the areas grouped by sign,
the detection unit detects that the curved surface patch does not form a contour edge when either of the per-sign area sums calculated by the area calculation unit is 0.
11. The curved surface image processing apparatus according to claim 5, wherein:
the area calculating unit further calculates the signed area of each 2-dimensional figure formed by perspective-transforming all control points defining the shape of each curved surface patch of the object,
the fine division level determination unit determines the fine division level of the curved surface patch forming the contour edge based on the sum of the absolute values of the signed areas of the 2-dimensional figures of the curved surface patch calculated by the area calculation unit.
12. The curved surface image processing apparatus according to claim 5, wherein:
the fine division level determination unit independently determines the fine division level for each of the 1st axis and the 2nd axis defining each curved surface patch constituting the object.
13. The curved surface image processing apparatus according to claim 12, wherein:
the area calculation unit further calculates the signed area of each 2-dimensional figure formed by perspective-transforming all control points defining the shape of each curved surface patch, referring to the 2-dimensional figures adjacent in the 1st axis direction and the 2nd axis direction,
the fine division level determination unit determines the subdivision level in the 1st axis direction according to the ratio of the maximum value to the minimum value of the signed areas calculated over the figures adjacent in the 1st axis direction, and determines the subdivision level in the 2nd axis direction according to the ratio of the maximum value to the minimum value of the signed areas calculated over the figures adjacent in the 2nd axis direction.
14. The curved surface image processing apparatus according to claim 5, wherein:
the curved surface dividing unit further includes an initial fine dividing unit that performs fine division of 1 or more levels before the fine division level determining unit determines the fine division level of each curved surface patch constituting the object.
15. The curved surface image processing apparatus according to claim 4, wherein:
the curved surface dividing unit further includes a maximum subdivision level determination unit configured to determine in advance a maximum subdivision level of each curved surface patch constituting the object.
16. The curved surface image processing apparatus according to claim 15, wherein:
the maximum subdivision level determination unit determines the maximum subdivision level of each surface patch based on the ratio of the distance between the plane spanned by the control points lying on the surface patch and the control points not lying on the surface patch, among the control points defining the shape of each surface patch constituting the object, to the length of a diagonal line connecting the control points lying on the surface patch.
17. The curved surface image processing apparatus according to claim 15, wherein:
the maximum subdivision level determination unit calculates the length of a line connecting control points lying on the curved surface patch among the control points defining the shape of each curved surface patch constituting the object, calculates the sum of the distances between adjacent control points for the control points lying between those end control points, and determines the maximum subdivision level of each curved surface patch according to the ratio of the calculated sum of distances to the length of the line.
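As an illustrative aside: the ratio in claim 17 is the classic flatness estimate for a Bézier boundary curve, comparing the control-polygon length against the chord between its endpoints (the ratio is 1 exactly when the curve is a straight line). The mapping from ratio to level below (logarithmic, with a cap) is an assumption for illustration, not the patented rule:

```python
import math

def max_subdivision_level(control_points, base=2.0, cap=8):
    """Estimate a maximum subdivision level for one boundary curve of a
    patch from the ratio of control-polygon length to chord length.
    A ratio near 1 means the curve is nearly flat and needs no splits.
    `base` and `cap` are illustrative tuning parameters."""
    chord = math.dist(control_points[0], control_points[-1])
    polygon = sum(math.dist(a, b)
                  for a, b in zip(control_points, control_points[1:]))
    if chord == 0:
        return cap  # degenerate (retracted) boundary: be conservative
    ratio = polygon / chord  # >= 1 by the triangle inequality
    level = math.ceil(math.log(ratio, base)) if ratio > 1 else 0
    return min(level, cap)
```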
18. The curved surface image processing apparatus according to any one of claims 4 to 17, wherein:
the surface patch constituting the object is a rational Bézier surface.
19. The curved surface image processing apparatus according to claim 5, wherein:
in the fine division level determined by the fine division level determination unit, each subdivision of a curved surface patch in both the 1st axis and 2nd axis directions, or a single subdivision in either the 1st axis or the 2nd axis direction alone, counts as 1 level, the fine division level being the number of such subdivisions.
20. The curved surface image processing apparatus according to claim 1, wherein:
the curved surface image processing apparatus further includes a normal line calculation unit that calculates a normal at each control point using the rational Bézier control point data of the rational Bézier surface,
the normal calculation unit includes: a selection unit that, when calculating normals at the first to fourth control points located at the four corners of the surface patch, selects two control points adjacent to the first control point that is the normal calculation target; and
a calculation unit that calculates a vector from the first control point to each of the two adjacent control points, calculates the outer product of the two calculated vectors, and takes the normalized outer product as the normal at the first control point.
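Illustratively, this is the usual corner-normal construction for a Bézier patch: take the two control-polygon edges leaving the corner and normalize their cross product. A self-contained sketch (the argument order, i.e. which adjacent control point is taken as the u-direction and which as the v-direction, fixes the normal's orientation):

```python
def normal_at_corner(p_corner, p_adj_u, p_adj_v):
    """Normal at a corner control point of a Bézier patch: the normalized
    cross product of the two edge vectors toward the adjacent control
    points in the u and v parameter directions."""
    ux, uy, uz = (p_adj_u[i] - p_corner[i] for i in range(3))
    vx, vy, vz = (p_adj_v[i] - p_corner[i] for i in range(3))
    # cross product u x v
    nx = uy * vz - uz * vy
    ny = uz * vx - ux * vz
    nz = ux * vy - uy * vx
    length = (nx * nx + ny * ny + nz * nz) ** 0.5
    if length == 0:
        # coincident (retracted) control points: the claims below handle
        # this by selecting a different, non-retracted neighbour
        raise ValueError("degenerate corner")
    return (nx / length, ny / length, nz / length)
```

Swapping the two adjacent points flips the resulting normal, which is why the selection rule in the following claims matters.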
21. The curved surface image processing apparatus according to claim 20, wherein:
the selection unit, when calculating normals at the first to fourth control points located at the four corners of the patch, and when at least one of the second to fourth control points adjacent to the first control point has been moved to a retracted position and thus has the same coordinates as the first control point, selects two control points that are adjacent to the first control point to be the normal calculation target and whose coordinates differ from those of the first control point,
the calculating unit calculates a vector from the first control point to each of the two control points selected by the selecting unit, calculates the outer product of the two vectors, and takes the normalized outer product as the normal at the first control point.
22. The curved surface image processing apparatus according to claim 20, wherein:
the selection unit, when calculating normals at the first to fourth control points located at the four corners of the patch, skips any retracted control point adjacent to the first control point that is the normal calculation target, and instead selects a non-retracted control point adjacent to that control point.
23. The curved surface image processing apparatus according to claim 20, wherein:
the selection unit further selects another control point when the angle formed by the two normal vectors calculated by the calculation unit is equal to or smaller than a predetermined angle.
24. The curved surface image processing apparatus according to claim 20, wherein:
the selection unit does not select a control point whose distance from the first control point to be a normal calculation target is equal to or less than a predetermined distance.
25. The curved surface image processing apparatus according to claim 20, wherein:
when calculating the normals of the control points at the four corners of the Bézier surface, the selection unit compares the coordinates of a first control point (P00) that is the normal calculation target among the four corner control points with the coordinates of the second to fourth control points adjacent to it, and selects the two adjacent control points (P01, P10) when all the coordinates differ,
the calculating unit calculates a vector from the first control point (P00) to each of the two adjacent control points (P01, P10), calculates the outer product of the two vectors, and takes the normalized outer product as the normal at that control point,
the selection unit selects a nearby non-retracted control point when the first control point (P00) has the same coordinates as at least one of the two adjacent control points (P01, P10),
the calculation unit calculates a vector from the first control point to the nearby control point and a vector from the first control point to whichever of the two control points (P01, P10) is not retracted, calculates the outer product of the two calculated vectors, and takes the normalized outer product as the normal at the control point.
26. A curved surface image processing method for rendering a 3-dimensional object on a screen using NURBS data as shape data of the 3-dimensional object, characterized in that: comprises
a data conversion step of performing parameter conversion on NURBS data formed of NURBS curves and NURBS surfaces to convert the NURBS data into rational Bézier control point data formed of rational Bézier curves and rational Bézier surfaces;
a curved surface dividing step of finely dividing a rational Bézier surface patch formed by the rational Bézier control point data converted in the data conversion step into a plurality of curved surface patches; and
a rendering step of rendering the 3-dimensional object with the surface patch;
the NURBS data consists of a control point sequence and a knot vector,
the data conversion step includes:
a knot insertion step of inserting knots into the knot vector and recomputing the control point sequence using a knot insertion algorithm; and
a control point adjusting step of deleting unnecessary control points from the control points included in the control point sequence generated by the knot insertion step.
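As an illustrative aside: knot insertion of this kind is typically Boehm's algorithm. Each inserted knot refines the control point sequence without changing the curve, and raising every interior knot to multiplicity equal to the degree leaves the curve expressed as a chain of Bézier segments, which is the conversion the claim describes. A minimal non-rational curve sketch (the patent operates on rational, weighted data; weights are omitted here for brevity):

```python
def insert_knot(control_points, knots, degree, u):
    """Insert parameter value u once into a B-spline curve (Boehm's
    algorithm). Returns the refined control point list and knot vector;
    the shape of the curve is unchanged."""
    # knot span k: knots[k] <= u < knots[k+1]
    k = max(i for i in range(len(knots) - 1) if knots[i] <= u)
    new_pts = []
    for i in range(len(control_points) + 1):
        if i <= k - degree:
            new_pts.append(control_points[i])       # unaffected head
        elif i <= k:
            # blend P[i-1] and P[i] with Boehm's alpha
            a = (u - knots[i]) / (knots[i + degree] - knots[i])
            p_prev, p_cur = control_points[i - 1], control_points[i]
            new_pts.append(tuple((1 - a) * p + a * c
                                 for p, c in zip(p_prev, p_cur)))
        else:
            new_pts.append(control_points[i - 1])   # shifted tail
    return new_pts, knots[:k + 1] + [u] + knots[k + 1:]
```

The subsequent "control point adjusting step" would then discard control points made redundant by the segmentation.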
27. The curved surface image processing method according to claim 26, wherein:
the curved surface dividing step further includes: an area calculation step of calculating a signed area of a 2-dimensional figure formed by perspective transformation using the rational Bézier control point data defining the shape of each curved surface patch constituting the object; and
a detection step of detecting, based on the value of the signed area, whether or not the surface patch is a surface patch forming a contour edge of the object's contour portion.
28. The curved surface image processing method according to claim, wherein:
the curved surface dividing step may further include a fine division level determining step of determining the fine division level of the surface patch based on the result of the detection in the detecting step of whether the surface patch forms the contour edge and on the value of the signed area on the screen of the surface patch calculated in the area calculating step.
29. The curved surface image processing method according to claim 26, wherein:
further comprising a normal calculation step of calculating a normal at each control point using the rational Bézier control point data of the rational Bézier surface,
the normal calculation step includes: a selection step of selecting two control points adjacent to a first control point that is the normal calculation target, when calculating normals at the first to fourth control points located at the four corners of the surface patch; and
a calculating step of calculating a vector from the first control point to each of the two adjacent control points, calculating the outer product of the two calculated vectors, and taking the normalized outer product as the normal at the first control point.
CNB2003101143697A 2002-11-12 2003-11-12 Curve image processor and its processing method Expired - Fee Related CN100341031C (en)

Applications Claiming Priority (9)

Application Number Priority Date Filing Date Title
JP2002328052 2002-11-12
JP328052/2002 2002-11-12
JP2002329442 2002-11-13
JP329442/2002 2002-11-13
JP329441/2002 2002-11-13
JP329443/2002 2002-11-13
JP2002329443 2002-11-13
JP2002329441 2002-11-13
JP2003380341A JP4464657B2 (en) 2002-11-12 2003-11-10 Curved image processing apparatus and curved image processing method

Publications (2)

Publication Number Publication Date
CN1499447A CN1499447A (en) 2004-05-26
CN100341031C true CN100341031C (en) 2007-10-03

Family

ID=32719582

Family Applications (1)

Application Number Title Priority Date Filing Date
CNB2003101143697A Expired - Fee Related CN100341031C (en) 2002-11-12 2003-11-12 Curve image processor and its processing method

Country Status (2)

Country Link
JP (1) JP4464657B2 (en)
CN (1) CN100341031C (en)

Families Citing this family (23)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7038697B2 (en) * 2003-02-25 2006-05-02 Microsoft Corporation Color gradient paths
US7646384B2 (en) * 2005-03-31 2010-01-12 Siemens Product Lifecycle Management Software Inc. System and method to determine a simplified representation of a model
CN101441781B (en) * 2007-11-23 2011-02-02 鸿富锦精密工业(深圳)有限公司 Curved surface overturning method
US8643644B2 (en) * 2008-03-20 2014-02-04 Qualcomm Incorporated Multi-stage tessellation for graphics rendering
CN101867703B (en) * 2009-04-16 2012-10-03 辉达公司 System and method for image correction
CN101692257B (en) * 2009-09-25 2012-05-16 华东理工大学 Method for registering complex curved surface
JP4955075B2 (en) * 2010-01-20 2012-06-20 本田技研工業株式会社 Design support system and design support program
CN102609987B (en) * 2012-01-09 2014-12-17 北京电子科技学院 Method and system for drawing curved surface by calculating all real roots and multiple numbers of zero-dimensional trigonometric polynomial system
US9558573B2 (en) * 2012-12-17 2017-01-31 Nvidia Corporation Optimizing triangle topology for path rendering
KR102292923B1 (en) 2014-12-15 2021-08-24 삼성전자주식회사 3d rendering method and apparatus
CN105376555A (en) * 2015-12-11 2016-03-02 重庆环漫科技有限公司 Stereo fusion playing method
CN105631817A (en) * 2015-12-23 2016-06-01 王蕾 Subdivision rational surface (forward) depixeling technology
CN105676290B (en) * 2016-04-03 2017-10-13 北京工业大学 Geological data 3 D displaying method based on surface subdivision
JP6782108B2 (en) * 2016-07-19 2020-11-11 大成建設株式会社 Visible rate calculation device
WO2018054496A1 (en) 2016-09-23 2018-03-29 Huawei Technologies Co., Ltd. Binary image differential patching
CN108428230B (en) * 2018-03-16 2020-06-16 青岛海信医疗设备股份有限公司 Method, device, storage medium and equipment for processing curved surface in three-dimensional virtual organ
CN108399942A (en) * 2018-03-16 2018-08-14 青岛海信医疗设备股份有限公司 Display methods, device, storage medium and the equipment of three-dimensional organ
CN108763668B (en) * 2018-05-15 2022-03-01 杭州电子科技大学 Gear model region parameterization method based on subdivision technology and boundary replacement
KR102115945B1 (en) * 2019-11-26 2020-05-27 성균관대학교 산학협력단 Geometry extraction method from noise barrier tunnels using quad meshes, computer-readable recording medium on which the method is stored, and computer program stored on the medium
CN111443864B (en) * 2020-04-14 2023-03-07 重庆赋比兴科技有限公司 iOS-based curve drawing method
CN113345065B (en) * 2021-08-04 2021-11-12 康达洲际医疗器械有限公司 Curved surface image construction method and system based on directional line segments
CN116984266B (en) * 2023-09-26 2024-01-16 中江立江电子有限公司 Connector sorting device and sorting method
CN117726710B (en) * 2024-02-18 2024-06-04 粤港澳大湾区数字经济研究院(福田) Curve dispersion-based drawing method and related device

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1173000A * 1996-05-10 1998-02-11 Sony Computer Entertainment Inc. Improvements in methods and apparatus for recording and information processing and recording medium therefor
CN1272933A * 1998-04-09 2000-11-08 Sony Computer Entertainment Inc. Image processing apparatus and image processing method, program providing medium, and data providing medium


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
"Real-Time Rendering of Trimmed Surfaces", Alyn Rockwood, Kurt Heaton, Tom Davis, Computer Graphics, Vol. 23, No. 3, 1989 *

Also Published As

Publication number Publication date
JP2004178576A (en) 2004-06-24
JP4464657B2 (en) 2010-05-19
CN1499447A (en) 2004-05-26

Similar Documents

Publication Publication Date Title
CN100341031C (en) Curve image processor and its processing method
CN1293518C (en) Method and apparatus for triangle rasterization with clipping and wire-frame mode support
CN1194318C (en) Object region information recording method and object region information forming device
CN1178458C (en) Image coding device image decoding device, image coding method, image decoding method and medium
CN1168322C (en) Image coding/decoding method and recorded medium on which program is recorded
CN1816825A (en) Signal processing device, signal processing method, program, and recording medium
CN1845178A (en) Image plotting method and image plotting equipment using omnidirectional different pattern mapping
CN1465036A (en) Information processor
CN101046883A (en) Graphics-rendering apparatus
CN1645241A (en) Imaging apparatus, method and device for processing images
CN101038625A (en) Image processing apparatus and method
CN1947152A (en) Image processing apparatus and method, and recording medium and program
CN1684492A (en) Image dictionary creating apparatus, coding apparatus, image dictionary creating method
CN1813267A (en) Signal processing device, signal processing method, program, and recording medium
CN1251724A (en) Image processor, image data processor and variable length encoder/decoder
CN1200571C (en) Orthogonal transformation, inverse orthogonal transformation method and device, and encoding and decoding method and device
CN1645415A (en) Rendering device and rendering method
CN1957616A (en) Moving picture encoding method and moving picture decoding method
CN1526098A (en) Method and system for output of data related to two- or three-dimensional geometrical entities
CN1083954A (en) The input-output unit and the method thereof of lteral data, lexicon-illustration data
CN1618087A (en) Three dimentional shape displaying program, 3-D shape displaying method and 3-D shape displaying device
CN1885431A (en) Semiconductor memory device and control method for the semiconductor memory device
CN1324531C (en) Image processor and image processing method
CN1289509A (en) Data processing method, data processor, and program recorded medium
CN1304617A (en) Interpolation processor and recording medium recording interpolation processing program

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
C17 Cessation of patent right
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20071003

Termination date: 20101112