JPH08263696A - Three-dimensional object model generating method

Three-dimensional object model generating method

Info

Publication number
JPH08263696A
Authority
JP
Japan
Prior art keywords
point
dimensional
camera
cameras
remarked
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
JP7041729A
Other languages
Japanese (ja)
Inventor
Mikio Ikuta
幹雄 生田
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Meidensha Corp
Meidensha Electric Manufacturing Co Ltd
Original Assignee
Meidensha Corp
Meidensha Electric Manufacturing Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Meidensha Corp, Meidensha Electric Manufacturing Co Ltd filed Critical Meidensha Corp
Priority to JP7041729A priority Critical patent/JPH08263696A/en
Publication of JPH08263696A publication Critical patent/JPH08263696A/en
Withdrawn legal-status Critical Current

Landscapes

  • Processing Or Creating Images (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

PURPOSE: To generate an accurate model with few operations and a small quantity of data, by designating points of interest on the images taken by plural cameras, designating the solid shape of the object, and calculating its vertex coordinates. CONSTITUTION: For the three-dimensional position of a point on the object 1, a point on the camera images taken by, for instance, two cameras C0 and C1 at different positions is designated (the point at which the two images coincide in three-dimensional space is called the point of interest). This point of interest is (U_C0, V_C0) on the image of camera C0 and (U_C1, V_C1) on the image of camera C1. If the equations of the straight lines that pass through the origins O of cameras C0 and C1 and have direction vectors (a_C0, b_C0, c_C0) and (a_C1, b_C1, c_C1) could be obtained without error, the three-dimensional position of the point at which the two lines intersect would be the three-dimensional position of the point of interest. In practice, the point at which the distance between the two lines becomes minimum is calculated and taken as the three-dimensional coordinates of the point of interest. As a result, the point closest to the true intersection is obtained.

Description

Detailed Description of the Invention

[0001]

FIELD OF THE INVENTION: The present invention relates to a method for creating a three-dimensional object model from a plurality of images.

[0002]

Description of the Related Art: At present, when a three-dimensional object model is created with three-dimensional CAD or the like, the model is built by entering three-dimensional coordinates taken from a given drawing. Entering the three-dimensional coordinates is laborious, and when three-dimensional information such as the position or height of an object cannot be determined from the drawing, no model can be created at all. For this reason, methods have also been proposed that compute the three-dimensional coordinates of object vertices and the like from a plurality of images (video frames) and create a three-dimensional object model from them. One such proposal is "Structure Recovery from Multiple Images by Directly Estimating the Intersections in 3-D Space", IEICE Trans. Inf. & Syst., Vol. E77-D, No. 9, September 1994, pp. 966-972.

[0003]

A method of creating a three-dimensional object model from images generally proceeds as follows (a minimal sketch of this pipeline is given after the list):
(1) Read a plurality of images taken from different viewpoints into a computer.
(2) Enter parameters giving the position and orientation of each camera into the computer, or compute these parameters from the images.
(3) Using the pixel luminance signals of the plurality of images, automatically find and match points that correspond to the same point in three-dimensional space.
(4) Compute three-dimensional coordinates by the principle of triangulation.
(5) Connect the computed three-dimensional points to create lines and surfaces in three-dimensional space.
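For orientation, the following Python sketch walks through steps (1) to (5) with NumPy. It is a minimal illustration, not the method of any particular prior-art system: the camera data are hard-coded stand-ins, step (3) is reduced to an assumed matched pixel pair, and step (4) triangulates by a least-squares ray intersection.

```python
import numpy as np

# Steps (1)-(2): images and camera parameters (position, orientation
# matrix, focal length) would normally be loaded from files; the values
# here are hard-coded stand-ins.
cam0 = {"pos": np.zeros(3), "R": np.eye(3), "f": 1.0}
cam1 = {"pos": np.array([1.0, 0.0, 0.0]), "R": np.eye(3), "f": 1.0}

# Step (3): stand-in for automatic luminance-based matching; assume one
# corresponding image point (U, V) was found in each image.
uv0 = np.array([0.10, 0.05])
uv1 = np.array([-0.08, 0.05])

# Step (4): triangulate by intersecting the viewing rays
# p = pos + s * R @ (U, V, f) in the least-squares sense.
d0 = cam0["R"] @ np.append(uv0, cam0["f"])
d1 = cam1["R"] @ np.append(uv1, cam1["f"])
A = np.stack([d0, -d1], axis=1)            # 3x2 system in (s, t)
b = cam1["pos"] - cam0["pos"]
(s, t), *_ = np.linalg.lstsq(A, b, rcond=None)
point = 0.5 * ((cam0["pos"] + s * d0) + (cam1["pos"] + t * d1))
print("triangulated point:", point)

# Step (5): connecting such points into lines and faces is the modelling
# stage and is not shown here.
```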

[0004]

Problems to be Solved by the Invention: When a three-dimensional object model is created by the method described above, two difficulties arise, as illustrated in FIG. 7. When the two viewpoints are close together, even a slight error in the matching step (3) has a large effect on the computed depth; when the two viewpoints are far apart, the luminance information of the two images differs so greatly that automatic matching becomes difficult. Errors in the three-dimensional coordinates can also arise from errors in the parameters describing the camera positions and orientations.

[0005]

Furthermore, in automatic matching it is very difficult to distinguish, by image processing, the points that are needed (for example, the vertices of an object) from those that are not (for example, the background). The necessary minimum of three-dimensional coordinates therefore cannot be computed in isolation, and the amount of data becomes enormous. In addition, occlusion (faces or vertices hidden behind other objects, or behind the object itself) can cause the data for necessary points to be missing. The method of the proposal cited above solves the former problem, that is, the matching can be performed, but since it computes three-dimensional coordinates at an extremely large number of points, it does not solve the latter problem of enormous data volume. Moreover, the proposal yields only point information; it contains no surface or line information.

[0006]

In view of the above problems, an object of the present invention is to provide a three-dimensional object model creation method that can match images taken from widely separated positions and that can create an accurate model with few operations and a small amount of data.

[0007]

Means for Solving the Problems: To achieve the above object, the present invention is characterized by the following constitution. (1) Images of an object taken by a plurality of cameras are loaded into a computer together with camera parameters indicating the positions and orientations of the cameras; points of interest are designated on the images of the cameras and the three-dimensional coordinates of each point of interest are computed; and the solid shape of the object is designated and the vertex coordinates of the object are computed. (2) In computing the three-dimensional coordinates of a point of interest, the point of interest is taken to be the position at which the straight lines connecting each camera's origin to the designated point on its image come closest to one another.

[0008]

Operation: The point of interest seen in the images of a plurality of cameras can be located at the position of closest approach of the viewing lines, without requiring the lines to intersect exactly. Moreover, because only the images, the camera parameters, and the designated solid shape are used, only the minimum necessary vertex coordinates are computed, so the minimum necessary three-dimensional data is obtained simply and with little work.

[0009]

EMBODIMENT: An embodiment of the present invention is now described with reference to FIGS. 1 to 6. As shown in FIG. 1, images of the object 1 are first taken from different positions by cameras c0 and c1. At the same time, the three-dimensional position and orientation of each camera (referred to as the camera parameters) are recorded. Let these camera parameters be (x0, y0, z0, ω0, φ0, κ0) and (x1, y1, z1, ω1, φ1, κ1), where (x0, y0, z0) and (x1, y1, z1) are the positions of cameras c0 and c1, and (ω0, φ0, κ0) and (ω1, φ1, κ1) give their orientations.

[0010]

The angles ω, φ, and κ deserve a brief explanation. FIG. 2 illustrates the camera rotation parameters. Let u0, v0, w0 be a camera coordinate system whose axes coincide with the x, y, and z directions, and let ω be the rotation about the u0 axis: rotating v0 and w0 by ω yields the coordinate system u1, v1, w1 (with u0 = u1). Next, for the system u1, v1, w1 obtained by the rotation about u0, let φ be the rotation about the v1 axis: rotating u1 and w1 by φ yields u2, v2, w2 (with v1 = v2). Finally, for the system u2, v2, w2 obtained by the rotation about v1, let κ be the rotation about the w2 axis: rotating u2 and v2 by κ yields u, v, w (with w2 = w). In this way, the coordinate system u, v, w, expressed with respect to the reference coordinate system x, y, z, gives the coordinate system of the camera as it actually took the image.

[0011]

The camera parameters (position and orientation) obtained as described above and the two images taken by cameras c0 and c1 (for example, p0 and p1 in FIG. 4) are read into the computer.

[0012]

Next, for the three-dimensional position of a point on the object 1, the corresponding point is designated on the camera image planes, for example on the images taken by the two cameras c0 and c1 at different positions (the point at which the two images coincide in three-dimensional space is called the point of interest). Let this point of interest be (U_c0, V_c0) on the image plane of camera c0 and (U_c1, V_c1) on the image plane of camera c1. As shown in FIG. 3, the image plane is aligned with the u, v plane of the camera coordinate system, the center of the image plane lies on the w axis, and the focal length f of the camera is taken along the w axis from the camera origin O. When the occlusion described above occurs, the operator designates the approximate position on the image plane. From these data, the three-dimensional direction vector from the camera origin (center O) to the designated point of interest on the image plane is computed for each of the cameras c0 and c1: (a_c0, b_c0, c_c0) and (a_c1, b_c1, c_c1). These direction vectors are obtained by the following equation [Equation 1].

[0013]

[Equation 1]

$$\begin{pmatrix} a \\ b \\ c \end{pmatrix} = R \begin{pmatrix} U \\ V \\ f \end{pmatrix}$$

That is, the three-dimensional direction vector (a, b, c) is obtained from the values U, V, and f shown in FIG. 3 by means of the matrix R computed from the rotation parameters ω, φ, κ that represent the orientation of the camera.

[0014]

The matrix R derived from the rotation parameters described above is obtained by the following equation [Equation 2].

[Equation 2]

As a result, the matrix representing the orientation is R = Rω · Rφ · Rκ, whose expanded form is shown in [Equation 3].

[Equation 3]
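As a concrete illustration of [Equation 1] through [Equation 3], the Python sketch below composes R from ω, φ, κ and turns a designated image point (U, V) into a viewing-ray direction vector. The sign conventions of the elementary rotations are assumed (standard right-handed rotations), since the equation images themselves are not reproduced in this text.

```python
import numpy as np

def rotation_matrix(omega, phi, kappa):
    """R = R_omega @ R_phi @ R_kappa, rotations about the u, v, w axes.
    Sign conventions are assumed (standard right-handed rotations)."""
    co, so = np.cos(omega), np.sin(omega)
    cp, sp = np.cos(phi), np.sin(phi)
    ck, sk = np.cos(kappa), np.sin(kappa)
    R_omega = np.array([[1, 0, 0], [0, co, -so], [0, so, co]])   # about u
    R_phi   = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])   # about v
    R_kappa = np.array([[ck, -sk, 0], [sk, ck, 0], [0, 0, 1]])   # about w
    return R_omega @ R_phi @ R_kappa

def direction_vector(U, V, f, omega, phi, kappa):
    """[Equation 1]: (a, b, c) = R @ (U, V, f), the direction from the
    camera origin O through the designated image point (U, V)."""
    return rotation_matrix(omega, phi, kappa) @ np.array([U, V, f])

# Example: camera rotated 10 degrees about the vertical axis.
d = direction_vector(U=0.12, V=-0.03, f=1.0,
                     omega=0.0, phi=np.radians(10), kappa=0.0)
print("direction vector (a, b, c):", d)
```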

[0015]

Here, the relationship between the coordinate system of the object 1 (taken as XYZ in FIG. 2) and the camera coordinate system (u, v, w) is expressed by the following equation [Equation 4], in which (x0, y0, z0) represents the camera position.

[Equation 4]

The equation that maps this camera coordinate system (u, v, w) onto the two-dimensional coordinate system (U, V) of the camera image is then [Equation 5].
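A short Python sketch of these two mappings follows. Because the images for [Equation 4] and [Equation 5] are not reproduced here, the standard pinhole forms are assumed: a world point is expressed in camera coordinates through the camera position and orientation matrix R, and the perspective projection onto the image plane is U = f·u/w, V = f·v/w.

```python
import numpy as np

def world_to_camera(p_world, cam_pos, R):
    """[Equation 4] (assumed form): express a world point in camera
    coordinates (u, v, w) by translating to the camera position and
    rotating with the transpose of the orientation matrix R."""
    return R.T @ (np.asarray(p_world) - np.asarray(cam_pos))

def camera_to_image(p_cam, f):
    """[Equation 5] (assumed form): perspective projection onto the
    image plane at focal length f, i.e. U = f*u/w, V = f*v/w."""
    u, v, w = p_cam
    return np.array([f * u / w, f * v / w])

R = np.eye(3)                      # camera aligned with the world axes
p_cam = world_to_camera([0.5, 0.2, 3.0], cam_pos=[0.0, 0.0, 0.0], R=R)
print("image point (U, V):", camera_to_image(p_cam, f=1.0))
```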

[0016]

[Equation 5]

The equation of the straight line that passes through the origin O of each camera and has direction vector (a_c0, b_c0, c_c0) or (a_c1, b_c1, c_c1) is given by the following expression [Equation 6].

[Equation 6]

$$x = x_0 + s\,a_{c0}, \quad y = y_0 + s\,b_{c0}, \quad z = z_0 + s\,c_{c0}$$
$$x = x_1 + t\,a_{c1}, \quad y = y_1 + t\,b_{c1}, \quad z = z_1 + t\,c_{c1}$$

Therefore, if the equations of these two straight lines could be obtained without error, the three-dimensional position of the point at which they intersect would be the three-dimensional position of the point of interest. In practice, however, the two lines do not necessarily intersect because of errors; in that case the point at which the distance between the two lines becomes minimum is computed and taken as the three-dimensional coordinates of the point of interest.

[0017]

To compute the minimum distance between these two straight lines, let D denote the distance between the two lines of [Equation 6]. Then

$$D^2 = (x_0 + s\,a_{c0} - x_1 - t\,a_{c1})^2 + (y_0 + s\,b_{c0} - y_1 - t\,b_{c1})^2 + (z_0 + s\,c_{c0} - z_1 - t\,c_{c1})^2$$

To find the s and t that minimize this expression, the simultaneous equations $\partial D^2 / \partial s = 0$ and $\partial D^2 / \partial t = 0$ are solved. Substituting the resulting s and t into the respective line equations gives the points $(x_0', y_0', z_0')$ and $(x_1', y_1', z_1')$, and the point whose distance from the two lines is minimum is their midpoint $((x_0' + x_1')/2,\ (y_0' + y_1')/2,\ (z_0' + z_1')/2)$. As a result, the point that best approximates the intersection is obtained.

[0018]

The three-dimensional coordinate designation procedure described above, namely specifying the camera parameters, specifying each camera's point of interest on the image plane, and determining the three-dimensional coordinates of the point of interest, proceeds as shown in FIG. 5 when applied to the two images of a rectangular parallelepiped shown in FIG. 4. That is, in FIG. 5, the two images p0 and p1 are taken and the camera parameters are recorded at the same time. The images p0 and p1 and the camera parameters are then read into the computer, and the points of interest 0, 1, 2, and 3 (see FIG. 4) are designated in each image. The three-dimensional coordinates of the points of interest are computed, and the vertex coordinates of the three-dimensional model are computed from them. In this case, the information that the model is a rectangular parallelepiped is used to compute the three-dimensional coordinates of the remaining four vertices beyond the designated points of interest. The data of the three-dimensional model created from these coordinates is then saved; FIG. 6 shows an example of a model created in this way. Although the example above uses a rectangular parallelepiped, models of other polyhedra, pyramids, prisms, and the like can be created in the same way, and more complex models can be built as combinations of polyhedra, pyramids, prisms, and so on. Furthermore, although this embodiment has been described with two image planes, a model can be created by the same processing with more than two image planes. (A sketch of this overall flow is given below.)
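As a rough sketch of the flow of FIG. 5, the Python fragment below ties the earlier pieces together for the rectangular-parallelepiped example. The triangulated vertices are hard-coded stand-ins for the output of the ray-intersection step above, and the completion rule shown (one corner plus its three adjacent vertices spanning the box) is one simple possibility; the patent itself only states that the rectangular-parallelepiped information is taken into account.

```python
import numpy as np

def complete_parallelepiped(v0, v1, v2, v3):
    """Assumed completion rule: given one corner v0 and its three
    adjacent vertices, the edge vectors e1, e2, e3 span the box and
    determine the remaining four corners."""
    e1, e2, e3 = v1 - v0, v2 - v0, v3 - v0
    return [v0 + e1 + e2, v0 + e1 + e3, v0 + e2 + e3, v0 + e1 + e2 + e3]

# Points of interest 0..3 as triangulated by the ray-intersection step
# (made-up values standing in for nearest_point_between_lines output).
v = [np.array([0.0, 0.0, 0.0]), np.array([2.0, 0.0, 0.0]),
     np.array([0.0, 1.0, 0.0]), np.array([0.0, 0.0, 3.0])]

# The box constraint supplies the remaining four vertices; the eight
# points together form the model data to be saved.
vertices = v + complete_parallelepiped(*v)
for i, p in enumerate(vertices):
    print(f"vertex {i}: {p}")
```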

[0019]

Effects of the Invention: As described above, according to the present invention, even when there are errors in the camera parameters or in the points of interest on the image planes, the three-dimensional coordinates can be computed without difficulty, because the viewing lines need not intersect exactly at the point of interest: lines that merely pass near one another can be approximated by their point of closest approach. Furthermore, the object model is not a set of points; its data can be created and saved as, for example, a shape such as a rectangular parallelepiped together with the positions and orientations of its edges, so that a small amount of data suffices, obtained with simple operations and little work.

Brief Description of the Drawings

FIG. 1 is a layout diagram showing an example of camera positions according to the method of the present invention.

FIG. 2 is an explanatory diagram of the camera rotation parameters.

FIG. 3 is an explanatory diagram of the three-dimensional position of a point of interest.

FIG. 4 is a diagram illustrating the image planes.

FIG. 5 is a processing flowchart.

FIG. 6 is a diagram showing an example of an actually created object model.

FIG. 7 is an explanatory diagram of errors.

Explanation of Symbols

c0, c1: cameras
u0, u1, u2, u, v0, v1, v2, v, w0, w1, w2, w: coordinate systems defined by the camera rotation parameters
UVW: camera coordinate system
XYZ: object coordinate system
(X0, Y0, Z0), (X1, Y1, Z1): camera positions

Claims (2)

[Claims]

1. A three-dimensional object model creation method in which images of an object taken by a plurality of cameras are loaded into a computer together with camera parameters indicating the positions and orientations of the plurality of cameras; points of interest on the images of the plurality of cameras are designated and the three-dimensional coordinates of each point of interest are computed; and the solid shape of the object is designated and the vertex coordinates of the object are computed.
2. The three-dimensional object model creation method according to claim 1, wherein, in computing the three-dimensional coordinates of a point of interest, the point of interest is taken to be the position at which the straight lines connecting each camera origin to the corresponding point of interest on its image come closest to one another.
JP7041729A 1995-03-01 1995-03-01 Three-dimensional object model generating method Withdrawn JPH08263696A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP7041729A JPH08263696A (en) 1995-03-01 1995-03-01 Three-dimensional object model generating method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
JP7041729A JPH08263696A (en) 1995-03-01 1995-03-01 Three-dimensional object model generating method

Publications (1)

Publication Number Publication Date
JPH08263696A true JPH08263696A (en) 1996-10-11

Family

ID=12616522

Family Applications (1)

Application Number Title Priority Date Filing Date
JP7041729A Withdrawn JPH08263696A (en) 1995-03-01 1995-03-01 Three-dimensional object model generating method

Country Status (1)

Country Link
JP (1) JPH08263696A (en)

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH11183142A (en) * 1997-12-19 1999-07-09 Fuji Xerox Co Ltd Method and apparatus for picking up three-dimensional image
KR100584536B1 (en) * 1999-09-29 2006-05-30 삼성전자주식회사 Image processing apparatus for image communication
JP2011165117A (en) * 2010-02-15 2011-08-25 Nec System Technologies Ltd Apparatus, method and program for processing image
JP2015087846A (en) * 2013-10-29 2015-05-07 山九株式会社 Three-dimensional model generation system
JP2017036999A (en) * 2015-08-10 2017-02-16 日本電信電話株式会社 Shape detecting device, shape detecting method, and program
CN110874606A (en) * 2018-08-31 2020-03-10 深圳中科飞测科技有限公司 Matching method, three-dimensional morphology detection method and system thereof, and non-transitory computer readable medium
CN110874606B (en) * 2018-08-31 2024-07-19 深圳中科飞测科技股份有限公司 Matching method, three-dimensional morphology detection method, system thereof and non-transitory computer readable medium

Legal Events

Date Code Title Description
A300 Application deemed to be withdrawn because no request for examination was validly filed

Free format text: JAPANESE INTERMEDIATE CODE: A300

Effective date: 20020507