JPH03196373A - Distance image generating method for both-eye stereoscopy - Google Patents

Distance image generating method for both-eye stereoscopy

Info

Publication number
JPH03196373A
Authority
JP
Japan
Prior art keywords
equation
distance image
distance
image
inflection points
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
JP1337663A
Other languages
Japanese (ja)
Inventor
Tomoko Segawa
智子 瀬川
Yuji Nakagawa
祐治 中川
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Fujitsu Ltd
Original Assignee
Fujitsu Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Fujitsu Ltd filed Critical Fujitsu Ltd
Priority to JP1337663A priority Critical patent/JPH03196373A/en
Publication of JPH03196373A publication Critical patent/JPH03196373A/en
Pending legal-status Critical Current

Landscapes

  • Processing Or Creating Images (AREA)

Abstract

PURPOSE: To obtain a distance image free of unevenness by deriving the equation of each surface from inflection points in the left and right images and computing the three-dimensional coordinates of the inflection points from that surface equation.

CONSTITUTION: An area division part 22 divides the images picked up by cameras 20 and 21 into areas of uniform gradation and pattern (texture). A left label image 23 and a right label image 24 are supplied to an inflection-point extraction part 25, which traces the contour of each surface in the label images to detect inflection points on the contour; a correspondence part 26 then matches the inflection points on the contours of each surface between the left and right images. A surface-equation calculation part 27 uses a least-squares-error calculation part 28 to compute each surface equation by the least-squares-error method. Because the surface equation is derived from the inflection points of the left and right images and the three-dimensional coordinates of the inflection points are obtained from it, no distance error arises from the quantization error of the parallax; the distance information of the distance image is therefore accurate, and a distance image free of unevenness is obtained.

Description

[Detailed Description of the Invention]

[Industrial Application Field]

The present invention relates to a distance-image generation method for binocular stereoscopy, in which binocular stereoscopy is performed with a pair of cameras and distance information between the cameras and a subject is obtained from the result of searching for corresponding points in the two images.

[Prior Art]

FIG. 7 shows a block diagram of an example of a conventional method.

In the figure, elements such as edges are extracted from the images picked up by the left camera 10 and the right camera 11 in a correspondence part 13, and the elements of the left and right images are matched against each other. A parallax image generation part 14 extracts, from the matching result, the parallax between corresponding elements of the subject in the left and right images and displays it as an image. A distance calculation part 15 computes the distance from the cameras to each element of the subject by the principle of triangulation, using the parallax and camera parameters such as the distance between the left and right cameras and the focal length. A distance image display part 16 obtains the three-dimensional coordinates of each element of the subject and displays them as an image.

[Problem to Be Solved by the Invention]

In the conventional method, a quantization error arises when the parallax between corresponding elements of the subject in the left and right images is computed. When distances are calculated from this parallax, the distance from the camera to each element of the subject contains a large error; for example, a planar region appears uneven in a three-dimensionally displayed distance image.

The present invention has been made in view of the above, and its object is to provide a distance-image generation method for binocular stereoscopy that yields a distance image whose distance information is accurate and which is free of unevenness caused by such errors.

[Means for Solving the Problem]

The distance-image generation method for binocular stereoscopy of the present invention divides each of the left and right images of a subject picked up by a pair of cameras into regions; extracts three or more inflection points on the contour of each surface obtained by the region division; derives the equation of each surface from the three or more inflection points of the corresponding surfaces in the left and right images; obtains the three-dimensional coordinates of the inflection points from the surface equations; generates vectors connecting the inflection points of each surface along its contour; and obtains a distance image in which the interior enclosed by the vectors of each surface is filled in.

[Operation]

In the method of the present invention, the equation of each surface is derived from the inflection points of the left and right images, and the three-dimensional coordinates of the inflection points are obtained from this surface equation. Consequently, no distance error due to the quantization error of the parallax arises, the distance information of the distance image is accurate, and a distance image free of unevenness is obtained.

[Embodiment]

FIG. 1 shows a block diagram of an embodiment of the method of the present invention.

In the figure, a left camera 20 and a right camera 21 are arranged a predetermined distance apart in the X direction, which is orthogonal to the Y direction taken toward the subject; the images picked up by each camera are supplied to an area division part 22.

The area division part 22 has the configuration shown in FIG. 2. An input image 31 as shown in FIG. 3(A), picked up by each of the cameras 20 and 21, is divided by a texture analysis part 32 into regions of uniform gradation and pattern (texture). A region labeling part 33 treats each region of uniform texture as one surface, assigns a different label to each surface, and outputs a label image 34 as shown in FIG. 3(B), in which the entire image has been labeled.
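The labeling step above, which assigns a distinct label to each connected region of uniform texture, can be sketched as a connected-component labeling pass. The representation (a 2-D list of per-pixel texture values) and 4-connectivity are illustrative assumptions; the patent's texture analysis itself is not reproduced here.

```python
def label_regions(image):
    """4-connected component labeling: touching pixels with the same
    value form one region, each assigned a distinct label, loosely
    mirroring the region labeling part 33.  A sketch, not the
    patent's exact routine."""
    h, w = len(image), len(image[0])
    labels = [[0] * w for _ in range(h)]
    next_label = 0
    for sy in range(h):
        for sx in range(w):
            if labels[sy][sx]:
                continue                      # already part of a region
            next_label += 1
            value, stack = image[sy][sx], [(sy, sx)]
            labels[sy][sx] = next_label
            while stack:                      # flood-fill the region
                y, x = stack.pop()
                for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                    if 0 <= ny < h and 0 <= nx < w and not labels[ny][nx] \
                            and image[ny][nx] == value:
                        labels[ny][nx] = next_label
                        stack.append((ny, nx))
    return labels
```

Each surface in the resulting label image can then be traced independently for inflection-point extraction.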

The left label image 23 and the right label image 24 shown in FIG. 1 are each supplied to an inflection-point extraction part 25, which traces the contour of each surface in the label images and extracts the inflection points on the contour. Three or more inflection points are needed here to derive the equation of each surface. An inflection point is a point where the slope of the contour changes abruptly; the change of slope is measured by the k-curvature, and a point where the absolute value of the k-curvature is at or above a fixed threshold and is a local maximum or minimum is taken as an inflection point. In this way the inflection points indicated by the X marks in FIG. 3(B) are extracted.
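The k-curvature test described above can be sketched as follows. The patent gives no exact formula, so the angle-based curvature measure, the contour representation (a closed list of points), and the parameter names are illustrative assumptions.

```python
import math

def k_curvature(contour, k=3, threshold=0.5):
    """Mark contour points whose k-curvature magnitude is at or above a
    threshold and is a local extremum, as the text describes.  The
    k-curvature here is taken as the turning angle between the vectors
    spanning k points before and after each point (an assumption)."""
    n = len(contour)
    curv = []
    for i in range(n):
        x0, y0 = contour[(i - k) % n]
        x1, y1 = contour[i]
        x2, y2 = contour[(i + k) % n]
        # angle between the incoming and outgoing k-step vectors
        a1 = math.atan2(y1 - y0, x1 - x0)
        a2 = math.atan2(y2 - y1, x2 - x1)
        d = a2 - a1
        while d <= -math.pi:        # wrap into (-pi, pi]
            d += 2 * math.pi
        while d > math.pi:
            d -= 2 * math.pi
        curv.append(d)
    points = []
    for i in range(n):
        c = curv[i]
        if abs(c) >= threshold:
            prev_c, next_c = curv[(i - 1) % n], curv[(i + 1) % n]
            # keep only local extrema of the curvature magnitude
            if abs(c) >= abs(prev_c) and abs(c) >= abs(next_c):
                points.append(i)
    return points
```

On a rectangular contour this keeps only the four corner indices, which matches the X-marked corners of FIG. 3(B).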

A correspondence part 26 matches the inflection points on the contours of each surface between the left and right images. For example, FIG. 3(D) shows the correspondence between a surface 35 of the left image and a surface 36 of the right image: inflection points 35a and 36a, 35b and 36b, 35c and 36c, and 35d and 36d are respectively matched.

A surface-equation calculation part 27 uses a least-squares-error calculation part 28 to compute the surface equations by the least-squares-error method. Suppose that a point A(X, Y) in the left camera image and a point B(X', Y') in the right camera image correspond, as shown in FIG. 4(A). From the relationship shown in the x-z plane of FIG. 4(B), the coordinates (x, y, z) of a point P viewed from the left camera, and the coordinates (x', y', z') of the same point viewed from the right camera, are expressed as follows.

Taking the camera focal length as unity, the image coordinates are X = x/z and Y = y/z ...(2), and the surface is taken to be the plane

ax + by + cz = 1 ...(1)

Substituting x = zX and y = zY into (1) gives azX + bzY + cz = 1, hence

z = 1/(aX + bY + c) ...(3)
x = zX = X/(aX + bY + c) ...(4)
y = zY = Y/(aX + bY + c) ...(5)

For the right image, with the parallax u = x − x',

x' = x − u ...(6)
y' = y ...(7)
z' = z ...(8)

Therefore

X' = x'/z' = X − u/z = X − u(aX + bY + c) = (1 − ua)X − ubY − uc ...(9), (10)

Writing the coefficients of (10) as A = 1 − ua, B = −ub, C = −uc, this becomes

X' = AX + BY + C, that is, AX + BY + C − X' = 0 ...(11)

Each corresponding inflection point supplies one instance of (11), yielding simultaneous linear equations in the three unknowns A, B, and C (equations (12) to (14)), from which A, B, and C are obtained.

The equation of the plane is therefore

((1 − A)/u)·x + (−B/u)·y + (−C/u)·z = 1 ...(15)

Next, the case in which the least-squares-error plane is obtained from n inflection points is described.

Let the plane to be found be ax + by + cz + d = 0 (with a² + b² + c² = 1), and let the inflection points be P_i = (x_i, y_i, z_i), i = 1, 2, ..., n. The distance between the point P_i and the plane is then

|ax_i + by_i + cz_i + d| ...(16)

Accordingly, it suffices to find the a, b, c, d that minimize the sum of the squared distances from the points to the plane,

E = Σ_{i=1}^{n} (ax_i + by_i + cz_i + d)² ...(17)

By Lagrange's method of undetermined multipliers, the solution a, b, c, d satisfies

∂E/∂a = λa, ∂E/∂b = λb, ∂E/∂c = λc, ∂E/∂d = 0

In particular, ∂E/∂d = 0 gives

aS_x + bS_y + cS_z + nd = 0 ...(18)

where S_x = Σx_i, S_y = Σy_i, S_z = Σz_i, so that

d = −(aS_x + bS_y + cS_z)/n

Substituting this into the other three equations gives

a(S_xx − S_x²/n) + b(S_xy − S_x·S_y/n) + c(S_xz − S_x·S_z/n) = λa
a(S_xy − S_x·S_y/n) + b(S_yy − S_y²/n) + c(S_yz − S_y·S_z/n) = λb
a(S_xz − S_x·S_z/n) + b(S_yz − S_y·S_z/n) + c(S_zz − S_z²/n) = λc ...(19)

(where S_xx = Σx_i², S_xy = Σx_i·y_i, S_yy = Σy_i², S_yz = Σy_i·z_i, S_zz = Σz_i², S_xz = Σx_i·z_i). This is an eigenvalue problem, and the minimum of (17) is attained for the eigenvector belonging to the smallest eigenvalue λ; that is, the smallest eigenvalue equals the least-squares error. The coefficients a, b, c, d of the plane equation are the solution of these simultaneous equations, normalized so that a² + b² + c² = 1.

In this way the equation of the surface having the inflection points a, b, c, d shown in FIG. 3(E) is obtained, and the three-dimensional coordinates of each of the inflection points a, b, c, d are obtained.

The distance image display part 29 shown in FIG. 1 has the configuration shown in FIG. 5. In FIG. 5, the inflection points 41 on each surface, whose three-dimensional coordinates have been computed from the surface equations, are supplied to a vector generation part 42, and vectors are generated by the processing shown in FIG. 6 on the basis of the contour information of the surfaces obtained by the region division.

Here, with the pixels arranged in a grid and the x and y coordinates taking integer values, a straight line is to be drawn between the inflection points (a_L, R_1) and (b_L, R_2).

For example, when b_L > a_L, R_2 > R_1, and b_L − a_L > R_2 − R_1, the line segment runs from the start point (a_L, R_1) toward the upper right at an angle 0° ≤ α ≤ 45°. To draw the desired line segment, one point is displayed in each column in the range a_L ≤ x ≤ b_L. For each point, the y coordinate closest to the true line segment connecting (a_L, R_1) and (b_L, R_2) is chosen: if the point at position (x_i, y_i) has been displayed, then either (x_i + 1, y_i) or (x_i + 1, y_i + 1) is displayed next. Let e be the error between the height of the true line segment at the x coordinate x_i and y_i. When the x coordinate advances to x_i + 1, the height of the true line segment rises by the slope of the segment, so the error grows accordingly:

e' = e + (R_2 − R_1)/(b_L − a_L) ...(21)

If this error e' is smaller than 0.5, the point (x_i + 1, y_i) is displayed, and e' is given by (21). If the error e' is 0.5 or larger, (x_i + 1, y_i + 1) is displayed; since the displayed point has moved up by one in the y direction,

e' = e + (R_2 − R_1)/(b_L − a_L) − 1 ...(22)
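The error-accumulation rule of equations (21) and (22) can be sketched directly; the function name and the returned pixel list are illustrative.

```python
def draw_segment(aL, R1, bL, R2):
    """Error-accumulation line drawing per equations (21)/(22): one
    pixel per column for a segment with 0 <= slope <= 1
    (bL > aL, R2 >= R1, bL - aL >= R2 - R1).  A sketch of the method
    described in the text."""
    slope = (R2 - R1) / (bL - aL)
    pixels = [(aL, R1)]
    x, y, e = aL, R1, 0.0
    while x < bL:
        x += 1
        e += slope              # equation (21): error grows by the slope
        if e >= 0.5:            # true line is closer to the next row up
            y += 1
            e -= 1.0            # equation (22): stepped up one row
        pixels.append((x, y))
    return pixels
```

This is the same accumulation scheme used by Bresenham-style rasterizers, restricted here to the first octant as in the text's example.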

To obtain the line segment in this way, in FIG. 6 the absolute value of a_L − b_L is set in a variable LENGTH in step 50, and the absolute value of R_1 − R_2 is compared with LENGTH in step 51; if the absolute value of R_1 − R_2 is larger, it is set in LENGTH instead (step 52). Next, b_L − a_L and R_2 − R_1 are each divided by LENGTH and set in variables Xinc and Yinc respectively (step 53); a_L + 0.5 and R_1 + 0.5 are set in variables X and Y respectively, and a variable i is set to 1 (step 54). Thereafter, until i exceeds LENGTH (step 55), Xinc and Yinc are added to X and Y respectively (step 56).
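The FIG. 6 flowchart is a DDA (digital differential analyzer) and can be sketched as below. Reading the start values as a_L + 0.5 and R_1 + 0.5 is an assumption consistent with truncation rounding to the nearest pixel; the OCR of the original is ambiguous at that step.

```python
def dda_segment(aL, R1, bL, R2):
    """DDA rasterization following the FIG. 6 flowchart (steps 50-56):
    LENGTH is the larger coordinate span, Xinc/Yinc the per-step
    increments, and 0.5 is added so that truncation to int rounds to
    the nearest pixel.  A sketch under the naming used in the text."""
    length = max(abs(aL - bL), abs(R1 - R2))   # steps 50-52
    xinc = (bL - aL) / length                  # step 53
    yinc = (R2 - R1) / length
    x, y = aL + 0.5, R1 + 0.5                  # step 54
    pixels = [(int(x), int(y))]
    for _ in range(length):                    # steps 55-56
        x += xinc
        y += yinc
        pixels.append((int(x), int(y)))
    return pixels
```

Unlike the first-octant error method above, the LENGTH comparison in steps 51-52 lets the same loop handle segments of any slope.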

A surface filling part 43 shown in FIG. 5 scans the interior of each surface enclosed by the vectors in the horizontal direction and fills it in, thereby obtaining and displaying a distance image 44.
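The horizontal filling of each vector-bounded surface can be sketched as an even-odd scanline fill; the even-odd rule, half-open edge convention, and returned span format are illustrative assumptions, since the patent only says the interior is scanned horizontally and filled.

```python
def scanline_fill(vertices):
    """Even-odd scanline fill of the polygon bounded by the generated
    vectors: for each row, crossings with the polygon edges are found
    and the spans between alternate crossings are filled, mirroring
    the horizontal scan of the surface filling part 43.  Returns a
    list of (row, x_start, x_end) spans."""
    ys = [y for _, y in vertices]
    filled = []
    n = len(vertices)
    for y in range(min(ys), max(ys) + 1):
        xs = []
        for i in range(n):
            (x0, y0), (x1, y1) = vertices[i], vertices[(i + 1) % n]
            # half-open interval avoids double-counting shared vertices
            if (y0 <= y < y1) or (y1 <= y < y0):
                xs.append(x0 + (y - y0) * (x1 - x0) / (y1 - y0))
        xs.sort()
        for x_start, x_end in zip(xs[::2], xs[1::2]):
            filled.append((y, x_start, x_end))
    return filled
```

In the patent's pipeline, each filled span would carry the depth interpolated from the surface equation rather than a flat value.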

In this way, the equation of each surface is derived from the inflection points of the left and right images, and the three-dimensional coordinates of the inflection points are obtained from this surface equation; consequently, no distance error due to the quantization error of the parallax arises, the distance information of the distance image is accurate, and a distance image free of unevenness can be obtained.

[Effect of the Invention]

As described above, according to the distance-image generation method for binocular stereoscopy of the present invention, a distance image whose distance information is accurate and which is free of unevenness due to errors can be obtained; the quality of the distance image is improved, and the method is extremely useful in practice.

[Brief Description of the Drawings]

FIG. 1 is a block diagram of an embodiment of the method of the present invention; FIG. 2 is a block diagram of the area division part; FIG. 3 is a diagram for explaining the method of the present invention; FIG. 4 is a diagram for explaining the method of the present invention; FIG. 5 is a block diagram of the distance image display part; FIG. 6 is a flowchart of the processing of the vector generation part; and FIG. 7 is a block diagram of the conventional method.

In the figures: 20 is the left camera; 21 is the right camera; 22 is the area division part; 23 is the left label image; 24 is the right label image; 25 is the inflection-point extraction part; 26 is the correspondence part; 27 is the surface-equation calculation part; 28 is the least-squares-error method calculation part; and 29 is the distance image display part.

Claims (1)

[Claims]

A distance-image generation method for binocular stereoscopy, characterized by: dividing each of the left and right images of a subject picked up by a pair of cameras into regions; extracting three or more inflection points on the contour of each surface obtained by the region division; deriving the equation of each surface from the three or more inflection points of the corresponding surfaces in the left and right images; obtaining the three-dimensional coordinates of the inflection points from the equation of each surface; generating vectors connecting the inflection points of each surface along the contour of each surface; and obtaining a distance image in which the interior enclosed by the vectors of each surface is filled in.
JP1337663A 1989-12-26 1989-12-26 Distance image generating method for both-eye stereoscopy Pending JPH03196373A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP1337663A JPH03196373A (en) 1989-12-26 1989-12-26 Distance image generating method for both-eye stereoscopy

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
JP1337663A JPH03196373A (en) 1989-12-26 1989-12-26 Distance image generating method for both-eye stereoscopy

Publications (1)

Publication Number Publication Date
JPH03196373A true JPH03196373A (en) 1991-08-27

Family

ID=18310779

Family Applications (1)

Application Number Title Priority Date Filing Date
JP1337663A Pending JPH03196373A (en) 1989-12-26 1989-12-26 Distance image generating method for both-eye stereoscopy

Country Status (1)

Country Link
JP (1) JPH03196373A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6199139B1 (en) 1998-01-27 2001-03-06 International Business Machines Corporation Refresh period control apparatus and method, and computer
CN112966561A (en) * 2021-02-03 2021-06-15 成都职业技术学院 Portable university student innovation and entrepreneurship multifunctional recording method and device
CN112966561B (en) * 2021-02-03 2024-01-30 成都职业技术学院 Portable university student innovation and entrepreneur multifunctional recording method and device

Similar Documents

Publication Publication Date Title
CN110853075B (en) Visual tracking positioning method based on dense point cloud and synthetic view
Murray et al. Using real-time stereo vision for mobile robot navigation
US10930008B2 (en) Information processing apparatus, information processing method, and program for deriving a position orientation of an image pickup apparatus using features detected from an image
EP1536378B1 (en) Three-dimensional image display apparatus and method for models generated from stereo images
US6473536B1 (en) Image synthesis method, image synthesizer, and recording medium on which image synthesis program is recorded
CN111192235B (en) Image measurement method based on monocular vision model and perspective transformation
US20030058945A1 (en) Optical flow estimation method and image synthesis method
JP2006221603A (en) Three-dimensional-information reconstructing apparatus, method and program
CN113223135B (en) Three-dimensional reconstruction device and method based on special composite plane mirror virtual image imaging
CN104976950B (en) Object space information measuring device and method and image capturing path calculating method
JP5178538B2 (en) Method for determining depth map from image, apparatus for determining depth map
JP2009530701A5 (en)
JPH03196373A (en) Distance image generating method for both-eye stereoscopy
Fantin et al. An efficient mesh oriented algorithm for 3d measurement in multiple camera fringe projection
JP2006300656A (en) Image measuring technique, device, program, and recording medium
Skulimowski et al. Refinement of depth from stereo camera ego-motion parameters
US6788808B1 (en) Method and device for optically determining depth
Yan et al. Camera calibration in binocular stereo vision of moving robot
Riou et al. Calibration and disparity maps for a depth camera based on a four-lens device
JP2001266129A (en) Navigation controller by picture of navigation body and navigation control method
Scheuing et al. Computing depth from stereo images by using optical flow
JP3504128B2 (en) Three-dimensional information restoration apparatus and method
Ariyawansa et al. High-speed correspondence for object recognition and tracking
Godding et al. 4D Surface matching for high-speed stereo sequences
JP3499117B2 (en) Three-dimensional information restoration apparatus and method