JPH11345334A - Three-dimensional picture processing method and processor using accumulated difference table, and recording medium having recorded three-dimensional picture processing program - Google Patents

Three-dimensional picture processing method and processor using accumulated difference table, and recording medium having recorded three-dimensional picture processing program

Info

Publication number
JPH11345334A
Authority
JP
Japan
Prior art keywords
image
dimensional
difference
feature points
feature
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
JP10149739A
Other languages
Japanese (ja)
Inventor
Mikio Shintani (幹夫 新谷)
Takafumi Saito (隆文 斎藤)
Takeaki Mori (偉明 森)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nippon Telegraph and Telephone Corp
Original Assignee
Nippon Telegraph and Telephone Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nippon Telegraph and Telephone Corp filed Critical Nippon Telegraph and Telephone Corp
Priority to JP10149739A priority Critical patent/JPH11345334A/en
Publication of JPH11345334A publication Critical patent/JPH11345334A/en
Pending legal-status Critical Current


Landscapes

  • Length Measuring Devices By Optical Means (AREA)
  • Processing Or Creating Images (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

PROBLEM TO BE SOLVED: To keep three-dimensional image processing efficient when the number of feature points and the number of pixels increase.

SOLUTION: A feature point extraction unit 13 extracts feature points on an epipolar image. A feature point correspondence generation unit 14 accumulates, from the end point of the scanning line, the differences between pixel values at two adjacent photographing times, and holds the result as an accumulated difference table. The sum of the pixel-value differences between two feature points is then obtained as the difference between the two entries of the table corresponding to those feature points. Because the differences of pixel values at adjacent photographing times are accumulated from the image end and held in advance as a two-dimensional table, the sum of the difference values between any pair of feature points reduces to the difference of two table entries, so the processing remains efficient even when the number of feature points and pixels increases.

Description

DETAILED DESCRIPTION OF THE INVENTION

[0001]

BACKGROUND OF THE INVENTION

1. Field of the Invention

The present invention relates to a three-dimensional image processing method and apparatus for generating images for three-dimensional computer graphics (hereinafter referred to as CG) and virtual reality, and more particularly to a method and apparatus for estimating, from real images, the three-dimensional structure of the object or scene to be rendered.

[0002]

2. Description of the Related Art

Among conventional three-dimensional information processing techniques, a known method of forming a spatio-temporal image from an image sequence captured while moving a camera and estimating a three-dimensional structure from it is motion stereo (e.g., Bolles R. C., Baker H. H. and Marimont D. H., "Epipolar-plane image analysis: An approach to determining structure from motion", IJCV, Vol. 1, No. 1, pp. 7-55, 1987). Also known, as a way to generate more realistic images based on motion stereo, is a method that minimizes the difference between the captured image sequence and an image sequence reconstructed from the estimated three-dimensional structure (Japanese Patent Application No. 8-338388).

[0003] In motion stereo, as shown in FIG. 6, feature points 43 on an epipolar image 42, obtained by slicing a spatio-temporal image 41 parallel to the time axis t, are tracked, and their positions in three-dimensional space are determined from the slopes of their trajectories. When the camera is moved parallel to the image plane at constant speed, a feature point 43 in three-dimensional space appears on the epipolar image 42 as a sequence of feature points lying on a straight line (hereinafter referred to as a feature line 44). How correctly this group of feature lines is obtained largely determines the quality of the estimated three-dimensional structure and hence of the reconstructed images. In FIG. 6, the x-axis is the scanline direction.
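For intuition, the relationship between a feature line's slope and depth can be sketched as follows. This formula is standard epipolar-plane image geometry rather than text from the patent: a camera translating at speed v parallel to the image plane, with focal length f in pixels, shifts the image of a point at depth Z by f·v/Z pixels per frame, so the per-frame displacement (the line's slope) determines Z.

```python
def depth_from_slope(dx_per_frame, focal_length_px, camera_speed):
    """Depth from the per-frame image displacement of a feature point:
    dx/dt = f * v / Z  =>  Z = f * v / (dx/dt)."""
    if dx_per_frame == 0:
        return float("inf")  # no parallax: point at infinity
    return focal_length_px * camera_speed / dx_per_frame

# A feature shifting 20 px/frame with f = 800 px and v = 0.1 m/frame
# lies at Z = 800 * 0.1 / 20 = 4.0 m.
z = depth_from_slope(20.0, 800.0, 0.1)
```

Steeper trajectories (larger displacement per frame) correspond to nearer points, which is why correct feature-line extraction directly controls the quality of the recovered structure.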

[0004] One known method of extracting the group of feature lines on an epipolar image uses dynamic programming to establish feature-point correspondences between each pair of adjacent times, and then traces the corresponding feature points in order to construct the feature lines. For the dynamic programming between adjacent times, a known method from binocular stereopsis can be used (e.g., Y. Ohta and T. Kanade, "Stereo by intra- and inter-scanline search using dynamic programming", IEEE Trans. PAMI, Vol. 7, No. 2, pp. 139-154, 1985). A method has also been filed (Japanese Patent Application Nos. 10-144124 and 10-144125) that computes a similarity evaluation value for feature points at adjacent photographing times from the positions and luminance changes of feature points at four photographing times, including those immediately before and after the adjacent pair, finds for each pair of adjacent times the combination of feature-point correspondences that minimizes the sum of the evaluation values, and traces these correspondences to estimate the feature-point trajectories; this yields more accurate feature-point trajectories.

[0005]

Problems to be Solved by the Invention

When the similarity of pixel values at adjacent times is judged by the method of Japanese Patent Application Nos. 10-144124 and 10-144125, the pixel differences must be summed over the interval between each pair of feature points. However, since this requires processing time proportional to the number of pixels for every combination of feature points, efficiency deteriorates when the number of feature points or the number of pixels is large.

[0006] An object of the present invention is to provide a three-dimensional image processing method and apparatus whose processing remains efficient even when the number of feature points and the number of pixels increase.

[0007]

Means for Solving the Problems

In the three-dimensional image processing method of the present invention, at each pixel position on a scanning line, the value obtained by accumulating, from the end point of the scanning line, the differences between pixel values at two adjacent photographing times is computed for each disparity between the adjacent times and held as a two-dimensional table; the sum of the pixel-value differences between a pair of feature points is obtained as the difference between the two entries of the table corresponding to those feature points, and this sum is used to evaluate the similarity of the feature-point pixel values.

[0008] Likewise, the three-dimensional image processing apparatus of the present invention has means for computing, at each pixel position on a scanning line, for each disparity between the adjacent times, the value obtained by accumulating from the end point of the scanning line the differences between pixel values at two adjacent photographing times; holding these values as a two-dimensional table; obtaining the sum of the pixel-value differences between a pair of feature points as the difference between the two corresponding entries of the table; and using this sum to evaluate the similarity of the feature-point pixel values.

[0009] In the conventional method, the sum of the pixel-value differences was computed from scratch each time, so a processing time proportional to the number of pixels was needed for every combination of feature points. In the present invention, by contrast, the accumulated difference table (a two-dimensional table) is computed in advance, and the processing for each combination of feature points only needs to look up two entries of the table and take their difference; processing therefore remains efficient even when the number of feature points or pixels increases.
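The gain described here is the familiar prefix-sum idea. A toy sketch (the pixel-difference values are invented) contrasting the two per-query costs:

```python
def cumulative_table(diffs):
    """Prefix sums: table[m] = sum of diffs[0:m], so the table
    has len(diffs) + 1 entries, with table[0] = 0."""
    table = [0]
    for d in diffs:
        table.append(table[-1] + d)
    return table

diffs = [3, 1, 4, 1, 5, 9, 2, 6]   # per-pixel differences, made up
S = cumulative_table(diffs)

# Sum over the span between pixel positions 2 and 6, both ways:
naive = sum(diffs[2:6])            # O(span length) per query
fast = S[6] - S[2]                 # O(1) per query: two lookups
assert naive == fast
```

Building the table once costs one pass over the scanline; after that, every feature-point pair is answered in constant time regardless of how many pixels lie between the points.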

[0010]

DESCRIPTION OF THE PREFERRED EMBODIMENTS

Next, embodiments of the present invention will be described with reference to the drawings.

[0011] Referring to FIG. 1, a three-dimensional image processing apparatus according to one embodiment of the present invention comprises a spatio-temporal image input unit 11, an epipolar image creation unit 12, a feature point extraction unit 13, a feature point correspondence generation unit 14, a line calculation unit 15, and a three-dimensional structure estimation unit 16.

[0012] The spatio-temporal image input unit 11 stores, in a storage device, an image sequence captured while the camera moves, forming a spatio-temporal image 41. In principle, the camera captures images while moving in uniform linear motion, horizontally and perpendicular to its optical axis.

[0013] The epipolar image creation unit 12 slices the captured spatio-temporal image 41 at a scanline y = y0 to create an epipolar image 42 whose vertical axis is the time direction t and whose horizontal axis is the x direction. If, in the spatio-temporal image input unit 11, the optical axis direction, moving direction, or moving speed of the camera differs from the conditions stated above, these are corrected before the epipolar image 42 is created. The subsequent processing is performed for each epipolar image 42 corresponding to each scanline y = y0.
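Conceptually, the epipolar image is just a constant-y slice of the (t, y, x) volume. A sketch with plain nested lists (the array layout and helper name are assumptions of this example, not of the patent):

```python
def epipolar_slice(volume, y0):
    """volume[t][y][x] -> epipolar image epi[t][x] at scanline y = y0:
    one row per photographing time, one column per x position."""
    return [frame[y0] for frame in volume]

# 3 frames (t), 2 scanlines (y), 4 pixels (x); values encode t and x.
volume = [[[10 * t + x for x in range(4)] for _ in range(2)]
          for t in range(3)]
epi = epipolar_slice(volume, 0)
# epi[1] is the y = 0 scanline of frame t = 1
```

Each row of `epi` is one photographing time, so a scene point traces a (straight, under the constant-speed assumption) path down the rows.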

[0014] The feature point extraction unit 13 extracts points with large luminance or color changes on the epipolar image 42 as feature points 43. The feature points 43 can be extracted at each time t = t0 on the epipolar image 42 either by taking differences of pixel values in the x direction or by applying an edge detection filter to the original image.
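A minimal sketch of the x-direction difference variant; the forward difference and the threshold value are choices of this example only (the patent does not fix either):

```python
def extract_feature_points(row, threshold):
    """Mark x positions where the horizontal pixel-value jump is large."""
    return [x for x in range(len(row) - 1)
            if abs(row[x + 1] - row[x]) >= threshold]

row = [10, 10, 10, 80, 80, 80, 20, 20]   # one scanline at time t0
pts = extract_feature_points(row, threshold=30)
# large jumps at x = 2 (10 -> 80) and x = 5 (80 -> 20)
```

An edge-detection filter on the original frames would serve the same purpose; the output in either case is a sparse list of candidate x positions per time.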

[0015] The feature point correspondence generation unit 14 determines, at each time t on the epipolar image 42, the correspondence between the feature points at time t0 and those at the adjacent time t1 = t0 + 1.

[0016] First, pairs of a feature point at t = t0 and a feature point at t = t1, i.e., candidate corresponding feature-point pairs, are created for arbitrary combinations and stored in an array in order of their x-coordinate at t = t0. At this point, the combinations of feature points can be narrowed down using the moving direction of the camera and the maximum disparity between adjacent times as constraints. For example, if the camera moves from left to right and the disparity is at most 5 pixels, then with x0 and x1 denoting the x-coordinates at t = t0 and t = t1 respectively, only pairs satisfying x0 − 5 ≤ x1 ≤ x0 need be created. As shown in FIG. 2, the feature-point pairs thus obtained are denoted in order by L1, L2, ..., Ln; the feature points constituting Li are denoted by Pi,0 and Pi,1, and their x-coordinates by xi,0 and xi,1, respectively.
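The narrowing step can be sketched as follows; `max_disparity` and the helper name are illustrative, and the bound x0 − 5 ≤ x1 ≤ x0 follows the example in the text:

```python
def candidate_pairs(pts_t0, pts_t1, max_disparity=5):
    """Candidate corresponding pairs (x0, x1) subject to the
    disparity constraint x0 - max_disparity <= x1 <= x0,
    returned in order of x0 as in the text."""
    pairs = [(x0, x1) for x0 in pts_t0 for x1 in pts_t1
             if x0 - max_disparity <= x1 <= x0]
    return sorted(pairs)

pairs = candidate_pairs([10, 20], [7, 9, 18, 30])
# x0 = 10 admits x1 in {7, 9}; x0 = 20 admits x1 = 18; 30 is rejected
```

Without the constraint, the candidate set would be the full cross product of the two point lists; the disparity bound keeps it linear in practice.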

[0017] Next, the evaluation function h(i, j) for the case where two feature-point pairs Li and Lj are selected as adjacent corresponding pairs is computed. It is effective to compute this evaluation function h by integrating various kinds of information, as described for example in Japanese Patent Application Nos. 10-144124 and 10-144125. As one such piece of information, the sum d(i, j) of the pixel-value differences obtained by comparing, in order from the end, the pixels 21 lying between Li and Lj at t = t0 and at t = t1 is used, as shown in FIG. 3. If the number of pixels differs between t = t0 and t = t1, the excess pixels at one or both ends of the longer side are ignored so that the numbers become equal before the sum is taken.

[0018] To obtain d(i, j), the following method is used.

[0019] First, at each time t0, as shown in FIG. 4, the cumulative sum of pixel-value differences

[0020]

(Equation 1)

S(k, m) = Σx=1..m |I(x, t0) − I(x + k, t1)|

is computed in advance. Here k is the disparity between the adjacent times; for example, if the disparity is at most 5 pixels, S is computed over the range 0 ≤ k ≤ 5. m is the number of pixels from the left end of the image, and I(x, t) is the pixel value 22 on the epipolar image 42. The values S(k, m) thus obtained are held as an accumulated difference table 23.
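As an illustrative sketch of this table construction (the 0-based indexing, the non-negative disparity convention, and the helper name are assumptions of this example, not of the patent), one prefix-sum pass is made per disparity value:

```python
def build_accumulated_difference_table(row_t0, row_t1, max_disparity):
    """table[k][m] = sum over x < m of |row_t0[x] - row_t1[x + k]|:
    one prefix-sum row per disparity k in 0..max_disparity
    (a 0-based reading of S(k, m) in the text)."""
    table = []
    for k in range(max_disparity + 1):
        prefix = [0]
        for x in range(len(row_t0) - k):   # positions where x + k is valid
            prefix.append(prefix[-1] + abs(row_t0[x] - row_t1[x + k]))
        table.append(prefix)
    return table

row_t0 = [10, 10, 80, 80, 20, 20]          # scanline at time t0 (made up)
row_t1 = [10, 80, 80, 20, 20, 20]          # scanline at time t1
S = build_accumulated_difference_table(row_t0, row_t1, max_disparity=2)
```

Building the full table costs O(width × max_disparity) once per adjacent time pair, after which every d(i, j) query is answered in constant time.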

[0021] When the numbers of pixels are equal, the sum d(i, j) of the pixel-value differences between the two feature-point pairs Li and Lj is

[0022]

(Equation 2)

d(i, j) = S(k, xj,0) − S(k, xi,0)

and can therefore be obtained as the difference between two entries of the accumulated difference table 23. Here, k = xi,1 − xi,0.
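Read this way, each d(i, j) costs two table lookups. A toy check against the direct summation (all coordinates and pixel values are invented for illustration, and a non-negative disparity is assumed):

```python
def pixel_diffs(row_t0, row_t1, k):
    """Per-pixel |difference| between t0 and t1 at disparity k."""
    return [abs(a - b) for a, b in zip(row_t0, row_t1[k:])]

def prefix_sums(diffs):
    out = [0]
    for d in diffs:
        out.append(out[-1] + d)
    return out

row_t0 = [5, 9, 9, 3, 3, 7, 7, 2]
row_t1 = [1, 5, 8, 9, 4, 3, 7, 6]          # roughly row_t0 shifted right
k = 1                                       # disparity of this pair

S = prefix_sums(pixel_diffs(row_t0, row_t1, k))
xi0, xj0 = 1, 5                             # t0 x-coordinates of L_i, L_j
d_ij = S[xj0] - S[xi0]                      # two lookups, as in Equation 2
assert d_ij == sum(pixel_diffs(row_t0, row_t1, k)[xi0:xj0])
```

The assertion confirms that the table lookup and the pixel-by-pixel summation agree; only the lookup avoids the per-pair cost proportional to the number of pixels.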

[0023] By the above method, d(i, j), and then h(i, j), are computed for all combinations of Li and Lj (0 ≤ i ≤ n, 1 ≤ j ≤ n + 1). Here, L0 is the pair formed by the left edges of the screen at the two times, and Ln+1 the pair formed by the right edges.

[0024] Next, from among the feature-point pairs L1, L2, ..., Ln, i.e., the candidate corresponding pairs, the combination most appropriate as the correspondence is determined. If k pairs (0 ≤ k ≤ n) are selected from {L1, L2, ..., Ln} and denoted {Lp1, Lp2, ..., Lpk}, then the sum of the evaluation functions over adjacent corresponding pairs:

[0025]

(Equation 3)

H = Σs=0..k h(ps, ps+1), where p0 = 0 and pk+1 = n + 1 denote the left-edge and right-edge pairs L0 and Ln+1

The set that minimizes this sum is considered the most appropriate. Such a set can be found, for example, by the method described in Japanese Patent Application Nos. 10-144124 and 10-144125.
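The patent defers the actual search to the cited applications; as a hedged illustration only, one standard way to minimize a sum of adjacent-pair costs over an ordered candidate set is a quadratic-time dynamic program over the indices, with the edge pairs as sentinels:

```python
def best_selection(n, h):
    """Choose indices strictly between sentinels 0 and n + 1 so that
    the sum of h(prev, next) over adjacent chosen indices is minimal.
    Simple O(n^2) dynamic program; illustrative only."""
    INF = float("inf")
    cost = [0.0] + [INF] * (n + 1)
    back = [0] * (n + 2)
    for j in range(1, n + 2):
        for i in range(j):
            c = cost[i] + h(i, j)
            if c < cost[j]:
                cost[j], back[j] = c, i
    chosen, j = [], back[n + 1]
    while j != 0:
        chosen.append(j)
        j = back[j]
    return cost[n + 1], sorted(chosen)

# Hypothetical cost that charges 2 per skipped candidate, so keeping
# every candidate is optimal here.
total, chosen = best_selection(3, lambda i, j: 2.0 * (j - i - 1))
```

With a real h(i, j) built from d(i, j) and the other cues of [0017], the same recurrence selects the subset of candidate pairs whose adjacent-pair costs sum to the minimum.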

[0026] The line calculation unit 15 calculates, based on the correspondences obtained by the feature point correspondence generation unit 14, the feature lines on the epipolar image that correspond to the feature points in three-dimensional space. For this processing, the methods described in Japanese Patent Application Nos. 10-144124 and 10-144125, for example, can be used.

[0027] The three-dimensional structure estimation unit 16 estimates the three-dimensional structure of the subject from the group of feature lines obtained above. For this processing, the method described in Japanese Patent Application No. 8-338388, for example, can be used.

[0028] Referring to FIG. 5, a three-dimensional image processing apparatus according to another embodiment of the present invention comprises an input device 31, which is a camera; a storage device 32 in which the images captured by the camera are accumulated; an output device 33, such as a display device or a file device, to which the estimated three-dimensional structure of the subject is output; a recording medium 34, such as a floppy disk, CD-ROM, magneto-optical disk, or semiconductor memory, on which a three-dimensional image processing program comprising the processes of the units 11 to 16 shown in FIG. 1 is recorded; and a data processing device (CPU) 35 that reads the three-dimensional image processing program from the recording medium 34 and executes it.

[0029]

Effects of the Invention

As described above, according to the present invention, the differences between pixel values at adjacent photographing times are accumulated from the image end and held in advance as a two-dimensional table, and the sum of the pixel difference values between each pair of feature points is obtained as the difference between two entries of this table. Processing therefore remains efficient even when the number of feature points or the number of pixels increases.

BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1 is a block diagram of a three-dimensional image processing apparatus according to one embodiment of the present invention.

FIG. 2 is an explanatory diagram of feature-point pairs.

FIG. 3 is an explanatory diagram of the pixel differences between feature points.

FIG. 4 is an explanatory diagram of the accumulated difference table.

FIG. 5 is a block diagram of a three-dimensional image processing apparatus according to another embodiment of the present invention.

FIG. 6 is an explanatory diagram of feature points on an epipolar image.

EXPLANATION OF REFERENCE NUMERALS

11 spatio-temporal image input unit; 12 epipolar image creation unit; 13 feature point extraction unit; 14 feature point correspondence generation unit; 15 line calculation unit; 16 three-dimensional structure estimation unit; 21 pixels between feature points; 22 pixel value; 23 accumulated difference table; 31 input device; 32 storage device; 33 output device; 34 recording medium; 41 spatio-temporal image; 42 epipolar image; 43 feature point; 44 feature line

Claims (3)

[Claim 1] A three-dimensional image processing method in which an image sequence captured while moving a camera is accumulated to form a spatio-temporal image, and a three-dimensional structure for generating a three-dimensional image is estimated on the basis of the accumulated spatio-temporal image, feature points on an epipolar image obtained by slicing the spatio-temporal image parallel to the time axis being tracked at each pair of adjacent photographing times using the similarity of the pixel values between the feature points, and their positions in three-dimensional space being determined from the slopes of their trajectories, the method characterized in that: at each pixel position on a scanning line, a value obtained by accumulating, from the end point of the scanning line, the difference between the pixel values at two adjacent photographing times is computed for each disparity between the adjacent times and held as a two-dimensional table; the sum of the pixel-value differences between a pair of feature points is obtained as the difference between the entries of the two-dimensional table corresponding to the two feature points; and this sum is used to evaluate the similarity of the feature-point pixel values.

[Claim 2] A three-dimensional image processing apparatus which accumulates an image sequence captured while moving a camera to form a spatio-temporal image, slices the spatio-temporal image parallel to the time axis to create an epipolar image, tracks feature points on the epipolar image at each pair of adjacent photographing times using the similarity of the pixel values between the feature points, and determines their positions in three-dimensional space from the slopes of their trajectories, the apparatus characterized by having means for: computing, at each pixel position on a scanning line, for each disparity between the adjacent times, a value obtained by accumulating from the end point of the scanning line the difference between the pixel values at two adjacent photographing times; holding these values as a two-dimensional table; obtaining the sum of the pixel-value differences between a pair of feature points as the difference between the entries of the two-dimensional table corresponding to the two feature points; and using this sum to evaluate the similarity of the feature-point pixel values.

[Claim 3] A recording medium on which is recorded a three-dimensional image processing program for causing a computer to execute: a spatio-temporal image formation process of accumulating, in a storage device, an image sequence captured while moving a camera to form a spatio-temporal image; an epipolar image creation process of slicing the spatio-temporal image parallel to the time axis to create an epipolar image; a feature point extraction process of extracting feature points on the epipolar image; a feature point correspondence generation process of computing, at each pixel position on a scanning line, for each disparity between the adjacent times, a value obtained by accumulating from the end of the scanning line the difference between the pixel values at two adjacent photographing times, holding these values as a two-dimensional table, and obtaining the sum of the pixel-value differences between a pair of feature points as the difference between the entries of the two-dimensional table corresponding to the two feature points; a feature line calculation process of calculating, based on the correspondences obtained in the feature point correspondence generation process, feature lines on the epipolar image corresponding to the feature points; and a three-dimensional structure estimation process of estimating a three-dimensional structure of a subject from the feature lines obtained in the feature line calculation process.
JP10149739A 1998-05-29 1998-05-29 Three-dimensional picture processing method and processor using accumulated difference table, and recording medium having recorded three-dimensional picture processing program Pending JPH11345334A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP10149739A JPH11345334A (en) 1998-05-29 1998-05-29 Three-dimensional picture processing method and processor using accumulated difference table, and recording medium having recorded three-dimensional picture processing program


Publications (1)

Publication Number Publication Date
JPH11345334A true JPH11345334A (en) 1999-12-14

Family

ID=15481746

Family Applications (1)

Application Number Title Priority Date Filing Date
JP10149739A Pending JPH11345334A (en) 1998-05-29 1998-05-29 Three-dimensional picture processing method and processor using accumulated difference table, and recording medium having recorded three-dimensional picture processing program

Country Status (1)

Country Link
JP (1) JPH11345334A (en)
