JPH06195465A - Moving image processing method - Google Patents

Moving image processing method

Info

Publication number
JPH06195465A
JPH06195465A
Authority
JP
Japan
Prior art keywords
image pickup
optical flow
pickup means
accuracy
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
JP35692092A
Other languages
Japanese (ja)
Inventor
Toshio Ito
敏夫 伊東
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Daihatsu Motor Co Ltd
Original Assignee
Daihatsu Motor Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Daihatsu Motor Co Ltd filed Critical Daihatsu Motor Co Ltd
Priority to JP35692092A priority Critical patent/JPH06195465A/en
Publication of JPH06195465A publication Critical patent/JPH06195465A/en
Pending legal-status Critical Current

Landscapes

  • Compression Or Coding Systems Of Tv Signals (AREA)
  • Image Analysis (AREA)

Abstract

PURPOSE: To improve the accuracy of optical flow. CONSTITUTION: First and second image pickup means 11, 12, whose fields of view are shifted in the horizontal direction and overlap only in part, are provided. Accuracy curves A, B of the optical flow are derived from the image pickup signals of both image pickup means 11, 12 and superposed on each other, and the image pickup means that gives the lower of the two accuracy curves in each pixel range where they overlap is specified in advance. When an object is actually imaged by both image pickup means 11, 12, the optical flow from the pre-specified image pickup means is selectively extracted in the pixel range where the optical flows overlap. Thus, the accuracy of the optical flow in the vicinity of the optical centers of both image pickup means 11, 12 can be remarkably improved.

Description

Detailed Description of the Invention

[0001]

[Field of Industrial Application] The present invention relates to a moving image processing method for processing an image pickup signal from an image pickup means to derive the optical flow of an image of an object to be imaged.

[0002]

[Prior Art] In a conventional moving image processing method, an apparatus having the configuration shown in FIG. 5 is used. As shown in FIG. 6, an object ahead of an automobile 1 is imaged by an image pickup means 2 such as a CCD camera provided at the front of the vehicle interior. The image pickup signal output from the image pickup means 2 is analog-to-digital converted (hereinafter referred to as A/D conversion) by an A/D converter 3, and the density data of each pixel of a certain frame obtained by the A/D conversion is stored in a frame memory 4. Based on the per-pixel density data of the next frame, likewise obtained by A/D conversion, and the density data of the previous frame stored in the frame memory 4, an optical flow calculation unit 5 derives the optical flow, which is the movement vector of the image of the object.

[0003] The procedure for deriving the optical flow will be described briefly. The optical flow on a horizontal scanning line y containing the optical axis of the image pickup means 2 is derived from the density data on that line. In detail, let E_a(x) be the density distribution curve on the horizontal scanning line y of a certain frame imaged by the image pickup means 2, and E_b(x) be the density distribution curve on the horizontal scanning line y of the image of the next frame. Then the average E_n of the gradients of the two density distribution curves E_a(x), E_b(x) at the n-th pixel, obtained by setting x = n, is expressed by the following equation.

[0004]

[Equation 1]    E_n = (dE_a(n)/dx + dE_b(n)/dx) / 2    … (I)

[0005] Here, the x-axis represents the pixels of the horizontal scanning line y on the image pickup surface of the image pickup means 2. If, for example, 512 pixels are arranged on one horizontal scanning line, one end of the image pickup surface corresponds to the 0th pixel and the other end to the 511th pixel.

[0006] Further, the difference E_t between the density values of the two curves E_a(n), E_b(n) at the n-th pixel is given by the following equation.

[0007]

[Equation 2]    E_t = E_a(n) − E_b(n)    … (II)

[0008] Accordingly, the density difference at the n-th pixel on the horizontal scanning line y need only be computed from the image data of a certain frame and the next frame by equation (II) above, and the optical flow f is then expressed by the following equation using E_n and E_t given by equations (I) and (II).

[0009]

[Equation 3]    f = −E_t / E_n    … (III)

[0010] When the average gradient E_n is actually obtained, the gradients of the two curves E_a(n), E_b(n) are first determined. Since the curves E_a(n), E_b(n) are obtained by plotting the density value of each pixel, the A/D-converted density values E_a(n), E_a(n−1) of, for example, the n-th pixel and the adjacent (n−1)-th pixel (the (n+1)-th pixel may be used instead) are read from a memory or the like, and the gradient of the curve E_a(x) at the position of the n-th pixel (= dE_a(n)/dx) is obtained from their difference. Similarly, the gradient of the curve E_b(x) at the position of the n-th pixel (= dE_b(n)/dx) is obtained from the difference between the A/D-converted density values E_b(n), E_b(n−1).

[0011] However, in the vicinity of the maximum and minimum points of the two curves E_a(x), E_b(x), the gradients of the curves change greatly from pixel to pixel, whereas away from these maximum and minimum points the gradients change only gradually. Therefore, the density values of pixels other than those near the maximum and minimum points of the curves E_a(x), E_b(x) are selectively used in the calculation.
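As a concrete illustration of equations (I)–(III) and the extremum exclusion of paragraph [0011], a minimal sketch in Python/NumPy follows; the function name, the NaN marking of excluded pixels, and the grad_eps threshold are assumptions introduced for the example, not details given in the patent.

```python
import numpy as np

def scanline_optical_flow(frame_a, frame_b, y, grad_eps=1e-3):
    """Optical flow f = -E_t / E_n for each pixel on scanning line y.

    Pixels where the averaged gradient E_n is close to zero (i.e. near
    maxima/minima of the density curves, cf. paragraph [0011]) are
    returned as NaN instead of an unreliable value.
    """
    Ea = frame_a[y].astype(np.float64)   # density curve E_a(x), frame a
    Eb = frame_b[y].astype(np.float64)   # density curve E_b(x), frame b

    # Gradient at pixel n from the difference with the adjacent pixel
    # (n - 1), as described in paragraph [0010].
    dEa = np.empty_like(Ea); dEa[1:] = Ea[1:] - Ea[:-1]; dEa[0] = dEa[1]
    dEb = np.empty_like(Eb); dEb[1:] = Eb[1:] - Eb[:-1]; dEb[0] = dEb[1]

    En = (dEa + dEb) / 2.0               # equation (I)
    Et = Ea - Eb                         # equation (II)

    f = np.full_like(Ea, np.nan)
    valid = np.abs(En) > grad_eps        # exclude near-extremum pixels
    f[valid] = -Et[valid] / En[valid]    # equation (III)
    return f
```

Applied to the row containing the optical axis of two consecutive grayscale frames, this reproduces the per-pixel flow used throughout the description.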

[0012] Suppose that, in a certain frame, the image shown in FIG. 7 is obtained. When the optical center of the image pickup means 2 coincides with the traveling direction of the automobile, the amount of image movement is smaller the closer a point is to the point P representing the optical center, as indicated by the arrows in FIG. 7, and conversely becomes larger the farther a point is from P. Taking the distribution of the accuracy of the optical flow, that is, the value obtained by dividing the absolute value of the difference between the true value and the computed value of the optical flow by the true value, gives the curve shown in FIG. 8. As is clear from this accuracy curve, the accuracy of the optical flow is extremely poor in the vicinity of the optical center P.
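The accuracy measure plotted in FIG. 8 (and as the curves A, B in FIG. 1) can be written as a short helper; this is a hedged sketch, assuming a per-pixel true flow value is available (for example from a calibration target with known motion) and adding a small eps, not in the patent, to guard against division by zero.

```python
import numpy as np

def flow_accuracy(f_computed, f_true, eps=1e-9):
    """Relative optical-flow error per pixel: |true - computed| / |true|.

    Larger values mean worse accuracy; eps is an illustrative safeguard
    against f_true = 0 and is not part of the patent's definition.
    """
    return np.abs(f_true - f_computed) / (np.abs(f_true) + eps)
```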

[0013]

[Problem to Be Solved by the Invention] As described above, when the optical axis of the image pickup means 2 coincides with the traveling direction of the automobile, the accuracy of the optical flow becomes worse the closer a point is to the optical center P, so there is a problem that the optical flow derived by the above calculation lacks reliability in its accuracy.

[0014] The present invention has been made to solve the above problem, and an object thereof is to improve the accuracy of the optical flow.

[0015]

[Means for Solving the Problem] In the moving image processing method according to the present invention, an image pickup signal output by imaging an object with an image pickup means is analog-to-digital converted by an analog/digital converter, and an optical flow calculation unit derives the optical flow of the image of the object on the basis of the per-pixel density data of a certain frame obtained by A/D conversion and the per-pixel density data of the next frame. As the image pickup means, first and second image pickup means whose fields of view are shifted in the horizontal direction and overlap only in part are provided. An accuracy curve of the optical flow is derived from the image pickup signal of each of the two image pickup means, the derived accuracy curves are superposed in accordance with the overlap of the fields of view of the two image pickup means, and the image pickup means that gives the lower-valued of the two accuracy curves in the pixel range where they overlap is specified in advance. When an object is actually imaged by both image pickup means, an optical flow is derived from each of the output image pickup signals, and in the pixel range where the two derived optical flows overlap, the optical flow from the pre-specified image pickup means is selectively extracted.

[0016]

[Operation] In the present invention, in the pixel range where the optical flows derived by actually imaging an object with both image pickup means overlap, the optical flow from the pre-specified image pickup means is selectively extracted, so that the accuracy of the optical flow in the vicinity of the optical centers of the two image pickup means is greatly improved over the prior art.

[0017]

[Embodiment] FIG. 1 is an explanatory view of the operation of an embodiment of the moving image processing method of the present invention, FIG. 2 is a block diagram of an apparatus to which the method is applied, FIG. 3 is an explanatory view of the arrangement of a part of the apparatus of FIG. 2, and FIG. 4 is an explanatory view of the operation.

[0018] First, the configuration of the apparatus will be described. As shown in FIG. 3, first and second image pickup means 11, 12 such as CCD cameras are provided on the left and right at the front of the interior of an automobile 1, the two image pickup means 11, 12 being arranged so that their fields of view are shifted in the horizontal direction and overlap only in part. As shown in FIG. 2, the object ahead is imaged by both image pickup means 11, 12 and image pickup signals are output respectively; the two image pickup signals are A/D converted by first and second A/D converters 13, 14, respectively, and the per-pixel density data of a certain frame obtained by the A/D conversion is stored in first and second frame memories 15, 16, respectively. On the basis of the per-pixel density data of the next frame, likewise obtained by A/D conversion by the converters 13, 14, and the density data of the previous frame stored in the frame memories 15, 16, an optical flow calculation unit 17 derives the optical flows.

[0019] With the two image pickup means 11, 12 arranged on the left and right at the front of the automobile 1 so that their fields of view are shifted in the horizontal direction and overlap only in part, as shown in FIG. 3, a dummy object is imaged in advance by both image pickup means 11, 12, and accuracy curves A, B of the optical flow are derived from the respective image pickup signals obtained. The derived accuracy curves A, B are superposed in accordance with the overlap of the fields of view of the two image pickup means 11, 12, as shown in FIG. 1, and in the pixel range where the accuracy curves A, B overlap, the image pickup means that gives the lower-valued of the two accuracy curves is specified in advance.

[0020] That is, as indicated by the thick solid line in FIG. 1, the second image pickup means 12, which gives the lower-valued accuracy curve B, is specified for the pixel range of the accuracy curves A, B in the vicinity of the optical center P1 of the first image pickup means 11, and the first image pickup means 11, which gives the lower-valued accuracy curve A, is specified for the pixel range of the accuracy curves A, B in the vicinity of the optical center P2 of the second image pickup means 12. In the remaining, non-overlapping pixel ranges, the result of processing the image pickup signal of the image pickup means that gives the respective accuracy curve A or B is adopted as it is. A sketch of this calibration step follows.
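This is a minimal sketch, assuming the accuracy curves A, B have been resampled onto a common pixel axis covering the combined field of view, with the first image pickup means 11 covering the left part and the second image pickup means 12 the right; the function name, the 0/1 encoding, and the array layout are illustrative assumptions, not details given in the patent.

```python
import numpy as np

def build_selection_map(acc_a, acc_b, overlap_start, overlap_end):
    """Return an int8 array: 0 = use camera 11, 1 = use camera 12.

    Outside the overlap each camera simply covers its own pixels; inside
    the overlap the camera whose accuracy curve has the LOWER value
    (smaller relative error) is chosen, as in FIG. 1.
    """
    n = len(acc_a)                       # length of the common pixel axis
    select = np.zeros(n, dtype=np.int8)  # left of overlap: camera 11 only
    select[overlap_end:] = 1             # right of overlap: camera 12 only
    ov = slice(overlap_start, overlap_end)
    select[ov] = (acc_b[ov] < acc_a[ov]).astype(np.int8)
    return select
```

Because the accuracy curves come from the fixed camera geometry, this map is computed once during calibration and reused for every frame.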

[0021] Then, when an object is actually imaged by both image pickup means 11, 12 and, in a certain frame, the images shown for example in FIGS. 4(a) and 4(b) are obtained by the image pickup means 11, 12 respectively, the optical flow calculation unit 17 derives the optical flows of both image pickup means 11, 12 on the basis of the per-pixel density data of that frame, obtained by processing the image pickup signals from the two image pickup means 11, 12, and the per-pixel density data of the next frame. In the pixel range where the two derived optical flows overlap, the optical flow from the image pickup means specified in advance as described above is selectively extracted, while in the non-overlapping pixel ranges the derived optical flow is extracted as it is, and the extracted optical flow is output to a subsequent-stage circuit not shown in FIG. 2.
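A hedged sketch of this run-time selection follows, assuming the per-pixel flows of both image pickup means have been aligned on the same common pixel axis as the selection map above, with NaN where a camera does not cover a pixel; the fallback to the other camera at uncovered pixels is an assumption made for robustness, not a step stated in the patent.

```python
import numpy as np

def merge_optical_flows(flow_a, flow_b, select):
    """Combine per-pixel flows of cameras 11 (flow_a) and 12 (flow_b).

    `select` is the map from build_selection_map(); all three arrays are
    assumed to be aligned on the common pixel axis.
    """
    merged = np.where(select == 0, flow_a, flow_b)
    # If the preferred camera has no value at a pixel (NaN), fall back
    # to the other camera rather than leaving a hole.
    fallback = np.where(select == 0, flow_b, flow_a)
    return np.where(np.isnan(merged), fallback, merged)
```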

[0022] Accordingly, the accuracy of the optical flow in the vicinity of the optical centers of the two image pickup means 11, 12 can be greatly improved over the prior art, and the accuracy of the optical flow derived by calculation can thus be improved.

[0023] In addition, compared with the case of a single image pickup means, the effective field of view can be expanded, so that a moving object darting out from the side, for example, can also be detected.

[0024]

[Effects of the Invention] As described above, according to the moving image processing method of the present invention, first and second image pickup means whose fields of view are shifted in the horizontal direction and overlap only in part are provided; accuracy curves of the optical flow are derived from the image pickup signals of the two image pickup means and superposed; the image pickup means that gives the lower-valued accuracy curve in the pixel range where the curves overlap is specified in advance; and when an object is actually imaged by both image pickup means, the optical flow from the pre-specified image pickup means is selectively extracted in the pixel range where the optical flows overlap. Therefore, the accuracy of the optical flow derived by calculation can be improved, and the method is well suited to systems for improving safety while an automobile is traveling.

[Brief Description of the Drawings]

[FIG. 1] An explanatory view of the operation of an embodiment of the moving image processing method of the present invention.

[FIG. 2] A block diagram of an apparatus to which the present invention is applied.

[FIG. 3] An explanatory view of the arrangement of a part of the apparatus of FIG. 2.

[FIG. 4] An explanatory view of the operation of the apparatus of FIG. 2.

[FIG. 5] A block diagram of an apparatus used in a conventional moving image processing method.

[FIG. 6] An explanatory view of the arrangement of a part of FIG. 5.

[FIG. 7] An explanatory view of the operation of FIG. 6.

[FIG. 8] An explanatory view of the operation of FIG. 6.

[Explanation of Symbols]

11, 12: First and second image pickup means; 17: Optical flow calculation unit

Claims (1)

[Claims]

[Claim 1] A moving image processing method in which an image pickup signal output by imaging an object with an image pickup means is analog-to-digital converted by an analog/digital converter, and an optical flow calculation unit derives the optical flow of the image of the object on the basis of the per-pixel density data of a certain frame obtained by A/D conversion and the per-pixel density data of the next frame, wherein: first and second image pickup means whose fields of view are shifted in the horizontal direction and overlap only in part are provided as said image pickup means; an accuracy curve of the optical flow is derived from the image pickup signal of each of the two image pickup means; the derived accuracy curves are superposed in accordance with the overlap of the fields of view of the two image pickup means; the image pickup means that gives the lower-valued of the two accuracy curves in the pixel range where the accuracy curves overlap is specified in advance; an optical flow is derived from each of the image pickup signals output by actually imaging an object with both image pickup means; and in the pixel range where the two derived optical flows overlap, the optical flow from the pre-specified image pickup means is selectively extracted.
JP35692092A 1992-12-22 1992-12-22 Moving image processing method Pending JPH06195465A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP35692092A JPH06195465A (en) 1992-12-22 1992-12-22 Moving image processing method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
JP35692092A JPH06195465A (en) 1992-12-22 1992-12-22 Moving image processing method

Publications (1)

Publication Number Publication Date
JPH06195465A true JPH06195465A (en) 1994-07-15

Family

ID=18451434

Family Applications (1)

Application Number Title Priority Date Filing Date
JP35692092A Pending JPH06195465A (en) 1992-12-22 1992-12-22 Moving image processing method

Country Status (1)

Country Link
JP (1) JPH06195465A (en)

Similar Documents

Publication Publication Date Title
US5734441A (en) Apparatus for detecting a movement vector or an image by detecting a change amount of an image density value
US5943090A (en) Method and arrangement for correcting picture steadiness errors in telecine scanning
JP2860702B2 (en) Motion vector detection device
US5296925A (en) Movement vector detection device
US4985765A (en) Method and apparatus for picture motion measurement whereby two pictures are correlated as a function of selective displacement
JPH0146002B2 (en)
US7421091B2 (en) Image-capturing apparatus
JPH06195465A (en) Moving image processing method
JPH04309078A (en) Jiggling detector for video data
JP2023005320A (en) Motion information imaging apparatus
JP3548213B2 (en) Multipoint ranging device and camera
JP3181201B2 (en) Moving object detection method
JPH0531995B2 (en)
JP3271387B2 (en) Motion amount detection device and motion amount detection method
JP2001174214A (en) Device and method for stereo positioning
JPH0412805B2 (en)
JP3126998B2 (en) Motion vector detection device
JPH04196775A (en) Still picture forming device
JPH04145777A (en) Motion vector detecting device
JP3119901B2 (en) Image processing vehicle detection device
JPH05108827A (en) Picture correlation arithmetic unit
JP2643135B2 (en) Registration correction processor
JP2971693B2 (en) Distance measuring device
JP2970974B2 (en) Road white line detection method
JPH05256610A (en) Distance detecting method by means of stereo picture processing