JPH05141919A - Corresponding point retrieving method for pickup image of right and left camera - Google Patents

Corresponding point retrieving method for pickup image of right and left camera

Info

Publication number
JPH05141919A
JPH05141919A (application JP3287538A, filed 1991; granted as JP3055721B2)
Authority
JP
Japan
Prior art keywords
time
image
point
poa1
interest
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
JP3287538A
Other languages
Japanese (ja)
Other versions
JP3055721B2 (en)
Inventor
Atsushi Sato (佐藤 淳)
Fumiaki Tomita (富田 文明)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
National Institute of Advanced Industrial Science and Technology AIST
Aisin Corp
Original Assignee
Agency of Industrial Science and Technology
Aisin Seiki Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Agency of Industrial Science and Technology, Aisin Seiki Co Ltd filed Critical Agency of Industrial Science and Technology
Priority to JP3287538A priority Critical patent/JP3055721B2/en
Publication of JPH05141919A publication Critical patent/JPH05141919A/en
Application granted granted Critical
Publication of JP3055721B2 publication Critical patent/JP3055721B2/en
Anticipated expiration legal-status Critical
Expired - Fee Related legal-status Critical Current

Landscapes

  • Length Measuring Devices By Optical Means (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

PURPOSE: To retrieve correctly the corresponding points between images picked up by the right and left cameras, by thinning the edges of objects in the right and left images into thin lines, taking one point of the left image as the point of interest, computing shifted positions on the right image, extracting their coordinates, computing three-dimensional positions, and computing a correlation value with the image density.

CONSTITUTION: When the edges of objects in the right and left images are thinned, corresponding points exist between the thin lines. The corresponding point of a point Poa1 on a thin line of the left image is one of the many intersections between the thin lines of the right image and the corresponding scanning line. By selecting the points Pob1-3 of the thin lines of the right image that lie on the vertical coordinate (i) corresponding to the vertical coordinate (i) of the point of interest Poa1, the number of candidate corresponding points is narrowed down. The three-dimensional position of the corresponding point is computed, and the correlation value C between the image densities of the right and left images is computed. The same processing is applied to the remaining candidate points, and the point on the right image corresponding to the point of interest Poa1 is determined. Because the corresponding point is determined by computing and comparing the correlation value between the right and left image densities, the reliability is high.

Description

Detailed Description of the Invention

【0001】[0001]

BACKGROUND OF THE INVENTION

1. Field of the Invention

The present invention relates to object-position monitoring in which a scene ahead is imaged by two or more cameras and processed stereoscopically to extract or recognize objects in the scene and to measure the distance and speed of what is recognized. It relates in particular to matching the same point of an object between the images captured by the left and right cameras, that is, to corresponding-point search.

【0002】[0002]

2. Description of the Related Art

In vehicles and ships, for example, there is a demand for techniques that automatically detect a vehicle, ship, or obstacle ahead. In response, Japanese Patent Laid-Open Nos. 61-44312 and 61-44313 disclose distance measuring devices that stereo-process one screen of image information captured by each of two television cameras mounted on a vehicle and calculate the distance to objects in the scene imaged by the cameras.

[0003] In the former, regions having the same brightness are grouped into blocks on the images taken by two cameras placed a predetermined distance apart. Each block of one image is then shifted in the direction that reduces the parallax and compared with the other image until the position where the blocks match is found, and the distance is computed from the amount of shift required for the match.

[0004] In the latter, regions having the same brightness are likewise grouped into blocks on the images taken by two cameras placed a predetermined distance apart. A feature quantity is computed for each block of one image and each block of the other, the feature quantities of blocks taken by the different cameras are compared, and the best-matching block is detected (corresponding-block detection). The distance is then computed from the positional difference between corresponding blocks.

[0005] Japanese Patent Laid-Open Nos. 2-29878 and 2-29879, filed by the present applicant, present techniques that detect the region on the imaging plane occupied by a specific object in the scene, particularly an object with a complicated, heavily textured outer surface (a texture region), and detect the distance to each part of the object's surface, that is, its three-dimensional shape.

[0006] Further, Japanese Patent Application No. 2-262269, proposed by the present inventors, discloses an image processing method in which the foreground is captured by left and right cameras on a vehicle and the road surface is extracted from the foreground images.

【0007】[0007]

Problems to Be Solved by the Invention

In distance measuring devices such as those disclosed in Japanese Patent Laid-Open Nos. 61-44312 and 61-44313, a very large area must be searched for correspondence when the parallax between the two cameras is large, so the probability of erroneous corresponding-block detection rises when many blocks with similar features exist.

[0008] Distance measuring devices such as those disclosed in Japanese Patent Laid-Open Nos. 2-29878 and 2-29879 offer many advantages for detecting objects of relatively complicated three-dimensional shape, but they have difficulty extracting objects that are three-dimensionally rather monotonous yet extensive and that rarely appear on screen as a single independent body, such as a table top, a floor, a road surface, or a water surface, that is, planes or quasi-planes (hereinafter, planar portions). This is natural when one considers that these devices were designed to extract a three-dimensional body accurately and to detect its surface irregularities all the more accurately.

[0009] In any object detection or tracking by stereoscopic vision, if the correspondence processing that searches for the same point of an object in the left-camera image and the right-camera image produces an erroneous match, object recognition based on the left/right stereoscopic processing is confused. For example, in recognizing a forward object from a vehicle, the position and speed of the object (its speed relative to the host vehicle) are calculated on the basis of that recognition, and these values then become erroneous.

[0010] The object of the present invention is to make the corresponding-point search between the images captured by the left and right cameras more accurate.

【0011】[0011]

Means for Solving the Problems

Left and right cameras image the scene in front of them to obtain video signals, which are digitally processed to search for the same point of the same object in the left and right images. Of the left and right images at time to, at time t1 (Δt later), and at time t2 (a further Δt later), at least the images at to and t1 are subjected to a thinning process that represents the edges of objects in the image as thin lines. Based on these images at to, t1, and t2: a point on a thin line in the left (right) image at time to is taken as the point of interest Poa1; the points Pob1, Pob2, Pob3 of the thin lines of the right (left) image at time to that lie on the vertical coordinate i corresponding to the vertical coordinate i of Poa1 are extracted as correspondence candidate points Pob1, Pob2, Pob3 of the right (left) image at time to; and the points on the thin lines within a predetermined region centered on the coordinates (j,i) of Poa1 in the left (right) image at time t1 are taken as correspondence candidate points P1a-d to P1a-e of the left (right) image at time t1. Then, for each one of the candidate points Pob1, Pob2, Pob3, the three-dimensional position of Poa1 at time to, determined by that candidate and the coordinates of Poa1, is computed, together with the three-dimensional position of Poa1 at time t1, determined by the coordinates of the intersection P1b between (a) the sequence of positions on the right (left) image at t1 that Poa1 would occupy if it had moved to each of P1a-d to P1a-e and (b) the thin lines of the right (left) image at t1. The three-dimensional position of Poa1 at time t2 is computed by extrapolating these two three-dimensional positions and is converted into left and right image coordinates, and the correlation value C of the image densities of the left and right images at time t2 at these coordinate positions is computed. Among the candidates Pob1, Pob2, Pob3, the candidate giving the maximum correlation value C is determined to be the point on the right (left) image at time to that corresponds to the point of interest Poa1 on the left (right) image at time to.
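To make the flow of the means just described concrete, the following is a minimal sketch of the candidate-selection loop in Python. It is not code from the patent: every helper it receives (candidates_to, track_to_t1, reconstruct_3d, extrapolate, project, density_correlation) is a hypothetical stand-in for one of the operations named above, injected as a parameter so the sketch stays self-contained.

```python
def match_point(poa1, imgs, candidates_to, track_to_t1,
                reconstruct_3d, extrapolate, project, density_correlation):
    """Choose, among the epipolar candidates Pob1..Pob3 at time to, the one
    whose extrapolated t2 position gives the highest left/right
    image-density correlation C (schematic sketch of the means above)."""
    best_c, best = float("-inf"), None
    for pob in candidates_to(imgs["to"]["right_thin"], poa1):
        p1a_p, p1b = track_to_t1(poa1, pob, imgs["t1"])  # intersection P1b at t1
        if p1b is None:
            continue                                     # candidate cannot be tracked
        x_to = reconstruct_3d(poa1, pob)                 # 3-D position at to
        x_t1 = reconstruct_3d(p1a_p, p1b)                # 3-D position at t1
        x_t2 = extrapolate(x_to, x_t1)                   # uniform linear motion
        pa, pb = project(x_t2)                           # (X2a,Y2a), (X2b,Y2b)
        c = density_correlation(imgs["t2"], pa, pb)      # correlation value C
        if c > best_c:
            best_c, best = c, pob
    return best                                          # corresponding point at to
```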

【0012】[0012]

Operation

By subjecting at least the left and right images at times to and t1 (of those at to, t1 = to + Δt, and t2 = t1 + Δt) to a thinning process that represents the edges of objects as thin lines, the thinned images at to and t1 represent the outlines of the objects in the captured scene, so corresponding points exist between the thin lines of the left and right images.

[0013] Because the left and right cameras are arranged substantially horizontally, the corresponding point of a point Poa1 on a thin line on a given scanning line (horizontal line) of the left-camera image lies at an intersection of a thin line of the right-camera image with the scanning line corresponding to that scanning line. There are usually several (many) such intersections. Since a point Poa1 of the left-camera image appears in the right camera to the left of its horizontal position j, extracting the points Pob1, Pob2, Pob3 of the thin lines of the right (left) image at time to that lie on the vertical coordinate i corresponding to the vertical coordinate i of the point of interest Poa1, and taking them as the correspondence candidate points Pob1, Pob2, Pob3 of the right (left) image at time to, narrows down the number of candidates.

[0014] Within the short time Δt, the range over which an object imaged by the camera can move is small, so that range (the predetermined region) can be fixed in advance. By taking each point on the thin lines within the predetermined region centered on the coordinates (j,i) of Poa1 in the left (right) image at time t1 as a correspondence candidate point P1a-d to P1a-e of the left (right) image at time t1, the point of interest Poa1 of the left (right) image at time to must be one of the candidates P1a-d to P1a-e. Each of the candidate points Pob1, Pob2, Pob3 of the right (left) image at time to is then assumed in turn to be the corresponding point of Poa1. For one candidate Pob1, for example: taking the point of interest on the left (right) image at to as Poa1, the corresponding point on the right (left) image at to as Pob1, and the corresponding points on the left (right) image at t1 as P1a-d to P1a-e, the sequence of points on the right (left) image at t1 corresponding to P1a-d to P1a-e is obtained, and the intersection P1b of this point sequence with the thin lines of the right (left) image at t1 is taken as the corresponding point P1b on the right (left) image at t1. The three-dimensional position of Poa1 at time to is obtained from Poa1 and Pob1, and its three-dimensional position at time t1 from the intersection P1b at t1 and the point on the left (right) image at t1 corresponding to P1b. Since the interval 2Δt is extremely short, the motion of Poa1 is regarded as substantially uniform linear motion, and extrapolation is applied to the three-dimensional positions at to and t1 to compute the three-dimensional position of the corresponding point of Poa1 at time t2. The left/right correlation value C of the image densities at the positions on the left and right images at time t2 corresponding to this computed three-dimensional position is then calculated.

[0015] The same processing is performed for the remaining candidate points in the same way. The candidate point (one of Pob1, Pob2, Pob3) that yields the maximum of the correlation values obtained in this way is determined to be the corresponding point, on the right (left) image at time to, of the point of interest Poa1 of the left (right) image at time to.

[0016] Within the short time Δt the range over which an object imaged by the camera can move is small, so by fixing the predetermined region in advance so that it always contains that range, the corresponding point of Poa1 after Δt (at time t1) is always extracted among the candidates P1a-d to P1a-e; that is, the extraction of the candidates P1a-d to P1a-e at time t1 is reliable. Since the corresponding point sequence at t1 is computed from these candidates (P1a-d to P1a-e) for each of the candidates Pob1, Pob2, Pob3 of the right (left) image at time to, the corresponding point P1b of the right (left) image at t1 (one per Pob1, Pob2, Pob3), formed as the intersection of that point sequence with the thin lines of the right (left) image at t1, is also found accurately. Because the motion of the object can be regarded as uniform linear motion, the three-dimensional position of the corresponding point at time t2 (one per Pob1, Pob2, Pob3) is obtained by extrapolation from the three-dimensional position of the point of interest Poa1 at to and the three-dimensional position of the corresponding point P1b at t1 (one per Pob1, Pob2, Pob3), so the estimation accuracy of the corresponding point at t2 is high. In addition, the densities of the same point of an object on the left and right images are substantially the same; exploiting this, the correlation value C of the image densities on the left and right images at time t2 is computed for each corresponding point at t2 (one per Pob1, Pob2, Pob3), and the candidate (one of Pob1, Pob2, Pob3) that yields the maximum correlation value is determined to be the corresponding point, on the right (left) image at time to, of the point of interest Poa1 of the left (right) image at to, so the estimation of the corresponding point is accurate.

[0017] Thus, by combining the extrapolation-based estimation of the corresponding points (X2a,Y2a), (X2b,Y2b), which exploits the change of corresponding-point position caused by the object's motion, with the computation and comparison of the left/right image-density correlation values, which exploits the self-evident fact that the same part of the same object has the same brightness, one point is determined as the corresponding point from the plural candidates Pob1, Pob2, Pob3, so the recognition of objects that move relatively fast is highly reliable.

[0018] Other objects and features of the present invention will become apparent from the following description of embodiments with reference to the drawings.

【0019】[0019]

Embodiments

FIG. 2 shows the configuration of an image processing apparatus embodying the present invention in one mode. The apparatus consists of a computer 1, imaging cameras 2a (left) and 2b (right), an image memory 4, a display 5, a printer 6, a floppy disk 7, a keyboard terminal 8, and so on, and is mounted on a passenger car. The left and right cameras 2a and 2b are ITV cameras (CCD cameras) of identical specifications, each imaging the forward scene and outputting an analog image signal with a resolution of 512x512 pixels; A/D converters 22a and 22b convert these image signals into digital data (image data) of 256 gray levels per pixel. The cameras 2a and 2b are mounted at a height h above the plane (road surface) and, as a stereo pair pointed obliquely downward (at a downward angle), image the forward scene. As shown in FIG. 3, the imaging plane FLa of the left camera 2a is called the left image and the imaging plane FLb of the right camera 2b the right image.
[0020] The image memory 4 shown in FIG. 2 is readable and writable and stores various processing data, beginning with the original image data of the left and right images. The display unit 5 and the printer 6 output the processing results of the computer 1, and the floppy disk 7 records those results. The keyboard terminal 8 is operated by the operator to enter various instructions. A host computer is further connected to the computer 1, which controls each unit and performs the corresponding processing according to instructions given from the host or from the keyboard terminal 8. Of this processing, the "forward object speed monitoring" that recognizes an object in the forward scene and detects its position (distance from the cameras) and speed (relative to the cameras) is described below.

[0021] In object recognition by this kind of image processing apparatus, suppose two points (objects or parts of objects) exist ahead and, as shown in FIG. 4, are at A1 and B1 at time to, at A2 and B2 at time t1 (Δt later), and at A3 and B3 at time t2 (a further Δt later). If the object recognition mismatches them and, as shown in FIG. 4, recognizes the two points as a single point (corresponding point) tracked as PC1, PC2, PC3, then the recognition points PC1, PC2, PC3 do not follow a uniform linear motion (with some exceptions). Accordingly, if uniform linear motion is assumed and the position PC3' at time t2 is obtained by extrapolation from PC1 at to and PC2 at t1, it differs from the actually recognized PC3; moreover, the brightness of PC3 on the left image (PA3) differs from its brightness on the right image (PB3), and the correlation value of these brightnesses is low. The positional deviation ΔX can be expressed by the following equation (1).

【0022】[0022]

【数1】 [Equation 1]

[0023] The parameters in equation (1) are as follows.

【0024】[0024]

【数2】 [Equation 2]

【0025】[0025]

【数3】 [Equation 3]

【0026】[0026]

【数4】 [Equation 4]

【0027】[0027]

【数5】 [Equation 5]

【0028】[0028]

【数6】 [Equation 6]

【0029】[0029]

【数7】 [Equation 7]

[0030] The "forward object speed monitoring" of this embodiment actively exploits uniform linear motion and the brightness correlation value to find corresponding points quickly and accurately. FIG. 5 outlines the "forward object speed monitoring" routine that the computer 1, in response to an instruction given from the host computer or the keyboard terminal 8, repeats at a substantially fixed period until that instruction is canceled; FIGS. 6 to 11 show the processing contents of the main processing items within it.

[0031] Referring first to FIG. 5, at the head of one cycle of the "forward object speed monitoring" processing, the computer 1 writes the image data of the left and right images of the cameras 2a and 2b into the image memory 4 (subroutine 1). Hereinafter the words "subroutine" and "step" are omitted in parentheses and only their numbers are given. Next, differential processing of the image data is performed (2). Its content is shown in FIG. 6.

[0032] In the "differential processing" (2), a forward raster scan is first set for the left image. The direction of the raster scan is the horizontal direction of the imaged scene (the left-right direction seen looking forward from the vehicle). The forward raster scan visits every pixel of the imaging area FLa (FLb for the right image) shown in FIG. 3 along a path from the upper-left pixel to the lower-right pixel, with the ua (ub) axis parallel to Xa (Xb) as the main scanning axis and the va (vb) axis parallel to Ya (Yb) as the sub-scanning axis. That is, scanning in the u direction (main scanning direction) and the v direction (sub-scanning direction) proceeds from the upper-left pixel (image origin 0,0) to the lower-right pixel (512,512). Differential data are generated during this forward raster scan.

[0033] The differential data are computed from the original image data of the pixel of interest and its eight neighbors (the 3x3 image-data matrix centered on the pixel of interest). Let p0 be the image density of the pixel of interest, p1 that of the pixel to its right, p2 that of the pixel above p1, p3 that of the pixel above the pixel of interest, p4 that of the pixel to the left of p3, p5 that of the pixel to the left of the pixel of interest, p6 that of the pixel below p5, p7 that of the pixel below the pixel of interest, and p8 that of the pixel to the right of p7. The differential datum is then the sum of the main-scanning-direction differential (p1+p2+p8)-(p4+p5+p6) and the sub-scanning-direction differential (p2+p3+p4)-(p6+p7+p8), and indicates the spatial variation of the original image data. The computer 1 writes these data into the image memory 4 in association with each pixel.
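As an illustration only, the 3x3 operator of this paragraph can be written as follows in Python with NumPy; the treatment of the one-pixel border is an assumption, since the patent does not specify it.

```python
import numpy as np

def differentiate(img: np.ndarray) -> np.ndarray:
    """Sum of the main- and sub-scanning-direction differentials over the
    3x3 neighborhood of paragraph [0033]; img is a 512x512 gray image."""
    p = img.astype(np.int32)
    g = np.zeros_like(p)
    # neighbors of the interior pixels (rows grow downward, columns rightward)
    p1 = p[1:-1, 2:]   # right
    p2 = p[:-2, 2:]    # above p1
    p3 = p[:-2, 1:-1]  # above
    p4 = p[:-2, :-2]   # left of p3
    p5 = p[1:-1, :-2]  # left
    p6 = p[2:, :-2]    # below p5
    p7 = p[2:, 1:-1]   # below
    p8 = p[2:, 2:]     # right of p7
    main_scan = (p1 + p2 + p8) - (p4 + p5 + p6)  # horizontal differential
    sub_scan = (p2 + p3 + p4) - (p6 + p7 + p8)   # vertical differential
    g[1:-1, 1:-1] = main_scan + sub_scan         # border left at 0 (assumption)
    return g
```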

[0034] Next, in the "threshold processing" (3) shown in FIG. 7, the computer 1 compares the differential data with a threshold TH1, leaving differential data larger than TH1 unchanged, pixel by pixel, and replacing differential data smaller than TH1 with 0. As a result of this binarization, "edge regions" where the original image data change strongly are detected and retain their large differential values, while in the other regions the differential data become 0 (no change in brightness: continuous regions).

[0035] Next, in the "thinning" (4) shown in FIG. 8, the computer 1 converts the differential data into a binary image in which the points where the differential data switch from large to small in the vertical direction are set to "1" and all other points to "0", "1" representing a boundary (a boundary-line pixel). This yields a thin-line image in which the boundaries of the objects in the image are represented by thin lines.
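A minimal sketch of steps (3) and (4), continuing the NumPy convention above. The value of TH1 and the exact meaning of the "large-to-small switching point" are assumptions; the latter is read here as a row where the differential drops going one row downward.

```python
def threshold(g: np.ndarray, th1: int = 40) -> np.ndarray:
    """Step (3): keep differentials larger than TH1, zero the rest.
    th1=40 is a placeholder; the patent does not give a value."""
    h = g.copy()
    h[h < th1] = 0
    return h

def thin(h: np.ndarray) -> np.ndarray:
    """Step (4): mark the vertical large-to-small switching points as '1'."""
    p = np.zeros_like(h, dtype=np.uint8)
    # boundary pixel: nonzero differential that decreases on the next row down
    drop = (h[:-1, :] > 0) & (h[1:, :] < h[:-1, :])
    p[:-1, :][drop] = 1
    return p
```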

[0036] FIGS. 12 to 14 show an example of the images captured by the left and right cameras 2a and 2b at times to, t1, and t2 (the original images obtained by image input 1). Image (a) in each figure was captured by the left camera 2a and (b) by the right camera 2b. Through the differential processing (2) to thinning (4) described above, the positions in the original images (FIGS. 12 to 14) where the brightness changes abruptly, for example the boundary between the road surface and a white line on it, between the road surface and the foreground, between the road surface and a vehicle, between the foreground and a vehicle, between a vehicle's shadow and the exposed road surface, and even within a single vehicle between body and window, yield a thin-line image in which essentially the edges of the objects become thin lines, and this image is stored in memory.

[0037] The computer 1 executes the image input (1), differential processing (2), threshold processing (3), and thinning (4) described above at the period Δt for each of the images of the left and right cameras 2a and 2b. When executing this image-reading processing, the data of the memory area storing the thinned image of time t1 are moved to the memory area for the thinned image of time to, the data of the memory area storing the thinned image of time t2 are moved to the memory area for the thinned image of time t1, and the image input (1) through thinning (4) above are then executed, the resulting thin-line images being written, separately for the left and right images, into the memory area for the thin-line image of time t2. Consequently, from the point at which 2Δt has elapsed since the first image input (1), these memory areas always hold the most recently read thin-line image (time t2), the one read Δt earlier (time t1), and the one read 2Δt earlier (time to). The latest original image data (time t2) are held in the original-image data memory area until the next image input (1) is executed.
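The rotation of the three thin-line buffers can be pictured with a small sketch; the deque here is only an illustration, since the patent describes explicit moves between memory areas.

```python
from collections import deque

# one deque per camera, holding the thin-line images of times to, t1, t2:
# buf[0] = to, buf[1] = t1, buf[2] = t2
thin_buf = {"left": deque(maxlen=3), "right": deque(maxlen=3)}

def push_frame(side: str, thin_image) -> None:
    """Appending the newest (t2) image implicitly shifts t1 -> to and
    t2 -> t1, discarding the old to image."""
    thin_buf[side].append(thin_image)

def frames_ready(side: str) -> bool:
    # processing starts once images of all three times exist (2*dt elapsed)
    return len(thin_buf[side]) == 3
```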

[0038] From the point at which the thin-line images of the three times to, t1, and t2 are complete, the computer 1 executes the "time-series binocular stereoscopic vision" (6) each time the thinning (4) finishes. Its content is shown in FIG. 9; the content of "processing 1" (68) within it is shown in FIG. 10, and the content of "processing 2" within it in FIG. 11. FIG. 1 shows the thin-line images processed by the "time-series binocular stereoscopic vision" (6) in simplified form for ease of understanding. In the "time-series binocular stereoscopic vision" (6), the computer 1 first scans the left image at time to (a thin-line image; in the description of the time-series binocular stereoscopic vision 6, thin-line images are hereinafter simply called images) and searches the thin-line portions for black ("1") information (61 to 64). When one piece of black information is found at coordinates (j,i), it becomes the point of interest Poa1. The black information Pob1, Pob2, Pob3 of the right image at time to that lies on the same scanning line as the horizontal scanning line Yi, to the left of the horizontal position i of the black information Poa1 of the left image (the region where i is smaller), constitutes the correspondence candidate points Pob1, Pob2, Pob3 of the point of interest Poa1 of the left image.
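In code, extracting these candidates might look as follows (a sketch; right_thin is assumed to be a binary NumPy array whose rows are scanning lines):

```python
def candidates_on_scanline(right_thin: np.ndarray, j: int, i: int):
    """Black ('1') pixels of the right thin-line image at time to that lie
    on the same scanning line j as Poa1=(j,i), to the left of column i
    (the region where i is smaller): Pob1, Pob2, Pob3, ..."""
    cols = np.nonzero(right_thin[j, :i])[0]
    return [(j, int(c)) for c in cols]
```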

[0039] (1) When the first correspondence candidate point Pob1 is found on the right image at time to (67), "processing 1" (68) is executed. First, a window region Win of (j±20, i±20) centered on the coordinates (j,i) of the point of interest Poa1 is set in the left image at time t1, and the black information P1a-d to P1a-e inside it is provisionally taken as the correspondence candidate point group at time t1 (681 to 685).
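A corresponding sketch of this window step, with the same array conventions; the clamping at the image border is an assumption.

```python
def window_candidates(left_thin_t1: np.ndarray, j: int, i: int, half: int = 20):
    """Black pixels inside the (j±20, i±20) window Win of the left image
    at t1: the provisional candidate group P1a-d ... P1a-e."""
    rows_max, cols_max = left_thin_t1.shape
    r0, r1 = max(j - half, 0), min(j + half + 1, rows_max)
    c0, c1 = max(i - half, 0), min(i + half + 1, cols_max)
    rows, cols = np.nonzero(left_thin_t1[r0:r1, c0:c1])
    return [(int(r) + r0, int(c) + c0) for r, c in zip(rows, cols)]
```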

[0040] (2) For the first point P1a-e of the candidate point group at time t1, assuming Poa1 (left image at to), Pob1 (right image at to), and P1a-e (left image at t1) to be corresponding points, the corresponding point of this assumption on the right image at time t1 is obtained (686). Here the horizontal position (X coordinate) X1b* of the sought corresponding point is computed by the following equation (8); its vertical position (Y coordinate) is taken to be the same as that of P1a-e.

【0041】[0041]

【数8】 [Equation 8]

[0042] The meanings of the various symbols in the equations and drawings are as follows.
X1b*: X coordinate of the predicted corresponding point (the point corresponding to P1a-d to P1a-e) in the right image at time t1
(Xca,Yca): center of the left image
(Xcb,Ycb): center of the right image
Sxa,Sya: scale factors of the left image
Sxb,Syb: scale factors of the right image
θ: downward angle of the cameras
Y1: Y coordinate of the corresponding point P1b in the right image at time t1
(X2a,Y2a): corresponding point on the left image at time t2
(X2b,Y2b): corresponding point on the right image at time t2
F(Y,X): original image data
G(Y,X): differential data
H(Y,X): differential data after threshold processing
P(Y,X): thin-line data ("1": thin-line portion / "0": not a thin-line portion)

[0043] Next, it is checked whether the right image at time t1 has black information at the obtained corresponding point (X1b*) (689). If there is no black information in the right image at t1 at the position corresponding to the first candidate P1a-e, the corresponding point on the right image at t1 is obtained in the same way (686) for the second point of the candidate group P1a-d to P1a-e of the window region Win at t1, and the presence of black information there is checked. In this way the corresponding points of the candidate group P1a-d to P1a-e on the right image at t1 are computed one after another and checked for black information, until black information is found. That is, the intersection P1b between the thin line traced by the sequence of corresponding points of the candidates P1a-d to P1a-e on the right image at t1 (shown dotted in FIG. 1) and a thin line actually present on the right image at t1 (shown solid in FIG. 1) is searched for and taken as the corresponding point (685 to 689). The point of the candidate group P1a-d to P1a-e on the left image at t1 from which this intersection P1b was computed (call it P1a-p) is recorded.
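Schematically, this search for the intersection P1b can be written as below. Equation (8) is reproduced in the source only as an image, so the predictor is passed in as a stub (predict_x1b) rather than implemented.

```python
def find_intersection(right_thin_t1: np.ndarray, cand_t1, predict_x1b):
    """Walk the t1 candidates P1a-d..P1a-e, predict each one's
    corresponding column X1b* on the right image at t1 (equation (8),
    injected here as a stub), and return the first predicted position
    that lands on black information: the intersection P1b, plus the
    originating left-image candidate P1a-p."""
    for (y1, x1a) in cand_t1:
        x = int(round(predict_x1b(y1, x1a)))  # X1b*; Y coordinate = that of P1a-e
        if 0 <= x < right_thin_t1.shape[1] and right_thin_t1[y1, x] == 1:
            return (y1, x), (y1, x1a)         # P1b and P1a-p
    return None, None                         # no intersection for this candidate
```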

[0044] (3) At this stage, the point of interest Poa1 on the left image and its corresponding point Pob1 on the right image at time to, and the point of interest P1a-p on the left image and its corresponding point P1b on the right image at time t1, have been determined. From the three-dimensional position of this point at to (Poa1 = Pob1) and its three-dimensional position at t1 (P1a-p = P1b), the three-dimensional position at time t2 is obtained by extrapolation, assuming that it lies on the straight line joining the two positions, beyond the t1 position by the distance between the to and t1 three-dimensional positions; the position (X2a,Y2a) of this t2 three-dimensional position on the left image at t2 and its position (X2b,Y2b) on the right image are then computed (6901, 6902). The principal formulas are given below. The corresponding points Poa1, Pob1 of the left and right images at to have thus been followed to the corresponding points P1a-p, P1b on the left and right images at t1 and the estimated corresponding points (X2a,Y2a), (X2b,Y2b) on the left and right images at t2.

【0045】[0045]

【数9】 [Equation 9]

【0046】[0046]

【数10】 [Equation 10]
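Stripped of the image-coordinate conversions of equations (9) and (10), which are reproduced only as images, the extrapolation itself reduces to one line per axis:

```python
def extrapolate_t2(pos_to, pos_t1):
    """Uniform-linear-motion extrapolation of paragraph [0044]: the t2
    position lies on the line through the to and t1 positions, beyond t1
    by the to-t1 distance."""
    return tuple(2 * b - a for a, b in zip(pos_to, pos_t1))

# e.g. extrapolate_t2((1.0, 0.5, 10.0), (1.1, 0.5, 9.0)) -> (1.2, 0.5, 8.0)
```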

[0047] (4) To further secure the accuracy of this estimation, the computer 1 reads, from the left and right original image data at time t2, the 3x3-pixel image data centered on the corresponding points (X2a,Y2a) and (X2b,Y2b) respectively, computes the sums of the 3x3-pixel image densities by the following equation (11), computes the difference between the sum for the left image and the sum for the right image, and takes the reciprocal of the obtained difference as the correlation value C (6903). That is, it computes the correlation value C of the brightness of the corresponding points (X2a,Y2a) and (X2b,Y2b).

【0048】[0048]

【数11】 [Equation 11]
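A sketch of the computation around equation (11) as described in the text; the absolute value and the eps guard against a zero difference are assumptions, since the patent only says "the reciprocal of the difference".

```python
def correlation(left_t2: np.ndarray, right_t2: np.ndarray,
                pa, pb, eps: float = 1e-6) -> float:
    """Correlation value C: reciprocal of the difference between the 3x3
    density sums around (X2a,Y2a) and (X2b,Y2b); points assumed interior."""
    ya, xa = pa
    yb, xb = pb
    sa = int(left_t2[ya - 1:ya + 2, xa - 1:xa + 2].sum())
    sb = int(right_t2[yb - 1:yb + 2, xb - 1:xb + 2].sum())
    return 1.0 / (abs(sa - sb) + eps)
```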

[0049] The above yields, for the first correspondence candidate point Pob1 on the right image at time to, a correlation value C indicating the degree of possibility that it is the corresponding point.

[0050] For the black information Pob2 next to the black information Pob1 whose correlation value C has been computed, on the same scanning line of the right image at time to as the horizontal scanning line Yi of the point of interest Poa1 and to the left of the horizontal position i of Poa1 (the region where i is smaller), the computer 1 likewise executes the correlation-value computation of (1) to (4) above, compares the correlation value C of the black information Pob2 with the previously computed correlation value of the black information Pob1, stores and holds the larger correlation value, stores the candidate point that yielded the larger value (one of Pob1 and Pob2) as the corresponding point at time to, and stores the corresponding points (X2a,Y2a), (X2b,Y2b) that yielded the larger value as the corresponding points at time t2 (6903 to 6906).

[0051] The computer 1 then executes the above processing in the same way for the remaining black information Pob3 on the same scanning line of the right image at time to as the horizontal scanning line Yi of the point of interest Poa1, to the left of the horizontal position i of Poa1 (the region where i is smaller) (6903 to 6906).

[0052] When the processing above, including the computation of the correlation value C, has finished for all the correspondence candidate points, that is, the black information Pob1, Pob2, Pob3, on the same scanning line of the right image at time to as the horizontal scanning line Yi of the point of interest Poa1 and to the left of its horizontal position i (the region where i is smaller), the computer 1 holds the maximum of the computed correlation values C, the correspondence candidate point on the right image at to that produced it, and the corresponding candidate points at time t2. The corresponding point on the right image at to of the point Poa1 on a thin line of the left image at to, and the corresponding points on the left and right images at t2, have thus been determined.

[0053] The computer 1 executes such a corresponding-point search between the left and right thin-line images at time to in the same way for every point constituting the thin lines in the image (61 to 71). When the corresponding-point search of the left and right thin-line images at to is complete, the three-dimensional position of each corresponding point can be computed, and line drawings viewed from various directions can be drawn on the basis of the three-dimensional position data. For example, plotting the points on the X,Z plane on the basis of the three-dimensional position data gives the diagram shown in FIG. 15(a), a plan view of the road ahead of the vehicle carrying the cameras 2a and 2b, showing the presence of an object (a vehicle) on the road. Plotting on the X,Y plane gives the diagram of FIG. 15(b), showing the presence of objects to the left, right, above, and below ahead of the vehicle carrying the cameras 2a and 2b. Plotting on a plane oblique to the X, Y, and Z axes gives the diagram of FIG. 15(c), a perspective view of the road. Plotting on the Y,Z plane gives the diagram of FIG. 15(d), a side view of the road. The road and the objects on it can thus be recognized, and represented, three-dimensionally.

[0054] When the corresponding-point search for one screen is finished, the computer 1 in this embodiment executes "distance calculation" (72). Here, for each point for which a corresponding point was found, the three-dimensional position Xo, Yo, Zo is computed from the left and right corresponding points at time to by the following equation (12).

【0055】[0055]

【数12】 [Equation 12]
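Equation (12) is reproduced only as an image, so the following is merely a stand-in: standard parallel-axis stereo triangulation, ignoring the downward angle θ that the real formula accounts for. The parameter names (baseline, focal_px) are assumptions.

```python
def triangulate(xa: float, xb: float, y: float,
                baseline: float, focal_px: float):
    """Stand-in for equation (12). xa, xb, y are pixel coordinates of the
    corresponding points relative to the image centers; focal_px is the
    focal length in pixels; baseline is the camera spacing."""
    disparity = xa - xb                    # left image sees the point further right
    zo = baseline * focal_px / disparity   # forward distance Zo
    xo = xa * zo / focal_px                # left-right position Xo
    yo = y * zo / focal_px                 # vertical position Yo (theta ignored)
    return xo, yo, zo
```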

[0056] Xo is the left-right position seen from the cameras, Yo the vertical position, and Zo the forward distance. When the "distance calculation" (72) is finished, the computer 1 executes "velocity vector calculation" (73). Here, as shown in the following equation (13), for each point whose corresponding points were found, the difference between the position at time to and the position at time t2 is computed and divided by 2 (corresponding to 2Δt) to give the lateral velocities Vx and Vy (relative to the cameras).

【0057】[0057]

【数13】 [Equation 13]

[0058] By computing the Z component of the difference between the three-dimensional positions at to and t2, the relative velocity in Z (the forward direction) can also be computed. Taking the Z-direction velocity on the horizontal axis and the number of points having the same velocity on the vertical axis gives a velocity distribution such as that shown in FIG. 16. FIG. 16 indicates the presence of a vehicle moving at about the same speed as the vehicle (host vehicle) carrying the cameras 2a and 2b (the center of the horizontal axis: velocity 0) and of a vehicle receding at a speed higher than the host vehicle's (the minus side of the horizontal axis is the receding direction). Drawing, on a single X,Y plane, the straight lines joining the corresponding points at to and t2 gives the diagram shown in FIG. 17. These lines represent velocity vectors: visually, the direction of a line shows the direction of movement relative to the host vehicle and its length the speed of movement relative to the host vehicle.

[0059] The computer 1 executes the one-screen processing described above (time-series binocular stereoscopic vision 6) each time "image input" (1) is executed, that is, at the period Δt.

【0060】[0060]

Effects of the Invention

In a short time interval Δt the motion of an object can be regarded as uniform linear motion. In the present invention, the three-dimensional position of the corresponding point at time t2 (one per Pob1, Pob2, Pob3) is obtained by extrapolation from the three-dimensional position of the point of interest Poa1 at time to and the three-dimensional position of the corresponding point P1b at time t1 (one per Pob1, Pob2, Pob3), so the estimation accuracy of the corresponding point at t2 is high. In addition, the densities of the same point of an object on the left and right images are substantially the same; exploiting this, the correlation value C of the image densities on the left and right images at time t2 is computed for each corresponding point at t2 (one per Pob1, Pob2, Pob3), and the candidate (one of Pob1, Pob2, Pob3) that yields the maximum correlation value is determined to be the corresponding point, on the right (left) image at time to, of the point of interest Poa1 of the left (right) image at to, so the estimation of the corresponding point is accurate. Thus, by combining the extrapolation-based estimation of the corresponding points (X2a,Y2a), (X2b,Y2b), which exploits the change of corresponding-point position caused by the object's motion, with the computation and comparison of the left/right image-density correlation values, which exploits the self-evident fact that the same part of the same object has the same brightness, one point is determined as the corresponding point from the plural candidates Pob1, Pob2, Pob3, so the recognition of objects that move relatively fast is highly reliable.

Brief Description of the Drawings

【図1】 本発明の対応点検索を説明するために、撮
影画面を細線化した細線画像を単純化して示す平面図で
ある。
FIG. 1 is a plan view showing a simplified thin line image in which a shooting screen is thinned in order to explain a corresponding point search of the present invention.

【図2】 本発明を一態様で実施する画像処理装置の
構成概要を示すブロック図である。
FIG. 2 is a block diagram illustrating a schematic configuration of an image processing apparatus that implements the present invention in one aspect.

【図3】 図2に示す左,右カメラ2a,2bが撮映
した画像FLa,FLbとカメラ前方の平面上の点Pと
の光学的な距離関係を示す斜視図である。
3 is a perspective view showing an optical distance relationship between images FLa and FLb captured by the left and right cameras 2a and 2b shown in FIG. 2 and a point P on a plane in front of the cameras.

【図4】 図3に示す光学的な距離関係を、図3に示
すX,Z平面に投影した平面図である。
FIG. 4 is a plan view in which the optical distance relationship shown in FIG. 3 is projected on the X and Z planes shown in FIG.

【図5】 図2に示すコンピュ−タ1の、「前方物体
の速度監視」を行なう処理内容を示すフロ−チャ−トで
ある。
5 is a flowchart showing the processing contents of "speed monitoring of a front object" of the computer 1 shown in FIG.

【図6】 図5に示す「画像入力」(1)の処理内容
を示すフロ−チャ−トである。
FIG. 6 is a flowchart showing the processing contents of “image input” (1) shown in FIG.

【図7】 図5に示す「しきい値処理」(3)の処理
内容を示すフロ−チャ−トである。
FIG. 7 is a flowchart showing the processing contents of “threshold processing” (3) shown in FIG.

【図8】 図5に示す「細線化」(4)の処理内容を
示すフロ−チャ−トである。
FIG. 8 is a flowchart showing the processing contents of “thinning” (4) shown in FIG.

【図9】 図5に示す「時系列両眼立体視」(6)の
処理内容を示すフロ−チャ−トである。
9 is a flowchart showing the processing contents of "time-series binocular stereoscopic vision" (6) shown in FIG.

【図10】 図9に示す「処理1」(68)の処理内容
を示すフロ−チャ−トである。
FIG. 10 is a flowchart showing the processing contents of “processing 1” (68) shown in FIG.

【図11】 図10に示す「処理2」(690)の処理
内容を示すフロ−チャ−トである。
FIG. 11 is a flowchart showing the processing contents of “processing 2” (690) shown in FIG.

【図12】 図2に示す左,右カメラ2a,2bで撮映
した画像の一例を示す平面図であり、(a)が左カメラ
2aの撮映画像を、(b)が右カメラ2bの撮映画像を
示す。
FIG. 12 is a plan view showing an example of an image projected by the left and right cameras 2a and 2b shown in FIG. 2, where (a) is a projected image of the left camera 2a and (b) is a view of the right camera 2b. The projected image is shown.

【図13】 図2に示す左,右カメラ2a,2bで撮映
した画像の一例を示す平面図であり、図12に示す画像
よりΔt後の画像であり、(a)が左カメラ2aの撮映
画像を、(b)が右カメラ2bの撮映画像を示す。
FIG. 13 is a plan view showing an example of an image captured by the left and right cameras 2a and 2b shown in FIG. 2, which is an image after Δt from the image shown in FIG. 12, and (a) shows the left camera 2a. The projected image is shown in (b) of the right camera 2b.

【図14】 図2に示す左,右カメラ2a,2bで撮映
した画像の一例を示す平面図であり、図13に示す画像
よりΔt後の画像であり、(a)が左カメラ2aの撮映
画像を、(b)が右カメラ2bの撮映画像を示す。
14 is a plan view showing an example of an image captured by the left and right cameras 2a and 2b shown in FIG. 2, which is an image after Δt from the image shown in FIG. 13, and FIG. The projected image is shown in (b) of the right camera 2b.

FIG. 15 shows thin-line drawings of the objects in the images of FIGS. 12 to 14, obtained by thinning and by determining corresponding points through the corresponding point search: (a) is a horizontal-plane projection thin-line drawing, (b) a vertical-plane projection thin-line drawing, (c) a perspective thin-line drawing, and (d) a side thin-line drawing.

FIG. 16 is a graph showing the velocity distribution of forward objects, obtained from the images of FIGS. 12 to 14 by thinning and by determining corresponding points through the corresponding point search; the horizontal axis indicates velocity and the vertical axis indicates the accumulated count of points having the same velocity.

FIG. 17 is a line diagram showing the direction and amount of movement of forward objects over the time 2Δt, obtained from the images of FIGS. 12 to 14 by thinning and by determining corresponding points through the corresponding point search.

[Explanation of symbols]

Poa1: point of interest of the corresponding point search; a point on a thin line in the left image at time to
Pob1, Pob2, Pob3: corresponding candidate points in the right image at time to
P1a-d to P1a-e: group of corresponding candidate points in the left image at time t1
P1b: calculated corresponding point at time t1
(X2a, Y2a): calculated corresponding point in the left image at time t2
(X2b, Y2b): calculated corresponding point in the right image at time t2

Continuation of front page: (72) Inventor: Fumiaki Tomita, 1-1-4 Umezono, Tsukuba-shi, Ibaraki; Electrotechnical Laboratory, Agency of Industrial Science and Technology

Claims (2)

[Claims]

1. A corresponding point search method for images captured by left and right cameras, in which the left and right imaging cameras image the scene in front of them to obtain respective video signals and these video signals are digitally processed to search for the same point of the same object appearing in the left and right images, characterized in that:

among the left and right images at time to, at time t1 (Δt after time to) and at time t2 (a further Δt later), at least the left and right images at times to and t1 are subjected to a thinning process that represents the edges of objects in the image by thin lines, and, based on the left and right images at times to, t1 and t2,

a point on a thin line in the left (right) image at time to is defined as the point of interest Poa1, and the points Pob1, Pob2 and Pob3 of the thin lines in the right (left) image at time to lying on the vertical coordinate corresponding to the vertical coordinate i of the point of interest Poa1 are extracted as the corresponding candidate points Pob1, Pob2 and Pob3 in the right (left) image at time to;

(1) each point on the thin lines within a predetermined region centered on the coordinates (j, i) of the point of interest Poa1 in the left (right) image at time t1 is taken as a corresponding candidate point P1a-d to P1a-e in the left (right) image at time t1;

(2) assuming that the point of interest Poa1 has moved to each of the corresponding candidate points P1a-d to P1a-e in the left (right) image at time t1, the movement positions at time t1, in the right (left) image, of one point Pob1 of the corresponding candidate points Pob1, Pob2 and Pob3 in the right (left) image at time to are computed, and the coordinates (L, n) of the point P1b on a thin line in the right (left) image at time t1 that overlaps the locus of these movement positions are extracted;

(3) the three-dimensional position of the point of interest Poa1 at time t2 is calculated by extrapolation from the three-dimensional position of the point of interest Poa1 at time to, obtained by assuming that the point of interest Poa1 at time to and said one corresponding candidate point Pob1 are the same point, and from the three-dimensional position of the point of interest Poa1 at time t1, obtained by taking said overlapping point P1b in the right (left) image at time t1 as the corresponding point of the point of interest Poa1 at time t1;

(4) a correlation value C is calculated between the image density of a predetermined region in the left image at time t2, centered on the position (X2a, Y2a) in the left image at time t2 corresponding to the three-dimensional position of the point of interest Poa1 at time t2, and the image density of a predetermined region in the right image at time t2, centered on the position (X2b, Y2b) in the right image at time t2 corresponding to that three-dimensional position;

and, for each of the remaining corresponding candidate points Pob2 and Pob3 in the right (left) image at time to, the correlation value C is likewise calculated through steps (1) to (4) above, and the one of the corresponding candidate points Pob1, Pob2 and Pob3 for which the correlation value C is maximum is determined to be the corresponding point, in the right (left) image at time to, of the point of interest Poa1.
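Step (3) of claim 1 rests on two stereo triangulations (at times to and t1) followed by a linear extrapolation to t2, whose result is projected back into both images in step (4) to give (X2a, Y2a) and (X2b, Y2b). The sketch below illustrates that arithmetic for the parallel-optical-axis geometry suggested by FIGS. 3 and 4; the baseline and focal-length values, the coordinate conventions, and the function names are all assumptions made for illustration.

```python
import numpy as np

B = 0.5    # camera baseline in metres  (assumed value)
F = 700.0  # focal length in pixels     (assumed value)

def triangulate(xl, xr, y):
    """3D position of a matched point from its left/right image columns xl,
    xr and common row y, with the origin at the left camera and parallel
    optical axes (the FIG. 3/4 style geometry, parameters assumed)."""
    z = F * B / (xl - xr)          # depth from disparity d = xl - xr
    return np.array([xl * z / F, y * z / F, z])

def predict_t2(p_t0, p_t1):
    """Extrapolate the 3D position one more interval ahead (constant
    velocity over the two Delta-t steps) and project it back into the
    left and right images, yielding (X2a, Y2a) and (X2b, Y2b)."""
    X, Y, Z = 2.0 * p_t1 - p_t0    # linear extrapolation to time t2
    return (F * X / Z, F * Y / Z), (F * (X - B) / Z, F * Y / Z)
```

For example, p_t0 = triangulate(xl0, xr0, i) would use the pairing of Poa1 with one candidate Pob*, p_t1 would use the overlap point P1b, and predict_t2(p_t0, p_t1) then gives the two positions whose surrounding densities are correlated in step (4).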
2. A corresponding point search method for images captured by left and right cameras, in which the left and right imaging cameras image the scene in front of them to obtain respective video signals and these video signals are digitally processed to search for the same point of the same object appearing in the left and right images, characterized in that:

among the left and right images at time to, at time t1 (Δt after time to) and at time t2 (a further Δt later), at least the left and right images at times to and t1 are subjected to a thinning process that represents the edges of objects in the image by thin lines; based on the left and right images at times to, t1 and t2, a point on a thin line in the left (right) image at time to is defined as the point of interest Poa1; the points Pob1, Pob2 and Pob3 of the thin lines in the right (left) image at time to lying on the vertical coordinate corresponding to the vertical coordinate i of the point of interest Poa1 are extracted as the corresponding candidate points Pob1, Pob2 and Pob3 in the right (left) image at time to; each point on the thin lines within a predetermined region centered on the coordinates (j, i) of the point of interest Poa1 in the left (right) image at time t1 is taken as a corresponding candidate point P1a-d to P1a-e in the left (right) image at time t1;

for each one of the corresponding candidate points Pob1, Pob2 and Pob3, the three-dimensional position of the point of interest Poa1 at time to, determined by that candidate point and the coordinates of the point of interest Poa1, and the three-dimensional position of the point of interest Poa1 at time t1, determined by the coordinates of the intersection P1b between the locus of positions in the right (left) image at time t1 obtained when the point of interest Poa1 is assumed to have moved to each of the corresponding candidate points P1a-d to P1a-e and the thin lines in the right (left) image at time t1, are calculated; the three-dimensional position of the point of interest Poa1 at time t2 is calculated by extrapolation of both three-dimensional positions and converted into coordinates on the left and right images; the correlation value C of the image densities of the left and right images at time t2 at these coordinate positions is calculated; and, among the corresponding candidate points Pob1, Pob2 and Pob3, the candidate point having the maximum correlation value C is determined to be the corresponding point, in the right (left) image at time to, of the point of interest Poa1 in the left (right) image at time to.
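Both claims start from the same two candidate sets taken off the thinned images: the points Pob1, Pob2, Pob3 on the row of Poa1 in the other camera's image at time to, and the points P1a-d to P1a-e inside a predetermined window around the coordinates (j, i) at time t1. A minimal sketch, assuming the thinned images are binary NumPy arrays indexed [row, column] and an illustrative window half-width:

```python
import numpy as np

def candidates_on_row(thin_other_t0, i):
    """Pob candidates: thin-line pixels of the other image at time to lying
    on row i, the vertical coordinate of the point of interest Poa1."""
    return [(int(j), i) for j in np.flatnonzero(thin_other_t0[i])]

def candidates_near(thin_t1, j, i, half=8):
    """P1a candidates: thin-line pixels at time t1 inside a window centred
    on Poa1's coordinates (j, i); the half-width is an assumed value."""
    r0, c0 = max(0, i - half), max(0, j - half)
    rows, cols = np.nonzero(thin_t1[r0:i + half + 1, c0:j + half + 1])
    return [(int(c + c0), int(r + r0)) for r, c in zip(rows, cols)]
```

Restricting the Pob candidates to a single row reflects the epipolar constraint of aligned left and right cameras, which is what keeps the number of candidate points small.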
JP3287538A 1991-11-01 1991-11-01 Method for searching corresponding points of images captured by left and right cameras Expired - Fee Related JP3055721B2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP3287538A JP3055721B2 (en) 1991-11-01 1991-11-01 Method for searching corresponding points of images captured by left and right cameras

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
JP3287538A JP3055721B2 (en) 1991-11-01 1991-11-01 Method for searching corresponding points of images captured by left and right cameras

Publications (2)

Publication Number Publication Date
JPH05141919A true JPH05141919A (en) 1993-06-08
JP3055721B2 JP3055721B2 (en) 2000-06-26

Family

ID=17718638

Family Applications (1)

Application Number Title Priority Date Filing Date
JP3287538A Expired - Fee Related JP3055721B2 (en) 1991-11-01 1991-11-01 Method for searching corresponding points of images captured by left and right cameras

Country Status (1)

Country Link
JP (1) JP3055721B2 (en)


Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2011027564A1 (en) * 2009-09-07 2011-03-10 パナソニック株式会社 Parallax calculation method and parallax calculation device
JP2011058812A (en) * 2009-09-07 2011-03-24 Panasonic Corp Method and device for parallax calculation
US8743183B2 (en) 2009-09-07 2014-06-03 Panasonic Corporation Parallax calculation method and parallax calculation device
US9338434B2 (en) 2009-09-07 2016-05-10 Panasonic Intellectual Property Management Co., Ltd. Parallax calculation method and parallax calculation device
JP2016176736A (en) * 2015-03-19 2016-10-06 トヨタ自動車株式会社 Image distance measuring device

Also Published As

Publication number Publication date
JP3055721B2 (en) 2000-06-26

Similar Documents

Publication Publication Date Title
EP1394761B1 (en) Obstacle detection device and method therefor
Smith et al. ASSET-2: Real-time motion segmentation and shape tracking
JP4963964B2 (en) Object detection device
JP3054681B2 (en) Image processing method
JP4002919B2 (en) Moving body height discrimination device
KR920001616B1 (en) Method and apparatus for detecting objects
JPH10143659A (en) Object detector
JP4872769B2 (en) Road surface discrimination device and road surface discrimination method
JPH11252587A (en) Object tracking device
JP2000517452A (en) Viewing method
JPH1166319A (en) Method and device for detecting traveling object, method and device for recognizing traveling object, and method and device for detecting person
JP3577875B2 (en) Moving object extraction device
JP2007280387A (en) Method and device for detecting object movement
JP3465531B2 (en) Object recognition method and apparatus
JP3055721B2 (en) Method for searching corresponding points of images captured by left and right cameras
JPH05157518A (en) Object recognizing apparatus
JP2536549B2 (en) Inter-vehicle distance detection method
JPH0991439A (en) Object monitor
JP3253328B2 (en) Distance video input processing method
JP4584405B2 (en) 3D object detection apparatus, 3D object detection method, and recording medium
JP2993610B2 (en) Image processing method
JP2993611B2 (en) Image processing method
JPH05141930A (en) Three-dimensional shape measuring device
JP4055785B2 (en) Moving object height detection method and apparatus, and object shape determination method and apparatus
JPH10283478A (en) Method for extracting feature and and device for recognizing object using the same method

Legal Events

Date Code Title Description
R250 Receipt of annual fees

Free format text: JAPANESE INTERMEDIATE CODE: R250

FPAY Renewal fee payment (event date is renewal date of database)

Free format text: PAYMENT UNTIL: 20090414

Year of fee payment: 9

FPAY Renewal fee payment (event date is renewal date of database)

Free format text: PAYMENT UNTIL: 20100414

Year of fee payment: 10

FPAY Renewal fee payment (event date is renewal date of database)

Free format text: PAYMENT UNTIL: 20110414

Year of fee payment: 11

LAPS Cancellation because of no payment of annual fees