JPS6129704A - Measuring method - Google Patents

Measuring method

Info

Publication number
JPS6129704A
JPS6129704A (application JP15184984A)
Authority
JP
Japan
Prior art keywords
image
irradiation points
irradiation point
signals
measured
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
JP15184984A
Other languages
Japanese (ja)
Inventor
Mitsuo Iso
三男 磯
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hitachi Zosen Corp
Original Assignee
Hitachi Zosen Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hitachi Zosen Corp filed Critical Hitachi Zosen Corp
Priority to JP15184984A priority Critical patent/JPS6129704A/en
Publication of JPS6129704A publication Critical patent/JPS6129704A/en
Pending legal-status Critical Current

Links

Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01BMEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B11/00Measuring arrangements characterised by the use of optical techniques
    • G01B11/24Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

PURPOSE:To measure the shape and position of a body to be measured precisely and in a short time by deriving difference signals between the image signals of each irradiation point obtained by a pair of image pickup means and the image signals of the preceding irradiation point. CONSTITUTION:A plurality of places in a specific direction on the surface of the object body 1, within the overlapping visual field of a pair of fixed image pickup means (image sensors 10a and 10b) which image the object body 1, are irradiated successively with slit light from a light source, and the irradiation point formed by each slit light is imaged by both image pickup means. An image processing means then derives and processes difference signals between the image signals of each irradiation point and the held image signals of the preceding irradiation point. Consequently, noise components in the image signals are removed and only the irradiation-point components are extracted, so the position of each irradiation point is measured precisely. This operation is repeated every time the overlapping visual field of the two image pickup means is shifted by a specific amount, thereby measuring the shape and position of the object body.

Description

【発明の詳細な説明】[Detailed description of the invention]

〔産業上の利用分野〕 この発明は、被計測体上の点を計測する計測方法に関す
る。 〔従来技術〕 従来、被計測体上の点を計測して被計測体の形状を計測
する手法として、センシングプローブを被計測体に接触
させて被計測体の形状等を計測する接触法や、2台のカ
メラによシ同一対象物を同時に撮像しその両画像の共通
点を求めて被計測体の形状を計測するステレオ写真法、
基準面におけるモアレ縞および被計測体表面におけるモ
アレ縞にもとづき被計測体の形状等を計測するモアレト
ポグラフィ法、および縦長のスリット光を被計測体に照
射して当該照射個所をテレビカメラにより撮像し、当該
画像の特徴的な点を求めて被計測体の形状等を計測する
光切断法などの非接触法があシ、これらの各手法が産業
用ロボットの物体認識技術として、あるいは各種検査装
置等における物体認識技術として広く応用されている。 ところが、前記した接触法では計測に長時間を要すると
いう欠点があり、非接触式の場合も、被計測体の形状を
認識しているだけで、被計測体の位置、すなわち任意の
座標系における座標を直接計測しているのではないため
被計測体の位置を求めるには、得られた画像から対象と
すべき点を求めたのち、求めた点の位置すなわち座標を
演算。 導出しなければならず、演算に時間がかかり、しかもこ
れらの手法を実現する計測装置は分解能が非常に低いだ
め、被計測体そのものが小さい場合。 あるいは被計測体表面に小さな凹凸がある場合には、精
度よく被計測体や表面の凹凸の形状を認識できず、信頼
性に欠けるという欠点があり、被計測体の形状計測する
には不十分である。 〔発明の目的〕 この発明は、前記の点に留意してなされたものであり、
被計測体の形状および位置を短時間で精度よく計測1毒
るようにすることを目的とする。 〔発明の構成〕 この発明は、被計測体を撮像する固定された1対の撮像
手段を備え、前記両撮像手段の重複視野内の前記被計測
体表面の所定方向の複数個所に光源からのスリット光を
順次照射し、前記両撮像手段により前記各スリット光ご
とのそれぞれの照射点を撮像し、画像処理手段により、
前記両撮像手段による前記各照射点の画像信号それぞれ
と、前記各照射点それぞれの直前の照射点の保持された
画像信号それぞれとの差信号を導出し、前記差信号を処
理して前記各照射点の位置を計測することを特徴とする
計測方法である。 〔発明の効果〕 したがって、この発明の計測方法によると、両撮像手段
の重複視野内の被計測体表面の所定方向の重複個所に順
次照射されるスリット光ごとのそれぞれの照射点を両撮
像手段により撮像し、画像処理手段により、両撮像手段
による各照射点の画像信号それぞれと、直前の照射点の
画像信号それぞれとの差信号を導出するようにしたこと
により、両撮像手段の撮像時K、画像信号中に照射点の
成分以外のノイズ成分が含まれていても、画像信号中の
ノイズ成分を除去して照射点の成分のみを取り出すこと
ができ、各照射点の位置を精度よく計測することが可能
となり、両撮像手段の重複視野を所定量ずらすごとに前
記の動作を繰シ返すことにより被計測体の形状および位
置を短時間で正確に計測することができ、信頼性に優れ
、非常に実用的である。 〔実施例〕 つぎにこの発明を、その1実施例を示した図面とともに
詳細に説明する。 まず、計測装置を示す第1図において、(1)は被計測
体(以下ワークという)、(2)は支持体、(3)は支
持体(2)に右方から見た平面内において回転自在に支
持された左右方向に長尺の筐体、(4)は支持体(2)
の右側面に取シ付けられて筐体(3)を回転させるモー
タ、(5)は筐体(3)内に設けられスリット光である
スポット光を照射するレーザ、円形スリット付きのキセ
ノンランプ等からなる光源、(6)は筐体(3]内に回
転自在に設けられ光源(5)からのスボ゛ノド光の光路
を角度θ回転させて変更する反射鏡、(7B>。 (7b)はそれぞれ筐体(3)内の反射鏡の左、右の両
側に設けられ後述の両レンズの中心点をそれぞれ回転中
心として前方から見た平面内において回転する1対の本
体、(8a)、(8b)はそれぞれ両本体(7B) 。 (7b)内の上端部に設けられ左右方向に複数個の受光
素子か1次元的に配列された撮像面、(9B)、(9b
)は集光レンズであり、それぞれ本体(7a) 、(7
b)の下端部に両撮像面(8a)、(8b)に対して回
転自在に設けられ、本体(7&) 、(7b)とともに
それぞれ角度θ′回転してワーク(1)からの反射光を
それぞれ両撮像面か)。 (8b)に集光するようになっておシ、本体(7す、(
7b)。 である第1.第2イメージセンサ(]Oa) 、(]O
b)が構成されるとともに支持体(2)、筐体(3)、
モータ(4)。 光源(5)1反射鏡(6)および両イメージセンサ(i
oa)。 (]Ob)によシ計測装置aυが構成されている。 つぎに、両イメージセンサ(10&)、(10b)から
の信号を処理する処理回路を示す第2図に右いて、(2
)は撮像面(8a)の各受光素子の各輝度信号を合成し
て撮像面(8a)から出力される合成輝度信号である画
像信号をスライスしてデジタル画像信号に変換するスラ
イサ、(至)はスライサ(6)のスライスレベル設定器
、α4は撮像面(8a)からの画像信号のうち不要部分
を除去するマスキング回路、0f9I/i前記画像信号
の不要部分を設定するマスキング設定スイッチ、αQは
スライスされた前記画像信号において前記スライスレベ
ルよりも高レベルパルスの存在する撮像面(8&)の受
光素子に対応したアドレスをカウントするアドレスカウ
ンタ、0ηはカウンタa・によりカウントされたアドレ
スデータを表示するデータ表示部、(至)はカウンタα
・によりカウントされたアドレスデータを記憶して当該
アドレスデータにもとづくスライサ@によシスライスさ
れた画像信号を保持する記憶部、Q呻はインターフェイ
スであり、出力端子(ホ)より記憶部(ト)に記憶され
たデータにもとづく画像信号を転送出力するようになっ
ておシ、スライサ@、設定器α葎、マスキング回路0I
O1設定スイッチ(至)、カウンタ(lf9 、表示部
αη。 記憶部(至)およびインターフェイスaeにより第1イ
メージセンサ(10&)の処理回路(21a)が構成さ
れるとともに、同様に第2イメージセンサ(10b)の
処理回路(21b)が構成されている。 さらに、演算回路を示す第8図において、磐は両イメー
ジセンサ(10&) 、(10b)からのデータにもと
づき、ワーク(1)上のスポット光の照射点の任意のx
yz座標系における座標を導出する演算部、■は表示部
であシ、演算部(イ)によシ導出された前記照射点の座
標を表示するようになって詔り、演算部(2)および表
示部(ホ)によシコンピユータ等からなる演算回路(ハ
)が構成され、画処理回路(21B) 。 (21b)および演算回路(ホ)により画像処理手段(
ハ)が構成されている。 そして、第1図に示すワーク(1)上の点の位置を計測
してワーク(1)の形状を計測する場合、反射鏡(6)
の回転によシスポット光の照射点を両イメージセンサ(
10B)、(10b)の重複視野内において、所定方向
である両イメージセンサ(10a)、(10b)の各受
光素子の配列方向、すなわち左右方向に移動させ、たと
えばある照射点が第1図に示すように点Pである場合、
両イメージセンサ(10a)、(10b)により、第4
図(a)に示すような2個のスタートパルスSの出力期
間に照射点Pの近辺が撮像され、たとえばの非常に高い
合成輝度信号である画像信号が形成され、両イメージセ
ンサ(10a) 、(Job)からそれぞれ第4図(b
)に示すような画像信号が出力されるとともに、両方の
処理回路(21B)、(21b)において、マスキング
回路α→により前記両画像信号にマスキング領域Mが設
定されると同時に、スライサα環それぞれによシ同図(
b)に示すようなスライスレベルl以下がそれぞれカッ
トされてスライスされ、照射点Pに相当する前記レベル
i;xpも高いハイレベルパルスのみが取シ出されて同
図(C)に示すようなデジタル画像信号がそれぞれ得ら
れ、アドレスカウンタQlにより、前記両デジタル画像
信号のハイレベルパルスの存在するアドレス、すなわち
取シ出された信号の出力源である受光素子に対応するア
ドレスNがカウントされ、カウントされたアドレスNが
アドレスデータとして記憶部(ト)に記憶されると同時
に、前記アドレスデータにもとづくデジタル画像信号が
記憶保持される。 ところで、計測すべき場所の周囲か明るい場合や、計測
すべきワーク(1)の表面に激しい凹凸がある場合には
、光源(5)によるスポット光の明るさが゛弱いと、ス
ポット光を照射しても、スポット光の照射点の明るさと
その周辺の明るさとを識別することができなかったシ、
ワーク(1)の表面での乱反射により両イメージセンサ
(10&) 、(Job)による画像信号に複数個のピ
ークが現われ、両イメージセンサ(10B) 、 (1
0b)の撮像画像からスポット光の照射点の位置を導出
することができなくなり、スポット光を強くすれば、前
記の不都合は解消されるが、ワーク(1)の種類によっ
ては熱歪等が生じるものかあシ、あまりスポット光を強
くできず、スポット光が弱くても照射点を容易に識別で
きるようにすることが望まれる。 そして、スポット光が弱り、シかもワーク(1)の表面
の乱反射が激しい場合、まず第5図に示すようなワーク
(1)にスポット光を照射せずに、両イメージセンサ(
10B)、(10b) K J: p ワー り(1)
を撮像すると、たとえば第1イメージセンサ(10a)
からの画像信号は、第6図(a)に示すように、ノイズ
成分である多数のピークを持つ波形となシ、これにスポ
ット光を第5図中の点P1に照射すると、第1イメージ
センサ(10a)からの画像信号は、第6図(b)に示
すように、同図(a)に示す波形に照射点P1の成分が
重畳された波形となシ、同図(b)に示す画像信号を処
理回路(21B)のスライサq4によりスライスレベル
l′でスライスすると、前記スライスにより得られるデ
ジタル画像信号は同図(C)に示すように16個のハイ
レベルパルスが存在することになシ、カウンタOfjに
よシ各ハイレベルパルスの出力源である各受光素子に対
応するアドレスがカウントされ、記憶部(ト)により、
カウントされた各アドレスおよび該各アドレスにもとづ
く同図(C)に示す画像信号が記憶保持されるが、この
デジタル画像信号からはどのハイレベルパルスが照射点
P+の成分であるかを識別できない。 つぎに、第6図(C)に示すデジタル画像信号が記憶部
(ト)により記憶保持されたのち、両イメージセンサ(
xoa) 、 (]Ob)の視野は固定したままで、反
射鏡(6)を回転し、第5図に示すように照射点P1か
ら、所定方向である両イメージセンサ(10’a)、(
10b)における各受光素子の配列方向、すなわち左右
方向へ進んだワーク(1)上の次の点P2にスポット光
を移動して照射すると、第1イメージセンサ(10a)
からの画像信号は、第6図(d)に示すように、同図(
a)に示す波形に照射点P2の成分が重畳された波形と
なり、同図(d)に示す画像信号を処理回路(21&)
のスライサα4によシ同図(b)と同じスライスレベル
g′でスライスすると、前記スライスによシ得られるデ
ジタル画像信号は、同図(e)に示すように、照射点P
2以外のノイズ成分は同図(C)に示す信号の照射点P
I以外のノイズ成分と全く同じになシ、カウンタσQに
より各ハイレベルパルスの出力源である各受光素子に対
応するアドレスがカウントされ、記憶部(至)によυ、
カウントされた各アドレスおよび該各アドレスにもとづ
く同図(e)に示す画像信号が記憶、保持され、同図(
8)に示す信号から記憶部(至)に予め記憶保持された
同図(0)に示す信号を引算すると、同図(f)に示す
ように、ノイズ成分が除去されて両照射点P■、P2の
成分のみが残った差信号が得られ、同図璃)に示すよう
にたとえば照射点P2の成分のみを取り出してそのアド
レスを記憶部(ト)から読み出すことにより、照射点P
2の撮像面(8a)上の位置が算出されることになると
ともに、同様にして第2イメージセンサ(] Ob)の
照射点PI、P2の画像信号の差をとることにより、第
2イメージセンサ(10b)における照射点El、P2
の成分のみが残った差信号が得られる。 さらに、第6図(e)に示すデジタル画像信号が記憶部
(至)により記憶保持されたのち、照射点P2から左右
方向へ進んだワーク(1)上の次の照射点についても前
記と同様にして差信号を導出するとともに、以降これら
の動作を繰り返すことにより各差信号を導出し、各差信
号から2番目の照射点の成分のみを取り出すことにより
、前記各照射点の撮像面(8a)上の位置が算出される
ことになるとともに、同様にして前記各照射点の撮像面
(8b)上の位置が算出される。 そして、両イメージセンサ(IQa)、(10b)によ
りある照射点Pを撮像して得られるデジタル画像信号そ
れぞれから、照射点Pの直前の照射点を撮像して得られ
るデジタル画像信号それぞれを引算して導出される差信
号にもとづき、前記照射点Pの成分のみを取り出した信
号がそれぞれ第7図(a)。 (b)に示すようになったとすると、画処理回路(21
&) 。 (21b)のインターフェイス0りを介して照射点Pの
成分であるハイレベルパルスの出力源である受光素子に
対応するアドレスN+ 、N2が演算回路(ハ)の演算
部(イ)に転送され、演算部(イ)により前記両アドレ
スN+ 、N2にもとづき、照射点Pの座標が演算導出
される。 すなわち、第1図中に示すように、左右方向。 上下方向1前後方向をそれぞれx、y、zの各軸を座標
軸とする任意のXYz座標系のXY平面のみを考え、た
とえば第8図に示すようにXY座標系を想定し、第6図
に示すように両イメージセンサ(1oa)、(1ob)
の両レンズの倍率をに+ 、に2 、!: L、両イメ
ージセンサ(10a) 、 (10b)の視野のY軸に
近い方の限界線R1,R2とX軸とのそれぞれの交点V
+。 ■2の座標をそれぞれ(a、0)、(b、0)とすると
、両イメージセンサ(+oa)、(10b)と実際のワ
ーク(1)上の照射点Pとをそれぞれ結ぶ線とX軸との
それぞれ交点Vl’、V2’の座標のX軸成分α、βは
それぞれ、α =  a  −1−K+  ・N菖  
                         
          ・・・ ■β=b+に2・N2 
            ・・・■と表わされ、さらに
両イメージセンサ(10& > 、 (10b)Kとし
、両イメージセンサ(10B) 、(10b)間のXY
平面における距離をLとすると、ワーク(1)上の照射
点Pの座標(xp、yp)のX軸、Y軸成分はそれぞれ と表わされ、点V+’、L+を通る直線と点V’、L2
を通る直線の交点として与えられることになり、演算条
件として前記した各点Vl 、V2 、Ll 、L2の
座標(a、0)。 (b、0)、(α0IK)+(βO,K)、両レンズの
倍率Kl 、に2および両イメージセンサ(10a)、
(lob)間の距離りを予め演算部(イ)に入力してお
くことにより、画像処理により得られたアドレスデータ
Nl、N2にもとづき、前記■、■式に従って点Vl’
、V2’の座標が演算され、演算された点Vl’、V2
’の座標のX軸成分α。 βにもとづき、前記■、■式に従って照射点Pの座標(
xp、yp)が導出される。 さらに、これらの動作を繰り返すことにより、両イメー
ジセンサ(10&) 、 (10b)の重複視野内の左
右方向への線上の複数個のスポット光の照射点の座標が
導出されるとともに、モータ(4)の作動により筐体(
3)を回転させて再び前記の動作を繰り返し、ワーク(
1)上の複数個の照射点の座標を導出することにより、
ワーク(1)の形状および位置を計測すると同時に、ワ
ーク(1)の表面状態を計測する。 したがって、前記実施例によると、両イメージセンサ(
10a) 、 (job)の撮像時に、画像信号中に照
射点の成分以外のノイズ成分が含まれていても、1番目
と2番目の照射点の画像信号の差をとるのみで、画像信
号中のノイズ成分を除去して照射点の成分のみを取シ出
すことができ、各照射点の位置を精度よく計測すること
が可能となシ、モータ
[Industrial Field of Application] This invention relates to a measuring method for measuring points on an object to be measured.

[Prior Art] Conventional techniques for measuring points on an object in order to determine its shape include the contact method, in which a sensing probe is brought into contact with the object to measure its shape; stereo photography, in which two cameras simultaneously image the same object and common points in the two images are found to measure the object's shape; the moire topography method, which measures the object's shape from moire fringes on a reference plane and on the object's surface; and the light-section method, a non-contact method in which a long, narrow slit of light is projected onto the object, the irradiated portion is imaged with a television camera, and characteristic points of the image are found to measure the object's shape. These techniques are widely applied as object recognition technology for industrial robots and in various inspection devices. However, the contact method has the drawback that measurement takes a long time. The non-contact methods, for their part, merely recognize the shape of the object; they do not directly measure its position, that is, its coordinates in an arbitrary coordinate system. To obtain the position, a target point must first be found in the acquired image and its position (coordinates) then computed, which takes time. Moreover, the measuring devices that implement these techniques have very low resolution, so when the object itself is small, or when its surface has small irregularities, the shape of the object or of its surface irregularities cannot be recognized accurately. These methods therefore lack reliability and are insufficient for measuring the shape of an object.

[Object of the Invention] This invention has been made in view of the above points, and its object is to measure the shape and position of an object accurately and in a short time.

[Structure of the Invention] This invention is a measuring method characterized in that a pair of fixed imaging means for imaging the object to be measured is provided; a plurality of locations in a predetermined direction on the surface of the object within the overlapping field of view of the two imaging means are sequentially irradiated with slit light from a light source; the irradiation point of each slit light is imaged by the two imaging means; and image processing means derives difference signals between the image signals of each irradiation point from the two imaging means and the held image signals of the immediately preceding irradiation point, and processes the difference signals to measure the position of each irradiation point.

[Effects of the Invention] According to this measuring method, the irradiation points of the slit light sequentially projected onto locations in a predetermined direction on the object's surface within the overlapping field of view are imaged by both imaging means, and the image processing means derives the difference between each irradiation point's image signals and those of the immediately preceding irradiation point. Consequently, even if the image signals contain noise components other than the irradiation-point component at the time of imaging, the noise components are removed and only the irradiation-point component is extracted, so the position of each irradiation point can be measured accurately. By repeating this operation each time the overlapping field of view of the two imaging means is shifted by a predetermined amount, the shape and position of the object can be measured accurately in a short time; the method is highly reliable and very practical.

[Embodiment] The invention will now be described in detail with reference to the drawings showing one embodiment. First, in FIG. 1, which shows the measuring device, (1) is the object to be measured (hereinafter "workpiece"), (2) is a support, and (3) is a housing, elongated in the left-right direction, supported on the support (2) so as to be rotatable within a plane as viewed from the right. (4) is a motor attached to the right side of the support (2) to rotate the housing (3). (5) is a light source inside the housing (3), consisting of a laser that projects a spot of light serving as the slit light, a xenon lamp with a circular slit, or the like. (6) is a reflecting mirror rotatably mounted inside the housing (3), which rotates through an angle θ to change the optical path of the spot light from the light source (5). (7a) and (7b) are a pair of bodies on the left and right sides of the reflecting mirror inside the housing (3), each rotatable, within a plane as viewed from the front, about the center point of the lens described below. (8a) and (8b) are imaging surfaces at the upper ends of the bodies (7a), (7b), each consisting of a plurality of light-receiving elements arrayed one-dimensionally in the left-right direction. (9a) and (9b) are condenser lenses at the lower ends of the bodies (7a), (7b), rotatable with respect to the imaging surfaces (8a), (8b); rotating through an angle θ' together with the bodies, they focus the light reflected from the workpiece (1) onto the imaging surfaces (8a), (8b). The bodies (7a), (7b) thus constitute first and second image sensors (10a), (10b), and the support (2), housing (3), motor (4), light source (5), reflecting mirror (6), and image sensors (10a), (10b) together constitute the measuring device (11).

Next, FIG. 2 shows the processing circuit that processes the signals from the two image sensors (10a), (10b). A slicer slices the image signal, that is, the composite luminance signal formed from the luminance signals of the light-receiving elements of the imaging surface (8a), and converts it into a digital image signal; a slice-level setter sets the slicer's slice level; a masking circuit removes unneeded portions of the image signal from the imaging surface (8a); a masking setting switch sets those unneeded portions; an address counter counts the addresses of the light-receiving elements of the imaging surface (8a) at which pulses above the slice level exist in the sliced image signal; a data display shows the counted address data; a storage section stores the counted address data and holds the image signal sliced by the slicer on the basis of that data; and an interface transfers image signals based on the stored data out through an output terminal. The slicer, slice-level setter, masking circuit, masking setting switch, address counter, data display, storage section, and interface make up the processing circuit (21a) of the first image sensor (10a); the processing circuit (21b) of the second image sensor (10b) is configured in the same way.

Further, FIG. 3 shows the arithmetic circuit. A computing unit derives, from the data of the two image sensors (10a), (10b), the coordinates of the spot-light irradiation point on the workpiece (1) in an arbitrary xyz coordinate system, and a display shows the coordinates so derived; the computing unit and display form the arithmetic circuit, which consists of a computer or the like. The processing circuits (21a), (21b) and the arithmetic circuit together constitute the image processing means.

To measure the positions of points on the workpiece (1) of FIG. 1 and thereby measure its shape, rotation of the reflecting mirror (6) moves the spot-light irradiation point within the overlapping field of view of the two image sensors (10a), (10b) along the predetermined direction, namely the array direction of the light-receiving elements of both sensors, i.e., the left-right direction. If, for example, an irradiation point is the point P shown in FIG. 1, both image sensors (10a), (10b) image the vicinity of point P during the output interval between the two start pulses S shown in FIG. 4(a), forming image signals (composite luminance signals) that are output from the two sensors as shown in FIG. 4(b). In both processing circuits (21a), (21b), the masking circuit sets a masking region M on each image signal; at the same time each slicer cuts off everything at or below the slice level l of FIG. 4(b), so that only the high-level pulse corresponding to the irradiation point P is extracted, yielding the digital image signals shown in FIG. 4(c). The address counter counts the address N of the light-receiving element that is the output source of each extracted pulse; the counted address N is stored as address data in the storage section, and at the same time the digital image signal based on that address data is stored and held.

Now, when the surroundings of the place to be measured are bright, or when the surface of the workpiece (1) has severe unevenness, a weak spot light from the light source (5) may make the brightness of the irradiation point indistinguishable from that of its surroundings.
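The slicing and address-counting step performed in the processing circuits (21a), (21b) can be sketched as follows. This is a minimal illustration, not the patented circuit: the signal values, slice level, and masking range are invented for the example.

```python
def slice_and_count(signal, slice_level, mask=None):
    """Binarize a 1-D composite luminance signal at `slice_level` and
    return the addresses (element indices) whose level exceeds it,
    mimicking the slicer and the address counter.  `mask` is an
    optional (first, last) index range whose elements are ignored,
    mimicking the masking circuit."""
    digital = []
    addresses = []
    for addr, level in enumerate(signal):
        masked = mask is not None and mask[0] <= addr <= mask[1]
        high = (not masked) and level > slice_level
        digital.append(1 if high else 0)
        if high:
            addresses.append(addr)
    return digital, addresses

# Illustrative signal: one bright irradiation point at element 5.
signal = [0.1, 0.2, 0.1, 0.3, 0.2, 0.9, 0.2, 0.1]
digital, addresses = slice_and_count(signal, slice_level=0.5)
print(digital)    # [0, 0, 0, 0, 0, 1, 0, 0]
print(addresses)  # [5]
```

With a strong spot light on a dark background, the single address returned corresponds to the light-receiving element N of FIG. 4(c); the weak-light case discussed next is where several elements exceed the slice level at once.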
Alternatively, diffuse reflection at the surface of the workpiece (1) may cause multiple peaks to appear in the image signals from the two image sensors (10a), (10b), making it impossible to derive the position of the spot-light irradiation point from the captured images. Strengthening the spot light would remove this inconvenience, but some kinds of workpiece (1) would then suffer thermal distortion or the like, so the spot light cannot be made very strong; it is desirable that the irradiation point be easily identifiable even when the spot light is weak.

When the spot light is weak and diffuse reflection from the surface of the workpiece (1) is severe, the workpiece (1) shown in FIG. 5 is first imaged by the two image sensors (10a), (10b) without spot-light irradiation. The image signal from, for example, the first image sensor (10a) is then a waveform with many peaks, which are noise components, as shown in FIG. 6(a). When the spot light is next projected onto point P1 in FIG. 5, the image signal from the first image sensor (10a) becomes, as shown in FIG. 6(b), the waveform of FIG. 6(a) with the component of irradiation point P1 superimposed. Slicing the signal of FIG. 6(b) at slice level l' with the slicer of the processing circuit (21a) yields a digital image signal containing sixteen high-level pulses, as shown in FIG. 6(c). The counter counts the address of each light-receiving element that is the output source of a high-level pulse, and the storage section stores the counted addresses and holds the signal of FIG. 6(c) based on them; from this digital image signal alone, however, it cannot be identified which high-level pulse is the component of irradiation point P1.

Next, after the digital image signal of FIG. 6(c) has been stored, the fields of view of the two image sensors (10a), (10b) are kept fixed while the reflecting mirror (6) is rotated so that, as shown in FIG. 5, the spot light moves from irradiation point P1 to the next point P2 on the workpiece (1), advanced in the predetermined direction, namely the array direction of the light-receiving elements of both sensors, i.e., the left-right direction. The image signal from the first image sensor (10a) then becomes, as shown in FIG. 6(d), the waveform of FIG. 6(a) with the component of irradiation point P2 superimposed. Slicing the signal of FIG. 6(d) at the same slice level l' with the slicer of the processing circuit (21a) yields the digital image signal of FIG. 6(e), whose noise components other than irradiation point P2 are exactly the same as the noise components other than P1 in the signal of FIG. 6(c); the counter again counts the addresses of the pulse sources, and the addresses and the signal of FIG. 6(e) are stored and held. Subtracting the previously stored signal of FIG. 6(c) from the signal of FIG. 6(e) then gives, as shown in FIG. 6(f), a difference signal from which the noise components have been removed and in which only the components of the two irradiation points P1 and P2 remain. As shown in FIG. 6(g), only the component of irradiation point P2, for example, is then taken out and its address read from the storage section, whereby the position of irradiation point P2 on the imaging surface (8a) is calculated. Taking the difference of the image signals of irradiation points P1 and P2 in the second image sensor (10b) in the same way yields a difference signal in which only the components of P1 and P2 remain.

Further, after the digital image signal of FIG. 6(e) has been stored, a difference signal is derived in the same manner for the next irradiation point on the workpiece (1), advanced in the left-right direction from P2. By repeating these operations thereafter, a difference signal is derived for each irradiation point, and by extracting from each difference signal only the component of the second (more recent) of the two irradiation points, the position of each irradiation point on the imaging surface (8a), and likewise on the imaging surface (8b), is calculated.

Suppose that the signals obtained by extracting only the component of a given irradiation point P — from the difference signals formed by subtracting, from the digital image signals of P captured by the two image sensors (10a), (10b), the digital image signals of the immediately preceding irradiation point — are as shown in FIGS. 7(a) and (b). The addresses N1 and N2 of the light-receiving elements that are the output sources of the high-level pulses forming the component of P are then transferred through the interfaces of the processing circuits (21a), (21b) to the computing unit of the arithmetic circuit, which computes and derives the coordinates of the irradiation point P from the two addresses N1, N2.

That is, consider, as shown in FIG. 1, only the XY plane of an arbitrary XYZ coordinate system whose coordinate axes x, y, and z run left-right, up-down, and front-back respectively — for example the XY coordinate system of FIG. 8. Let K1 and K2 be the magnifications of the lenses of the two image sensors (10a), (10b), and let V1 and V2, with coordinates (a, 0) and (b, 0), be the intersections with the X axis of those limit lines R1, R2 of the sensors' fields of view that lie nearer the Y axis. The X-axis components α and β of the intersections V1', V2' of the X axis with the lines connecting the respective image sensors to the irradiation point P on the actual workpiece (1) are then

α = a + K1·N1   ... (1)
β = b + K2·N2   ... (2)

Further, let L1 and L2, with coordinates (α0, K) and (β0, K), denote the lens centers of the two image sensors, and let L be the distance between the two image sensors in the XY plane. The coordinates (xp, yp) of the irradiation point P on the workpiece (1) are then given as the intersection of the straight line through V1' and L1 with the straight line through V2' and L2. With the coordinates (a, 0), (b, 0), (α0, K), (β0, K) of the points V1, V2, L1, L2, the lens magnifications K1, K2, and the inter-sensor distance L entered into the computing unit in advance as calculation conditions, the coordinates of the points V1' and V2' are computed from the address data N1, N2 obtained by the image processing according to equations (1) and (2), and from their X-axis components α and β the coordinates (xp, yp) of the irradiation point P are derived.

By repeating these operations, the coordinates of a plurality of spot-light irradiation points along a left-right line within the overlapping field of view of the two image sensors (10a), (10b) are derived; the motor (4) then rotates the housing (3) and the above operations are repeated again. Deriving the coordinates of a plurality of irradiation points on the workpiece (1) in this way measures the shape and position of the workpiece (1) and, at the same time, the condition of its surface.

Thus, according to this embodiment, even if the image signals captured by the two image sensors (10a), (10b) contain noise components other than the irradiation-point component, merely taking the difference between the image signals of the first and second irradiation points removes the noise components and extracts only the irradiation-point component, so the position of each irradiation point can be measured accurately.
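The difference-signal idea can be sketched as follows. This is an illustration under invented waveforms, not the circuit itself: with the field of view fixed, the static background peaks appear identically in consecutive frames, so subtracting the previous frame's digital signal cancels them, and the positive element of the difference marks the current irradiation point.

```python
def difference_signal(current, previous):
    """Element-wise difference of two digital image signals; static
    noise pulses present in both frames cancel, leaving +1 at the
    current irradiation point and -1 at the previous one."""
    return [c - p for c, p in zip(current, previous)]

def current_point_address(diff):
    """Address of the current irradiation point: the element where
    the difference signal is positive."""
    return diff.index(1)

# Same noise pulses (elements 1 and 6) in both frames; the spot moves
# from element 3 (previous point) to element 4 (current point).
previous = [0, 1, 0, 1, 0, 0, 1, 0]
current  = [0, 1, 0, 0, 1, 0, 1, 0]
diff = difference_signal(current, previous)
print(diff)                         # [0, 0, 0, -1, 1, 0, 0, 0]
print(current_point_address(diff))  # 4
```

The negative element at address 3 corresponds to the previous point's component left in the difference, which is why only the positive component is taken out when reading the address.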

【4】の作動により両イメージセ
ンサ(IOa) 、(1ob)の重複視野を所定量ずら
すごとに前記の動作を繰り返せば、ワーク(1)の形状
および位置を短時間で正確に計測することができ、信頼
性に優れ、非常に実用的である。 さらに、差信号を導出する際、各照射点の画像信号から
直前の照射点の画像信号を順次引算しているため、各照
射点を撮像しなから差信号を導出して各照射点の位置を
連続的に計測することができ、計測時間が非常に短くな
り、実用性をいっそワーク(1)がゴム等の柔軟で変形
し易いものであっても、形状を容易に計測することがで
きる。 さらに、スポット光を使用しているため、エネルギー密
度が低く9弱い光でよく、照明を使用したときの照明熱
により、ワーク(1)に歪が生じたシすることもない。 なお、両撮像手段としてCOD型イメージセンサを使用
したが、MO8型イメージセンサや撮像管等によシ構成
してもよいことは勿論である。 また、反射鏡(6)によりスポット光の光路を回転させ
るだけでなく、電子ビーム等により磁気的にスポット光
の光路を回転させるようにしてもよい。
By repeating the above operations each time the overlapping field of view of the two image sensors (10a), (10b) is shifted by a predetermined amount through the operation of the motor (4), the shape and position of the workpiece (1) can be measured accurately in a short time; the method is highly reliable and very practical. Furthermore, since the difference signal is derived by successively subtracting the image signal of the immediately preceding irradiation point from that of each irradiation point, the difference signals can be derived while the irradiation points are being imaged and the position of each irradiation point can be measured continuously; the measurement time is very short, which makes the method all the more practical. Even when the workpiece (1) is flexible and easily deformed, such as rubber, its shape can be measured easily. Moreover, because a spot light is used, a weak light of low energy density suffices, and the workpiece (1) does not suffer the distortion from illumination heat that can occur when ordinary illumination is used. Although CCD image sensors are used as the two imaging means, they may of course be replaced by MOS image sensors, image pickup tubes, or the like. Also, instead of rotating the optical path of the spot light with the reflecting mirror (6), the optical path may be deflected magnetically, as with an electron beam.
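The coordinate computation described in the embodiment — equations (1), (2) followed by intersecting the lines V1'L1 and V2'L2 — can be sketched as follows. The numeric values (a, b, the lens centers, the magnifications) are arbitrary stand-ins for the calculation conditions entered into the computing unit, not values from the patent.

```python
def irradiation_point(N1, N2, a, b, K1, K2, L1, L2):
    """Triangulate the irradiation point P in the XY plane.

    alpha = a + K1*N1 and beta = b + K2*N2 give the X-axis crossings
    V1' = (alpha, 0) and V2' = (beta, 0); P is the intersection of
    the line through V1' and lens center L1 with the line through
    V2' and lens center L2."""
    alpha = a + K1 * N1
    beta = b + K2 * N2
    return line_intersection((alpha, 0.0), L1, (beta, 0.0), L2)

def line_intersection(p1, p2, p3, p4):
    """Intersection of the line p1-p2 with the line p3-p4
    (assumed non-parallel), by the standard determinant formula."""
    (x1, y1), (x2, y2), (x3, y3), (x4, y4) = p1, p2, p3, p4
    d = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4)
    det12 = x1 * y2 - y1 * x2
    det34 = x3 * y4 - y3 * x4
    x = (det12 * (x3 - x4) - (x1 - x2) * det34) / d
    y = (det12 * (y3 - y4) - (y1 - y2) * det34) / d
    return x, y

# Symmetric toy setup: V1' = (-2, 0), V2' = (2, 0), lens centers at
# (-1, 2) and (1, 2); the two sight lines meet at (0, 4).
P = irradiation_point(N1=100, N2=100, a=-3.0, b=1.0, K1=0.01, K2=0.01,
                      L1=(-1.0, 2.0), L2=(1.0, 2.0))
print(P)  # (0.0, 4.0)
```

A weak spot light thus costs nothing in the geometry: once the addresses N1, N2 survive the difference step, the triangulation is the same as with a strong light.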

【図面の簡単な説明】[Brief explanation of drawings]

図面は、この発明の計測方法の1実施例を示し、第1図
は計測装置の斜視図、第2図は処理回路のブロック図、
第8図は演算回路のブロック図、第4図(a)〜(C)
はそれぞれ動作説明用の各信号波形図、第5図は照射点
の撮像時の正面図、第6図(a)〜@)は画像信号の処
理動作説明用の各信号波形図、第7図(&) 、 (b
)はそれぞれ両撮像手段からの合成撮像信号の波形図、
第8図は動作説明図である。 (11−・・被計測体1、(5)−・・光源、(1oa
)、(xob) ・・・イメージセンサ、(ハ)・・・
画像処理手段、PI、 P2 、 P・・・照射点。
The drawings show one embodiment of the measuring method of the present invention. FIG. 1 is a perspective view of the measuring device; FIG. 2 is a block diagram of the processing circuit; FIG. 3 is a block diagram of the arithmetic circuit; FIGS. 4(a) to (c) are signal waveform diagrams for explaining the operation; FIG. 5 is a front view at the time of imaging an irradiation point; FIGS. 6(a) to (g) are signal waveform diagrams for explaining the processing of the image signals; FIGS. 7(a) and (b) are waveform diagrams of the composite image signals from the two imaging means; and FIG. 8 is an explanatory diagram of the operation. (1): object to be measured; (5): light source; (10a), (10b): image sensors; (ハ): image processing means; P1, P2, P: irradiation points.

Claims (1)

【特許請求の範囲】[Claims] (1)被計測体を撮像する固定された1対の撮像手段を
備え、前記両撮像手段の重複視野内の前記被計測体表面
の所定方向の複数個所に光源からのスリット光を順次照
射し、前記両撮像手段により前記各スリット光ごとのそ
れぞれの照射点を撮像し、画像処理手段により、前記両
撮像手段による前記各照射点の画像信号それぞれと、前
記各照射点それぞれの直前の照射点の保持された画像信
号それぞれとの差信号を導出し、前記差信号を処理して
前記各照射点の位置を計測することを特徴とする計測方
法。
(1) A measuring method comprising: providing a pair of fixed imaging means for imaging an object to be measured; sequentially irradiating, with slit light from a light source, a plurality of locations in a predetermined direction on the surface of the object within the overlapping field of view of the two imaging means; imaging the irradiation point of each slit light with the two imaging means; deriving, by image processing means, difference signals between the image signals of each irradiation point from the two imaging means and the held image signals of the irradiation point immediately preceding each irradiation point; and processing the difference signals to measure the position of each irradiation point.
JP15184984A 1984-07-20 1984-07-20 Measuring method Pending JPS6129704A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP15184984A JPS6129704A (en) 1984-07-20 1984-07-20 Measuring method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
JP15184984A JPS6129704A (en) 1984-07-20 1984-07-20 Measuring method

Publications (1)

Publication Number Publication Date
JPS6129704A (en) 1986-02-10

Family

ID=15527611

Family Applications (1)

Application Number Title Priority Date Filing Date
JP15184984A Pending JPS6129704A (en) 1984-07-20 1984-07-20 Measuring method

Country Status (1)

Country Link
JP (1) JPS6129704A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH02306109A (en) * 1989-05-19 1990-12-19 Hamamatsu Photonics Kk Recognizing apparatus for three-dimensional position
US4993836A (en) * 1988-03-22 1991-02-19 Agency Of Industrial Science & Technology Method and apparatus for measuring form of three-dimensional objects
US5376796A (en) * 1992-11-25 1994-12-27 Adac Laboratories, Inc. Proximity detector for body contouring system of a medical camera


Similar Documents

Publication Publication Date Title
KR100753885B1 (en) Image obtaining apparatus
US6549288B1 (en) Structured-light, triangulation-based three-dimensional digitizer
JP2008241643A (en) Three-dimensional shape measuring device
WO1998036238A1 (en) Outdoor range finder
CN104439695A (en) Visual detector of laser machining system
JPH11166818A (en) Calibrating method and device for three-dimensional shape measuring device
US9594028B2 (en) Method and apparatus for determining coplanarity in integrated circuit packages
JP7353757B2 (en) Methods for measuring artifacts
JP2007093412A (en) Three-dimensional shape measuring device
JPH08210812A (en) Length measuring instrument
TW201835852A (en) Apparatus and method for three-dimensional inspection
JP2001148025A5 (en)
JP6781969B1 (en) Measuring device and measuring method
JPS6129704A (en) Measuring method
JP3324809B2 (en) Measurement point indicator for 3D measurement
JPS6125003A (en) Configuration measuring method
JP2008164338A (en) Position sensor
JPH0282106A (en) Optical measuring method for three-dimensional position
Hata et al. 3D vision sensor with multiple CCD cameras
JPH01227910A (en) Optical inspection device
JPS5818110A (en) Measuring method for solid body
JP3369235B2 (en) Calibration method for measuring distortion in three-dimensional measurement
JP3381420B2 (en) Projection detection device
JP2006220425A (en) Visual inspection device and visual inspection method for printed circuit board
JPS61159102A (en) Two-dimensional measuring method