JPH05303662A - Stereo character reading method using light-section method - Google Patents

Stereo character reading method using light-section method

Info

Publication number
JPH05303662A
JPH05303662A (application JP4107894A / JP10789492A)
Authority
JP
Japan
Prior art keywords
image
character
slit
dimensional
range finder
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
JP4107894A
Other languages
Japanese (ja)
Inventor
Hiroyuki Suganuma
Masataka Toda
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Aisin Corp
Original Assignee
Aisin Seiki Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Aisin Seiki Co Ltd filed Critical Aisin Seiki Co Ltd
Priority to JP4107894A priority Critical patent/JPH05303662A/en
Publication of JPH05303662A publication Critical patent/JPH05303662A/en
Pending legal-status Critical Current

Landscapes

  • Character Input (AREA)

Abstract

PURPOSE: To obtain a binary image free from character blurring by forming a two-dimensional character pattern based on height information measured with a range finder. CONSTITUTION: A recognition object 3 is irradiated with laser slit light 2 emitted from a range finder 1. The slit image is photographed by a camera built into the range finder 1, displayed on a black-and-white monitor 6, and converted into image data by an image processor. A command is input from a keyboard 10 to a personal computer 9; the computer 9 specifies the motor rotation angle to a motor driver 5 and moves an XY table 4. The computer 9 then instructs the image processor to capture an image at each step and displays the capture results on a CRT 8. The highest and lowest points of the three-dimensional character string are determined from the projected slit image by the principle of triangulation, and their XY coordinate values are plotted into a two-dimensional array to obtain a character dot pattern.

Description

Detailed Description of the Invention

[0001]

BACKGROUND OF THE INVENTION

1. Field of the Invention

The present invention relates to a three-dimensional character reading method for recognizing three-dimensional characters on the basis of height information obtained by the light-section method.

[0002]

2. Description of the Related Art

As prior art related to the present invention, there is Japanese Patent Laid-Open No. 62-293492.

[0003] In that method, a video signal from an ordinary TV camera is converted into the binary values "0" and "1" at a certain threshold level, a character pattern is read as shown in FIG. 8, a scanning line is swept over the entire outer periphery of the character, the positions of the concave and convex portions of the character are detected, and character recognition is performed.

[0004]

Problems to Be Solved by the Invention

When the above method is used for three-dimensional character recognition, consider the character "1" whose B-B cross section is shown in FIG. 9(b) and whose C-C cross section is shown in FIG. 9(c). Even when the character and the background contrast under uniform planar light, ring illumination must be used to obtain contrast when reading with an ordinary camera. Because the character height varies, the scattering angle differs from one part of the character to another, so there are places where the luminance of the character and that of the background become equal. Binarizing at a fixed threshold then produces character blurring, as shown in FIG. 9(d), and the character can no longer be read.

[0005] The technical object of the present invention is a method for recognizing three-dimensional characters, such as stamped characters, with an image processing device that does not cause character blurring when a binary image is obtained from a two-dimensional image.

[0006]

Means for Solving the Problems

The technical means for solving the problems are as follows.

[0007] A character recognition object is placed on an XY table, and laser slit light is projected onto it from a range finder. The slit image is captured by a camera built into the range finder and stored in an image processing device. A program stored in a personal computer specifies the motor rotation angle to the motor driver and moves the XY table, capturing an image into the image processing device at each step. The character height is measured from the captured slit images by the light-section method, a two-dimensional character pattern is constructed from that height information, and the characters are read from it. This is the three-dimensional character reading method of the present invention.
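The acquisition loop just described — step the table, capture a slit image, measure heights, accumulate points — can be sketched as follows. This is a minimal sketch, not the patent's implementation; the device-interface callables (`move_table`, `grab_profile`, `extract_points`) are hypothetical stand-ins, since the patent specifies hardware rather than an API.

```python
def scan_pass(n_steps, move_table, grab_profile, extract_points):
    """One scan pass of the method: at each XY-table position, capture a
    slit profile and extract the character points found on it.

    move_table(k)      -- step the table via the motor driver (hypothetical)
    grab_profile()     -- capture a slit image and return its height profile
    extract_points(p)  -- lowest/highest points found along one profile
    """
    all_points = []
    for k in range(n_steps):
        move_table(k)                # personal computer commands the motor driver
        profile = grab_profile()     # range-finder camera -> image processor
        all_points.extend(extract_points(profile))
    return all_points
```

In the method described here, the same pass is then repeated in a second, orthogonal scanning direction before the dot pattern is assembled.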

[0008]

Operation

Since the laser slit light is high-intensity infrared light, it is robust against ambient light, and since it is local, contrast adjustment is also easy.

[0009] Therefore, if a two-dimensional character pattern is constructed from the height information measured by the range finder, a binary image free from the character blurring of the conventional method can be obtained.

[0010]

EXAMPLES

An example is described below.

[0011] Laser slit light 2 from a conventionally used range finder (R.F.) 1 is projected onto a recognition object 3.

[0012] The slit image is captured by the camera built into the range finder 1, displayed on a black-and-white monitor 6, and stored as image data by the image processing device.

[0013] A command is input from the keyboard 10 to the personal computer 9. The personal computer 9 specifies the motor rotation angle to the motor driver 5, moves the XY table, and at each step instructs the image processing device 7 to capture an image and carry out the various kinds of processing, displaying the results on the CRT 8.

[0014] The characters to be read are three-dimensional features on the surface of the recognition object, as shown in FIG. 2: concave characters 11 such as stamped characters, and convex characters 12 such as embossed characters.

[0015] A specific character reading procedure is described next with reference to FIGS. 3 to 7.

[0016] FIG. 4 shows the YZ cross section of a three-dimensional character string obtained by measurement based on the principle of triangulation from the slit image projected onto the string as shown in FIG. 3.
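The text invokes the principle of triangulation without spelling it out. Under a deliberately simplified geometry (camera looking straight down from a known standoff, laser sheet inclined at a known angle, pinhole camera model — all parameter names are assumptions, not taken from the patent), the height recovery can be sketched as:

```python
import math

def slit_height(pixel_shift, standoff_mm, focal_px, laser_angle_deg):
    """Height of a surface point from the lateral shift of the slit line.

    A surface raised by h displaces the laser line by h * tan(angle) in
    the reference plane; under the pinhole model that displacement shows
    up in the image as pixel_shift = shift_mm * focal_px / standoff_mm.
    Inverting both steps gives the height.
    """
    shift_mm = pixel_shift * standoff_mm / focal_px   # image shift -> plane shift
    return shift_mm / math.tan(math.radians(laser_angle_deg))
```

Under this sign convention, concave (stamped) characters give negative pixel shifts and hence negative heights relative to the reference plane.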

[0017] To recognize a three-dimensional character, it suffices to extract the center line 13 of the character, as shown in FIG. 3.

[0018] As shown in FIG. 4, the center line 13 of a three-dimensional character is the set of valley-shaped lowest points 14 for a concave character and the set of peak-shaped highest points 15 for a convex character.

[0019] There are various ways to determine these lowest or highest points. For example, to find a lowest point, the Z coordinates are compared sequentially from Y = 0: if the current Z value is below the reference height 16 but is no longer smaller than the previous Z value, the X, Y coordinates of that point are stored. The points can thus be determined by applying a commonly used minimum-value extraction algorithm.
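The minimum-value extraction just described can be sketched as follows, assuming one slit profile is available as a list of Z values indexed by Y; the function and variable names are illustrative, not from the patent.

```python
def lowest_points(z_profile, reference_height):
    """Indices of valley bottoms along one slit profile.

    Follows the comparison described in the text: scan from Y = 0, and
    when the profile stops decreasing while the previous Z value is
    below the reference height, record that previous point as a lowest
    point.
    """
    points = []
    descending = False
    for y in range(1, len(z_profile)):
        if z_profile[y] < z_profile[y - 1]:
            descending = True              # still going down into the valley
        else:
            # the profile stopped decreasing at y - 1
            if descending and z_profile[y - 1] < reference_height:
                points.append(y - 1)
            descending = False
    return points
```

The highest points of convex characters are found symmetrically, with the comparisons reversed.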

[0020] The X-Y coordinate values of the lowest or highest points obtained in this way are assigned to an array. As shown in FIG. 5, if for example the coordinates of a lowest point are (X, Y) = (0.15 mm, 0.21 mm), then i = 2 because 0.1 ≤ X < 0.2, and j = 3 because 0.2 ≤ Y < 0.3, so the array element (i, j) = (2, 3) is set to black (the initial color of every array element is white). Next, as shown in FIG. 6, the slit light 2 is scanned in two orthogonal directions, such as the slit scanning directions (M) 18 and (N) 19 (or (P) 20 and (Q) 21), and the lowest or highest points are plotted into the two-dimensional array, which yields a character dot pattern like that of FIG. 7. If this pattern is transferred to the RAM of the image processing device, the characters can be recognized by the character feature extraction methods conventionally used for binary images.
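The array assignment of the worked example above (cell pitch 0.1 mm, 1-based cell indices, white = 0, black = 1) can be sketched as:

```python
def plot_points(points_mm, nx, ny, pitch_mm=0.1):
    """Plot extracted lowest/highest points into an nx-by-ny array.

    All cells start white (0); a cell that receives a point is set to
    black (1). With the 0.1 mm pitch of the example, the point
    (X, Y) = (0.15 mm, 0.21 mm) falls in cell (i, j) = (2, 3), since
    0.1 <= X < 0.2 and 0.2 <= Y < 0.3.
    """
    grid = [[0] * ny for _ in range(nx)]     # initial array: all white
    for x, y in points_mm:
        i = int(x / pitch_mm) + 1            # 1-based index, as in FIG. 5
        j = int(y / pitch_mm) + 1
        if 1 <= i <= nx and 1 <= j <= ny:
            grid[i - 1][j - 1] = 1           # mark the cell black
    return grid
```

The resulting grid is the dot pattern that would be transferred to the image processor's RAM for conventional binary-image feature extraction.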

[0021]

Effects of the Invention

The present invention has the following effects. (1) Even when the illumination method is chosen carefully, the prior art suffers from places where the luminance of the character and the background become equal, so an ordinary TV camera cannot read enough of the character pattern for recognition; even such three-dimensional characters can be read by the method of this example.

[0022] (2) Since this method obtains local three-dimensional information, it can measure the character heights h1 and h2 and the character widths S1 and S2 of a three-dimensional character, as shown in FIG. 10, and can therefore evaluate the "quality" of characters, which the conventional method could not.

BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1 is an overall configuration diagram of the character recognition device.

FIG. 2 is an explanatory diagram of examples of three-dimensional characters.

FIG. 3 is an explanatory diagram of an example of slit irradiation.

FIG. 4 is a YZ sectional view of a three-dimensional character string.

FIG. 5 is a diagram of the correspondence between real-plane coordinates and the array.

FIG. 6 is an explanatory diagram of the slit scanning directions.

FIG. 7 is an example of a character dot pattern.

FIG. 8 is an explanatory diagram of the scanning method of the conventional example.

FIG. 9 is an explanatory diagram of an example of reading three-dimensional characters by the conventional method.

FIG. 10 is an explanatory diagram of the dimensional standards of three-dimensional characters.

EXPLANATION OF SYMBOLS

[1] Range finder
[2] Laser slit light
[3] Character recognition object
[4] XY table
[5] Motor driver
[7] Image processing device
[8] CRT
[9] Personal computer
[10] Keyboard

Claims (1)

Claim 1. A three-dimensional character reading method comprising: placing a character recognition object on an XY table; projecting laser slit light from a range finder; capturing the slit image with a camera built into the range finder and storing it in an image processing device; specifying the motor rotation angle to the motor driver by a program stored in a personal computer and moving the XY table; capturing an image into the image processing device at each step; measuring the character height from the captured slit images by the light-section method; constructing a two-dimensional character pattern on the basis of that information; and reading the characters.
JP4107894A 1992-04-27 1992-04-27 Stereo character reading method using light-section method Pending JPH05303662A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP4107894A JPH05303662A (en) 1992-04-27 1992-04-27 Stereo character reading method using light-section method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
JP4107894A JPH05303662A (en) 1992-04-27 1992-04-27 Stereo character reading method using light-section method

Publications (1)

Publication Number Publication Date
JPH05303662A true JPH05303662A (en) 1993-11-16

Family

ID=14470769

Family Applications (1)

Application Number Title Priority Date Filing Date
JP4107894A Pending JPH05303662A (en) 1992-04-27 1992-04-27 Stereo character reading method using light-section method

Country Status (1)

Country Link
JP (1) JPH05303662A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH07270142A (en) * 1994-03-30 1995-10-20 Yamada Mach Tool Kk Method and apparatus for electronic rubbing of engraved character
JP2007219943A (en) * 2006-02-17 2007-08-30 Arefu Net:Kk Casted character recognition device
JP2009301411A (en) * 2008-06-16 2009-12-24 Kobe Steel Ltd Image processing method and image processing device for sampling embossed characters


Similar Documents

Publication Publication Date Title
KR100792106B1 (en) Captured image projection apparatus and captured image correction method
JP4445454B2 (en) Face center position detection device, face center position detection method, and program
JP3143819B2 (en) Eyelid opening detector
JPH11244261A (en) Iris recognition method and device thereof, data conversion method and device thereof
CN109345597B (en) Camera calibration image acquisition method and device based on augmented reality
JP2893078B2 (en) Shading correction method and device
JP2007025902A (en) Image processor and image processing method
US6915022B2 (en) Image preprocessing method capable of increasing the accuracy of face detection
JP6904624B1 (en) Club head measurement mark and image processing device
JP6898150B2 (en) Pore detection method and pore detection device
JP2017012384A (en) Wrinkle state analysis device and wrinkle state analysis method
JPH05303662A (en) Stereo character reading method using light-section method
EP0265769A1 (en) Method and apparatus for measuring with an optical cutting beam
JP2519445B2 (en) Work line tracking method
JP3186308B2 (en) Method and apparatus for monitoring sintering state of ceramic substrate
JPH06109437A (en) Measuring apparatus of three-dimensional shape
JPH0933227A (en) Discrimination method for three-dimensional shape
JP4454075B2 (en) Pattern matching method
JP2733170B2 (en) 3D shape measuring device
JP4852454B2 (en) Eye tilt detection device and program
JPH0560518A (en) Three-dimensional coordinate measurement device
JPH0534117A (en) Image processing method
JP2521156Y2 (en) Position measurement target
JPH1091785A (en) Check method and device using pattern matching
JP4465911B2 (en) Image processing apparatus and method