JPS58222379A - Processing system of correction of character recognition - Google Patents

Processing system of correction of character recognition

Info

Publication number
JPS58222379A
Authority
JP
Japan
Prior art keywords
character
reading
dictionary
input
section
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
JP57104744A
Other languages
Japanese (ja)
Other versions
JPH0444313B2 (en)
Inventor
Eiichiro Yamamoto
山本 栄一郎
Hiroshi Kamata
洋 鎌田
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Fujitsu Ltd
Original Assignee
Fujitsu Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Fujitsu Ltd filed Critical Fujitsu Ltd
Priority to JP57104744A priority Critical patent/JPS58222379A/en
Publication of JPS58222379A publication Critical patent/JPS58222379A/en
Publication of JPH0444313B2 publication Critical patent/JPH0444313B2/ja
Granted legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/98Detection or correction of errors, e.g. by rescanning the pattern or by human intervention; Evaluation of the quality of the acquired patterns

Landscapes

  • Engineering & Computer Science (AREA)
  • Quality & Reliability (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Character Discrimination (AREA)

Abstract

PURPOSE: To make it easy to correct characters that have been read in error, by providing a reading dictionary that stores reading information corresponding to the categories registered in the recognition dictionary, and matching those readings against the reading entered by the operator.

CONSTITUTION: The image information of a character read by an observation section 1 is given to a feature extracting section 2, where the features of the character are extracted and the result is reported to a recognition section 4. The recognition section 4 reads out the standard features from a recognition dictionary in which standard features are stored in advance for each category, computes the distance between them and the features of the input character extracted by the extracting section 2, selects the characters whose distance is within a prescribed range, and stores them in a buffer 5. A correction control section 6 displays on a display section 8 the character of the highest-ranked category stored in the buffer 5. The operator looks at the display section 8, designates a character read in error with a cursor, and enters the correct reading of the input character from an input section 9. Character codes corresponding to the readings of characters are stored in a reading dictionary 7, and the control section 6 picks out the character codes of the designated reading from the dictionary 7.

Description

DETAILED DESCRIPTION OF THE INVENTION

(1) Technical Field of the Invention

The present invention relates to a character recognition correction processing system. In a character recognition device that recognizes characters input by, for example, optical means, when a displayed recognition result is incorrect, the operator enters the correct reading for the designated character; characters whose readings match are then displayed in order from the top of the candidate categories, and by selecting among them the operator can correct rejected or misread characters.

(2) Prior Art and Problems

In general, a character recognition device extracts the features of a character input from an observation unit, computes the distance between those features and the standard features of each category registered in advance in a recognition dictionary, and takes the category with the smallest distance as the recognition result. By matching against this recognition dictionary, for example, 2000 categories can be narrowed down to roughly 20 candidate categories, and in some cases the correct input character is included among those candidates 99% or more of the time. However, if only the single category with the smallest distance from the standard features is selected, the correct-answer rate drops, particularly for handwritten kanji.
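A minimal sketch of this narrowing step, in Python, assuming a simple Euclidean distance over small feature vectors; the dictionary contents, threshold, and function names are illustrative assumptions, not taken from the patent:

    import math

    # Hypothetical recognition dictionary: category -> standard feature vector
    recognition_dictionary = {
        "認": [0.12, 0.80, 0.33],
        "話": [0.15, 0.78, 0.30],
        "説": [0.40, 0.50, 0.60],
    }

    def select_candidates(input_features, dictionary, threshold):
        """Return categories whose distance from the input features is within the
        threshold, ordered from smallest to largest distance (best candidate first)."""
        scored = []
        for category, standard_features in dictionary.items():
            d = math.dist(input_features, standard_features)  # Euclidean distance
            if d <= threshold:
                scored.append((d, category))
        scored.sort()
        return [category for _, category in scored]

    # Features extracted from an unknown input character
    print(select_candidates([0.13, 0.80, 0.32], recognition_dictionary, threshold=0.3))
    # ['認', '話'] — several close categories are kept, not only the nearest one

Keeping every category within the threshold, rather than only the single nearest one, is what preserves the correct character among the candidates even when the top-ranked result is wrong.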

When the displayed recognition result is incorrect, the operator must correct it by some means. Conventionally, this has been done either by displaying all candidate categories of the recognition result on the screen and pointing at the correct character with a light pen or keyboard in a so-called menu method, or by designating the correct category from a keyboard or the like, independently of the candidate categories.

With such conventional methods, when there are many character types, as with kanji, the number of candidate categories to be checked is large and the correction operation becomes extremely cumbersome. For example, the kanji 「情」 and 「惰」 are difficult to distinguish from one another; when they are displayed on the screen as candidate categories, even the human eye has trouble telling them apart, and the correction work takes a long time.

(3) Object and Structure of the Invention

The present invention aims to solve the above problems by reducing the number of candidate categories and making the correction operation easier. To this end, the operator enters the reading of the character, and by using a dictionary of character readings together with the recognition candidate categories, the invention narrows the candidates so that the work of correcting rejected and misread characters becomes easy.

That is, the character recognition correction processing system of the present invention applies to a character recognition device comprising at least an observation unit that inputs character images, a feature extraction unit that extracts the features of the input character from the character image input by the observation unit, a recognition dictionary in which standard features are stored in advance for each category, an identification unit that selects several candidate categories for the input character by comparing the features extracted by the feature extraction unit with the standard features read from the recognition dictionary, and a display unit that displays the recognition result. To this device the invention adds: means for accumulating the candidate categories selected by the identification unit; a reading dictionary that stores reading information for the characters corresponding to the categories registered in the recognition dictionary; input means for entering the correct reading of the input character; and correction means that identifies the candidate category by matching the reading entered from the input means against the readings in the reading dictionary, so that rejected and misread characters in the displayed recognition result can be corrected. Embodiments are described below with reference to the drawings.

(4) Embodiments of the Invention

FIG. 1 is an explanatory diagram of the processing concept of one embodiment of the present invention, FIG. 2 is a block diagram of the configuration of one embodiment, and FIG. 3 is an explanatory diagram of the character correction procedure in one embodiment.

In the figures, 1 is an observation unit that inputs character images by optical means, 2 is a feature extraction unit that extracts the features of the input character, 3 is a recognition dictionary in which standard features are registered for each category, 4 is an identification unit that selects candidate categories, 5 is a candidate category buffer that accumulates the candidate categories selected by the identification unit 4, 6 is a correction control unit that controls the correction of recognition results, 7 is a dictionary of character readings, 8 is a display unit for the recognition result such as a CRT display, and 9 is an input unit for entering correction information.

First, an overview of the processing according to the present invention will be given with reference to FIG. 1. Suppose, as shown in FIG. 1, that the character actually input is 「話」.

Suppose further that the candidate categories are the five characters 「認、読、話、説、誤」 and that among them 「認」 has the smallest distance from the standard features; then 「認」 is displayed for the time being as the candidate recognition result. To correct it, the operator simply enters the reading of the input character, 「ワ」. The character 「話」 is then identified among the candidate categories, retrieved, and used as the correction. Even when homophones exist, the number of candidate categories is reduced, so correction is easy; the system may further be arranged so that on-yomi and kun-yomi readings can be used together.
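A minimal sketch of the narrowing performed in this FIG. 1 example, in Python; the reading dictionary contents shown here are illustrative assumptions:

    # Candidate categories from the recognizer, best (smallest distance) first
    candidates = ["認", "読", "話", "説", "誤"]

    # Hypothetical reading dictionary: character -> registered readings
    reading_dictionary = {
        "認": ["ニン"],
        "読": ["ドク", "トク"],
        "話": ["ワ"],
        "説": ["セツ"],
        "誤": ["ゴ"],
    }

    def narrow_by_reading(candidates, reading, reading_dictionary):
        """Keep only the candidates whose registered readings include the entered reading."""
        return [c for c in candidates if reading in reading_dictionary.get(c, [])]

    print(narrow_by_reading(candidates, "ワ", reading_dictionary))  # ['話']

Entering the single reading 「ワ」 is enough to isolate 「話」 here because none of the other four candidates shares that reading.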

In FIG. 2, the observation unit 1 scans a sheet of paper on which characters are written, reads and photoelectrically converts them, and transfers the input information to the feature extraction unit 2. The feature extraction unit 2 extracts the features of the character from the input character image information.

The extracted features are not necessarily limited to one kind; usually a plurality of features are extracted from various viewpoints. The extraction result is reported to the identification unit 4.

Meanwhile, the recognition dictionary 3 stores standard features for each category in advance. The identification unit 4 computes the distance between the standard features read from the recognition dictionary 3 and the features of the input character extracted by the feature extraction unit 2, and selects as candidate categories those whose distance falls within a predetermined range. In other words, a plurality of candidate categories close to the input character are selected.

The selected candidate categories are then stored and accumulated in the candidate category buffer 5.

For example, the candidate category buffer 5 is given enough capacity to store all the character candidates for one full screen displayed on the display unit 8. Unless otherwise instructed, the correction control unit 6 displays on the display unit 8 the character of the highest-ranked category accumulated in the candidate category buffer 5, that is, the one with the smallest feature distance.
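A minimal sketch of the candidate category buffer and this default display behaviour, in Python, assuming the buffer simply keeps an ordered candidate list for each character position on the screen (the structure and contents are illustrative assumptions):

    # Hypothetical candidate category buffer 5: one entry per character position on the
    # displayed screen, each holding candidates ordered from smallest to largest distance.
    candidate_buffer = {
        0: ["認", "読", "話", "説", "誤"],   # the character of FIG. 1
        1: ["和", "知"],
    }

    def default_display(candidate_buffer):
        """With no correction instruction, show the top-ranked candidate at each position."""
        return {position: ranked[0] for position, ranked in candidate_buffer.items() if ranked}

    print(default_display(candidate_buffer))  # {0: '認', 1: '和'}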

The operator looks at the display unit 8, designates the rejected or misread character with a light pen or a cursor, and enters the correct reading of the input character from the input unit 9. The reading dictionary 7 stores the character codes corresponding to each reading, and the correction control unit 6 picks out from the reading dictionary 7 the character codes for the designated reading. For example, if the reading is 「ワ」, character codes such as 「我、環、輪、倭、和、話」 are picked out. If any of these reading candidates are stored in the candidate category buffer 5, they are displayed at the designated character position on the display unit 8 in the order in which they are stored in the candidate category buffer 5. If other reading candidates are also included among the recognition candidates, operating a specific key on the input unit 9 displays the second candidate. The operation is repeated in this way until the correct character is displayed, at which point the correction is complete.
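A minimal sketch of this correction control step, in Python, assuming the reading dictionary maps a reading to the characters that carry it and that the key-driven cycling is modelled by stepping through an iterator (the names and data are illustrative assumptions):

    # Hypothetical reading dictionary 7: reading -> characters having that reading
    reading_dictionary = {
        "ワ": ["我", "環", "輪", "倭", "和", "話"],
    }

    def correction_candidates(reading, buffered_candidates, reading_dictionary):
        """Yield the characters that match the entered reading, in the order they are
        stored in the candidate category buffer (i.e. in order of increasing distance)."""
        matches = reading_dictionary.get(reading, [])
        for character in buffered_candidates:
            if character in matches:
                yield character

    buffered = ["認", "読", "話", "説", "誤"]   # contents of candidate buffer 5
    stepper = correction_candidates("ワ", buffered, reading_dictionary)
    print(next(stepper))  # '話' — displayed at the designated position; a further key
                          # press would fetch the next matching candidate, if any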

Because the reading candidates limit the recognition candidates, the correction can be made quickly.

In FIG. 3, reference numeral 10 denotes the display screen, 11 a keyboard, and 12 a cursor.

For example, suppose the initial recognition result is displayed on the display screen 10 as shown in FIG. 3(A).

The operator presses, for example, the cursor movement keys on the keyboard 11 to move the cursor 12 to the character position to be corrected, as shown in FIG. 3(B). If the operator then presses the kana key for 「ワ」 on the keyboard 11 to enter the reading, 「話」 is selected from among the candidate categories and displayed on the display screen 10, as shown in FIG. 3(C), and the recognition result is thereby corrected.

In the embodiment described with reference to FIG. 2, the recognition dictionary 3 and the reading dictionary 7 are provided separately; however, the reading information may instead be stored together with the standard features for each category of the recognition dictionary 3, so that the recognition dictionary 3 and the reading dictionary 7 are integrated into one. It goes without saying that this case is also included in the present invention.
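A minimal sketch of this integrated variant, in Python, where each entry of the recognition dictionary carries its readings alongside the standard features (field names and values are illustrative assumptions):

    # Hypothetical integrated dictionary: category -> standard features plus readings
    integrated_dictionary = {
        "話": {"features": [0.15, 0.78, 0.30], "readings": ["ワ", "はな"]},
        "和": {"features": [0.22, 0.61, 0.44], "readings": ["ワ", "やわ"]},
    }

    def characters_with_reading(reading, dictionary):
        """Look up a reading directly in the integrated dictionary instead of a separate one."""
        return [c for c, entry in dictionary.items() if reading in entry["readings"]]

    print(characters_with_reading("ワ", integrated_dictionary))  # ['話', '和']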

(5) Effects of the Invention

As explained above, according to the present invention, rejected and misread characters can be corrected simply by entering their reading, which makes the character correction operation easy. In particular, the characters that a character recognition device misrecognizes tend to have many candidate categories and intricate shapes, so that even a human observer cannot immediately tell them apart by eye. In such cases, the present invention identifies a single candidate category, or limits the candidates to a small number, by means of the reading, which is independent of visual appearance, so the benefit in practical use is very large.

BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1 is an explanatory diagram of the processing concept of one embodiment of the present invention, FIG. 2 is a block diagram of the configuration of one embodiment, and FIG. 3 is an explanatory diagram of the character correction procedure in one embodiment. In the figures, 1 is an observation unit, 2 is a feature extraction unit, 3 is a recognition dictionary, 4 is an identification unit, 5 is a candidate category buffer, 6 is a correction control unit, 7 is a reading dictionary, 8 is a display unit, and 9 is an input unit.

Patent applicant: Fujitsu Limited

Claims (1)

CLAIMS

1. A character recognition correction processing system for a character recognition device comprising at least an observation unit that inputs character images, a feature extraction unit that extracts features of an input character from the character image input by the observation unit, a recognition dictionary in which standard features are stored in advance for each category, an identification unit that selects several candidate categories for the input character by comparing the features extracted by the feature extraction unit with the standard features read from the recognition dictionary, and a display unit that displays the recognition result, the system characterized by comprising: means for accumulating the candidate categories selected by the identification unit; a reading dictionary that stores reading information for the characters corresponding to the categories registered in the recognition dictionary; input means for entering the correct reading of the input character; and correction means for identifying the candidate category by matching the reading entered from the input means against the readings in the reading dictionary, whereby rejected and misread characters in the displayed recognition result can be corrected.
JP57104744A 1982-06-18 1982-06-18 Processing system of correction of character recognition Granted JPS58222379A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP57104744A JPS58222379A (en) 1982-06-18 1982-06-18 Processing system of correction of character recognition

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
JP57104744A JPS58222379A (en) 1982-06-18 1982-06-18 Processing system of correction of character recognition

Publications (2)

Publication Number Publication Date
JPS58222379A true JPS58222379A (en) 1983-12-24
JPH0444313B2 JPH0444313B2 (en) 1992-07-21

Family

ID=14388998

Family Applications (1)

Application Number Title Priority Date Filing Date
JP57104744A Granted JPS58222379A (en) 1982-06-18 1982-06-18 Processing system of correction of character recognition

Country Status (1)

Country Link
JP (1) JPS58222379A (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPS5347733A (en) * 1976-10-14 1978-04-28 Fujitsu Ltd Recognizing device for hand-written kana and chinese characters
JPS569873A (en) * 1979-07-02 1981-01-31 Mitsubishi Electric Corp Character coder
JPS5699573A (en) * 1980-01-09 1981-08-10 Hitachi Ltd Kanji (chinese character) distinction system using katakana (square form of japanese syllabary)

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPS5347733A (en) * 1976-10-14 1978-04-28 Fujitsu Ltd Recognizing device for hand-written kana and chinese characters
JPS569873A (en) * 1979-07-02 1981-01-31 Mitsubishi Electric Corp Character coder
JPS5699573A (en) * 1980-01-09 1981-08-10 Hitachi Ltd Kanji (chinese character) distinction system using katakana (square form of japanese syllabary)

Also Published As

Publication number Publication date
JPH0444313B2 (en) 1992-07-21

Similar Documents

Publication Publication Date Title
US5022081A (en) Information recognition system
KR930004060B1 (en) Method for designating a recognition mode in a hand-written character/graphic recognizer
US5119437A (en) Tabular document reader service
JP2740335B2 (en) Table reader with automatic cell attribute determination function
JPS58222379A (en) Processing system of correction of character recognition
KR950001061B1 (en) Correcting apparatus for recognizing document
JPS60217483A (en) Recognizer of character
JPS61150081A (en) Character recognizing device
JPS61272882A (en) Information recognizing device
JPH0562008A (en) Character recognition method
JPH04138583A (en) Character recognizing device
JPS63184861A (en) Documentation and editing device
JP2731394B2 (en) Character input device
JPH0612520A (en) Confirming and correcting system for character recognizing device
JPH06333083A (en) Optical character reader
JPS61226883A (en) Character recognizing device
JPH0363882A (en) Image processing device
JPH11282965A (en) Character recognizing device and computer readable storage medium recording character recognition program
JPH06131503A (en) Character recognizing processor
JP2886690B2 (en) Character recognition method for optical character reader
JPH07192081A (en) Handwritten character input device
JPH05120472A (en) Character recognizing device
JPS58163072A (en) Character correcting system
JP2907947B2 (en) Optical character reading system
JPS63271588A (en) Character recognition device