JP2001084236A - Pattern identification device and learning processing procedure thereof - Google Patents

Pattern identification device and learning processing procedure thereof

Info

Publication number
JP2001084236A
JP2001084236A (application number JP26289499A)
Authority
JP
Japan
Prior art keywords
processing
stage
learning
pattern
parallel
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
JP26289499A
Other languages
Japanese (ja)
Other versions
JP4494561B2 (en)
Inventor
Toru Nakagawa
Hajime Kitagawa
Keiichi Horikawa
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Daicel Corp
Original Assignee
Daicel Chemical Industries Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Daicel Chemical Industries Ltd filed Critical Daicel Chemical Industries Ltd
Priority to JP26289499A priority Critical patent/JP4494561B2/en
Publication of JP2001084236A publication Critical patent/JP2001084236A/en
Application granted granted Critical
Publication of JP4494561B2 publication Critical patent/JP4494561B2/en
Anticipated expiration legal-status Critical
Expired - Fee Related legal-status Critical Current

Links

Landscapes

  • Image Analysis (AREA)

Abstract

PROBLEM TO BE SOLVED: To make high-speed, highly reliable pattern identification processing and its learning processing easy to carry out, by realizing high-speed learning and identification through parallel processing within each stage of basic processing units connected in multiple stages, a prescribed pipeline-like processing between the stages, and a new basic processing unit dynamically added and trained at the tail of the final stage. SOLUTION: RANN1, RANN2, ..., RANNn denote the n randomized ANNs prepared in the first-stage basic processing unit (Stage 1), and TH1, TH2, ..., THn denote the n preprocessing circuits that perform threshold processing in Stage 1; the integration arithmetic circuit connected after them performs a logical AND operation, and together they execute the integration processing of the first stage. The integration processing of the stages up to the p-th stage is executed in the same way. When identification finally succeeds at some stage, the judgment result is sent to an overall judgment section, a logical OR operation is applied, and the final output is obtained.

Description

DETAILED DESCRIPTION OF THE INVENTION

[0001]

[Technical Field of the Invention] The present invention relates to an artificial neural network (ANN) type pattern identification apparatus (including pattern prediction apparatus) used in the information processing field, and to a learning processing procedure for it. More specifically, when a pattern similar to an already-learned pattern is input to the apparatus, it is identified automatically within a short, fixed time by the high generalization capability and parallel processing capability of the ANN. When a completely unknown pattern is input, on the other hand, it is first judged unknown (no match) by a logical AND operation, and the unknown pattern is then handed over in turn to later stages arranged as a pipeline, where identification processing continues. The invention thus concerns a pattern identification apparatus that achieves high identification capability and high reliability at the same time as a whole, as well as an ANN configuration scheme and its learning processing procedure for accelerating learning and identification in the apparatus by parallel multi-stage processing.

[0002] The term "pattern" as used in this specification is not limited to binarized images such as the handwritten characters shown in the embodiments described later; it also covers multi-valued grayscale and color images such as banknote or coin images, as well as general signals consisting of various spectra, such as X-ray CT images and various acoustic signals.

[0003]

[Description of the Related Art] In recent years, information processing methods based on artificial neural network (ANN) technology have come into wide use, and many application examples can now be seen in industry as well. Moreover, there are a growing number of attempts to move beyond merely experimental applications and to incorporate ANNs into the control sections of full-scale mass-produced industrial products. Current ANN technology, however, has a fatal defect: its output cannot be predicted for a completely unknown input about which no related information was taught at learning time. As it stands, it therefore cannot clear product liability issues such as those typified by the Product Liability (PL) Law.

[0004] To make products based on ANN technology safe, it is first of all necessary to improve the reliability of the ANN's processing results. One concept in software engineering that can achieve this is the model of coincident failures in multiversion software (B. Littlewood and D. R. Miller, 1989, "Conceptual modeling of coincident failures in multiversion software," IEEE Trans. on Software Engineering, 15, 1596-1614). In fact, several techniques exist that can be regarded as applying this concept to ANN processing systems (for example, Toyo Denki Seizo K.K., Toshiyuki Ogawa, 1992, "Integrated neural network and its learning method," JP-A-4-328669; and Keiichi Horikawa, Toru Nakagawa, Hajime Kitagawa, 1998, "Improvement of reliability in pattern identification using multiple randomized ANNs," IEICE General Conference, D-2-13).

[0005] The parts common to the above techniques can be summarized as follows. First, the connection weights and thresholds of an ANN are initialized with different random number sequences to generate multiple versions of the ANN, and these are trained on a known learning set (that is, pairs consisting of a learning input pattern and a teacher output pattern). The learning method used here is the general back-propagation (BP) learning method based on steepest descent, or a modified version of it. Next, the pattern to be identified is input to these multiple trained ANNs, and a logical AND combination or the like is formed across their outputs to make an integrated judgment, thereby improving the reliability of the ANN's processing results.

[0006]

[Problems to Be Solved by the Invention] In the prior art described above, however, three problems remain unsolved: (1) the increase in processing time for advanced learning and integration processing, (2) the trade-off between reliability and correct-answer rate in the processing results, and (3) the generation of learning sets for unknown patterns. These are major obstacles to developing highly reliable ANN application products.

[0007] Problem (1) refers to the following situation: as the number of pattern classes to be identified, the number of their variants, or the total number of input points increases, the content to be learned becomes more complex and demanding, and learning fails to converge. If the scale of the ANN is enlarged as a countermeasure, the time required for learning becomes extremely long, or, even if learning succeeds, the integration processing at identification time becomes time-consuming.

[0008] Problem (2) refers to the fact that raising the reliability of the processing results requires a logical AND operation in the integration processing, but that AND also cuts off the generalization capability of the ANNs, with the result that the correct-answer rate drops to a level unfit for practical use.

[0009] Problem (3) refers to the practical reality that, in actual applications, not all possible input patterns can be predicted before learning begins, so a complete learning set cannot be prepared in advance.

[0010] The present invention has been made in view of this state of affairs, and its object is to solve the above problems and make high-speed, highly reliable pattern identification processing, and the learning processing for it, easy to perform.

[0011]

[Means for Solving the Problems] As a result of intensive study toward the above object, the present inventors arrived at (i) a novel pattern identification apparatus based on the parallel multi-stage ANN configuration method described below, and (ii) a learning processing procedure for this pattern identification apparatus, thereby solving the above problems.

[0012] That is, the present invention provides a pattern identification apparatus composed of: a pipeline-shaped processing system that performs parallel multi-stage artificial neural network processing, taking as its basic processing unit a set consisting of a plurality of artificial neural networks initialized with different random number sequences and trained in parallel (hereinafter called "randomized ANNs"), a plurality of preprocessing circuits that execute threshold judgment in parallel on the output side of each randomized ANN, and an integration arithmetic circuit that integrates the primary judgment outputs from the preprocessing circuits by a logical AND operation, with a plurality of such basic processing units connected in multiple stages; and an overall judgment section that makes the final identification judgment by combining, with a logical OR operation, the judgment results of the individual stages obtained from the integration arithmetic circuits of the pipeline-shaped processing system.
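As an illustrative sketch only (software standing in for the patented circuitry), one basic processing unit can be modeled as n randomized ANNs whose analog outputs pass through per-terminal threshold judgment and a unanimous logical AND. The threshold value, the tie-handling rule, and the data layout below are assumptions made for the illustration, not details fixed by the specification.

```python
def stage_judgment(ann_outputs, threshold=0.5):
    """One basic processing unit: threshold each randomized ANN's analog
    outputs (the TH circuits), then AND the binary votes per output
    terminal (the integration arithmetic circuit).

    ann_outputs: list of n lists, each holding one ANN's analog output
    values, one per output terminal (class). Returns the class index if
    the AND holds for exactly one class, else None ("unknown / no match").
    The 0.5 threshold and single-winner rule are illustrative assumptions.
    """
    # Preprocessing circuits TH1..THn: analog value -> logical "1"/"0"
    binary = [[1 if v >= threshold else 0 for v in outs] for outs in ann_outputs]
    n = len(binary)
    num_classes = len(binary[0])
    # Integration circuit: per-terminal vote count (total of logical "1"s)
    votes = [sum(b[c] for b in binary) for c in range(num_classes)]
    # The logical AND holds only where all n ANNs voted "1"
    winners = [c for c, v in enumerate(votes) if v == n]
    return winners[0] if len(winners) == 1 else None
```

With n = 3 ANNs, unanimous agreement on one terminal yields an identification; any disagreement yields "unknown" and the pattern would be passed to the next stage.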

[0013] The present invention also provides a parallel multi-stage artificial neural network (ANN) learning processing procedure for training the above pattern identification apparatus, characterized in that, from among the input patterns judged unknown (no match) by the identification processing up to the preceding stage, a new learning set of learning input patterns and teacher output patterns is extracted on the basis of the vote counts at the logical AND operation in the preceding stage (the total number of logical "1"s), and that learning set is used to train in parallel the plurality of randomized ANNs added as the next stage.
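A minimal sketch of this extraction step follows. The specification says only that the learning set is extracted "based on the vote counts"; the concrete rule used below (take the top-voted class as the teacher output when it received at least a minimum number of votes) and all names are assumptions for illustration.

```python
def extract_learning_set(unknown_patterns, min_votes=2):
    """Build a candidate learning set from patterns judged "unknown".

    unknown_patterns: list of (input_pattern, votes) pairs, where votes[c]
    is the number of logical "1"s class c received at the AND stage
    (necessarily less than n, or the pattern would not be unknown).
    Returns (input_pattern, teacher_output) pairs; min_votes is an
    illustrative cutoff, not a value from the specification.
    """
    learning_set = []
    for pattern, votes in unknown_patterns:
        best = max(range(len(votes)), key=lambda c: votes[c])
        if votes[best] >= min_votes:
            # One-hot teacher output pattern for the top-voted class
            teacher = [1 if c == best else 0 for c in range(len(votes))]
            learning_set.append((pattern, teacher))
    return learning_set
```

In practice the specification has a human create the final learning set from the stored candidates (step 125 of FIG. 2); a rule like this one would only pre-sort the material.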

[0014] The basic idea of this ANN learning processing procedure is that, whenever a new learning set of input patterns and teacher output patterns to be learned arises, a new stage (basic processing unit) is appended to the tail of the final stage of the pipeline-shaped processing system in the pattern identification apparatus, and only this stage is trained. Since only one basic processing unit exists at the start of the whole learning process, this learning procedure is called the parallel multi-stage ANN learning processing procedure in the present invention.

[0015] Note that the "randomized ANN" referred to in this specification is not limited to the three-layer ANN (with two trainable layers) used in the embodiments described later, initialized (randomized) with different random number sequences; it also covers randomized versions of more general multilayer ANNs, as well as SOMs, Hopfield-type ANNs, and the like. Also, where simply "learning" is written, supervised learning as typified by the BP learning method is meant.

[0016]

[Operation] The multi-stage randomized ANN configuration method described in the present invention builds a pipeline-style processing system inside the apparatus, and high-speed learning and judgment processing is achieved by parallel processing among the multiple randomized ANNs within each stage (basic processing unit) and by a kind of pipeline processing between stages. Note that the pipeline-style processing system of the present invention differs substantially in operation from ordinary pipeline processing in that (1) each stage's processing result is sent to the overall judgment section, and (2) unknown patterns that could not be processed are passed on unchanged to the next stage. Furthermore, in the present invention, because the processing results across the multiple stages are combined by a logical OR operation, a kind of multi-stage filtering effect is obtained, and a high correct-answer rate and high reliability can be realized at the same time.

[0017] The operation of an apparatus according to the invention is explained more concretely below. In an apparatus with the features of claim 1, when a pattern similar to an already-learned pattern is input to some stage (basic processing unit), it can be identified automatically within a short, fixed time thanks to the high generalization capability inherent in ANNs and the parallel processing capability of the randomized ANNs connected in parallel.

[0018] On the other hand, when a completely unknown pattern is input to that stage, it is first judged unknown (no match) by the logical AND operation, and the unknown pattern is then handed over in turn to later stages arranged as a pipeline, where identification processing continues; as a whole, high identification capability and high reliability are thus realized simultaneously. The processing time in this case is proportional to the number of basic processing unit stages used, and an apparatus according to the invention therefore offers good properties for real-time processing.
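The stage-by-stage hand-over and the final OR can be sketched as a simple loop. Each stage is represented here by a callable returning a class index or None for "unknown"; this representation, and the early exit on the first success, are assumptions made for the illustration.

```python
def identify(pattern, stages):
    """Pass a pattern through the p stages in order. The first stage whose
    AND succeeds determines the result (equivalent to the OR in the overall
    judgment section), and later stages are skipped for that pattern.

    stages: list of callables, each mapping a pattern to a class index or
    None ("unknown / no match"). Returns (class_or_None, stages_used),
    showing that processing time grows with the number of stages consulted.
    """
    for used, stage in enumerate(stages, start=1):
        result = stage(pattern)
        if result is not None:       # identified: send to overall judgment
            return result, used
        # unknown: pass the pattern unchanged through the gate to the next stage
    return None, len(stages)         # unknown even at the final stage
```

A pattern already covered by an early stage is resolved quickly; only genuinely unfamiliar patterns pay the cost of traversing all p stages.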

[0019] If unknown patterns remain that cannot be identified even at the final stage, the parallel multi-stage ANN learning processing procedure of claim 2 generates a new learning set of learning input patterns and teacher output patterns from the group of unknown patterns, and that set can be used to train a new stage appended to the tail of the final stage, so advanced learning and integration processing can be realized easily with minimal processing time.

[0020]

[Embodiments of the Invention] Embodiments of the present invention are described concretely below with reference to the drawings. FIG. 1 is a schematic diagram showing an example of a recognition system in the pattern identification apparatus of the present invention. In the figure, RANN1, RANN2, ..., RANNn denote the n randomized ANNs prepared in the first-stage basic processing unit (Stage 1); RANN'1, RANN'2, ..., RANN'n denote the randomized ANNs of the second stage (Stage 2); and RANN''1, RANN''2, ..., RANN''n denote the randomized ANNs of the final, p-th stage (Stage p). TH1, TH2, ..., THn denote the n preprocessing circuits that perform threshold processing within Stage 1, and the integration arithmetic circuit connected after them performs a logical AND operation; together these execute the integration processing of the first stage. Likewise, TH'1, TH'2, ..., TH'n and the integration arithmetic circuit' are the n preprocessing circuits performing threshold processing within Stage 2 and the circuit performing the AND operation, and these execute the integration processing of the second stage. In the final stage, TH''1, TH''2, ..., TH''n and the integration arithmetic circuit'' are the n preprocessing circuits performing threshold processing within Stage p and the circuit performing the AND operation, and these execute the integration processing of the p-th stage.

[0021] In the figure, the terminal on the left of each stage (Stage 1, Stage 2, ..., Stage p), labeled "input data" for the first stage only, receives the pattern to be identified during recognition, and the pattern to be learned during learning. If a pattern fails to be identified during recognition, that pattern is passed to the next stage via the gate at the top of the figure (G1, G2, ..., or Gp), and the next identification processing is executed in the following stage, which has been trained in advance with a different learning set.

[0022] Finally, when identification succeeds at any of the above stages, the judgment result is sent to the overall judgment section at the bottom of the figure, where a logical OR operation is applied, producing the final output of the recognition system. If, on the other hand, a pattern cannot be identified even at the final stage, it is stored in the learning-candidate temporary storage section to the right of the final stage, as data for creating a new learning set.

[0023] Here the values of n and p matter in practice. Experiments showed, for example, that in an experiment identifying a total of ten coin types (six Japanese and four Korean; Case 1), with n = 3 and p = 2, the average correct-answer rate per identification trial could be kept at 91% or higher with an error rate of 0%, and moreover that the scale of a single randomized ANN in that case could be as small as about 154 input points, 35 hidden-layer neurons, and 20 output-layer neurons. In another example, a handwritten character recognition experiment (Case 2), with n = 3 and p = 7, the average correct-answer rate per identification trial was 82% or higher with an error rate of about 3%. The scale of a single randomized ANN in Case 2 was 441 input points, 30 hidden-layer neurons, and 46 output-layer neurons.

[0024] In both Case 1 and Case 2, the total amount of computation for identification judgment is surprisingly small: even assuming that a continuous stream of patterns at 30 frames per second is to be identified, a single currently available mass-produced microprocessor rated at a few tens of MFLOPS is sufficient, and the ANN scale fits within a size for which the apparatus can actually be built. Of course, if parallel processing exploiting the parallel multi-stage configuration is performed with multiple mass-produced microprocessors, high-speed processing of 1,000 frames per second or more is also achievable.
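The magnitude claimed here can be checked with a back-of-envelope count. A three-layer ANN of the Case 1 size performs roughly 154x35 + 35x20 multiply-accumulates per forward pass; counting each multiply-accumulate as two floating-point operations, running the worst case of n x p = 6 ANNs per frame, and processing 30 frames per second gives on the order of 2 MFLOPS, comfortably inside the "tens of MFLOPS" budget stated above (activation functions and thresholding add overhead, but not the dominant cost). The two-ops-per-weight convention is an assumption of this estimate.

```python
def forward_flops(n_in, n_hidden, n_out):
    # Two floating-point ops (multiply + add) per connection weight
    return 2 * (n_in * n_hidden + n_hidden * n_out)

# Case 1: 154 inputs, 35 hidden neurons, 20 output neurons; n=3 ANNs x p=2 stages
per_ann = forward_flops(154, 35, 20)
per_frame = per_ann * 3 * 2          # worst case: every stage runs
per_second = per_frame * 30          # 30 frames per second

assert per_ann == 12180
assert per_second == 2_192_400       # about 2.2 MFLOPS
```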

[0025] Next, the parallel multi-stage learning processing procedure and the integrated identification processing procedure are explained following the flowchart of FIG. 2.
101: Start of processing (the program from 101 onward repeats indefinitely until a break instruction from the user is input via a control console or the like).
102: Using n different seeds (random number seeds), initialize RANN'''1, RANN'''2, ..., RANN'''n to generate multiple randomized ANNs, then append them to the tail of the recognition system (hereinafter, the recognition section).
103: Distribute the new learning set, created by the user from the temporary storage of learning candidates, to each of the n added RANN'''1, RANN'''2, ..., RANN'''n.
104: From here, RANN'''1, RANN'''2, ..., RANN'''n begin learning processing in parallel.
105: Execute BP learning processing in RANN'''1.
106: Execute BP learning processing in RANN'''2.
107: Execute BP learning processing in RANN'''n.
108: Here, completion of execution is synchronized among RANN'''1, RANN'''2, ..., RANN'''n, and the parallelized learning processing started at 104 ends.
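Steps 102 through 108 describe a fork-join pattern: seed n networks independently, train them concurrently, and synchronize on completion. The sketch below uses a thread pool and a stand-in for the BP training step; the function names, weight ranges, and network shape are all assumptions made for the illustration.

```python
from concurrent.futures import ThreadPoolExecutor
import random

def make_randomized_ann(seed, shape=(154, 35, 20)):
    """Step 102: initialize one ANN's weights from its own seed.
    The weight layout and init range are placeholders."""
    rng = random.Random(seed)
    n_in, n_hid, n_out = shape
    return {
        "w1": [[rng.uniform(-0.5, 0.5) for _ in range(n_hid)] for _ in range(n_in)],
        "w2": [[rng.uniform(-0.5, 0.5) for _ in range(n_out)] for _ in range(n_hid)],
    }

def bp_train(ann, learning_set):
    """Steps 105-107: stand-in for BP learning on one ANN (the real
    procedure would run back-propagation until convergence)."""
    ann["trained_on"] = len(learning_set)
    return ann

def train_new_stage(seeds, learning_set):
    """Steps 102-108: create n randomized ANNs, distribute the learning
    set, train in parallel, and join (synchronize) on completion."""
    anns = [make_randomized_ann(s) for s in seeds]
    with ThreadPoolExecutor(max_workers=len(anns)) as pool:
        # pool.map blocks until every worker finishes: the step-108 sync
        return list(pool.map(lambda a: bp_train(a, learning_set), anns))
```

The distinct seeds are what make the n networks fail differently, which is the premise of the multiversion AND combination described under [0004] and [0005].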

[0026]
109: Receive one piece of data from the gate G of the preceding stage (concretely, from the input data terminal when this process is the first stage, from G1 when it is the second stage, and likewise from Gp-1 when it is the p-th stage), and distribute it to the process's own RANN'''1, RANN'''2, ..., RANN'''n.
110: From here, RANN'''1, RANN'''2, ..., RANN'''n begin identification processing in parallel.
111: Execute the identification processing of RANN'''1 and output an analog judgment value at each output terminal.
112: Threshold-judge each output value of 111 in the preprocessing circuit TH'''1 and convert it to a digital value.
113: Execute the identification processing of RANN'''2 and output an analog judgment value at each output terminal.
114: Threshold-judge each output value of 113 in the preprocessing circuit TH'''2 and convert it to a digital value.
115: Execute the identification processing of RANN'''n and output an analog judgment value at each output terminal.
116: Threshold-judge each output value of 115 in the preprocessing circuit TH'''n and convert it to a digital value (that is, logical "1" or "0").
117: Here, completion of execution is synchronized among RANN'''1, RANN'''2, ..., RANN'''n, and the parallelized identification processing started at 110 ends.

[0027]
118: As the processing of the integration arithmetic circuit''', obtain for each output terminal the total number of logical "1"s across the preprocessing circuits (TH'''1, TH'''2, ..., TH'''n), that is, the vote count.
119: If the largest of the vote counts obtained at 118 is not equal to n (that is, the total number of randomized ANNs), the logical AND does not hold, so jump to 121.
120: The logical AND holds, so send the result to the overall judgment section.
121: If this process is the final stage of the recognition section, jump to 123.
122: Open the process's own gate G (concretely, G1 for the first stage, G2 for the second stage, and likewise Gp for the p-th stage) and pass the unknown pattern judged as no match to the next stage.
123: Store the unknown pattern that found no match in the identification so far, together with its voting result, in the learning-candidate temporary storage section shared by the recognition section.
124: If the total number of learning candidates is less than a predetermined upper limit, jump to 109.
125: The total number of learning candidates has reached the upper limit, so request the outside (currently a human) to create a new learning set, fork (branch-start) this process (that is, this program), and jump to 109.
Note that in the above procedure, the learning set used for first-stage learning is prepared manually in advance.

[0028]

[Effects of the Invention] The pattern identification apparatus according to the present invention has a mechanism that achieves high-speed learning and identification processing through parallel processing within each stage of the basic processing units connected in multiple stages, a kind of pipeline processing between stages, and a new basic processing unit dynamically appended to and trained at the tail of the final stage; it thus offers one means of solving the generally difficult "problem of increased processing time in advanced learning and integration processing." The resulting capability, automatic identification within a short, fixed time, is especially effective in real-time identification processing.

[0029] As for the "trade-off between reliability and correct-answer rate in the processing results" mentioned under the problems, the multi-stage integration processing within the proposed apparatus yields a kind of multi-stage filter property, and by increasing the number of stages a high correct-answer rate and high reliability are realized at the same time. It is also practically useful that, by adjusting the number of stages used, the apparatus cost can be held down while keeping the correct-answer rate and reliability within the practical range.

[0030] Furthermore, against the remaining problem, the "generation of learning sets for unknown patterns," the invention has great practical benefit in that learning can be started with only the set of learning input patterns and teacher output patterns known at a given point in time, and in that new learning can easily be added by reusing the patterns judged unknown (no match).

【0031】 The present invention is particularly effective when used in a pattern identification apparatus that automatically classifies, at high speed, actual input patterns including completely unknown input patterns; by passing the input patterns to be identified sequentially through the parallel multi-stage randomized ANN that characterizes the invention, continuous pattern sequences such as moving images can also be identified at high speed.

[Brief Description of the Drawings]

FIG. 1 shows an example of a recognition system in the pattern identification apparatus of the present invention: a system schematic for the case where the number of randomized ANNs per stage is n and the total number of stages is p.

FIG. 2 is a flowchart illustrating the parallel multi-stage ANN learning processing procedure and the integrated identification processing procedure according to the present invention.

Continuation of front page: (72) Inventor: Hajime Kitagawa, 9, 6 Iwakura Hanazono-cho, Sakyo-ku, Kyoto-shi, Kyoto; (72) Inventor: Keiichi Horikawa, 940 Shinzaike, Aboshi-ku, Himeji-shi, Hyogo. F-terms (reference): 5D015 JJ00; 5L096 BA13 BA14 BA15 BA16 BA17 GA21 HA11 LA13

Claims (2)

[Claims]
[Claim 1] A pattern identification apparatus comprising: a pipeline-like processing system that performs parallel multi-stage artificial neural network processing, in which the basic processing unit is a set consisting of a plurality of artificial neural networks initialized with different random number sequences and trained in parallel (hereinafter referred to as "randomized ANNs"), a plurality of preprocessing circuits that execute threshold determination in parallel on the output side of each randomized ANN, and an integration operation circuit that integrates the primary determination outputs from the preprocessing circuits by a logical AND operation, a plurality of such basic processing units being connected in multiple stages; and an overall judgment unit that makes the final identification determination by combining, with a logical OR operation, the determination results of each stage obtained from the integration operation circuits of the pipeline-like processing system.
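The structure of claim 1 can be sketched in a few lines. This is a behavioral model only: the ANN outputs are given as plain numbers rather than computed by real networks, and `threshold` stands in for the preprocessing circuits' threshold value.

```python
def basic_unit(ann_outputs, threshold):
    """One basic processing unit: threshold each randomized ANN
    output (the preprocessing circuits), then AND the binary votes
    (the integration operation circuit). Returns the stage decision
    and the vote count (total number of logical '1's)."""
    votes = [1 if y >= threshold else 0 for y in ann_outputs]
    return all(v == 1 for v in votes), sum(votes)

def identify(per_stage_outputs, threshold):
    """Overall judgment unit: logical OR of every stage's
    integrated decision gives the final identification result."""
    return any(basic_unit(outs, threshold)[0]
               for outs in per_stage_outputs)
```

The vote count returned alongside the decision is the quantity that claim 2 uses to select patterns for training the next stage.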
[Claim 2] A parallel multi-stage artificial neural network learning processing procedure for training the pattern identification apparatus according to claim 1, characterized in that, from among the input patterns judged unknown (not applicable) by the identification processing up to the preceding stage, a new learning set of learning input patterns and teacher output patterns is extracted on the basis of the vote count in the preceding stage's logical AND operation (the total number of logical "1"s), and that learning set is used to train, in parallel, the plurality of randomized ANNs to be added as the next stage.
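The learning-set extraction of claim 2 might look like the following sketch. The selection rule (keep patterns whose vote count was high but short of unanimous) and the `labeler` that supplies teacher output patterns are assumptions for illustration; the patent only states that extraction is based on the vote count.

```python
def extract_learning_set(unknown_patterns, vote_counts,
                         n, min_votes, labeler):
    """From patterns the earlier stages judged unknown, select those
    whose AND-stage vote count (number of logical '1's among the n
    nets) reached `min_votes` without being unanimous, pair each
    with a teacher output from `labeler` (hypothetical), and return
    the pairs for parallel training of the newly appended stage."""
    return [(p, labeler(p))
            for p, v in zip(unknown_patterns, vote_counts)
            if min_votes <= v < n]
```

Patterns with near-unanimous votes are the ones the existing stages "almost" recognized, so they are natural candidates for teaching a new stage, which is how the apparatus grows to cover initially unknown patterns.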
JP26289499A 1999-09-17 1999-09-17 Pattern identification device Expired - Fee Related JP4494561B2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP26289499A JP4494561B2 (en) 1999-09-17 1999-09-17 Pattern identification device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
JP26289499A JP4494561B2 (en) 1999-09-17 1999-09-17 Pattern identification device

Publications (2)

Publication Number Publication Date
JP2001084236A true JP2001084236A (en) 2001-03-30
JP4494561B2 JP4494561B2 (en) 2010-06-30

Family

ID=17382103

Family Applications (1)

Application Number Title Priority Date Filing Date
JP26289499A Expired - Fee Related JP4494561B2 (en) 1999-09-17 1999-09-17 Pattern identification device

Country Status (1)

Country Link
JP (1) JP4494561B2 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2017538137A (en) * 2014-12-15 2017-12-21 バイドゥ・ユーエスエイ・リミテッド・ライアビリティ・カンパニーBaidu USA LLC System and method for audio transcription
CN113168466A (en) * 2018-12-14 2021-07-23 三菱电机株式会社 Learning recognition device, learning recognition method, and learning recognition program

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH05101187A (en) * 1991-10-09 1993-04-23 Kawasaki Steel Corp Image recognition device and its learning method
JPH05258114A (en) * 1992-03-11 1993-10-08 Toshiba Corp Character recognition device
JPH09152480A (en) * 1995-11-30 1997-06-10 Mitsubishi Electric Corp Automatic target recognition apparatus

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2017538137A (en) * 2014-12-15 2017-12-21 バイドゥ・ユーエスエイ・リミテッド・ライアビリティ・カンパニーBaidu USA LLC System and method for audio transcription
US11562733B2 (en) 2014-12-15 2023-01-24 Baidu Usa Llc Deep learning models for speech recognition
CN113168466A (en) * 2018-12-14 2021-07-23 三菱电机株式会社 Learning recognition device, learning recognition method, and learning recognition program

Also Published As

Publication number Publication date
JP4494561B2 (en) 2010-06-30

Similar Documents

Publication Publication Date Title
Akhtar et al. Multi-task learning for multi-modal emotion recognition and sentiment analysis
Agarwal et al. Towards causal vqa: Revealing and reducing spurious correlations by invariant and covariant semantic editing
Neil et al. Learning to be efficient: Algorithms for training low-latency, low-compute deep spiking neural networks
Liu et al. Fair loss: Margin-aware reinforcement learning for deep face recognition
Fu et al. Fast crowd density estimation with convolutional neural networks
US20190228313A1 (en) Computer Vision Systems and Methods for Unsupervised Representation Learning by Sorting Sequences
KR102239714B1 (en) Neural network training method and apparatus, data processing apparatus
Ke et al. Leveraging structural context models and ranking score fusion for human interaction prediction
JPH04288662A (en) Learning method for neural network system
Romero et al. Multi-view dynamic facial action unit detection
CN111653275B (en) Method and device for constructing voice recognition model based on LSTM-CTC tail convolution and voice recognition method
KR20190099930A (en) Method and apparatus for controlling data input and output of fully connected network
TW201633181A (en) Event-driven temporal convolution for asynchronous pulse-modulated sampled signals
CN112804558B (en) Video splitting method, device and equipment
Gupta et al. Dynamic population-based meta-learning for multi-agent communication with natural language
Schuman et al. Spatiotemporal classification using neuroscience-inspired dynamic architectures
CN110942774A (en) Man-machine interaction system, and dialogue method, medium and equipment thereof
Garg et al. Neural network captcha crackers
Yang et al. DDPG with meta-learning-based experience replay separation for robot trajectory planning
Cordeiro et al. A minimal training strategy to play flappy bird indefinitely with NEAT
CN112274935B (en) AI model training method, application method computer device and storage medium
WO2021180243A1 (en) Machine learning-based method for optimizing image information recognition, and device
JP2001084236A (en) Pattern identification device and learning processing procedure thereof
CN115292455B (en) Training method and device of image-text matching model
Hallundbæk Óstergaard et al. Co-evolving complex robot behavior

Legal Events

Date Code Title Description
A621 Written request for application examination

Free format text: JAPANESE INTERMEDIATE CODE: A621

Effective date: 20060317

A131 Notification of reasons for refusal

Free format text: JAPANESE INTERMEDIATE CODE: A131

Effective date: 20090414

A521 Request for written amendment filed

Free format text: JAPANESE INTERMEDIATE CODE: A523

Effective date: 20090611

A131 Notification of reasons for refusal

Free format text: JAPANESE INTERMEDIATE CODE: A131

Effective date: 20091117

A521 Request for written amendment filed

Free format text: JAPANESE INTERMEDIATE CODE: A523

Effective date: 20100115

TRDD Decision of grant or rejection written
A01 Written decision to grant a patent or to grant a registration (utility model)

Free format text: JAPANESE INTERMEDIATE CODE: A01

Effective date: 20100330

A61 First payment of annual fees (during grant procedure)

Free format text: JAPANESE INTERMEDIATE CODE: A61

Effective date: 20100408

R150 Certificate of patent or registration of utility model

Free format text: JAPANESE INTERMEDIATE CODE: R150

FPAY Renewal fee payment (event date is renewal date of database)

Free format text: PAYMENT UNTIL: 20130416

Year of fee payment: 3

S533 Written request for registration of change of name

Free format text: JAPANESE INTERMEDIATE CODE: R313533

R350 Written notification of registration of transfer

Free format text: JAPANESE INTERMEDIATE CODE: R350

FPAY Renewal fee payment (event date is renewal date of database)

Free format text: PAYMENT UNTIL: 20140416

Year of fee payment: 4

LAPS Cancellation because of no payment of annual fees