JPH08115310A - Generation method of error signal for efficient learning of neuronetwork of multilayer perceptron


Info

Publication number
JPH08115310A
JPH08115310A (application JP6313861A)
Authority
JP
Japan
Prior art keywords
error signal
learning
output
value
multilayer perceptron
Prior art date
Legal status
Granted
Application number
JP6313861A
Other languages
Japanese (ja)
Other versions
JP2607351B2 (en)
Inventor
相勲 呉 (Sokun Go)
Current Assignee
KANKOKU DENSHI TSUSHIN KENKYUSHO
Electronics and Telecommunications Research Institute ETRI
Original Assignee
KANKOKU DENSHI TSUSHIN KENKYUSHO
Electronics and Telecommunications Research Institute ETRI
Priority date
Filing date
Publication date
Application filed by KANKOKU DENSHI TSUSHIN KENKYUSHO (Electronics and Telecommunications Research Institute, ETRI)
Publication of JPH08115310A
Application granted
Publication of JP2607351B2
Anticipated expiration
Current status: Expired - Fee Related

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G06N3/084 Backpropagation, e.g. using gradient descent
    • G06N3/06 Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons
    • G06N3/063 Physical realisation of neural networks, neurons or parts of neurons using electronic means

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Artificial Intelligence (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Computational Linguistics (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Neurology (AREA)
  • Image Analysis (AREA)
  • Feedback Control In General (AREA)

Abstract

PURPOSE: To provide a new error function that improves the performance of the backpropagation algorithm for training a multilayer perceptron.
CONSTITUTION: When a learning pattern x = (x1, x2, ..., xN0) is input to a multilayer perceptron composed of L layers, a forward calculation is performed (S1), the error signal of the output layer is calculated (S2), the error signals of the lower layers are calculated by backpropagation (S3), and the weights of each layer are updated (S4). The proposed error function generates a strong error signal when the difference between the target value and the output value of the output layer becomes large, which reduces the occurrence of inappropriately saturated output nodes during training. When the output value approaches the target value, it generates a weak error signal, which prevents the neural network from overlearning the learning patterns.

Description

Detailed Description of the Invention

[0001]
[Field of the Invention]
The present invention relates to an efficient training method for multilayer perceptron neural network models, which are widely used for learning pattern recognition problems.

[0002]
[Prior Art]
When training a multilayer perceptron, learning takes a long time, and a phenomenon may occur in which some patterns are not learned at all.

[0003]
[Problems to Be Solved by the Invention]
The present invention has been made to solve the above problems. Its objects are to shorten, and thereby speed up, the learning of pattern recognition problems with a multilayer perceptron, and to enable the learning patterns to be learned as they are.

[0004]
[Means for Solving the Problems]
The error signal generation method according to claim 1 for efficient learning of a multilayer perceptron neural network is a method of generating an error signal from a multilayer perceptron, one of the neural network models imitating the information processing of living organisms, in which nodes representing nerve cells (neurons) and the synapse weight values connecting the nodes are arranged hierarchically. The method is characterized in that, during backpropagation learning of the multilayer perceptron, a strong error signal is generated when an output node is inappropriately saturated, and a weak error signal is generated when the output node is appropriately saturated.

[0005]
[Operation]
According to the error signal generation method of claim 1, a strong error signal is generated when an output node is inappropriately saturated during backpropagation learning, and a weak error signal is generated when the output node is appropriately saturated. This reduces the occurrence of inappropriately saturated output nodes and prevents the neural network from overlearning the learning patterns.

[0006]
[Embodiments]
For the description of the present invention, the following terms are defined.

[0007] First, a "multilayer perceptron" is one of the neural network models imitating the information processing of living organisms. As shown in FIG. 1, it is built from nodes, representing nerve cells (neurons), and the synapse weight values connecting the nodes, arranged hierarchically.

[0008] Each node of the multilayer perceptron takes as input the weighted sum of the states of the nodes in the layer below, multiplied by the corresponding connection weights, and outputs the sigmoid-transformed value of that sum, as shown in FIG. 2.

[0009] The sigmoid function divides into two saturation regions at its sides, where the slope is small, and a central active region, where the slope is large.
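To make the two regions concrete, the following minimal Python sketch tabulates a sigmoid and its slope. The bipolar form f(h) = 2/(1 + e^{-h}) - 1 is an assumption: the patent shows its activation function only graphically in FIG. 2, with outputs in the (-1, +1) range used later in this description.

```python
import numpy as np

def sigmoid(h):
    """Assumed bipolar sigmoid with outputs in (-1, 1)."""
    return 2.0 / (1.0 + np.exp(-h)) - 1.0

def sigmoid_slope(h):
    """Slope of the bipolar sigmoid, written via its output:
    f'(h) = (1 - f(h)**2) / 2."""
    y = sigmoid(h)
    return (1.0 - y * y) / 2.0

# The slope peaks at h = 0 (active region) and collapses in the flat
# tails (saturation regions), which is why saturated nodes learn slowly.
for h in [-8.0, -2.0, 0.0, 2.0, 8.0]:
    print(f"h = {h:+5.1f}   f(h) = {sigmoid(h):+.4f}   f'(h) = {sigmoid_slope(h):.4f}")
```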

[0010] A "learning pattern" is a pattern collected arbitrarily in order to train on a pattern recognition problem.

[0011] A "test pattern" is a pattern collected arbitrarily to serve as a benchmark for testing how well the pattern recognition problem has been learned.

[0012] These patterns can be divided into a number of "classes", and "pattern recognition" is the task of deciding to which class an input pattern belongs.

[0013] The states of the final-layer nodes indicate the class to which the input pattern belongs.
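In code, a common convention (an assumption here; the patent does not fix the decision rule) is to assign the input to the class of the final-layer node with the largest state:

```python
import numpy as np

# final_states: the states x_k^(L) of the final-layer nodes for one input
final_states = np.array([0.92, -0.85, -0.71])   # illustrative values
predicted_class = int(np.argmax(final_states))  # winner-take-all decision
print(predicted_class)                          # -> 0, i.e. the first class
```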

[0014] "Backpropagation training" is the method of training the multilayer perceptron: after a learning pattern is input, the weights connected to the final-layer nodes are changed by an error signal so that the output values of the final-layer nodes approach the desired target values, and the nodes of each lower layer change their connection weights by the error signal backpropagated from the layer above.

[0015] An "error function" is the function that determines how the error signal is generated in backpropagation learning.

[0016] "Saturation of a node" means that the node's weighted-sum input lies in a region where the slope of the sigmoid function is small.

[0017] A node located in the same saturation region as its target value is said to be in "appropriate saturation"; a node located in the opposite saturation region is said to be in "inappropriate saturation".
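A small helper makes the distinction concrete. This is a sketch under stated assumptions: bipolar targets t in {-1, +1} and an illustrative cutoff for the flat regions of the sigmoid; the patent defines saturation through the slope of the activation, not through a fixed threshold.

```python
def saturation_kind(t, x, cutoff=0.9):
    """Classify an output node with state x against a bipolar target t.

    cutoff is an illustrative boundary for the flat (small-slope) regions
    of the sigmoid; it is not a value taken from the patent.
    """
    if abs(x) < cutoff:
        return "not saturated"
    return "appropriate" if x * t > 0 else "inappropriate"

print(saturation_kind(+1.0, +0.98))  # appropriate saturation
print(saturation_kind(+1.0, -0.98))  # inappropriate saturation
print(saturation_kind(+1.0, 0.10))   # active region, not saturated
```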

[0018] The "backpropagation learning algorithm" of the multilayer perceptron proceeds as shown in FIG. 3.

[0019] When a learning pattern $\mathbf{x} = [x_1, x_2, \ldots, x_{N_0}]$ is input, the multilayer perceptron consisting of $L$ layers determines the state of the $j$-th node of layer $l$ in step S1 by the forward calculation as

[0020]-[0021]

$$x_j^{(l)} = f\!\left(\hat{x}_j^{(l)}\right) \qquad \text{(Equation 1)}$$

where $f(\cdot)$ is the sigmoid activation function of FIG. 2.

[0022]-[0024] Here,

$$\hat{x}_j^{(l)} = \sum_{i=1}^{N_{l-1}} w_{ji}^{(l)}\, x_i^{(l-1)} + w_{j0}^{(l)} \qquad \text{(Equation 2)}$$

where $w_{ji}^{(l)}$ is the connection weight between $x_i^{(l-1)}$ and $x_j^{(l)}$, and $w_{j0}^{(l)}$ is the bias of $x_j^{(l)}$.

[0025] Once the states $x_k^{(L)}$ of the final-layer nodes have been obtained in this way, the error function of the multilayer perceptron is defined, in relation to the target pattern $\mathbf{t} = [t_1, t_2, \ldots, t_{N_L}]$ for the input pattern, as

[0026]-[0027]

$$E_m = \frac{1}{2} \sum_{k=1}^{N_L} \left(t_k - x_k^{(L)}\right)^2 \qquad \text{(Equation 3)}$$

and an error signal is generated so as to reduce this error function value; each weight is changed by this error signal.

[0028]-[0030] That is, in step S2 the error signal of the output layer is calculated as

$$\delta_k^{(L)} = -\frac{\partial E_m}{\partial \hat{x}_k^{(L)}} = \left(t_k - x_k^{(L)}\right) f'\!\left(\hat{x}_k^{(L)}\right) \qquad \text{(Equation 4)}$$

[0031]-[0033] Next, in step S3 the error signals of the lower layers are calculated by backpropagation as

$$\delta_j^{(l)} = f'\!\left(\hat{x}_j^{(l)}\right) \sum_{k=1}^{N_{l+1}} \delta_k^{(l+1)}\, w_{kj}^{(l+1)} \qquad \text{(Equation 5)}$$

[0034]-[0036] Next, in step S4 the weights of each layer are changed according to

$$w_{ji}^{(l)} \leftarrow w_{ji}^{(l)} + \eta\, \delta_j^{(l)}\, x_i^{(l-1)} \qquad \text{(Equation 6)}$$

where $\eta$ denotes the learning rate, and learning is thereby performed for one pattern.

[0037] Carrying out this process once for all learning patterns is counted as one unit called a "sweep".
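Steps S1 through S4 can be summarized in code. The sketch below is a minimal NumPy rendering of one pattern presentation, under stated assumptions: the bipolar sigmoid from above, the conventional squared-error function $E_m$ of Equation 3, and illustrative layer sizes and learning rate (none of these numeric choices come from the patent).

```python
import numpy as np

rng = np.random.default_rng(0)
sizes = [4, 5, 3]   # illustrative layer widths N0, N1, N2
eta = 0.1           # illustrative learning rate

# W[l] has shape (N_{l+1}, N_l + 1); column 0 holds the bias w_{j0}.
W = [rng.uniform(-0.5, 0.5, (sizes[l + 1], sizes[l] + 1))
     for l in range(len(sizes) - 1)]

def f(h):
    """Assumed bipolar sigmoid with outputs in (-1, 1)."""
    return 2.0 / (1.0 + np.exp(-h)) - 1.0

def slope(x):
    """Sigmoid slope written via the node output: f' = (1 - x**2) / 2."""
    return (1.0 - x ** 2) / 2.0

def train_one_pattern(x, t):
    # S1: forward calculation through all layers (Equations 1 and 2).
    states = [x]
    for Wl in W:
        states.append(f(Wl @ np.concatenate(([1.0], states[-1]))))
    # S2: output-layer error signal (Equation 4): difference times slope.
    delta = (t - states[-1]) * slope(states[-1])
    # S3 and S4: back-propagate the error signal and change the weights.
    for l in range(len(W) - 1, -1, -1):
        grad = np.outer(delta, np.concatenate(([1.0], states[l])))
        if l > 0:                    # S3 (Equation 5)
            delta = (W[l][:, 1:].T @ delta) * slope(states[l])
        W[l] += eta * grad           # S4 (Equation 6)
    return states[-1]

# One "sweep" repeats train_one_pattern once for every learning pattern.
x = rng.uniform(-1.0, 1.0, sizes[0])
t = np.array([1.0, -1.0, -1.0])      # illustrative bipolar target
print(train_one_pattern(x, t))
```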

[0038] In the backpropagation algorithm above, the error signal $\delta_k^{(L)}$ has the form of the difference between the target value and the actual value multiplied by the slope of the sigmoid activation function.

[0039] If $x_k^{(L)}$ is close to $-1$ or $+1$, the slope term makes $\delta_k^{(L)}$ extremely small.

[0040] That is, when $t_k = 1$ and $x_k^{(L)}$ is close to $-1$, or vice versa, the error signal generated is not strong enough to adjust the connected weights.
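A quick numeric check (again assuming the bipolar sigmoid, whose slope can be written as $(1 - x^2)/2$) shows how weak the signal becomes:

$$t_k = 1,\quad x_k^{(L)} = -0.99:\qquad
\delta_k^{(L)} = \left(t_k - x_k^{(L)}\right)\frac{1 - \left(x_k^{(L)}\right)^2}{2}
= 1.99 \times 0.00995 \approx 0.0198,$$

roughly 25 times weaker than the $\delta_k^{(L)} = 0.5$ produced at $x_k^{(L)} = 0$, even though the output at $-0.99$ is far more wrong.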

[0041] Such inappropriate saturation of the output nodes delays the minimization of $E_m$ in backpropagation learning and obstructs the learning of certain patterns.

[0042]-[0046] The present invention changes the error function for learning to

[Equation 7]

and, using this error function, causes the error signal of the output nodes to become

[Equation 8]

[0047] FIG. 4 compares the error signals as functions of $x_k^{(L)}$ when $t_k = 1$: the curve labeled CE shows the conventional error signal obtained from the conventional error function, and the curve labeled PE shows the proposed error signal obtained from the error function proposed by the present invention.
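For reference, the conventional curve CE of FIG. 4 can be tabulated directly from the equations above (assuming the bipolar sigmoid slope $(1 - x^2)/2$; the proposed curve PE depends on Equations 7 and 8, which are given in the patent drawings, so it is not computed here):

```python
import numpy as np

# Conventional error signal for t_k = 1 as a function of the output x
# (Equation 4 with the assumed bipolar sigmoid slope (1 - x**2) / 2):
#     delta_CE(x) = (1 - x) * (1 - x**2) / 2
x = np.linspace(-0.99, 0.99, 9)
delta_ce = (1.0 - x) * (1.0 - x ** 2) / 2.0
for xi, d in zip(x, delta_ce):
    print(f"x = {xi:+.2f}   delta_CE = {d:.4f}")
# delta_CE vanishes at both ends: near x = +1 (appropriate saturation,
# harmless) and near x = -1 (inappropriate saturation, where learning
# stalls). The proposed signal PE is designed to stay strong near x = -1.
```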

[0048] The remaining equations for learning are identical to those of the conventional backpropagation algorithm using the error function $E_m$.

[0049]
[Effects of the Invention]
According to the error signal generation method of claim 1 for efficient learning of a multilayer perceptron neural network, the backpropagation algorithm using the proposed error function generates a strong error signal when the difference between the target value and the output value of the output layer becomes large, reducing the occurrence of inappropriately saturated output nodes, and generates a weak error signal when the target value of the output layer comes close to the output value, preventing the neural network from overlearning the learning patterns. The learning time is therefore shortened and learning is speeded up, and the learning patterns can be learned as they are.

[Brief Description of the Drawings]

[FIG. 1] A diagram showing the structure of a multilayer perceptron neural network.

[FIG. 2] A diagram showing the sigmoid activation function.

[FIG. 3] A flowchart showing the general backpropagation learning method of a multilayer perceptron.

[FIG. 4] A diagram showing the error signals for efficient learning.

[Explanation of Symbols]

CE: Conventional Error Signal
PE: Proposed Error Signal

Claims (1)

[Claims]
[Claim 1] In a method of generating an error signal from a multilayer perceptron, one of the neural network models imitating the information processing of living organisms, in which nodes representing nerve cells (neurons) and the synapse weight values connecting the nodes are arranged hierarchically, an error signal generation method for efficient learning of the multilayer perceptron neural network, characterized in that a strong error signal is generated when an output node is inappropriately saturated during backpropagation learning of the multilayer perceptron, and a weak error signal is generated when the output node is appropriately saturated.
JP6313861A (filed 1994-12-16, priority 1994-09-30): Error Signal Generation Method for Efficient Learning of Multilayer Perceptron Neural Network. Granted as JP2607351B2 (status: Expired - Fee Related).

Applications Claiming Priority (2)

Application Number | Priority Date | Filing Date | Title
KR1019940025170A (KR94-25170; granted as KR0141341B1) | 1994-09-30 | 1994-09-30 | Error signal generation method for efficient learning of multilayer perceptron neural network

Publications (2)

Publication Number | Publication Date
JPH08115310A | 1996-05-07
JP2607351B2 | 1997-05-07

Family

Family ID: 19394262

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
JP6313861A (granted as JP2607351B2; Expired - Fee Related) | Error Signal Generation Method for Efficient Learning of Multilayer Perceptron Neural Network | 1994-09-30 | 1994-12-16

Country Status (2)

Country Link
JP (1) JP2607351B2 (en)
KR (1) KR0141341B1 (en)

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
NEURAL NETWORKS, 1994 *

Also Published As

Publication Number | Publication Date
JP2607351B2 | 1997-05-07
KR960012131A | 1996-04-20
KR0141341B1 | 1998-07-01

Similar Documents

Publication Publication Date Title
Jang et al. Neuro-fuzzy modeling and control
US5095443A (en) Plural neural network system having a successive approximation learning method
KR100243353B1 (en) Neural network system adapted for non-linear processing
Klöppel Neural networks as a new method for EEG analysis: a basic introduction
Fortuna et al. Improving back-propagation learning using auxiliary neural networks
JP2907486B2 (en) Neural network device
JPH08115310A (en) Generation method of error signal for efficient learning of neuronetwork of multilayer perceptron
JP2897220B2 (en) Signal processing device
JP2699447B2 (en) Signal processing device
Igelnik et al. Additional perspectives on feedforward neural-nets and the functional-link
JP2606317B2 (en) Learning processing device
FI103304B (en) Associative neuron
JP3172164B2 (en) Group-based sequential learning method of connection in neural network
JPH04186402A (en) Learning system in fuzzy inference
JPH0981535A (en) Learning method for neural network
Baltacıoğlu et al. Is Artificial Neural Network Suitable for Damage Level Determination of Rc-Structures?
KR100241359B1 (en) Adaptive learning rate and limited error signal
JP3292495B2 (en) Neuro-fuzzy fusion system
JPH0696046A (en) Learning processor of neural network
Sahithya et al. Digital Design of Radial Basis Function Neural Network and Recurrent Neural Network
JPH0573522A (en) Neural network and its structuring method, and process control system using neural network
Mordjaoui et al. Neuro-fuzzy modeling for dynamic ferromagnetic hysteresis
Pham et al. A supervised neural network for dynamic systems identification
Touzet et al. Application of connectionist models to fuzzy inference systems
JPH04323763A (en) Neural network learning method

Legal Events

A01 - Written decision to grant a patent or to grant a registration (utility model); JAPANESE INTERMEDIATE CODE: A01; effective date: 1996-12-17
R250 - Receipt of annual fees; JAPANESE INTERMEDIATE CODE: R250
R250 - Receipt of annual fees; JAPANESE INTERMEDIATE CODE: R250
R250 - Receipt of annual fees; JAPANESE INTERMEDIATE CODE: R250
FPAY - Renewal fee payment (event date is renewal date of database); payment until: 2008-02-13; year of fee payment: 11
FPAY - Renewal fee payment (event date is renewal date of database); payment until: 2009-02-13; year of fee payment: 12
FPAY - Renewal fee payment (event date is renewal date of database); payment until: 2010-02-13; year of fee payment: 13
LAPS - Cancellation because of no payment of annual fees