200924343

IX. Description of the Invention

[Technical Field]

The present invention relates to a device for predicting reactive power, and more particularly to a feedforward neural network architected on a relay chip for predicting reactive-power values, and to a method of predicting reactive-power values with such a network.
[Prior Art]

In recent years, with economic growth and rising living standards, electric-power demand in Taiwan has increased substantially, and many large customers have joined the power system. Because these customers draw very large amounts of power, their load characteristics strongly affect the supply quality and energy efficiency of the power system. Many users assume that only factories with very high power consumption need dedicated power-monitoring equipment, but in fact electric power matters to every plant: a stable power system is what keeps a production line running. Beyond supervising the power system under normal conditions, a complete power-monitoring system must, above all, pinpoint faults quickly when the power supply behaves abnormally, which greatly improves supply quality. Power-load forecasting therefore plays an essential role in power delivery, energy-storage dispatch, on-line regulation, and emergency handling. Many experts and scholars have applied different methods to power-load forecasting, such as time-series analysis, expert systems, grey theory, and artificial neural networks. However, although the relevant influencing variables can be used to build neural-network training data for load forecasting, past work has concentrated on real power and largely neglected reactive power; at present, reactive power is mostly estimated from empirical power-factor values. Analyzing or controlling a system ahead of time on the basis of predicted reactive power is very important for system stability.
SUMMARY OF THE INVENTION

In view of the background described above, and to meet certain needs of the industry, the present invention provides a feedforward neural network architected on a relay chip for predicting reactive-power values, and a method of predicting reactive-power values with such a network.

One object of the present invention is a relay chip that uses a neural network to predict reactive power. The invention lays out the architecture of a feedforward neural network on a relay chip to predict future reactive-power values. The activation function used is the sigmoid function, realized with four main modules: comparison, judgment, rounding, and table lookup. Historical reactive-power data (the 3, 4, or 5 hours preceding the prediction point, taken as the input quantities) are used to predict future reactive power.

[Embodiment]

The subject explored here is a relay chip that uses a neural network to predict reactive power. Detailed steps and their composition are set out in the following description so that the invention can be thoroughly understood; practice of the invention is not limited to the particular details familiar to those skilled in the art of reactive-power prediction. On the other hand, well-known components and steps are not described, to avoid placing unnecessary limitations on the invention. Preferred embodiments are described in detail below, but beyond this detailed description the invention may also be practiced broadly in other embodiments, and its scope is defined by the appended claims.
The invention architects a feedforward neural network on a relay chip. The main design flow is explained with an example network having 2 inputs, one hidden layer containing 3 hidden neurons, and 1 output. The design steps are roughly as follows:

(a) Referring to the first figure, a schematic of weight-value setting on the relay chip: the first weight values 2422 between the input layer 22 and the hidden layer 24, computed offline, are loaded into the neural network. In this example the input layer 22 has two input values 222. The first weight multipliers 2421 and the first adders 2423 then operate on them; since the example has 3 hidden-layer neurons 242, there are 3 hidden-neuron outputs. Each input value 222 is multiplied by the corresponding first weight value 2422 in a first weight multiplier 2421 to produce a first weighted value, and the first weighted values of each neuron 242 are summed by the first adder 2423 to produce a first sum.

(b) Referring to the second figure, a schematic of bias-value setting on the relay chip: after the first weight values 2422 between the input layer 22 and the hidden layer 24 are set, the first bias weight values 2424 are set. Each first sum is added to the corresponding first bias weight value 2424 by a first bias adder 2425 to produce a first biased value.

(c) Referring to the third figure, a schematic of the activation-function design on the relay chip: an activation function unit 2427 is built from functional blocks. This unit contains four functional blocks, whose design flow and purpose are explained separately below.
(d) Referring to the fourth figure, a schematic of the connection of the activation function units 2427 in the hidden layer 24: each first biased value is received by an activation function unit 2427, and each activation function unit 2427 produces an output value from the first biased value it receives.

(e) Referring to the fifth figure, a schematic of setting the second weight values 262 between the hidden layer 24 and the output layer 26: each output value of an activation function unit is multiplied by a second weight value 262 in a second weight multiplier 261 to produce a second weighted value.

(f) Referring to the sixth figure, a schematic of the complete feedforward neural network: the second weighted values are summed by a second adder 263 to produce a second sum, and the second sum is then added to a second bias weight value 264 by a second bias adder 265 to produce a second biased value, which is the output of the feedforward neural network.

Within this design, the activation function is the comparatively complicated part, so the activation-function blocks of step (c) are now introduced more carefully. Referring to the seventh figure, a schematic of the activation-function design: the activation function unit contains four main functional blocks — comparison (first functional block 24272), judgment (second functional block 24274), rounding (third functional block 24276), and table lookup (fourth functional block 24278) — each introduced in turn below. When an input value enters the activation function, it is first determined whether the value lies within range. The activation function used in the invention is the sigmoid function.
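The computation assembled from steps (a) through (f) — weight, sum, bias, activate in the hidden layer, then weight, sum, bias in the output layer — can be sketched in software for the 2-input, 3-hidden-neuron, 1-output example. The weight and bias values below are hypothetical placeholders for illustration only; the actual offline-trained values are not disclosed in the text.

```python
import math

# Hypothetical parameters (the patent's offline-trained values are not given).
W1 = [[0.5, -0.3],    # first weight values 2422, one row per hidden neuron
      [0.8,  0.1],
      [-0.6, 0.4]]
B1 = [0.1, -0.2, 0.05]   # first bias weight values 2424
W2 = [0.7, -0.5, 0.9]    # second weight values 262
B2 = 0.3                 # second bias weight value 264

def sigmoid(n):
    """S-shaped activation: f(n) = 1 / (1 + exp(-n))."""
    return 1.0 / (1.0 + math.exp(-n))

def forward(x):
    # Steps (a)-(d): weight, sum, add bias, activate, per hidden neuron.
    hidden = [sigmoid(sum(w * xi for w, xi in zip(row, x)) + b)
              for row, b in zip(W1, B1)]
    # Steps (e)-(f): weight the hidden outputs, sum, add the second bias.
    return sum(w * h for w, h in zip(W2, hidden)) + B2

print(forward([0.6, 0.4]))
```

With these placeholder parameters the call prints a single scalar, the second biased value that the chip would emit as its prediction.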
Referring to the eighth figure, a schematic of the sigmoid-function formula: as the formula shows, whatever the input value n, the output a always lies between 0 and 1. In other words, each neuron 242 of the hidden layer 24 has an activation function f(n) = 1 / (1 + exp(-n)). The range set in the invention is 7 to -7: when the input value exceeds 7, the converted value is very close to 1 after passing through the activation function, and when the input value is below -7 the converted value is very close to 0, so the invention sets the range at 7 to -7. Two digits after the decimal point are kept, so there are 1400 points in total (an interval of length 14 at 0.01 resolution). As the seventh figure shows, there are only two paths from input to output: first functional block 24272 → fourth functional block 24278, or first functional block 24272 → second functional block 24274 → third functional block 24276 → fourth functional block 24278. If n is out of range, the path is 24272 → 24278; if n is within range, the path is 24272 → 24274 → 24276 → 24278. The four functional blocks are explained in detail below:

First functional block 24272: referring to the ninth figure, the design of the maximum/minimum comparison block.
When the input value enters the activation function, the first functional block 24272 determines whether n is greater than 7, less than -7, or between 7 and -7. The input value is multiplied by 100 because the comparison block cannot compare decimal fractions; the maximum and minimum bounds are likewise multiplied by 100, and the comparison result is fed directly to the fourth functional block 24278.

Second functional block 24274: referring to the tenth figure, the design of the judgment block. On receiving the signal from the first functional block 24272, the second functional block 24274 determines whether the input value is within range. If it is, the second path is taken (first functional block 24272 → second functional block 24274 → third functional block 24276 → fourth functional block 24278); if it is greater than the upper bound or less than the lower bound, the path is first functional block 24272 → fourth functional block 24278, where an input above the upper bound yields an output value of 1 and an input below the lower bound yields an output value of 0.

Third functional block 24276: referring to the eleventh figure, the design of the rounding block. If n is within range, the path taken is 24272 → 24274 → 24276 → 24278, and the third functional block 24276 rounds off the third digit after the decimal point. As the eleventh figure shows, the circuit divides roughly into two parts: the upper part handles the third digit after the decimal point, judging whether its value is 5 or greater (round up) or 4 or less (round down), while the lower part keeps the digits down to the second decimal place and determines whether to carry.
In this example, the selection signal of the second multiplexer is supplied by the upper part of the circuit as a 2-bit bus: when bit 0 is 1 and bit 1 is 0, the value is 1 and the carried (rounded-up) value is output; conversely, when bit 1 is 1 and bit 0 is 0, the value is 2 and the uncarried value is output.

Fourth functional block 24278: referring to the twelfth figure, the design of the lookup-table block. The invention handles the activation function by table lookup: a table is built that records the correspondence between input (n) and output (a). The rounded value produced by the third functional block 24276 is converted to the corresponding index and used to address N1, where N1 is the table built in the invention; the result then passes to the fourth functional block 24278, which performs the final selection.

In this example, the output of the first functional block 24272 can be a 3-bit one-hot bus: when bit 0 is 1 and the other bits are 0, the value is 1 and the first multiplexer outputs the constant 1; when bit 1 is 1 and the other bits are 0, the value is 2 and the first multiplexer outputs the constant 0; and when bit 2 is 1 and the other bits are 0, the value is 4 and the first multiplexer outputs the output of the third functional block 24276.

As those familiar with the related art can infer, a feedforward neural network may comprise an input layer, a hidden layer, and an output layer. The input layer may contain a plurality of input values; the hidden layer may contain a plurality of neurons, each receiving the plurality of input values and producing one output value; and the output layer finally produces a result from these output values. In addition, each neuron in the hidden layer may contain an activation function (also called a transfer function).

Accordingly, one embodiment of the present invention is a feedforward neural network for predicting reactive-power values which, referring to the sixth figure, comprises an input layer 22, a hidden layer 24, and an output layer 26. The input layer 22 may contain a plurality of input values 222. The hidden layer 24 contains a plurality of neurons 242, each of which receives the plurality of input values and comprises a plurality of first weight multipliers 2421, a first adder 2423, a first bias adder 2425, and an activation function unit 2427. Each first weight multiplier 2421 holds a first weight value 2422 and receives one of the input values 222, multiplying the first weight value 2422 by the input value 222 to produce a first weighted value. All first weighted values of the same neuron are received by the first adder 2423, which sums them to produce a first sum. The first bias adder 2425 holds a first bias weight value 2424 and receives the first sum, adding the two to produce a first biased value, after which the activation function unit 2427 produces an output value from the first biased value. The output layer 26 comprises a plurality of second weight multipliers 261, a second adder 263, and a second bias adder 265.
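The compare/judge/round/look-up datapath described in the seventh through twelfth figures can be sketched in software as follows, assuming the parameters stated in the text (range -7 to 7, two decimal places kept, a 1400-entry table N1). The Python dictionary and conditionals stand in for the hardware comparator, rounding circuit, lookup table, and output multiplexer; this is an illustrative model, not the chip's actual circuitry.

```python
import math

# Table N1: maps each 0.01 step in [-7, 7) to its sigmoid value
# (an interval of length 14 at 0.01 resolution gives 1400 entries).
N1 = {round(-7 + i * 0.01, 2): 1.0 / (1.0 + math.exp(-(-7 + i * 0.01)))
      for i in range(1400)}

def activate(n):
    scaled = int(round(n * 100))  # compare x100, since the comparison
    if scaled >= 700:             # block cannot compare decimal fractions
        return 1.0                # above the upper bound: output 1
    if scaled <= -700:
        return 0.0                # below the lower bound: output 0
    index = round(n, 2)           # rounding block: keep 2 decimal places
    return N1[index]              # lookup block, passed through the selector

print(activate(0.28), activate(9.3), activate(-8.1))
```

In-range inputs return the tabulated sigmoid value; out-of-range inputs are clamped to the constants 1 and 0, matching the two paths through the functional blocks.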
Each second weight multiplier 261 holds a second weight value 262 and receives one of the plurality of output values, multiplying the second weight value 262 by the output value to produce a second weighted value. The second weighted values are received by the second adder 263, which sums them all to produce a second sum. Finally, the second bias adder 265 receives the second sum; it holds a second bias weight value 264 and adds it to the second sum to produce a second biased value, which is the output of this feedforward neural network for predicting reactive-power values.

The activation function unit 2427 described above produces the output value from the first biased value with an activation function, namely f(n) = 1 / (1 + exp(-n)). Accordingly, in one preferred example of the invention, the activation function unit comprises: a judging unit (the first functional block 24272 above), which judges the first biased value against a range to produce a judgment value; an index generator (the second functional block 24274 above), which produces an index value from the first biased value; a table retriever (the third functional block 24276 above), which contains a lookup table and retrieves from it, according to the index value, a retrieved value lying within the range; and a selector (the fourth functional block 24278 above), which, according to the judgment value, selects one of 1, the retrieved value, and 0 as the output value. The range is 7 to -7: when the judgment value indicates that the first biased value lies between 7 and -7, the selector takes the retrieved value as the output value; when the judgment value indicates that the first biased value is greater than 7, the selector takes 1 as the output value; and when it indicates that the first biased value is less than -7, the selector takes 0 as the output value. The index generator may produce the index value by rounding the first biased value to two digits after the decimal point.

Accordingly, the plurality of input values are a plurality of power values corresponding to a plurality of consecutive time intervals, and the network's output is a predicted value for a prediction interval adjacent to those consecutive intervals. When the time interval is one hour, the number of power values may be 3, 5, 7, and so on, and the predicted value is the prediction for the hour immediately following the consecutive hours to which those power values correspond.

In view of the above, another embodiment of the present invention is a method of predicting reactive-power values with a feedforward neural network, as shown in the thirteenth figure. First, as shown in step 1310, a power-value sequence is provided, containing a plurality of consecutive power values, each corresponding to a time interval and ordered by those intervals. Next, as shown in step 1320, a plurality of training samples are generated from the power-value sequence; each training sample contains a plurality of consecutive first power values from the sequence together with a training result. For example, a training sample may contain 3, 4, 5, or 7 power values, plus the result the neural network should output after training. Then, as shown in steps 1330 and 1340, a feedforward neural network is provided and trained on the plurality of training samples to produce the plurality of weight values of the network; after training on these samples, the network becomes a feedforward neural network for predicting reactive-power values.
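The sample-generation of steps 1310 and 1320 can be sketched as follows, assuming hourly reactive-power readings (the values below are hypothetical) and the 3-hour input case named in the text.

```python
# Build training samples from a sequence of hourly reactive-power readings:
# each sample pairs `window` consecutive first power values with the next
# hour's reading as the training result.
def make_training_data(series, window=3):
    samples = []
    for i in range(len(series) - window):
        inputs = series[i:i + window]   # consecutive first power values
        target = series[i + window]     # training result: the next hour
        samples.append((inputs, target))
    return samples

hourly_kvar = [120.0, 118.5, 121.2, 125.0, 123.4, 119.8]  # hypothetical data
for x, y in make_training_data(hourly_kvar):
    print(x, "->", y)
```

Each (inputs, target) pair then serves as one training sample in steps 1330 and 1340; passing window=4 or window=5 to the same routine yields the 4-hour and 5-hour input cases.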
Specifically, when the consecutive first power values of a training sample are fed into the feedforward neural network for predicting reactive-power values, the network outputs the training result of that sample. Then, as shown in steps 1350 and 1360, a plurality of consecutive second power values are provided and fed into the feedforward neural network for predicting reactive-power values to produce a predicted value. As those skilled in the art can readily infer, a neural network can be trained with training data (such as the consecutive first power values here) so that the weight values of its neurons are adjusted until the network's output approaches the expected training result; the relevant details are well-known published techniques and are not elaborated here.

In one example of the invention, the feedforward neural network contains at least one hidden layer comprising a plurality of neurons, each having an activation function. In this example the activation function is a sigmoid function, f(n) = 1 / (1 + exp(-n)).

Moreover, the consecutive first power values correspond to a plurality of consecutive time intervals, and the training result is the predicted value for a prediction interval, namely the time interval immediately following those consecutive intervals. For example, when the time interval is one hour, the consecutive first power values are the power values of consecutive hours, and the network's output is the predicted value for the next hour.

For example, the feedforward neural network may be the feedforward neural network for predicting reactive-power values described above.
In that case, the plurality of input values are the consecutive first power values and the network's output is the training result; likewise, the plurality of input values may be the consecutive second power values, with the network's output being the predicted value.

Clearly, in light of the description of the embodiments above, the invention admits many modifications and variations, which should be understood to fall within the scope of the appended claims; beyond the detailed description above, the invention may also be practiced broadly in other embodiments. The foregoing are merely preferred embodiments of the invention and do not limit the scope of the claims; all equivalent changes or modifications completed without departing from the spirit disclosed by the invention shall be included in the scope of the claims below.

[Brief Description of the Drawings]

The first figure is a schematic of weight-value setting on a relay chip;
the second figure is a schematic of bias-value setting on a relay chip;
the third figure is a schematic of the activation-function design on a relay chip;
the fourth figure is a schematic of the connection between the hidden layer of a relay chip and the activation functions;
the fifth figure is a schematic of weight-value setting between the hidden layer and the output layer of a relay chip;
the sixth figure is a schematic of the complete feedforward neural-network architecture;
the seventh figure is a schematic of the activation-function design on a relay chip;
the eighth figure is a schematic of the sigmoid activation-function formula;
the ninth figure is the design of the maximum/minimum comparison block;
the tenth figure is the design of the judgment block;
the eleventh figure is the design of the rounding block; and
the twelfth figure is the design of the lookup-table block.
The thirteenth figure is a flow chart of predicting reactive-power values with the feedforward neural network of the present invention.

[Description of Main Element Symbols]

22 input layer
222 input value
24 hidden layer
242 neuron
2421 first weight multiplier
2422 first weight value
2423 first adder
2424 first bias weight value
2425 first bias adder
2427 activation function unit
24272 first functional block
24274 second functional block
24276 third functional block
24278 fourth functional block
26 output layer
261 second weight multiplier
262 second weight value
263 second adder
264 second bias weight value
265 second bias adder