CN103488086B - System and method for optimizing the furnace temperature of a pesticide waste liquid incinerator using an optimal fuzzy network

Info

Publication number: CN103488086B
Application number: CN201310436863.9A
Authority: CN (China)
Other versions: CN103488086A (Chinese)
Inventors: 刘兴高, 李见会, 张明明, 孙优贤
Original and current assignee: Zhejiang University (ZJU)
Legal status: Expired - Fee Related
Application filed by Zhejiang University ZJU; priority to CN201310436863.9A; published as application CN103488086A and, after grant, as CN103488086B


Landscapes

  • Feedback Control In General (AREA)

Abstract

The invention discloses a pesticide waste liquid incinerator furnace temperature optimization system based on an optimal fuzzy network. The method first fuzzifies the input variables, then performs inference with the fuzzy rules of the fuzzy network, and finally uses a support vector machine to optimize the linear parameters of the fuzzy network. In the system, a standardization module standardizes the training samples; a fuzzy network module performs soft sensor modeling; a support vector machine optimization module optimizes the linear parameters of the fuzzy network module; and a result display module passes the furnace temperature prediction and the operating variable values that optimize the furnace temperature from the support vector machine optimization module to the DCS. The invention achieves accurate furnace temperature control and avoids furnace temperatures that are too low or too high.

Description

System and method for optimizing temperature of pesticide waste liquid incinerator by using optimal fuzzy network
Technical Field
The invention relates to the field of pesticide production waste liquid incineration, in particular to a pesticide waste liquid incinerator temperature optimization system and method based on an optimal fuzzy network.
Background
With the rapid development of the pesticide industry, the environmental pollution caused by its emissions has attracted close attention from governments and environmental protection departments in many countries. Research on compliant discharge control and harmless, minimized treatment of pesticide organic waste liquid has not only become a difficult and active topic of scientific research worldwide, but is also an urgent national scientific problem tied to the sustainable development of society.
Incineration is currently the most effective and thorough method for treating pesticide residues and waste residues, and the most commonly applied one. During incineration the furnace temperature must be kept at a suitable level: an excessively low furnace temperature is unfavorable for decomposing the toxic and harmful components in the waste, while an excessively high furnace temperature not only increases fuel consumption and equipment operating cost, but also easily damages the inner wall of the hearth and shortens the service life of the equipment. In addition, excessive temperatures may increase metal volatilization and the formation of nitrogen oxides from the waste. For chlorine-containing wastewater in particular, a suitable furnace temperature reduces corrosion of the inner wall. However, the factors influencing the furnace temperature in the actual incineration process are complex and changeable, and the furnace temperature easily becomes too low or too high.
The American mathematician L. A. Zadeh first proposed the concept of a fuzzy set in 1965. Fuzzy logic then began to replace classical logic, which insists that everything can be represented in binary terms, with a formalism closer to everyday questions and semantic statements. In 1987 Bart Kosko first carried out a more systematic study of the combination of fuzzy theory and neural networks. Since then the theory and application of fuzzy networks have developed rapidly; the proposal of various new fuzzy network models and research on adaptive learning algorithms have not only accelerated the refinement of fuzzy neural theory but also led to wide practical application.
The support vector machine, introduced by Vapnik in 1998, uses the structural risk minimization principle of statistical learning theory instead of the usual empirical risk minimization and converts the original optimal classification problem into a dual optimization problem; it therefore generalizes well and has been widely applied to pattern recognition, fitting and classification problems. In this scheme, a support vector machine is used to optimize the linear parameters in the fuzzy network model.
Disclosure of Invention
In order to overcome the defects that the furnace temperature of a conventional incinerator is difficult to control and easily becomes too low or too high, the invention provides a pesticide waste liquid incinerator furnace temperature optimization system and method that achieve accurate furnace temperature control and avoid excessively low or high furnace temperatures.
The technical scheme adopted by the invention for solving the technical problems is as follows:
the pesticide waste liquid incinerator furnace temperature optimization system based on the optimal fuzzy network comprises an incinerator, field intelligent instruments, a DCS (distributed control system), a data interface and an upper computer, wherein the DCS comprises a control station and a database; the field intelligent instruments are connected to the DCS, the DCS is connected to the upper computer, and the upper computer comprises:
the standardization processing module is used for preprocessing the model training samples input from the DCS database, centralizing the training samples, namely subtracting the average value of the samples, and then standardizing the training samples:
calculating the average value: \overline{TX} = \frac{1}{N} \sum_{i=1}^{N} TX_i \quad (1)
calculating the variance: \sigma_x^2 = \frac{1}{N-1} \sum_{i=1}^{N} \left( TX_i - \overline{TX} \right)^2 \quad (2)
standardization: X = \frac{TX - \overline{TX}}{\sigma_x} \quad (3)
where TX_i is the ith training sample, i.e. data on the key variables, the furnace temperature and the operating variables that optimize the furnace temperature, collected from the DCS database during normal production; N is the number of training samples; \overline{TX} is the mean of the training samples; X is the standardized training sample; \sigma_x is the standard deviation of the training samples and \sigma_x^2 their variance.
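As an informal illustration (not part of the patent text), the standardization of equations (1)-(3) can be sketched in a few lines of Python with NumPy; the function name standardize and the array name TX are assumptions introduced only for this sketch.

import numpy as np

def standardize(TX):
    """Z-score standardization following equations (1)-(3).

    TX: (N, d) array of raw training samples read from the DCS database.
    Returns the standardized samples X plus the column means and standard
    deviations, which are reused to scale new samples at prediction time.
    """
    mean = TX.mean(axis=0)            # equation (1)
    std = TX.std(axis=0, ddof=1)      # square root of the variance in equation (2)
    X = (TX - mean) / std             # equation (3)
    return X, mean, std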
The fuzzy network module is used for performing fuzzy inference and establishing fuzzy rules on the input variables passed from the data preprocessing module. The preprocessed training samples X passed from the data preprocessing module are fuzzily clustered to obtain the center and width of each fuzzy cluster in the fuzzy rule base. Let the pth standardized training sample be X_p = [X_{p1}, \ldots, X_{pn}], where n is the number of input variables.
Let the fuzzy network have R fuzzy rules. For each input variable X_{pj}, j = 1, \ldots, n, of the standardized training sample X_p, its membership in the ith fuzzy rule is obtained from the following membership function:
M_{ij} = \exp\left\{ -\frac{(X_{pj} - m_{ij})^2}{\sigma_{ij}^2} \right\} \quad (4)
where M_{ij} is the membership of the input variable X_{pj} in the ith fuzzy rule, and m_{ij} and \sigma_{ij} are the center and width of the jth Gaussian membership function of the ith fuzzy rule, both obtained by fuzzy clustering.
The fitness of the standardized training sample X_p to fuzzy rule i is \mu^{(i)}(X_p), which is determined by the following formula:
\mu^{(i)}(X_p) = \prod_{j=1}^{n} M_{ij}(X_p) = \exp\left\{ -\sum_{j=1}^{n} \frac{(X_{pj} - m_{ij})^2}{\sigma_{ij}^2} \right\} \quad (5)
where M_{ij} is the membership of X_{pj} in the ith fuzzy rule, and m_{ij} and \sigma_{ij} are the center and width of the jth Gaussian membership function of the ith fuzzy rule.
After the fitness of the input training sample to each rule has been obtained, the fuzzy network infers the output of each fuzzy rule and combines them into the final analytic output. In a common fuzzy network structure, the inference of each fuzzy rule proceeds as follows: first the linear weighted sum of all input variables in the training sample is formed, and this sum is then multiplied by the fitness \mu^{(i)}(X_p) of the rule to give the final output of that fuzzy rule. The inferred output of fuzzy rule i, and the resulting network output, can be expressed as follows:
f^{(i)} = \mu^{(i)}(X_p) \times \left( \sum_{j=1}^{n} a_{ij} X_{pj} + a_{i0} \right) \quad (6)
\hat{y}_p = \sum_{i=1}^{R} f^{(i)} + b = \sum_{i=1}^{R} \left[ \mu^{(i)}(X_p) \times \left( \sum_{j=1}^{n} a_{ij} X_{pj} + a_{i0} \right) \right] + b \quad (7)
where f^{(i)} is the output of the ith fuzzy rule, \hat{y}_p is the predicted output of the fuzzy network model for the pth training sample, a_{ij}, j = 1, \ldots, n, is the linear coefficient of the jth variable in the ith fuzzy rule, a_{i0} is the constant term of the linear sum of the input variables in the ith fuzzy rule, and b is the output bias.
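The following Python sketch shows how equations (4)-(7) combine to produce the network prediction for one standardized sample. It is an illustration only; the class name FuzzyNetwork and its attribute layout are assumptions, not the patent's implementation.

import numpy as np

class FuzzyNetwork:
    """Minimal fuzzy network following equations (4)-(7)."""

    def __init__(self, centers, widths, a, b):
        self.centers = centers   # m_ij, shape (R, n), from fuzzy clustering
        self.widths = widths     # sigma_ij, shape (R, n), from fuzzy clustering
        self.a = a               # linear parameters [a_i0, a_i1, ..., a_in], shape (R, n + 1)
        self.b = b               # output bias

    def fitness(self, x):
        # Equations (4)-(5): product of Gaussian memberships over the inputs.
        return np.exp(-np.sum((x - self.centers) ** 2 / self.widths ** 2, axis=1))

    def predict(self, x):
        # Equations (6)-(7): fitness-weighted linear rule outputs plus the bias b.
        mu = self.fitness(x)                        # shape (R,)
        linear = self.a[:, 1:] @ x + self.a[:, 0]   # shape (R,)
        return float(np.sum(mu * linear) + self.b)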
In equation (7), determining the parameters of the linear sum of the input variables is the main problem in using a fuzzy network. Here the original fuzzy rule inference output is rewritten as a support vector machine optimization problem, and a support vector machine is then used to optimize the linear parameters. The conversion proceeds as follows:
\hat{y}_p = \sum_{i=1}^{R} f^{(i)} + b = \sum_{i=1}^{R} \left[ \mu^{(i)}(X_p) \times \left( \sum_{j=1}^{n} a_{ij} X_{pj} + a_{i0} \right) \right] + b = \sum_{i=1}^{R} \sum_{j=0}^{n} a_{ij} \, \mu^{(i)}(X_p) \, X_{pj} + b \quad (8)
where X_{p0} is a constant term identically equal to 1. Let
\vec{\phi}(X_p) = \left[ \mu^{(1)}(X_p) X_{p0}, \ldots, \mu^{(1)}(X_p) X_{pn}, \ldots, \mu^{(R)}(X_p) X_{p0}, \ldots, \mu^{(R)}(X_p) X_{pn} \right] \quad (9)
where \vec{\phi}(X_p) denotes the converted form of the original training sample; that is, each original training sample is converted into the form above and used as a training sample for the support vector machine:
S = \left\{ \left( \vec{\phi}(X_1), y_1 \right), \left( \vec{\phi}(X_2), y_2 \right), \ldots, \left( \vec{\phi}(X_N), y_N \right) \right\} \quad (10)
where y_1, \ldots, y_N are the target outputs of the training samples and S is the new input training sample set. The original problem can then be converted into the following dual optimization problem of the support vector machine:
R(\omega, b) = \gamma \frac{1}{N} \sum_{p=1}^{N} L_{\varepsilon}\left( y_p, f(X_p) \right) + \frac{1}{2} \omega^T \omega \quad (11)
where y_p is the target output of the input training sample X_p, f(X_p) is the model output corresponding to X_p, and L_{\varepsilon}(y_p, f(X_p)) is the first-order (linear) \varepsilon-insensitive loss between the target output y_p and the model output f(X_p), with \varepsilon the error tolerance of the optimization problem; \omega is the normal vector of the support vector machine hyperplane, \gamma is the penalty factor of the support vector machine, the superscript T denotes the matrix transpose, R(\omega, b) is the objective function of the optimization problem, and N is the number of training samples. The expression for L_{\varepsilon}(y_p, f(X_p)) is:
L_{\varepsilon}\left( y_p, f(X_p) \right) = \max\left\{ 0, \left| y_p - f(X_p) \right| - \varepsilon \right\} \quad (12)
where \varepsilon is the error tolerance of the optimization problem, y_p is the target output of the input training sample X_p, and f(X_p) is the model output corresponding to X_p. Solving this dual optimization problem with the support vector machine then yields the optimally inferred linear parameters of the fuzzy rules and the prediction output of the fuzzy network:
a_{ij} = \sum_{k=1}^{N} (\alpha_k^* - \alpha_k) \, \mu^{(i)}(X_k) \, X_{kj} = \sum_{k \in SV} (\alpha_k^* - \alpha_k) \, \mu^{(i)}(X_k) \, X_{kj}, \quad i = 1, \ldots, R; \; j = 0, \ldots, n \quad (13)
\hat{y}_p = \sum_{k=1}^{N} (\alpha_k^* - \alpha_k) \left\langle \vec{\phi}(X), \vec{\phi}(X_k) \right\rangle + b \quad (14)
where \alpha_k and \alpha_k^* (k = 1, \ldots, N) are the Lagrange multipliers corresponding to y_p - f(X_p) being greater than 0 and less than 0 respectively, SV denotes the set of support vectors, and \hat{y}_p is the prediction corresponding to the pth standardized training sample X_p, i.e. the furnace temperature forecast and the operating variable values that optimize the furnace temperature.
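As a rough sketch of how this optimization step could be realized in practice (an assumption for exposition, not the patent's code): scikit-learn's LinearSVR is used here as a stand-in for the \varepsilon-insensitive support vector machine, and it solves an equivalent primal formulation rather than the dual written in equations (11)-(14); the names phi and fit_linear_params are introduced only for this sketch.

import numpy as np
from sklearn.svm import LinearSVR

def phi(fn, X):
    """Equation (9): for each sample, stack mu^(i)(X_p) * [1, X_p1, ..., X_pn] over all rules."""
    feats = []
    for x in X:
        mu = fn.fitness(x)                       # shape (R,)
        xe = np.concatenate(([1.0], x))          # X_p0 = 1 followed by the n inputs
        feats.append(np.outer(mu, xe).ravel())   # length R * (n + 1)
    return np.array(feats)

def fit_linear_params(fn, X, y, gamma=10.0, eps=0.1):
    """Fit the linear rule parameters a_ij and the bias b with an
    epsilon-insensitive linear SVR on the transformed samples (equations (10)-(14))."""
    svr = LinearSVR(C=gamma, epsilon=eps).fit(phi(fn, X), y)
    R = fn.centers.shape[0]
    fn.a = svr.coef_.reshape(R, -1)              # rows: [a_i0, a_i1, ..., a_in]
    fn.b = float(svr.intercept_[0])
    return fn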
As a preferred solution, the upper computer further comprises a model updating module, which acquires the field intelligent instrument signals at the set sampling interval and compares the measured furnace temperature with the system forecast value; if the relative error exceeds 10%, or the furnace temperature falls outside the upper and lower limits of normal production, the new data from the DCS database that make the furnace temperature optimal under normal production are added to the training samples and the soft sensor model is updated.
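A minimal sketch of this update criterion (the function name needs_update and the limit arguments are assumptions introduced for illustration):

def needs_update(measured_temp, predicted_temp, lower_limit, upper_limit):
    """Trigger model retraining when the forecast error is large or the furnace
    temperature leaves the normal production envelope (relative error > 10%
    or the measured value exceeds the upper/lower limits)."""
    relative_error = abs(measured_temp - predicted_temp) / abs(measured_temp)
    out_of_range = not (lower_limit <= measured_temp <= upper_limit)
    return relative_error > 0.10 or out_of_range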
Further, the upper computer also comprises a result display module, which transmits the obtained furnace temperature forecast value and the operating variable values that optimize the furnace temperature to the DCS, displays them at the DCS control station, and passes them through the DCS and the field bus to the field operator station for display; at the same time, the DCS automatically carries out the furnace temperature optimization operation, using the obtained operating variable values that optimize the furnace temperature as the new operating variable setpoints.
A signal acquisition module is used to acquire data from the database at the set sampling interval.
Still further, the key variables include the flow of waste liquid into the incinerator, the flow of air into the incinerator, and the flow of fuel into the incinerator; the manipulated variables include air flow into the incinerator and fuel flow into the incinerator.
The method for optimizing the furnace temperature of the pesticide waste liquid incinerator with the above optimal fuzzy network system comprises the following steps:
1) Determine the key variables to be used, collect data on these variables during normal production from the DCS (distributed control system) database as the input matrix of the training samples TX, and collect the corresponding furnace temperature and the operating variable data that optimize the furnace temperature as the output matrix Y;
2) preprocessing a model training sample input from a DCS database, centralizing the training sample, namely subtracting the average value of the sample, and then normalizing the training sample so that the average value is 0 and the variance is 1. The processing is accomplished using the following mathematical process:
2.1) calculating the average value: \overline{TX} = \frac{1}{N} \sum_{i=1}^{N} TX_i \quad (1)
2.2) calculating the variance: \sigma_x^2 = \frac{1}{N-1} \sum_{i=1}^{N} \left( TX_i - \overline{TX} \right)^2 \quad (2)
2.3) standardization: X = \frac{TX - \overline{TX}}{\sigma_x} \quad (3)
where TX_i is the ith training sample, i.e. data on the key variables, the furnace temperature and the operating variables that optimize the furnace temperature, collected from the DCS database during normal production; N is the number of training samples; \overline{TX} is the mean of the training samples; X is the standardized training sample; \sigma_x is the standard deviation of the training samples and \sigma_x^2 their variance.
3) Perform fuzzy inference and establish fuzzy rules on the input variables passed from the data preprocessing step. The preprocessed training samples X are fuzzily clustered to obtain the center and width of each fuzzy cluster in the fuzzy rule base. Let the pth standardized training sample be X_p = [X_{p1}, \ldots, X_{pn}], where n is the number of input variables.
Let the fuzzy network have R fuzzy rules. For each input variable X_{pj}, j = 1, \ldots, n, of the standardized training sample X_p, its membership in the ith fuzzy rule is obtained from the following membership function:
M_{ij} = \exp\left\{ -\frac{(X_{pj} - m_{ij})^2}{\sigma_{ij}^2} \right\} \quad (4)
where M_{ij} is the membership of the input variable X_{pj} in the ith fuzzy rule, and m_{ij} and \sigma_{ij} are the center and width of the jth Gaussian membership function of the ith fuzzy rule, both obtained by fuzzy clustering.
The fitness of the standardized training sample X_p to fuzzy rule i is \mu^{(i)}(X_p), which is determined by the following formula:
\mu^{(i)}(X_p) = \prod_{j=1}^{n} M_{ij}(X_p) = \exp\left\{ -\sum_{j=1}^{n} \frac{(X_{pj} - m_{ij})^2}{\sigma_{ij}^2} \right\} \quad (5)
where M_{ij} is the membership of X_{pj} in the ith fuzzy rule, and m_{ij} and \sigma_{ij} are the center and width of the jth Gaussian membership function of the ith fuzzy rule.
After the fitness of the input training sample to each rule has been obtained, the fuzzy network infers the output of each fuzzy rule and combines them into the final analytic output. In a common fuzzy network structure, the inference of each fuzzy rule proceeds as follows: first the linear weighted sum of all input variables in the training sample is formed, and this sum is then multiplied by the fitness \mu^{(i)}(X_p) of the rule to give the final output of that fuzzy rule. The inferred output of fuzzy rule i, and the resulting network output, can be expressed as follows:
f^{(i)} = \mu^{(i)}(X_p) \times \left( \sum_{j=1}^{n} a_{ij} X_{pj} + a_{i0} \right) \quad (6)
\hat{y}_p = \sum_{i=1}^{R} f^{(i)} + b = \sum_{i=1}^{R} \left[ \mu^{(i)}(X_p) \times \left( \sum_{j=1}^{n} a_{ij} X_{pj} + a_{i0} \right) \right] + b \quad (7)
where f^{(i)} is the output of the ith fuzzy rule, \hat{y}_p is the predicted output of the fuzzy network model for the pth training sample, a_{ij}, j = 1, \ldots, n, is the linear coefficient of the jth variable in the ith fuzzy rule, a_{i0} is the constant term of the linear sum of the input variables in the ith fuzzy rule, and b is the output bias.
4) In equation (7), determining the parameters of the linear sum of the input variables is the main problem in using a fuzzy network. Here the original fuzzy rule inference output is rewritten as a support vector machine optimization problem, and a support vector machine is then used to optimize the linear parameters. The conversion proceeds as follows:
\hat{y}_p = \sum_{i=1}^{R} f^{(i)} + b = \sum_{i=1}^{R} \left[ \mu^{(i)}(X_p) \times \left( \sum_{j=1}^{n} a_{ij} X_{pj} + a_{i0} \right) \right] + b = \sum_{i=1}^{R} \sum_{j=0}^{n} a_{ij} \, \mu^{(i)}(X_p) \, X_{pj} + b \quad (8)
where X_{p0} is a constant term identically equal to 1. Let
\vec{\phi}(X_p) = \left[ \mu^{(1)}(X_p) X_{p0}, \ldots, \mu^{(1)}(X_p) X_{pn}, \ldots, \mu^{(R)}(X_p) X_{p0}, \ldots, \mu^{(R)}(X_p) X_{pn} \right] \quad (9)
where \vec{\phi}(X_p) denotes the converted form of the original training sample; that is, each original training sample is converted into the form above and used as a training sample for the support vector machine:
S = \left\{ \left( \vec{\phi}(X_1), y_1 \right), \left( \vec{\phi}(X_2), y_2 \right), \ldots, \left( \vec{\phi}(X_N), y_N \right) \right\} \quad (10)
where y_1, \ldots, y_N are the target outputs of the training samples and S is the new input training sample set. The original problem can then be converted into the following dual optimization problem of the support vector machine:
R(\omega, b) = \gamma \frac{1}{N} \sum_{p=1}^{N} L_{\varepsilon}\left( y_p, f(X_p) \right) + \frac{1}{2} \omega^T \omega \quad (11)
where y_p is the target output of the input training sample X_p, f(X_p) is the model output corresponding to X_p, and L_{\varepsilon}(y_p, f(X_p)) is the first-order (linear) \varepsilon-insensitive loss between the target output y_p and the model output f(X_p), with \varepsilon the error tolerance of the optimization problem; \omega is the normal vector of the support vector machine hyperplane, \gamma is the penalty factor of the support vector machine, the superscript T denotes the matrix transpose, R(\omega, b) is the objective function of the optimization problem, and N is the number of training samples. The expression for L_{\varepsilon}(y_p, f(X_p)) is:
L_{\varepsilon}\left( y_p, f(X_p) \right) = \max\left\{ 0, \left| y_p - f(X_p) \right| - \varepsilon \right\} \quad (12)
where \varepsilon is the error tolerance of the optimization problem, y_p is the target output of the input training sample X_p, and f(X_p) is the model output corresponding to X_p. Solving this dual optimization problem with the support vector machine then yields the optimally inferred linear parameters of the fuzzy rules and the prediction output of the fuzzy network:
a_{ij} = \sum_{k=1}^{N} (\alpha_k^* - \alpha_k) \, \mu^{(i)}(X_k) \, X_{kj} = \sum_{k \in SV} (\alpha_k^* - \alpha_k) \, \mu^{(i)}(X_k) \, X_{kj}, \quad i = 1, \ldots, R; \; j = 0, \ldots, n \quad (13)
\hat{y}_p = \sum_{k=1}^{N} (\alpha_k^* - \alpha_k) \left\langle \vec{\phi}(X), \vec{\phi}(X_k) \right\rangle + b \quad (14)
where \alpha_k and \alpha_k^* (k = 1, \ldots, N) are the Lagrange multipliers corresponding to y_p - f(X_p) being greater than 0 and less than 0 respectively, SV denotes the set of support vectors, and \hat{y}_p is the prediction corresponding to the pth standardized training sample X_p, i.e. the furnace temperature forecast and the operating variable values that optimize the furnace temperature.
As a preferred solution, the method further comprises the following steps: 5) Acquire the field intelligent instrument signals at the set sampling interval and compare the measured furnace temperature with the system forecast value; if the relative error exceeds 10%, or the furnace temperature falls outside the upper and lower limits of normal production, add the new data from the DCS database that make the furnace temperature optimal under normal production to the training samples and update the soft sensor model.
6) Transmit the furnace temperature forecast value and the operating variable values that optimize the furnace temperature, obtained in step 4), to the DCS, display them at the DCS control station, and pass them through the DCS and the field bus to the field operator station for display; at the same time, the DCS automatically carries out the furnace temperature optimization operation, using the obtained operating variable values that optimize the furnace temperature as the new operating variable setpoints.
Still further, the key variables include the flow of waste liquid into the incinerator, the flow of air into the incinerator, and the flow of fuel into the incinerator; the manipulated variables include air flow into the incinerator and fuel flow into the incinerator.
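Putting the steps together, a hypothetical offline training and prediction pipeline could reuse the standardize, FuzzyNetwork and fit_linear_params sketches above. KMeans is used here as a stand-in for the unspecified fuzzy clustering step, and the width heuristic and all names are illustrative assumptions rather than the patent's procedure.

import numpy as np
from sklearn.cluster import KMeans

def train_furnace_model(TX, y, R=5):
    """Illustrative end-to-end training pipeline for the soft sensor:
    standardize the samples (step 2), cluster them to obtain rule centers
    and widths (step 3), then fit the linear rule parameters a_ij and the
    bias b with the epsilon-insensitive SVR (step 4)."""
    X, mean, std = standardize(TX)
    km = KMeans(n_clusters=R, n_init=10).fit(X)            # stand-in for fuzzy clustering
    centers = km.cluster_centers_
    widths = np.tile(X.std(axis=0, ddof=1), (R, 1))        # crude per-input width estimate
    fn = FuzzyNetwork(centers, widths, a=np.zeros((R, X.shape[1] + 1)), b=0.0)
    fn = fit_linear_params(fn, X, y)
    return fn, mean, std

# Forecast for a new DCS sample tx_new (a 1-D array of the key variables):
#   x_new = (tx_new - mean) / std
#   temp_forecast = fn.predict(x_new)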
The technical concept of the invention is as follows: a pesticide waste liquid incinerator furnace temperature optimization system and method based on an optimal fuzzy network are provided, and the operating variable values that make the furnace temperature optimal are found.
The invention has the following beneficial effects: 1. an online soft sensor model of the quantitative relation between the key system variables and the furnace temperature is established; 2. the operating conditions that optimize the furnace temperature are found quickly.
Drawings
FIG. 1 is a hardware block diagram of the system proposed by the present invention;
fig. 2 is a functional structure diagram of the upper computer according to the present invention.
Detailed Description
The invention is further described below with reference to the accompanying drawings. The examples are intended to illustrate the invention, but not to limit the invention, and any modifications and variations of the invention within the spirit and scope of the claims are intended to fall within the scope of the invention.
Example 1
Referring to fig. 1 and 2, the system for optimizing the temperature of a pesticide waste liquid incinerator by using an optimal fuzzy network comprises a field intelligent instrument 2 connected with an incinerator object 1, a DCS system and an upper computer 6, wherein the DCS system comprises a data interface 3, a control station 4 and a database 5, the field intelligent instrument 2 is connected with the data interface 3, the data interface is connected with the control station 4, the database 5 and the upper computer 6, and the upper computer 6 comprises:
a normalization processing module 7, configured to pre-process the model training samples input from the DCS database, centralize the training samples, that is, subtract the average value of the samples, and then normalize them:
calculating the average value: \overline{TX} = \frac{1}{N} \sum_{i=1}^{N} TX_i \quad (1)
calculating the variance: \sigma_x^2 = \frac{1}{N-1} \sum_{i=1}^{N} \left( TX_i - \overline{TX} \right)^2 \quad (2)
standardization: X = \frac{TX - \overline{TX}}{\sigma_x} \quad (3)
where TX_i is the ith training sample, i.e. data on the key variables, the furnace temperature and the operating variables that optimize the furnace temperature, collected from the DCS database during normal production; N is the number of training samples; \overline{TX} is the mean of the training samples; X is the standardized training sample; \sigma_x is the standard deviation of the training samples and \sigma_x^2 their variance.
The fuzzy network module 8 is used for performing fuzzy inference and establishing fuzzy rules on the input variables passed from the data preprocessing module. The preprocessed training samples X passed from the data preprocessing module are fuzzily clustered to obtain the center and width of each fuzzy cluster in the fuzzy rule base. Let the pth standardized training sample be X_p = [X_{p1}, \ldots, X_{pn}], where n is the number of input variables.
Let the fuzzy network have R fuzzy rules. For each input variable X_{pj}, j = 1, \ldots, n, of the standardized training sample X_p, its membership in the ith fuzzy rule is obtained from the following membership function:
M_{ij} = \exp\left\{ -\frac{(X_{pj} - m_{ij})^2}{\sigma_{ij}^2} \right\} \quad (4)
where M_{ij} is the membership of the input variable X_{pj} in the ith fuzzy rule, and m_{ij} and \sigma_{ij} are the center and width of the jth Gaussian membership function of the ith fuzzy rule, both obtained by fuzzy clustering.
The fitness of the standardized training sample X_p to fuzzy rule i is \mu^{(i)}(X_p), which is determined by the following formula:
\mu^{(i)}(X_p) = \prod_{j=1}^{n} M_{ij}(X_p) = \exp\left\{ -\sum_{j=1}^{n} \frac{(X_{pj} - m_{ij})^2}{\sigma_{ij}^2} \right\} \quad (5)
where M_{ij} is the membership of X_{pj} in the ith fuzzy rule, and m_{ij} and \sigma_{ij} are the center and width of the jth Gaussian membership function of the ith fuzzy rule.
After the fitness of the input training sample to each rule has been obtained, the fuzzy network infers the output of each fuzzy rule and combines them into the final analytic output. In a common fuzzy network structure, the inference of each fuzzy rule proceeds as follows: first the linear weighted sum of all input variables in the training sample is formed, and this sum is then multiplied by the fitness \mu^{(i)}(X_p) of the rule to give the final output of that fuzzy rule. The inferred output of fuzzy rule i, and the resulting network output, can be expressed as follows:
f^{(i)} = \mu^{(i)}(X_p) \times \left( \sum_{j=1}^{n} a_{ij} X_{pj} + a_{i0} \right) \quad (6)
\hat{y}_p = \sum_{i=1}^{R} f^{(i)} + b = \sum_{i=1}^{R} \left[ \mu^{(i)}(X_p) \times \left( \sum_{j=1}^{n} a_{ij} X_{pj} + a_{i0} \right) \right] + b \quad (7)
where f^{(i)} is the output of the ith fuzzy rule, \hat{y}_p is the predicted output of the fuzzy network model for the pth training sample, a_{ij}, j = 1, \ldots, n, is the linear coefficient of the jth variable in the ith fuzzy rule, a_{i0} is the constant term of the linear sum of the input variables in the ith fuzzy rule, and b is the output bias.
In equation (7), determining the parameters of the linear sum of the input variables is the main problem in using a fuzzy network. Here the original fuzzy rule inference output is rewritten as a support vector machine optimization problem, and a support vector machine is then used to optimize the linear parameters. The conversion proceeds as follows:
\hat{y}_p = \sum_{i=1}^{R} f^{(i)} + b = \sum_{i=1}^{R} \left[ \mu^{(i)}(X_p) \times \left( \sum_{j=1}^{n} a_{ij} X_{pj} + a_{i0} \right) \right] + b = \sum_{i=1}^{R} \sum_{j=0}^{n} a_{ij} \, \mu^{(i)}(X_p) \, X_{pj} + b \quad (8)
where X_{p0} is a constant term identically equal to 1. Let
\vec{\phi}(X_p) = \left[ \mu^{(1)}(X_p) X_{p0}, \ldots, \mu^{(1)}(X_p) X_{pn}, \ldots, \mu^{(R)}(X_p) X_{p0}, \ldots, \mu^{(R)}(X_p) X_{pn} \right] \quad (9)
where \vec{\phi}(X_p) denotes the converted form of the original training sample; that is, each original training sample is converted into the form above and used as a training sample for the support vector machine:
S = \left\{ \left( \vec{\phi}(X_1), y_1 \right), \left( \vec{\phi}(X_2), y_2 \right), \ldots, \left( \vec{\phi}(X_N), y_N \right) \right\} \quad (10)
where y_1, \ldots, y_N are the target outputs of the training samples and S is the new input training sample set. The original problem can then be converted into the following dual optimization problem of the support vector machine:
R(\omega, b) = \gamma \frac{1}{N} \sum_{p=1}^{N} L_{\varepsilon}\left( y_p, f(X_p) \right) + \frac{1}{2} \omega^T \omega \quad (11)
where y_p is the target output of the input training sample X_p, f(X_p) is the model output corresponding to X_p, and L_{\varepsilon}(y_p, f(X_p)) is the first-order (linear) \varepsilon-insensitive loss between the target output y_p and the model output f(X_p), with \varepsilon the error tolerance of the optimization problem; \omega is the normal vector of the support vector machine hyperplane, \gamma is the penalty factor of the support vector machine, the superscript T denotes the matrix transpose, R(\omega, b) is the objective function of the optimization problem, and N is the number of training samples. The expression for L_{\varepsilon}(y_p, f(X_p)) is:
L_{\varepsilon}\left( y_p, f(X_p) \right) = \max\left\{ 0, \left| y_p - f(X_p) \right| - \varepsilon \right\} \quad (12)
where \varepsilon is the error tolerance of the optimization problem, y_p is the target output of the input training sample X_p, and f(X_p) is the model output corresponding to X_p. Solving this dual optimization problem with the support vector machine then yields the optimally inferred linear parameters of the fuzzy rules and the prediction output of the fuzzy network:
a_{ij} = \sum_{k=1}^{N} (\alpha_k^* - \alpha_k) \, \mu^{(i)}(X_k) \, X_{kj} = \sum_{k \in SV} (\alpha_k^* - \alpha_k) \, \mu^{(i)}(X_k) \, X_{kj}, \quad i = 1, \ldots, R; \; j = 0, \ldots, n \quad (13)
\hat{y}_p = \sum_{k=1}^{N} (\alpha_k^* - \alpha_k) \left\langle \vec{\phi}(X), \vec{\phi}(X_k) \right\rangle + b \quad (14)
wherein alpha isk(k =1, …, N) is yp-f(Xp) Lagrange multipliers corresponding to greater than 0 and less than 0,i.e. corresponding to the p-th normalized training sample XpAnd an operating variable value for optimizing the furnace temperature.
The upper computer 6 further comprises a signal acquisition module 11, which acquires data from the database at the set sampling time interval.
The upper computer 6 further comprises a model updating module 12, which acquires the field intelligent instrument signals at the set sampling time interval and compares the measured furnace temperature with the system forecast value; if the relative error exceeds 10%, or the furnace temperature is outside the upper and lower limits of normal production, the new data in the DCS database that make the furnace temperature optimal during normal production are added to the training sample data and the soft measurement model is updated.
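As a purely illustrative sketch of this update trigger (the 10% threshold and the limit check follow the text above; the function and variable names are assumptions, not identifiers from the patent), the check could be written as:

```python
def needs_model_update(measured_T, predicted_T, lower_limit, upper_limit):
    """Return True when the soft measurement model should be retrained.

    Triggers when the relative error between the measured and forecast furnace
    temperature exceeds 10%, or when the measured temperature leaves the
    normal-production limit range.
    """
    relative_error = abs(measured_T - predicted_T) / abs(measured_T)
    out_of_range = not (lower_limit <= measured_T <= upper_limit)
    return relative_error > 0.10 or out_of_range
```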
The key variables include the flow of waste liquid into the incinerator, the flow of air into the incinerator and the flow of fuel into the incinerator; the manipulated variables include air flow into the incinerator and fuel flow into the incinerator.
The system also comprises a DCS system composed of a data interface 3, a control station 4 and a database 5; the intelligent instruments 2, the DCS system and the upper computer 6 are connected in sequence through a field bus. The upper computer 6 further comprises a result display module 10, which transmits the computed optimal results to the DCS system, displays the process state at the control station of the DCS, and forwards the process state information through the DCS system and the field bus to the field operation station for display.
When the waste liquid incineration process is equipped with a DCS system, the real-time and historical databases of the DCS are used to detect and store the real-time dynamic sample data, and the furnace temperature forecast value and the operating variable value that optimizes the furnace temperature are obtained mainly on the upper computer.
When the waste liquid incineration process is not equipped with a DCS system, a data memory takes over the data-storage role of the DCS real-time and historical databases, and the function of obtaining the furnace temperature forecast value and the operating variable value that optimizes the furnace temperature is built as an independent, complete system-on-chip comprising I/O elements, a data memory, a program memory, an arithmetic unit and a display module. Because it does not depend on a DCS system, the system-on-chip can be used on its own whether or not the incineration process is equipped with a DCS, which makes it easier to popularize and use.
Example 2
Referring to fig. 1 and 2, the method for optimizing the temperature of the pesticide waste liquid incinerator by using the optimal fuzzy network specifically comprises the following implementation steps:
1) determining the key variables to be used, collecting data of these variables during normal production from the DCS (distributed control system) database as the input matrix of the training samples TX, and collecting the corresponding furnace temperature and the operating variable data that optimize the furnace temperature as the output matrix Y;
2) preprocessing the model training samples input from the DCS database: the training samples are centered, i.e. the sample mean is subtracted, and then standardized so that the mean is 0 and the variance is 1. The processing is accomplished with the following mathematical procedure:
2.1) calculate the mean: $\overline{TX} = \frac{1}{N}\sum_{i=1}^{N} TX_i$    (1)

2.2) calculate the variance: $\sigma_x^2 = \frac{1}{N-1}\sum_{i=1}^{N}\left(TX_i - \overline{TX}\right)^2$    (2)

2.3) standardize: $X = \frac{TX - \overline{TX}}{\sigma_x}$    (3)

where TX_i is the i-th training sample, i.e. the data of the key variables, the furnace temperature and the operating variables that optimize the furnace temperature during normal production, collected from the DCS database; N is the number of training samples; $\overline{TX}$ is the mean of the training samples; X is the standardized training sample; σ_x denotes the standard deviation of the training samples and σ_x² their variance.
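For illustration only, a minimal Python sketch of this standardization step (assuming the raw samples are stored row-wise in a NumPy array called TX; the function name is an assumption, not part of the patent) might look like:

```python
import numpy as np

def standardize(TX):
    """Center and scale training samples so each column has mean 0 and variance 1.

    TX: (N, d) array of raw training samples collected from the DCS database.
    Returns the standardized samples X together with the mean and standard
    deviation, which are needed later to scale new samples the same way.
    """
    TX_mean = TX.mean(axis=0)             # equation (1)
    sigma_x = TX.std(axis=0, ddof=1)      # square root of equation (2)
    X = (TX - TX_mean) / sigma_x          # equation (3)
    return X, TX_mean, sigma_x
```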
3) carry out fuzzy reasoning and establish fuzzy rules for the input variables transmitted from the data preprocessing module. The preprocessed training samples X transmitted from the data preprocessing module are fuzzily classified to obtain the center and width of each fuzzy cluster in the fuzzy rule base. Let the p-th normalized training sample be X_p = [X_p1, …, X_pn], where n is the number of input variables.

Let the fuzzy network have R fuzzy rules. To obtain the fitness of the normalized training sample X_p to each fuzzy rule, the membership of each input variable X_pj, j = 1, …, n, to the i-th fuzzy rule is first found from the following membership function:

$M_{ij} = \exp\left\{-\frac{(X_{pj} - m_{ij})^2}{\sigma_{ij}^2}\right\}$    (4)

where M_ij denotes the membership of the input variable X_pj to the i-th fuzzy rule, and m_ij and σ_ij denote, respectively, the center and width of the j-th Gaussian membership function of the i-th fuzzy rule, both obtained by fuzzy clustering.

The fitness of the normalized training sample X_p to fuzzy rule i is μ^(i)(X_p), which can be determined by the following formula:

$\mu^{(i)}(X_p) = \prod_{j=1}^{n} M_{ij}(X_p) = \exp\left\{-\sum_{j=1}^{n}\frac{(X_{pj} - m_{ij})^2}{\sigma_{ij}^2}\right\}$    (5)

where M_ij denotes the membership of the input variable X_pj to the i-th fuzzy rule, and m_ij and σ_ij denote, respectively, the center and width of the j-th Gaussian membership function of the i-th fuzzy rule.
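As an illustration, equations (4) and (5) could be evaluated with the following Python sketch (the array names m and sigma for the cluster centers and widths are assumptions made here, not identifiers from the patent):

```python
import numpy as np

def rule_fitness(X_p, m, sigma):
    """Fitness of one normalized sample X_p (length n) to each of R fuzzy rules.

    m, sigma: (R, n) arrays with the center and width of the j-th Gaussian
    membership function of the i-th rule, obtained by fuzzy clustering.
    """
    M = np.exp(-((X_p - m) ** 2) / sigma ** 2)   # memberships M_ij, eq. (4)
    return M.prod(axis=1)                        # fitness mu^(i)(X_p), eq. (5)
```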
After the fitness of the input training sample to each rule has been obtained, the fuzzy network derives the fuzzy rule outputs to obtain the final analytic solution. In a common fuzzy network structure, the derivation of each fuzzy rule can be expressed as follows: first the linear product sum of all input variables of the training sample is formed, and this sum is then multiplied by the fitness μ^(i)(X_p) of the rule to give the final output of each fuzzy rule. The derived output of fuzzy rule i can be expressed as:

$f^{(i)} = \mu^{(i)}(X_p) \times \left(\sum_{j=1}^{n} a_{ij} \times X_{pj} + a_{i0}\right)$    (6)

$\hat{y}_p = \sum_{i=1}^{R} f^{(i)} + b = \sum_{i=1}^{R}\left[\mu^{(i)}(X_p) \times \left(\sum_{j=1}^{n} a_{ij} \times X_{pj} + a_{i0}\right)\right] + b$    (7)

where f^(i) is the output of the i-th fuzzy rule, ŷ_p is the predicted output of the fuzzy network model for the p-th training sample, a_ij, j = 1, …, n, is the linear coefficient of the j-th variable in the i-th fuzzy rule, a_i0 is the constant term of the linear product sum of the input variables in the i-th fuzzy rule, and b is the output offset.
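Continuing the illustrative sketch, equations (6) and (7) could be evaluated as follows (the names a, a0 and b for the linear parameters are assumed placeholders, not identifiers from the patent):

```python
import numpy as np

def fuzzy_network_predict(X_p, m, sigma, a, a0, b):
    """Prediction of the fuzzy network for one normalized sample X_p (eqs. 4-7).

    m, sigma: (R, n) Gaussian centers and widths from fuzzy clustering.
    a: (R, n) linear coefficients a_ij; a0: (R,) constants a_i0; b: output offset.
    """
    mu = np.exp(-(((X_p - m) ** 2) / sigma ** 2).sum(axis=1))  # fitness, eqs. (4)-(5)
    f = mu * (a @ X_p + a0)                                    # rule outputs, eq. (6)
    return f.sum() + b                                         # network output, eq. (7)
```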
4) In formula (7), determining the parameters of the linear product sum of the input variables is the main problem in using the fuzzy network. Here, the original fuzzy rule derivation output form is converted into a support vector machine optimization problem, and the linear parameters are then optimized with the support vector machine. The conversion proceeds as follows:
$\hat{y}_p = \sum_{i=1}^{R} f^{(i)} + b = \sum_{i=1}^{R}\left[\mu^{(i)}(X_p) \times \left(\sum_{j=1}^{n} a_{ij} \times X_{pj} + a_{i0}\right)\right] + b = \sum_{i=1}^{R}\sum_{j=0}^{n} a_{ij} \times \mu^{(i)}(X_p) \times X_{pj} + b$    (8)

where X_p0 is a constant term identically equal to 1. Let

$\vec{\phi}(X_p) = \left[\mu^{(1)} \times X_{p0}, \ldots, \mu^{(1)} \times X_{pn}, \ \ldots\ldots, \ \mu^{(R)} \times X_{p0}, \ldots, \mu^{(R)} \times X_{pn}\right]$    (9)

where φ(X_p) denotes the converted form of the original training sample, i.e. the original training sample is converted into the above form, which is used as the training sample of the support vector machine:
$S = \{(\vec{\phi}(X_1), y_1), (\vec{\phi}(X_2), y_2), \ldots, (\vec{\phi}(X_N), y_N)\}$    (10)

where y_1, …, y_N are the target outputs of the training samples. Taking S as the new set of input training samples, the original problem can then be converted into the following dual optimization problem of the support vector machine:
$R(\omega, b) = \gamma \frac{1}{N} \sum_{p=1}^{N} L_{\varepsilon}(y_p, f(X_p)) + \frac{1}{2}\omega^{T}\omega$    (11)

where y_p is the target output of the input training sample X_p, f(X_p) is the model output corresponding to X_p, and L_ε(y_p, f(X_p)) is the first-order ε-insensitive loss between the target output y_p and the model output f(X_p), with ε the error tolerance of the optimization problem; ω is the normal vector of the support vector machine hyperplane, γ is the penalty factor of the support vector machine, the superscript T denotes the matrix transpose, R(ω, b) is the objective function of the optimization problem, and N is the number of training samples. The expression of L_ε(y_p, f(X_p)) is:

$L_{\varepsilon}(y_p, f(X_p)) = \max\{0,\ |y_p - f(X_p)| - \varepsilon\}$    (12)

where ε is the error tolerance of the optimization problem, y_p is the target output of the input training sample X_p, and f(X_p) is the model output corresponding to X_p. The support vector machine then yields the optimal linear parameters of the fuzzy rule derivation and the prediction output of the fuzzy network from the dual optimization problem:

$a_{ij} = \sum_{k=1}^{N}(\alpha_k^{*} - \alpha_k)\,\mu^{(i)} X_{kj} = \sum_{k \in SV}(\alpha_k^{*} - \alpha_k)\,\mu^{(i)} X_{kj}, \quad i = 1, \ldots, R;\ j = 0, \ldots, n$    (13)

$\hat{y}_p = \sum_{k=1}^{N}(\alpha_k^{*} - \alpha_k)\,\langle \vec{\phi}(X), \vec{\phi}(X_k)\rangle + b$    (14)

where α_k* and α_k (k = 1, …, N) are the Lagrange multipliers corresponding to y_p − f(X_p) > 0 and y_p − f(X_p) < 0 respectively, and ŷ_p is the furnace temperature forecast value and the operating variable value that optimizes the furnace temperature corresponding to the p-th normalized training sample X_p.
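To make the conversion concrete, the following Python sketch builds the feature vector of equation (9) for each sample and fits a linear ε-insensitive support vector regression on the transformed set S. It uses scikit-learn's LinearSVR purely as an illustrative stand-in for the dual solver described above (it solves an equivalent primal problem and does not expose the multipliers α_k, α_k*); the library choice, the parameter defaults and the variable names are assumptions, not part of the patent.

```python
import numpy as np
from sklearn.svm import LinearSVR

def feature_map(X_p, m, sigma):
    """Equation (9): concatenate mu^(i)(X_p) * [1, X_p1, ..., X_pn] over all rules i."""
    mu = np.exp(-(((X_p - m) ** 2) / sigma ** 2).sum(axis=1))  # fitness, eq. (5)
    X_aug = np.concatenate(([1.0], X_p))                       # X_p0 = 1 prepended
    return np.kron(mu, X_aug)                                  # length R*(n+1)

def fit_linear_parameters(X, y, m, sigma, gamma=10.0, eps=0.1):
    """Fit the linear parameters a_ij and the offset b on the transformed samples.

    gamma plays the role of the penalty factor and eps of the error tolerance
    in equation (11); both defaults are arbitrary illustrative values.
    """
    Phi = np.vstack([feature_map(x, m, sigma) for x in X])     # training set S, eq. (10)
    svr = LinearSVR(C=gamma, epsilon=eps, loss='epsilon_insensitive').fit(Phi, y)
    R, n = m.shape
    a = svr.coef_.reshape(R, n + 1)        # per rule: [a_i0, a_i1, ..., a_in]
    b = float(svr.intercept_)              # output offset b
    return a, b
```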
The method further comprises the following step: 5) acquiring the on-site intelligent instrument signals at the set sampling time interval and comparing the measured furnace temperature with the system forecast value; if the relative error exceeds 10%, or the furnace temperature is outside the upper and lower limits of normal production, the new data in the DCS database that make the furnace temperature optimal during normal production are added to the training sample data and the soft measurement model is updated.
The optimal operating variable value is obtained by the calculation in step 4); the resulting furnace temperature forecast value and the operating variable value that optimizes the furnace temperature are transmitted to the DCS, displayed at the control station of the DCS, and forwarded through the DCS and the field bus to the field operation station for display. At the same time, the DCS system takes the obtained operating variable value that optimizes the furnace temperature as the new operating variable set value and automatically executes the furnace temperature optimization operation.
The key variables include the flow of waste liquid into the incinerator, the flow of air into the incinerator and the flow of fuel into the incinerator; the manipulated variables include air flow into the incinerator and fuel flow into the incinerator.

Claims (2)

1. A pesticide waste liquid incinerator furnace temperature optimization system of an optimal fuzzy network, comprising an incinerator, field intelligent instruments, a DCS system, a data interface and an upper computer, wherein the DCS system comprises a control station and a database; the field intelligent instruments are connected with the DCS system, and the DCS system is connected with the upper computer; characterized in that the upper computer comprises:
a standardization processing module, which preprocesses the model training samples input from the DCS database by centering them, i.e. subtracting the sample mean, and then standardizing them:
calculate the mean: $\overline{TX} = \frac{1}{N}\sum_{i=1}^{N} TX_i$    (1)

calculate the variance: $\sigma_x^2 = \frac{1}{N-1}\sum_{i=1}^{N}\left(TX_i - \overline{TX}\right)^2$    (2)

standardize: $X = \frac{TX - \overline{TX}}{\sigma_x}$    (3)

wherein TX is the training sample set, TX_i is the i-th training sample, i.e. the data of the key variables, the furnace temperature and the operating variables that optimize the furnace temperature during normal production collected from the DCS database, N is the number of training samples, $\overline{TX}$ is the mean of the training samples, and X is the standardized training sample; σ_x denotes the standard deviation of the training samples and σ_x² their variance;
a fuzzy network module, which carries out fuzzy reasoning and establishes fuzzy rules for the input variables transmitted from the standardization processing module; the preprocessed training samples X transmitted from the standardization processing module are fuzzily classified to obtain the center and width of each fuzzy cluster in the fuzzy rule base; let the p-th normalized training sample be X_p = [X_p1, …, X_pn], where n is the number of input variables;
let the fuzzy neural network have R fuzzy rules; to obtain the fitness of the normalized training sample X_p to each fuzzy rule, the membership of each input variable X_pj, j = 1, …, n, to the i-th fuzzy rule is found from the following membership function:

$M_{ij} = \exp\left\{-\frac{(X_{pj} - m_{ij})^2}{\sigma_{ij}^2}\right\}$    (4)

where M_ij denotes the membership of the input variable X_pj to the i-th fuzzy rule, and m_ij and σ_ij denote, respectively, the center and width of the j-th Gaussian membership function of the i-th fuzzy rule, both obtained by fuzzy clustering;

the fitness of the normalized training sample X_p to fuzzy rule i is μ^(i)(X_p), which can be determined by the following formula:

$\mu^{(i)}(X_p) = \prod_{j=1}^{n} M_{ij}(X_p) = \exp\left\{-\sum_{j=1}^{n}\frac{(X_{pj} - m_{ij})^2}{\sigma_{ij}^2}\right\}$    (5)

in the formula, M_ij denotes the membership of the input variable X_pj to the i-th fuzzy rule, and m_ij and σ_ij denote, respectively, the center and width of the j-th Gaussian membership function of the i-th fuzzy rule;
after the fitness of the input training sample to each rule has been obtained, the fuzzy neural network derives the fuzzy rule outputs to obtain the final analytic solution; in a commonly used fuzzy neural network structure, the derivation of each fuzzy rule can be expressed as follows: first the linear product sum of all input variables of the training sample is formed, and this sum is then multiplied by the fitness μ^(i)(X_p) of the rule to give the final output of each fuzzy rule; the derived output of fuzzy rule i can be expressed as:

$f^{(i)} = \mu^{(i)}(X_p) \times \left(\sum_{j=1}^{n} a_{ij} \times X_{pj} + a_{i0}\right)$    (6)

$\hat{y}_p = \sum_{i=1}^{R} f^{(i)} + b = \sum_{i=1}^{R}\left[\mu^{(i)}(X_p) \times \left(\sum_{j=1}^{n} a_{ij} \times X_{pj} + a_{i0}\right)\right] + b$    (7)

in the formula, f^(i) is the output of the i-th fuzzy rule, ŷ_p is the predicted output of the fuzzy neural network model for the p-th training sample, a_ij, j = 1, …, n, is the linear coefficient of the j-th variable in the i-th fuzzy rule, a_i0 is the constant term of the linear product sum of the input variables in the i-th fuzzy rule, and b is the output offset;
in formula (7), determining the parameters of the linear product sum of the input variables is the main problem in using the fuzzy neural network; here, the original fuzzy rule derivation output form is converted into a support vector machine optimization problem, and the linear parameters are then optimized with the support vector machine; the conversion proceeds as follows:
$\hat{y}_p = \sum_{i=1}^{R} f^{(i)} + b = \sum_{i=1}^{R}\left[\mu^{(i)}(X_p) \times \left(\sum_{j=1}^{n} a_{ij} \times X_{pj} + a_{i0}\right)\right] + b = \sum_{i=1}^{R}\sum_{j=0}^{n} a_{ij} \times \mu^{(i)}(X_p) \times X_{pj} + b$    (8)

wherein X_p0 is a constant term identically equal to 1; let

$\vec{\phi}(X_p) = \left[\mu^{(1)} \times X_{p0}, \ldots, \mu^{(1)} \times X_{pn}, \ \ldots\ldots, \ \mu^{(R)} \times X_{p0}, \ldots, \mu^{(R)} \times X_{pn}\right]$    (9)

wherein φ(X_p) denotes the converted form of the original training sample, i.e. the original training sample is converted into the above form, which is used as the training sample of the support vector machine:

$S = \{(\vec{\phi}(X_1), y_1), (\vec{\phi}(X_2), y_2), \ldots, (\vec{\phi}(X_N), y_N)\}$    (10)
wherein y_1, …, y_N are the target outputs of the training samples; taking S as the new set of input training samples, the original problem can then be converted into the following dual optimization problem of the support vector machine:
$R(\omega, b) = \gamma \frac{1}{N} \sum_{p=1}^{N} L_{\varepsilon}(y_p, f(X_p)) + \frac{1}{2}\omega^{T}\omega$    (11)

wherein y_p is the target output of the input training sample X_p, f(X_p) is the model output corresponding to X_p, and L_ε(y_p, f(X_p)) is the first-order ε-insensitive loss between the target output y_p and the model output f(X_p), with ε the error tolerance of the optimization problem; ω is the hyperplane normal vector of the support vector machine, γ is the penalty factor of the support vector machine, the superscript T denotes the matrix transpose, R(ω, b) is the objective function of the optimization problem, and N is the number of training samples; the expression of L_ε(y_p, f(X_p)) is as follows:

$L_{\varepsilon}(y_p, f(X_p)) = \max\{0,\ |y_p - f(X_p)| - \varepsilon\}$    (12)

wherein ε is the error tolerance of the optimization problem, y_p is the target output of the input training sample X_p, and f(X_p) is the model output corresponding to X_p; the support vector machine is then used to obtain the optimal linear parameters of the fuzzy rule derivation and the prediction output of the fuzzy neural network from the dual optimization problem:

$a_{ij} = \sum_{k=1}^{N}(\alpha_k^{*} - \alpha_k)\,\mu^{(i)} X_{kj} = \sum_{k \in SV}(\alpha_k^{*} - \alpha_k)\,\mu^{(i)} X_{kj}, \quad i = 1, \ldots, R;\ j = 0, \ldots, n$    (13)

$\hat{y}_p = \sum_{k=1}^{N}(\alpha_k^{*} - \alpha_k)\,\langle \vec{\phi}(X), \vec{\phi}(X_k)\rangle + b$    (14)

wherein α_k* and α_k (k = 1, …, N) are the Lagrange multipliers corresponding to y_p − f(X_p) > 0 and y_p − f(X_p) < 0 respectively, and ŷ_p is the furnace temperature forecast value and the operating variable value that optimizes the furnace temperature corresponding to the p-th normalized training sample X_p;
the upper computer further comprises:
a model updating module, which acquires the field intelligent instrument signals at the set sampling time interval and compares the measured furnace temperature with the system forecast value; if the relative error exceeds 10%, or the furnace temperature is outside the upper and lower limits of normal production, the new data in the DCS database that make the furnace temperature optimal during normal production are added to the training sample data and the soft measurement model is updated;
a result display module, which transmits the obtained furnace temperature forecast value and the operating variable value that optimizes the furnace temperature to the DCS system, displays them at the control station of the DCS, and forwards them through the DCS system and the field bus to the field operation station for display; at the same time, the DCS system takes the obtained operating variable value that optimizes the furnace temperature as the new operating variable set value and automatically executes the furnace temperature optimization operation;
a signal acquisition module, which acquires data from the database at the set sampling time interval;
the key variables include the flow of waste liquid into the incinerator, the flow of air into the incinerator and the flow of fuel into the incinerator; the manipulated variables include air flow into the incinerator and fuel flow into the incinerator.
2. A pesticide waste liquid incinerator furnace temperature optimization method of an optimal fuzzy network, characterized in that the furnace temperature optimization method comprises the following implementation steps:
1) determining the key variables to be used, collecting data of these variables during normal production from the DCS (distributed control system) database as the input matrix of the training samples TX, and collecting the corresponding furnace temperature and the operating variable data that optimize the furnace temperature as the output matrix Y;
2) preprocessing the model training samples input from the DCS database: the training samples are centered, i.e. the sample mean is subtracted, and then standardized so that the mean is 0 and the variance is 1; the processing is accomplished with the following mathematical procedure:
2.1) calculate the mean: $\overline{TX} = \frac{1}{N}\sum_{i=1}^{N} TX_i$    (1)

2.2) calculate the variance: $\sigma_x^2 = \frac{1}{N-1}\sum_{i=1}^{N}\left(TX_i - \overline{TX}\right)^2$    (2)

2.3) standardize: $X = \frac{TX - \overline{TX}}{\sigma_x}$    (3)

wherein TX is the training sample set, TX_i is the i-th training sample, i.e. the data of the key variables, the furnace temperature and the operating variables that optimize the furnace temperature during normal production collected from the DCS database, N is the number of training samples, $\overline{TX}$ is the mean of the training samples, and X is the standardized training sample; σ_x denotes the standard deviation of the training samples and σ_x² their variance;
3) carrying out fuzzy reasoning and establishing fuzzy rules for the input variables transmitted from the standardization processing module; the preprocessed training samples X transmitted from the standardization processing module are fuzzily classified to obtain the center and width of each fuzzy cluster in the fuzzy rule base; let the p-th normalized training sample be X_p = [X_p1, …, X_pn], where n is the number of input variables;
let the fuzzy neural network have R fuzzy rules; to obtain the fitness of the normalized training sample X_p to each fuzzy rule, the membership of each input variable X_pj, j = 1, …, n, to the i-th fuzzy rule is found from the following membership function:

$M_{ij} = \exp\left\{-\frac{(X_{pj} - m_{ij})^2}{\sigma_{ij}^2}\right\}$    (4)

where M_ij denotes the membership of the input variable X_pj to the i-th fuzzy rule, and m_ij and σ_ij denote, respectively, the center and width of the j-th Gaussian membership function of the i-th fuzzy rule, both obtained by fuzzy clustering;

the fitness of the normalized training sample X_p to fuzzy rule i is μ^(i)(X_p), which can be determined by the following formula:

$\mu^{(i)}(X_p) = \prod_{j=1}^{n} M_{ij}(X_p) = \exp\left\{-\sum_{j=1}^{n}\frac{(X_{pj} - m_{ij})^2}{\sigma_{ij}^2}\right\}$    (5)

in the formula, M_ij denotes the membership of the input variable X_pj to the i-th fuzzy rule, and m_ij and σ_ij denote, respectively, the center and width of the j-th Gaussian membership function of the i-th fuzzy rule;
after the fitness of the input training sample to each rule has been obtained, the fuzzy neural network derives the fuzzy rule outputs to obtain the final analytic solution; in a commonly used fuzzy neural network structure, the derivation of each fuzzy rule can be expressed as follows: first the linear product sum of all input variables of the training sample is formed, and this sum is then multiplied by the fitness μ^(i)(X_p) of the rule to give the final output of each fuzzy rule; the derived output of fuzzy rule i can be expressed as:

$f^{(i)} = \mu^{(i)}(X_p) \times \left(\sum_{j=1}^{n} a_{ij} \times X_{pj} + a_{i0}\right)$    (6)

$\hat{y}_p = \sum_{i=1}^{R} f^{(i)} + b = \sum_{i=1}^{R}\left[\mu^{(i)}(X_p) \times \left(\sum_{j=1}^{n} a_{ij} \times X_{pj} + a_{i0}\right)\right] + b$    (7)

in the formula, f^(i) is the output of the i-th fuzzy rule, ŷ_p is the predicted output of the fuzzy neural network model for the p-th training sample, a_ij, j = 1, …, n, is the linear coefficient of the j-th variable in the i-th fuzzy rule, a_i0 is the constant term of the linear product sum of the input variables in the i-th fuzzy rule, and b is the output offset;
4) in formula (7), determining the parameters of the linear product sum of the input variables is the main problem in using the fuzzy neural network; here, the original fuzzy rule derivation output form is converted into a support vector machine optimization problem, and the linear parameters are then optimized with the support vector machine; the conversion proceeds as follows:
$\hat{y}_p = \sum_{i=1}^{R} f^{(i)} + b = \sum_{i=1}^{R}\left[\mu^{(i)}(X_p) \times \left(\sum_{j=1}^{n} a_{ij} \times X_{pj} + a_{i0}\right)\right] + b = \sum_{i=1}^{R}\sum_{j=0}^{n} a_{ij} \times \mu^{(i)}(X_p) \times X_{pj} + b$    (8)

wherein X_p0 is a constant term identically equal to 1; let

$\vec{\phi}(X_p) = \left[\mu^{(1)} \times X_{p0}, \ldots, \mu^{(1)} \times X_{pn}, \ \ldots\ldots, \ \mu^{(R)} \times X_{p0}, \ldots, \mu^{(R)} \times X_{pn}\right]$    (9)

wherein φ(X_p) denotes the converted form of the original training sample, i.e. the original training sample is converted into the above form, which is used as the training sample of the support vector machine:

$S = \{(\vec{\phi}(X_1), y_1), (\vec{\phi}(X_2), y_2), \ldots, (\vec{\phi}(X_N), y_N)\}$    (10)
wherein y_1, …, y_N are the target outputs of the training samples; taking S as the new set of input training samples, the original problem can then be converted into the following dual optimization problem of the support vector machine:
$R(\omega, b) = \gamma \frac{1}{N} \sum_{p=1}^{N} L_{\varepsilon}(y_p, f(X_p)) + \frac{1}{2}\omega^{T}\omega$    (11)

wherein y_p is the target output of the input training sample X_p, f(X_p) is the model output corresponding to X_p, and L_ε(y_p, f(X_p)) is the first-order ε-insensitive loss between the target output y_p and the model output f(X_p), with ε the error tolerance of the optimization problem; ω is the hyperplane normal vector of the support vector machine, γ is the penalty factor of the support vector machine, the superscript T denotes the matrix transpose, R(ω, b) is the objective function of the optimization problem, and N is the number of training samples; the expression of L_ε(y_p, f(X_p)) is as follows:

$L_{\varepsilon}(y_p, f(X_p)) = \max\{0,\ |y_p - f(X_p)| - \varepsilon\}$    (12)

wherein ε is the error tolerance of the optimization problem, y_p is the target output of the input training sample X_p, and f(X_p) is the model output corresponding to X_p; the support vector machine is then used to obtain the optimal linear parameters of the fuzzy rule derivation and the prediction output of the fuzzy neural network from the dual optimization problem:

$a_{ij} = \sum_{k=1}^{N}(\alpha_k^{*} - \alpha_k)\,\mu^{(i)} X_{kj} = \sum_{k \in SV}(\alpha_k^{*} - \alpha_k)\,\mu^{(i)} X_{kj}, \quad i = 1, \ldots, R;\ j = 0, \ldots, n$    (13)

$\hat{y}_p = \sum_{k=1}^{N}(\alpha_k^{*} - \alpha_k)\,\langle \vec{\phi}(X), \vec{\phi}(X_k)\rangle + b$    (14)

wherein α_k* and α_k (k = 1, …, N) are the Lagrange multipliers corresponding to y_p − f(X_p) > 0 and y_p − f(X_p) < 0 respectively, and ŷ_p is the furnace temperature forecast value and the operating variable value that optimizes the furnace temperature corresponding to the p-th normalized training sample X_p;
the method further comprises the following steps:
5) acquiring the field intelligent instrument signals at the set sampling time interval and comparing the measured furnace temperature with the system forecast value; if the relative error exceeds 10%, or the furnace temperature is outside the upper and lower limits of normal production, adding the new data in the DCS database that make the furnace temperature optimal during normal production to the training sample data and updating the soft measurement model;
6) obtaining the optimal operating variable value from the calculation in step 4), transmitting the furnace temperature forecast value and the operating variable value that optimizes the furnace temperature to the DCS, displaying them at the control station of the DCS, and forwarding them through the DCS and the field bus to the field operation station for display; at the same time, the DCS system takes the obtained operating variable value that optimizes the furnace temperature as the new operating variable set value and automatically executes the furnace temperature optimization operation;
the key variables include the flow of waste liquid into the incinerator, the flow of air into the incinerator and the flow of fuel into the incinerator; the manipulated variables include air flow into the incinerator and fuel flow into the incinerator.