CN110837886A - Effluent NH4-N soft measurement method based on ELM-SL0 neural network - Google Patents


Info

Publication number
CN110837886A
Authority
CN
China
Prior art keywords
network
output
weight
input
function
Prior art date
Legal status
Pending
Application number
CN201911030774.8A
Other languages
Chinese (zh)
Inventor
杨翠丽
聂凯哲
乔俊飞
武战红
Current Assignee
Beijing University of Technology
Original Assignee
Beijing University of Technology
Priority date
Filing date
Publication date
Application filed by Beijing University of Technology
Priority to CN201911030774.8A
Publication of CN110837886A

Classifications

    • G06N3/00 Computing arrangements based on biological models — G06N3/02 Neural networks
        • G06N3/045 Combinations of networks (under G06N3/04 Architecture, e.g. interconnection topology)
        • G06N3/048 Activation functions
        • G06N3/08 Learning methods
    • G06Q50/00 ICT specially adapted for business processes of specific sectors — G06Q50/10 Services
        • G06Q50/26 Government or public services


Abstract

The invention discloses an effluent NH4-N soft measurement method based on an ELM-SL0 neural network, belonging to the fields of water treatment and intelligent information control. The method mainly comprises the following operations: an L0 regularization penalty term is first added to the conventional error function to drive insignificant weights toward 0, and the improved error function is then minimized with a batch gradient descent algorithm to train and prune the network. A soft measurement method for effluent NH4-N based on the neural network formed by these steps falls within the protection scope of the invention. By combining the regularization technique with the batch gradient algorithm to optimize the ELM network structure, the invention reduces the network's computational complexity, improves prediction accuracy, and increases generalization performance.

Description

Effluent NH4-N soft measurement method based on ELM-SL0 neural network
Technical Field
Aiming at the problem that the ammonia nitrogen concentration is difficult to measure in the sewage treatment process, the method combines the batch gradient descent algorithm with L0 regularization in a neural network to predict the ammonia nitrogen concentration in the sewage treatment process. The neural network is one of the main branches of intelligent information processing technology, and the neural-network-based prediction of sewage ammonia nitrogen concentration belongs not only to the field of water treatment but also to the field of intelligent information.
Background
With the rapid urbanization and industrialization of modern society, the water environment of China has been seriously damaged. Sewage discharge not only seriously affects the daily life of residents but also destroys the ecological balance of nature. In order to reduce the discharge of sewage and realize the recycling of water, sewage treatment plants have been established throughout China. In the sewage treatment process, the concentration of NH4-N is an important parameter for measuring the performance of a wastewater treatment process (WWTP); however, because the process is a complex system characterized by strong nonlinearity, large hysteresis, strong time variation and multivariable coupling, and because maintenance costs are high, predicting NH4-N remains an open problem. Therefore, predicting the effluent NH4-N concentration at low cost and high efficiency is essential for effluent quality compliance and the stable operation of a sewage treatment plant.
The soft measurement approach uses easily measured variables and a constructed model to predict hard-to-measure variables in real time, providing an efficient and rapid solution for measuring key water quality parameters in the sewage treatment process. Thanks to its good learning ability, information processing capability and adaptive characteristics, a neural network can approximate a nonlinear system with high precision. The invention designs an effluent NH4-N soft measurement method based on an ELM-SL0 neural network and realizes online prediction of the effluent NH4-N concentration.
Disclosure of Invention
An effluent NH4-N soft measurement method based on an ELM-SL0 neural network mainly comprises the following operations: an L0 regularization penalty term is first added to the conventional error function to drive insignificant weights toward 0, and the improved error function is then updated using a batch gradient descent algorithm to achieve training and pruning of the network. The method exploits the learning ability of the neural network: it optimizes the output weights according to the training error, eliminates unimportant output weights, and then predicts the ammonia nitrogen concentration in the sewage treatment process, minimizing the error while improving the sparsity of the network structure. The method is characterized by comprising the following steps:
step 1: initializing network structures and parameters
Step 1.1: initializing a network structure
Taking the temperature, the dissolved oxygen amount, the total suspended solids content, the pH value and the effluent redox potential as input variables and the ammonia nitrogen concentration as the output variable, the network structure is determined to be 5-N-1, wherein N represents the number of reserve pool nodes. For a typical network of this kind the number of reserve pool nodes satisfies 50 ≤ N ≤ 1000, and N should not be too small if the pruning effect of the algorithm is to be observed clearly. Here N = 500, i.e. the network comprises 5 input nodes, 500 reserve pool nodes and 1 output node.
Step 1.2: initializing network parameters
Take the sigmoid function as the network activation function G(·); set the initial iteration count i = 0 and the maximum iteration count i_max, with i_max ≥ 5000. The training samples are {(u_k, t_k)}, k = 1, 2, …, L, where u_k ∈ R^n represents the kth input sample, t_k represents the kth actual output value, n is the input dimension and L is the total number of samples. Randomly initialize the network input weights w and the threshold vector b in (0, 1), and set the initial output weight vector W to 0.
Step 2: determine the learning rate η and the regularization parameter λ by grid search
(1) First fix the regularization parameter at 0, i.e. λ = 0, set the search range of the learning rate to [0.0005, 0.01] with step 0.0005, and run the program to select the optimal learning rate η, i.e. the one with the minimum training error.
(2) With the learning rate fixed at this optimal η, set the search range of the regularization parameter to [0.0025, 0.05] with step 0.0025, and select the optimal regularization parameter λ that gives the best sparsifying effect without degrading the training error.
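The two-stage search in Step 2 can be sketched as follows. `training_error` is a placeholder objective standing in for a full network training run (an illustrative assumption, not part of the patent), so only the search loops themselves mirror the text:

```python
import numpy as np

# Toy surrogate for "train the network and report the training error";
# smaller is better, and it happens to be minimized near eta = 0.01.
def training_error(eta, lam):
    return (eta - 0.01) ** 2 + 0.1 * lam

# Stage 1: fix lambda = 0, search eta in [0.0005, 0.01] with step 0.0005.
etas = np.arange(0.0005, 0.01 + 1e-12, 0.0005)
best_eta = min(etas, key=lambda e: training_error(e, 0.0))

# Stage 2: fix eta at its optimum, search lambda in [0.0025, 0.05]
# with step 0.0025 and inspect the resulting errors.
lams = np.arange(0.0025, 0.05 + 1e-12, 0.0025)
errors = [training_error(best_eta, l) for l in lams]
```

In the real method the chosen λ is the largest one whose sparsifying effect does not degrade the training error; the scan above produces the error profile from which that choice is made.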
Step 3: compute the network output y_k and the prediction error d_k for the kth input sample
For a given activation function G(·), input sample u_k, input weights w and threshold vector b, the hidden-layer output is:

G(wu_k + b) = [g_1(w_1·u_k + b_1), g_2(w_2·u_k + b_2), …, g_N(w_N·u_k + b_N)]^T  (1)

where g_j (1 ≤ j ≤ N) denotes the activation function of the jth reserve pool neuron, w_j·u_k (1 ≤ j ≤ N) denotes the inner product of the input weight vector w_j between the jth reserve pool neuron and the input layer with the input vector u_k, and b_j (1 ≤ j ≤ N) denotes the threshold of the jth reserve pool neuron.
For the kth input sample the network output y_k is obtained from:

y_k = W·G(wu_k + b)  (2)

The training error d_k between the expected output t_k of the network and the actual output y_k is defined as:

d_k = t_k − y_k  (3)
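The forward pass and error for a single sample can be sketched as below; the dimensions and random data are illustrative, and `w_in` denotes the input weights as distinct from the output weight vector `W`:

```python
import numpy as np

# Sketch of Step 3 for one sample; sizes and data are illustrative.
rng = np.random.default_rng(1)
n_in, n_hidden = 5, 500

w_in = rng.uniform(0.0, 1.0, (n_hidden, n_in))  # input weights in (0, 1)
b = rng.uniform(0.0, 1.0, n_hidden)             # hidden thresholds in (0, 1)
W = np.zeros(n_hidden)                          # output weights start at 0 (Step 1)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

u_k = rng.uniform(0.0, 1.0, n_in)        # one input sample
t_k = 0.7                                # its measured output value (placeholder)
h_k = sigmoid(w_in @ u_k + b)            # hidden-layer output, cf. equation (1)
y_k = float(W @ h_k)                     # network output, cf. equation (2)
d_k = t_k - y_k                          # training error, cf. equation (3)
```

Because the output weights are initialized to 0, the first network output is 0 and the first error equals the target itself.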
and 4, step 4: calculating the gradient of output weight and updating the output weight
The standard mean square error function is defined as:
Figure BDA0002250087240000031
wherein the content of the first and second substances,
Figure BDA0002250087240000032
adding an L0 regularization term to the error function, the improved error function being:
Figure BDA0002250087240000033
wherein the content of the first and second substances,
Figure BDA0002250087240000034
the L0 norm, which is W, is defined as follows:
Figure BDA0002250087240000035
wherein, Wj(1 < j < N) is the jth output weight.
However, the L0 norm is a non-convex function, so minimizing equation (5) is an NP-hard combinatorial problem. To solve this problem, the L0 norm is approximated by a continuously differentiable function f(·); the function f(γ, W_j) is defined, together with the resulting approximation of the L0 norm, as:

f(γ, W_j) = 1 − exp(−W_j²/(2γ²)),  ‖W‖_0 ≈ Σ_{j=1}^{N} f(γ, W_j)  (7)

where γ is a positive number that controls how closely f(γ, W_j) approximates the indicator I(W_j ≠ 0). When γ is large, the approximation is loose and the pruning of the weight vector is weak; when γ is close to 0, f(γ, W_j) trims the near-zero elements of the weight vector W more effectively. In this patent γ = 0.05. The first derivative of f(γ, W_j) is:

∂f(γ, W_j)/∂W_j = (W_j/γ²) exp(−W_j²/(2γ²))  (8)

Therefore equation (5) is updated as:

E(W) = E_0(W) + λ Σ_{j=1}^{N} f(γ, W_j)  (9)
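As a numeric check of the smoothed-L0 surrogate, the sketch below assumes the Gaussian form f(γ, w) = 1 − exp(−w²/(2γ²)), a standard smoothed-L0 choice consistent with the stated behaviour as γ → 0:

```python
import numpy as np

gamma = 0.05  # value used in the patent

def f(w, gamma=gamma):
    # smoothed indicator: ~0 for w near 0, ~1 for |w| >> gamma
    return 1.0 - np.exp(-w ** 2 / (2.0 * gamma ** 2))

def df(w, gamma=gamma):
    # its first derivative with respect to w
    return (w / gamma ** 2) * np.exp(-w ** 2 / (2.0 * gamma ** 2))

# Two significant weights and two near-zero ones: the surrogate sum
# should land close to 2, the true L0 "count" of significant weights.
W = np.array([0.0, 1e-4, 0.5, -0.8])
approx_l0 = float(f(W).sum())
```

The derivative vanishes at w = 0 and decays for |w| ≫ γ, so the induced gradient pressure acts almost exclusively on near-zero weights, which is exactly the pruning behaviour described above.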
introducing a batch gradient descent algorithm, wherein the initial weight W is W0In the case of (2), the gradient formula of E (W) is:
Figure BDA0002250087240000043
wherein the content of the first and second substances,the gradient of the ith pass of E (W),
Figure BDA0002250087240000045
is the ith time
Figure BDA0002250087240000046
Of the gradient of (c).
Therefore, the update formula of the output weight is as follows:
Figure BDA0002250087240000047
wherein, Wi+1Is the output weight, W, of the i +1 th iterationiIs the output weight of the ith iteration. Each time the output weight value is updated, i is added to 1, i is i + 1.
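One batch update of the output weights can be sketched as follows. The hidden-output matrix `H` and the targets are synthetic placeholders, and the smoothed-L0 derivative assumes the Gaussian form of f (an assumption; the original renders the exact form as an image):

```python
import numpy as np

rng = np.random.default_rng(2)
L, n_hidden = 50, 20                       # illustrative sizes
eta, lam, gamma = 0.01, 0.05, 0.05

H = rng.uniform(0.0, 1.0, (L, n_hidden))   # hidden outputs G(w u_k + b), one row per sample
t = rng.uniform(0.0, 1.0, L)               # targets t_k (placeholder data)
W = np.zeros(n_hidden)                     # output weights, initially 0

y = H @ W                                  # network outputs y_k
d = t - y                                  # errors d_k
grad_mse = -(H.T @ d) / L                  # gradient of the mean-square-error term
grad_sl0 = (W / gamma**2) * np.exp(-W**2 / (2 * gamma**2))  # smoothed-L0 gradient
W_next = W - eta * (grad_mse + lam * grad_sl0)              # one step of the update rule
```

With W = 0 the regularization gradient vanishes, so the first step is driven purely by the error term; the L0 pressure only appears once weights move away from zero.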
Step 5: judge whether training is finished
If i ≥ i_max, go to step 6; otherwise return to step 3.
Step 6: test the network
Using the output weight W obtained in the steps above, input the test samples and test the network.
The invention is mainly characterized in that:
(1) Aiming at the problem that the ammonia nitrogen concentration is difficult to measure in the sewage treatment process, the invention designs an effluent NH4-N soft measurement method based on an ELM-SL0 neural network, exploiting the strong nonlinear mapping capability of the extreme learning machine; the method has the advantages of high prediction precision, strong stability and low maintenance cost.
(2) The method combines the L0 regularization method with the batch gradient descent method to train the neural network, effectively prunes the neurons with low contribution in the network, reduces the computation time of the network and improves the sparsity of the network structure.
Drawings
FIG. 1 is a diagram of a neural network topology of the present invention;
FIG. 2 is a graph of Root Mean Square Error (RMSE) variation trained by the effluent NH4-N concentration prediction method of the present invention;
FIG. 3 is a graph showing the variation of the number m of output weights with absolute values less than 0.005 during training;
FIG. 4 is a diagram of the result of predicting the NH4-N concentration of effluent according to the invention;
FIG. 5 shows an error diagram of the NH4-N concentration prediction of effluent water.
Detailed Description
An effluent NH4-N soft measurement method based on an ELM-SL0 neural network mainly comprises the following operations: an L0 regularization penalty term is first added to the conventional error function to drive insignificant weights toward 0, and the improved error function is then updated using a batch gradient descent algorithm to achieve training and pruning of the network. The method exploits the learning ability of the neural network: it optimizes the output weights according to the training error, eliminates unimportant output weights, and then predicts the ammonia nitrogen concentration in the sewage treatment process, minimizing the error while improving the sparsity of the network structure. The method is characterized by comprising the following steps:
step 1: initializing network structures and parameters
Step 1.1: initializing a network structure
Taking the temperature, the dissolved oxygen amount, the total suspended solids content, the pH value and the effluent redox potential as input variables and the ammonia nitrogen concentration as the output variable, the network structure is determined to be 5-N-1, wherein N represents the number of reserve pool neurons. Here N = 500, i.e. the network comprises 5 input nodes, 500 reserve pool nodes and 1 output node.
Step 1.2: initializing network parameters
Take the sigmoid function as the network activation function G(·); set the initial iteration count i = 0 and the maximum iteration count i_max, with i_max ≥ 5000. The training samples are {(u_k, t_k)}, k = 1, 2, …, L, where u_k ∈ R^n represents the kth input sample, t_k represents the kth actual output value, n is the input dimension and L is the total number of samples. Randomly initialize the network input weights w and the threshold vector b in (0, 1), and set the initial output weight vector W to 0.
Step 2: determine the learning rate η and the regularization parameter λ by grid search
(1) First fix the regularization parameter at 0, i.e. λ = 0, set the search range of the learning rate to [0.0005, 0.01] with step 0.0005, and run the program; the optimal learning rate with the minimum training error is η = 0.01.
(2) With the learning rate fixed at η = 0.01, set the search range of the regularization parameter to [0.0025, 0.05] with step 0.0025; the optimal regularization parameter with the best sparsifying effect that does not degrade the training error is λ = 0.05.
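Combining the steps with the hyperparameters found above (η = 0.01, λ = 0.05) and γ = 0.05, an end-to-end training sketch is given below on synthetic data in place of the wastewater samples; a small network stands in for the 500-node one, and the Gaussian smoothed-L0 form is assumed:

```python
import numpy as np

rng = np.random.default_rng(3)
L, n_in, n_hidden = 100, 5, 30             # illustrative sizes
eta, lam, gamma = 0.01, 0.05, 0.05
max_iter = 2000

U = rng.uniform(0.0, 1.0, (L, n_in))       # synthetic input samples
t = np.sin(U.sum(axis=1))                  # synthetic nonlinear target
w_in = rng.uniform(0.0, 1.0, (n_hidden, n_in))
b = rng.uniform(0.0, 1.0, n_hidden)
H = 1.0 / (1.0 + np.exp(-(U @ w_in.T + b)))  # hidden outputs, fixed as in ELM

W = np.zeros(n_hidden)
for _ in range(max_iter):
    d = t - H @ W                           # batch errors
    grad = -(H.T @ d) / L + lam * (W / gamma**2) * np.exp(-W**2 / (2 * gamma**2))
    W -= eta * grad                         # batch gradient descent step

rmse = np.sqrt(np.mean((t - H @ W) ** 2))   # training RMSE, cf. FIG. 2
pruned = int(np.sum(np.abs(W) < 0.005))     # near-zero weights, cf. FIG. 3
```

The L0 pressure keeps low-contribution weights near zero while the error term fits the target, so the trained RMSE ends below the trivial zero-predictor baseline while part of the weight vector stays prunable.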
Step 3: compute the network output y_k and the prediction error d_k for the kth input sample
For a given activation function G(·), input sample u_k, input weights w and threshold vector b, the hidden-layer output is:

G(wu_k + b) = [g_1(w_1·u_k + b_1), g_2(w_2·u_k + b_2), …, g_N(w_N·u_k + b_N)]^T  (1)

where g_j (1 ≤ j ≤ N) denotes the activation function of the jth reserve pool neuron, w_j·u_k (1 ≤ j ≤ N) denotes the inner product of the input weight vector w_j between the jth reserve pool neuron and the input layer with the input vector u_k, and b_j (1 ≤ j ≤ N) denotes the threshold of the jth reserve pool neuron.
For the kth input sample the network output y_k is obtained from:

y_k = W·G(wu_k + b)  (2)

The training error d_k between the expected output t_k of the network and the actual output y_k is defined as:

d_k = t_k − y_k  (3)
and 4, step 4: calculating the gradient of output weight and updating the output weight
The standard mean square error function is defined as:
Figure BDA0002250087240000062
wherein the content of the first and second substances,
Figure BDA0002250087240000063
adding an L0 regularization term to the error function, the improved error function being:
Figure BDA0002250087240000064
wherein the content of the first and second substances,
Figure BDA0002250087240000065
the L0 norm, which is W, is defined as follows:
Figure BDA0002250087240000066
wherein, Wj(1 < j < N) is the jth output weight.
However, the L0 norm is a non-convex function, so minimizing equation (5) is an NP-hard combinatorial problem. To solve this problem, the L0 norm is approximated by a continuously differentiable function f(·); the function f(γ, W_j) is defined, together with the resulting approximation of the L0 norm, as:

f(γ, W_j) = 1 − exp(−W_j²/(2γ²)),  ‖W‖_0 ≈ Σ_{j=1}^{N} f(γ, W_j)  (7)

where γ is a positive number that controls how closely f(γ, W_j) approximates the indicator I(W_j ≠ 0). When γ is large, the approximation is loose and the pruning of the weight vector is weak; when γ is close to 0, f(γ, W_j) trims the near-zero elements of the weight vector W more effectively. In this patent γ = 0.05. The first derivative of f(γ, W_j) is:

∂f(γ, W_j)/∂W_j = (W_j/γ²) exp(−W_j²/(2γ²))  (8)

Therefore equation (5) is updated as:

E(W) = E_0(W) + λ Σ_{j=1}^{N} f(γ, W_j)  (9)
introducing a batch gradient descent algorithm, wherein the initial weight W is W0In the case of (2), the gradient formula of E (W) is:
Figure BDA0002250087240000076
wherein the content of the first and second substances,
Figure BDA0002250087240000077
the gradient of the ith pass of E (W),
Figure BDA0002250087240000078
is the ith time
Figure BDA0002250087240000079
Of the gradient of (c).
Therefore, the update formula of the output weight is as follows:
wherein, Wi+1Is the output weight, W, of the i +1 th iterationiIs the output weight of the ith iteration. Each time the output weight value is updated, i is added to 1, i is i + 1.
Step 5: judge whether training is finished
If i ≥ i_max, go to step 6; otherwise return to step 3.
Step 6: test the network
Using the output weight W obtained in the steps above, input the test samples and test the network.
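Step 6 can be sketched as scoring held-out samples with the trained weights; every array below is a synthetic placeholder for the data tables that follow, and the RMSE mirrors the quantity plotted for this method:

```python
import numpy as np

rng = np.random.default_rng(4)
n_test, n_in, n_hidden = 40, 5, 30        # illustrative sizes

w_in = rng.uniform(0.0, 1.0, (n_hidden, n_in))
b = rng.uniform(0.0, 1.0, n_hidden)
W = rng.uniform(-0.1, 0.1, n_hidden)      # stands in for trained output weights

U_test = rng.uniform(0.0, 1.0, (n_test, n_in))  # placeholder test inputs
t_test = rng.uniform(0.0, 1.0, n_test)          # placeholder measured NH4-N

H_test = 1.0 / (1.0 + np.exp(-(U_test @ w_in.T + b)))  # hidden outputs
y_test = H_test @ W                        # predicted NH4-N concentrations
rmse = float(np.sqrt(np.mean((t_test - y_test) ** 2)))
```

Because the hidden layer is fixed after initialization, testing only requires the forward pass with the learned (and pruned) output weight vector.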
Data samples
Tables 1-12 contain the experimental data of the present invention. Tables 1-5 are the training input samples: inlet water temperature, aerobic-end dissolved oxygen, aerobic-end total suspended solids, effluent pH value and effluent redox potential; Table 6 gives the effluent ammonia nitrogen concentration of the training samples. Tables 7-11 are the test input samples: inlet water temperature, aerobic-end dissolved oxygen, aerobic-end total suspended solids, effluent pH value and effluent redox potential; Table 12 gives the effluent ammonia nitrogen concentration of the test samples.
Training a sample:
TABLE 1 auxiliary variable intake temperature (. degree. C.)
(Values reproduced as an image in the original publication.)
TABLE 2 auxiliary variables dissolved oxygen (mg/L)
0.0851 0.2667 0.0428 0.0336 0.0313 0.3165 0.0441 5.5228 0.2654 0.0451
0.0328 0.0399 0.0355 0.0341 0.0655 0.0314 5.7940 0.0317 5.7143 0.3624
0.0474 0.0441 1.2213 0.0743 0.0545 0.4207 5.1883 0.4694 0.0453 0.1624
0.0612 0.0345 6.1271 0.0965 0.0363 0.0312 0.0518 0.0319 0.0664 0.0309
0.5400 0.2701 1.1610 0.6857 0.0768 0.0329 0.0313 0.0467 0.3987 0.0339
0.0715 0.0338 0.9670 3.6627 0.0311 0.4564 0.3942 0.4684 0.5487 0.2066
0.0410 2.5088 0.2566 0.0464 6.1833 0.2890 0.5426 0.3782 0.0302 0.0309
0.0555 0.0373 0.2557 0.4711 0.0615 0.0312 0.0390 0.0416 0.0591 0.0451
0.0345 0.0540 0.4478 0.0637 6.1654 0.0308 0.4508 0.5192 0.1481 0.0396
0.0318 0.0489 2.9631 0.0357 0.0530 0.2282 0.5539 0.0384 0.2232 0.4448
0.0691 0.1172 0.0683 3.0178 0.5287 0.2558 0.0561 0.0309 0.0936 0.0311
0.0356 0.0412 0.0510 0.0448 0.0318 0.0387 5.5628 0.0350 0.0907 0.0363
5.3787 0.0472 0.0364 0.1396 0.8063 0.0686 0.0340 0.4833 0.2687 0.2740
0.2546 0.4329 0.0300 0.0312 0.0411 0.4291 0.0382 0.5351 0.0532 0.0302
0.3301 0.0909 0.0297 0.0346 0.0592 0.0461 0.0492 0.2079 0.0706 0.0334
0.0375 1.6391 0.0683 0.0406 0.0398 0.0562 0.4340 0.0291 0.0337 0.4621
0.2489 0.3703 0.3096 0.2646 0.0706 6.0993 0.4649 0.2659 0.0327 0.1247
1.2662 0.0308 2.1216 0.5378 5.3780 0.0338 0.0397 0.0411 0.0336 0.0870
0.0427 0.0956 0.0505 0.4026 0.0350 0.0286 0.0488 0.0559 0.0318 0.3640
0.0352 0.0455 0.0412 0.4273 0.0640 0.0792 0.0308 1.0497 0.0483 0.0309
0.0582 0.0971 0.0571 0.0478 0.0582 0.0494 0.0317 0.3930 0.0378 0.0410
0.0361 0.0529 0.0565 0.0447 0.7617 0.0963 0.0353 0.3812 0.1343 0.0535
0.0441 0.0692 0.0668 5.7520 0.0403 0.0442 0.0408 0.0799 0.3272 0.0307
0.2365 0.0464 5.4811 0.0769 0.4512 0.5309 0.0657 2.7794 0.0784 0.0617
0.3554 0.0422 0.0582 0.2470 0.4073 5.9548 0.0379 0.0796 0.2997 0.5858
0.0316 2.6852 0.4316 0.4455 0.0421 0.0548 0.0356 5.8531 2.0604 0.1009
0.0310 0.4379 0.0370 0.0432 0.5815 0.0480 0.0787 0.0567 0.2380 0.0486
0.0339 0.0415 0.4889 2.5040 0.0673 0.3274 0.5043 0.1995 0.0365 0.0297
0.0711 0.2404 0.0946 1.5057 0.5498 0.0696 0.0522 0.2974 0.0361 0.1865
0.0309 0.0831 0.0346 0.0683 5.9711 3.4109 0.0823 0.0561 0.1978 1.6931
TABLE 3 auxiliary variables Total solids suspension (mg/L)
(Values reproduced as an image in the original publication.)
TABLE 4 auxiliary variable pH
TABLE 5 Oxidation-reduction potential of the auxiliary variable
(Values reproduced as an image in the original publication.)
TABLE 6 actual NH4-N concentration (mg/L) of the water
(Values reproduced as an image in the original publication.)
Testing a sample:
TABLE 7 auxiliary variable Inlet temperature (. degree. C.)
26.6664 25.5925 26.0751 26.8655 24.9307 24.9436 25.2516 25.8255 24.9177 25.4691
25.6463 23.6239 26.7961 23.3835 25.5664 25.6231 23.6806 24.1833 25.5388 25.7410
25.9991 25.5576 24.9465 24.9725 24.7418 27.2087 25.8663 26.7136 24.9061 25.6696
24.6813 23.2770 23.8631 24.9667 26.8065 24.4801 24.8874 25.4850 22.9625 25.2472
25.9962 27.1094 25.6289 25.4081 24.2291 25.4720 27.2028 25.3994 25.5649 24.6698
24.9018 24.5476 25.3617 23.7378 24.3022 24.9840 22.8098 25.0100 25.2979 25.0303
27.0784 24.2721 24.4198 24.9826 25.6667 23.0559 23.7307 25.4778 25.3893 25.5126
25.6725 25.4067 25.0534 23.1565 25.0881 24.9119 24.9667 24.9480 24.8686 26.9098
25.9305 23.2841 25.3486 25.2993 24.5188 25.4371 24.9480 27.2933 25.9845 25.4618
25.2212 27.0562 23.1027 24.8614 25.0852 24.5591 25.4153 25.6260 26.9349 25.3501
25.3486 24.3796 25.2936 23.6253 24.5404 24.2047 26.7917 24.6051 25.6158 24.6368
22.9115 24.9047 25.2559 26.5046 27.1331 25.9641 24.9999 26.0429 23.6295 24.6698
24.7908 24.7490 26.0283 23.0630 25.4952 25.1589 23.2032 23.5598 25.6522 23.3310
25.5402 23.1551 23.8745 24.6152 26.7858 25.1633 25.9436 23.6295 25.1532 25.8255
24.8052 25.2950 25.1778 23.9902 27.3334 27.1880 23.4745 26.9556 25.3399 23.4048
25.9539 26.8153 25.6740 25.4458 26.0400 25.1315 24.8225 24.9494 23.4318 25.5053
26.6723 26.8212 23.0956 25.4981 25.2299 23.5769 23.6096 23.1381 23.7006 25.5068
23.5114 25.6405 25.1488 23.8717 26.9763 27.2147 26.9526 25.1040 23.6422 25.1285
25.1300 23.8477 23.4190 23.0191 24.9595 24.1218 23.6338 25.2849 23.6295 26.7652
TABLE 8 auxiliary variables dissolved oxygen (mg/L)
(Values reproduced as an image in the original publication.)
TABLE 9 auxiliary variables Total suspended solids (mg/L)
2.8203 2.9460 2.8678 2.8202 2.5611 2.5829 2.8432 2.8892 2.5314 3.0358
2.9424 2.5450 2.7539 2.3089 2.9585 2.9651 2.6572 2.4949 2.9497 2.8061
2.8056 2.9405 3.1266 2.5765 3.0128 2.8251 2.7974 2.7827 3.0233 2.8753
2.8377 2.2740 2.4693 2.8942 2.8151 2.4982 3.2238 3.0289 2.2692 2.7131
2.7684 3.1727 2.9420 3.0138 2.4700 2.9379 2.8182 2.9699 2.9699 2.9696
2.5363 2.4573 2.9005 2.4428 2.4121 2.4505 2.3100 2.8173 2.8868 3.0912
2.8053 2.5025 3.1527 2.9324 2.9416 2.3157 2.3829 2.8973 3.0728 3.1456
2.8617 3.0857 3.0329 2.2105 2.8024 2.4376 2.6005 2.9275 2.4709 2.7997
2.8238 2.4789 2.9423 2.9435 3.1618 2.9997 2.8217 2.7176 2.7800 3.0250
2.7410 2.8029 2.2935 2.3933 2.4443 3.0369 3.0349 2.9285 2.7858 2.9329
3.0151 2.3839 2.7219 2.5113 3.0535 2.4245 2.7999 2.9979 2.9201 2.4916
2.3119 2.5664 2.7491 2.8509 2.8060 2.7973 2.9019 2.8119 2.5754 3.1621
2.4192 2.7953 2.8213 2.3439 2.9265 2.4068 2.2200 2.3514 2.8738 2.2805
2.8895 2.4196 2.5045 2.4345 2.7979 2.8979 2.8572 2.4255 2.6941 2.8306
2.8052 2.7744 2.7306 2.4820 2.8343 2.8523 2.3883 2.8536 2.9709 2.5321
2.8260 2.7556 2.8632 3.1004 2.8337 3.0059 2.4971 2.7832 2.3155 3.0640
2.8295 2.8165 2.4155 3.0494 2.9023 2.3655 2.4784 2.4161 2.4331 3.0726
2.4347 2.9480 2.7790 2.5286 2.7725 2.8985 2.7998 2.9557 2.5519 2.8087
2.4082 2.2835 2.4440 2.2668 2.5590 2.6305 2.3938 2.7067 2.4866 2.9067
TABLE 10 auxiliary variables pH value
(Values reproduced as an image in the original publication.)
TABLE 11 Oxidation-reduction potential of auxiliary variables
-5.3838 38.3272 -45.9542 -122.6730 -196.9560 -196.1870 -126.8390 -40.0577
-194.4560 16.3435 35.3790 -170.4860 -87.8065 -163.3710 33.1357 32.1744
-168.6270 -190.8670 0.0641 -66.2715 -21.5991 29.9952 19.7404 -194.4560
48.0052 -17.4331 -41.2755 -97.8049 46.8515 36.0199 -88.8961 -165.2940
-163.0510 27.6238 -5.7042 -200.9940 18.2663 19.3559 -199.9040 -170.1650
-46.2106 -115.4940 33.8408 7.9475 -161.0000 34.9303 -67.2329 45.9542
-3.0764 41.2114 -196.8920 -205.3520 36.6608 -154.1420 -158.5640 -202.5960
-170.1650 -146.5150 30.4439 21.3427 -16.7281 -161.0640 18.6509 30.7643
35.3149 -194.8410 -155.1680 29.9952 8.1397 17.8177 -15.5103 19.4200
45.3133 -169.7810 -142.6700 -205.0960 -187.0860 26.4701 -196.5710 -5.7683
-45.6337 -172.3440 15.8949 30.4439 19.5482 6.8579 -27.8161 -16.5358
-10.2548 4.8069 -172.7930 -117.9940 -161.3850 -205.9290 -202.7240 30.3798
28.7775 34.0971 -13.5876 21.0223 9.0370 -190.8030 -163.2430 -170.6140
20.7018 -161.3200 -15.5103 36.0199 34.5458 -201.8910 -165.9990 -197.0200
-155.4880 -8.7807 -112.1620 -13.3312 33.3280 -12.4980 -171.9600 18.9713
-196.6990 -63.9001 -12.8185 -158.5000 31.3412 -202.3400 -174.0750 -157.7950
-61.3364 -164.5250 37.6863 -164.0120 -163.4350 -206.2490 -76.3340 -138.9520
-18.4586 -152.8600 -178.4330 -38.0068 -55.9526 -160.8720 -176.7030 -162.3460
-16.7922 -73.8344 -162.6020 -121.6470 46.0183 -157.5390 -39.7373 -89.2806
39.5450 19.4200 -6.0888 44.9287 -196.6990 -102.4200 -163.0510 18.0740
-20.3814 -99.7277 -161.7690 17.5613 -135.4270 -159.9750 -151.5140 -173.4980
-177.4080 18.6509 -161.9610 35.0585 -108.6370 -186.5730 -5.2556 -71.1425
-76.8467 37.4940 -172.8570 -150.6170 -202.7880 -145.4900 -157.8590 -201.2500
-194.5840 -161.6410 -160.5510 -157.2830 -174.7800 -120.7500
TABLE 12 actual NH4-N concentration (mg/L) of the water
(Values reproduced as an image in the original publication.)

Claims (1)

1. An effluent NH4-N soft measurement method based on an ELM-SL0 neural network is characterized by comprising the following steps:
step 1: initializing network structures and parameters
Step 1.1: initializing a network structure
Determining the network structure to be 5-N-1 by taking the temperature, the dissolved oxygen amount, the total suspended solids content, the pH value and the effluent redox potential as input variables and the ammonia nitrogen concentration as the output variable, wherein N represents the number of reserve pool nodes and 50 ≤ N ≤ 1000;
step 1.2: initializing network parameters
Taking the sigmoid function as the network activation function G(·), setting the initial iteration count i = 0 and the maximum iteration count i_max, with i_max ≥ 5000; the training samples are {(u_k, t_k)}, k = 1, 2, …, L, where u_k ∈ R^n represents the kth input sample, t_k represents the kth actual output value, n is the input dimension and L is the total number of samples; randomly initializing the network input weights w and the threshold vector b between (0, 1), and setting the initial output weight vector W to 0;
step 2, determining the learning rate η and the regularization parameter lambda by adopting a grid search method
(1) Firstly, setting the regularization parameter to be 0, namely, λ is 0, then setting the search range of the learning rate to be [0.0005,0.01] in steps of 0.0005, running the program, and selecting the optimal learning rate η with the minimum training error;
(2) under the condition of the optimal learning rate η, setting the search range of the regularization parameter to [0.0025,0.05] by 0.0025 step length, and ensuring that the optimal regularization parameter lambda with the optimal sparse effect is selected under the condition of not influencing the training error;
Step 3: compute the network output y_k and the prediction error d_k for the kth input sample.
For the given activation function G(·), input sample u_k, input weights w and threshold vector b, the hidden-layer output is obtained as:

G(wu_k + b) = [g_1(w_1·u_k + b_1), g_2(w_2·u_k + b_2), ..., g_N(w_N·u_k + b_N)]^T (1)

where g_j (1 ≤ j ≤ N) is the activation function of the jth hidden-layer neuron, w_j·u_k is the inner product of the input weight vector w_j between the jth hidden-layer neuron and the input layer and the input vector u_k, b_j (1 ≤ j ≤ N) is the threshold of the jth hidden-layer neuron, and N is the number of hidden-layer neurons;
inputting the kth sample, the network output y_k is obtained as:

y_k = W·G(wu_k + b) (2)

the training error d_k between the expected output t_k of the network and the actual output y_k is defined as:

d_k = t_k - y_k (3)
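The forward pass of step 3 (hidden-layer output, network output y_k, and error d_k) can be sketched as below, assuming a sigmoid activation and a single scalar output; the names `forward`, `w_in`, `W_out` are illustrative, not from the patent:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def forward(W_out, w_in, b, u_k, t_k):
    h = sigmoid(w_in @ u_k + b)   # hidden-layer output G(w u_k + b), length N
    y_k = W_out @ h               # network output y_k = W . G(w u_k + b)
    d_k = t_k - y_k               # training error d_k = t_k - y_k
    return y_k, d_k
```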
Step 4: compute the gradient of the output weight and update the output weight.
The standard mean-square error function is defined as:

E_0(W) = (1/(2L)) Σ_{k=1}^{L} d_k^2 (4)

where d_k is the training error of the kth sample. Adding an L0 regularization term to the error function, the improved error function is:

E(W) = (1/(2L)) Σ_{k=1}^{L} d_k^2 + λ‖W‖_0 (5)

where λ is the regularization parameter and ‖W‖_0 is the L0 norm of W, i.e. the number of nonzero output weights, defined as follows:

‖W‖_0 = Σ_{j=1}^{N} I(W_j ≠ 0) (6)

where I(·) is the indicator function and W_j (1 ≤ j ≤ N) is the jth output weight;
However, the L0 norm is a non-convex function, so minimizing equation (5) is an NP-hard combinatorial problem. The L0 norm is therefore approximated by a continuously differentiable function f(·); the function f(γ, W_j) is defined as follows:

f(γ, W_j) = exp(-W_j^2/(2γ^2)) (7)

so that the L0 norm is approximated as:

‖W‖_0 ≈ N - Σ_{j=1}^{N} f(γ, W_j) (8)

where γ is a positive number, taken as γ = 0.05; the first derivative of f(γ, W_j) with respect to W_j is thus obtained as:

f'(γ, W_j) = -(W_j/γ^2) exp(-W_j^2/(2γ^2)) (9)

therefore equation (5) is updated as:

E(W) = (1/(2L)) Σ_{k=1}^{L} d_k^2 + λ(N - Σ_{j=1}^{N} exp(-W_j^2/(2γ^2))) (10)
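The smoothed-L0 surrogate can be illustrated numerically. The Gaussian form f(γ, W_j) = exp(-W_j^2/(2γ^2)) assumed below is the standard SL0 choice (the patent's exact expression sits behind an image): each term is near 1 for W_j ≈ 0 and near 0 for |W_j| ≫ γ, so N minus their sum counts the nonzero weights:

```python
import numpy as np

def sl0_norm(W, gamma=0.05):
    # N - sum_j exp(-W_j^2 / (2 gamma^2)): ~0 for W_j = 0, ~1 for |W_j| >> gamma.
    return W.size - np.exp(-W ** 2 / (2.0 * gamma ** 2)).sum()

def sl0_grad(W, gamma=0.05):
    # Elementwise gradient of the surrogate above with respect to W.
    return (W / gamma ** 2) * np.exp(-W ** 2 / (2.0 * gamma ** 2))
```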
A batch gradient descent algorithm is introduced. Given the output weight W_i at the ith iteration (with initial weight W_0), the gradient of E(W) with respect to the jth output weight is:

∂E(W_i)/∂W_j = -(1/L) Σ_{k=1}^{L} d_k g_j(w_j·u_k + b_j) + λ(W_{i,j}/γ^2) exp(-W_{i,j}^2/(2γ^2)) (11)

where W_{i,j} is the jth component of W_i, the first term is the gradient of the mean-square error term at the ith iteration, and the second term is the gradient of the smoothed L0 regularization term at the ith iteration;
therefore, the update formula of the output weight is:

W_{i+1} = W_i - η∇E(W_i) (12)

where W_{i+1} is the output weight of the (i+1)th iteration and W_i is the output weight of the ith iteration; each time the output weight is updated, i is incremented by 1, i.e. i = i + 1;
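One batch-gradient update of the output weight can be sketched as follows, under the same assumed Gaussian surrogate; `H` collects the hidden-layer outputs of all L training samples and all names are illustrative:

```python
import numpy as np

def update_output_weight(W, H, T, eta, lam, gamma=0.05):
    # H: L x N hidden-layer outputs, T: length-L targets, W: length-N output weights.
    L = H.shape[0]
    d = T - H @ W                              # errors d_k for the whole batch
    grad_mse = -(H.T @ d) / L                  # gradient of (1/(2L)) sum d_k^2
    grad_sl0 = (W / gamma ** 2) * np.exp(-W ** 2 / (2.0 * gamma ** 2))
    return W - eta * (grad_mse + lam * grad_sl0)   # W_{i+1} = W_i - eta * grad E
```

Repeating this update i_max times, with the smoothed-L0 term pushing small weights toward zero, yields the sparse output weight used for testing.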
Step 5: judge whether training is finished.
If i ≥ i_max, execute step 6; otherwise, return to step 3;
Step 6: test the network.
Using the output weight W obtained above, input the test samples and test the network.
CN201911030774.8A 2019-10-28 2019-10-28 Effluent NH4-N soft measurement method based on ELM-SL0 neural network Pending CN110837886A (en)

Publications (1)

Publication Number Publication Date
CN110837886A (en) 2020-02-25

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116151121A (en) * 2023-02-21 2023-05-23 北京工业大学 Neural network-based effluent NH4-N soft measurement method
CN116451763A (en) * 2023-03-17 2023-07-18 北京工业大学 Effluent NH4-N prediction method based on EDDESN

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
BR8200467U (en) * 2002-03-18 2003-12-09 Volnei Jaco Knorst Synthetic marble sink with tub in differentiated material
CN104616030A (en) * 2015-01-21 2015-05-13 北京工业大学 Extreme learning machine algorithm-based recognition method
CN104965971A (en) * 2015-05-24 2015-10-07 北京工业大学 Ammonia nitrogen concentration soft-measuring method based on fuzzy neural network
CN106503730A (en) * 2016-09-30 2017-03-15 暨南大学 A kind of bridge moving load identification method based on concatenate dictionaries and sparse regularization
CN106803237A (en) * 2016-12-14 2017-06-06 银江股份有限公司 A kind of improvement self-adaptive weighted average image de-noising method based on extreme learning machine
US20180093092A1 (en) * 2016-04-22 2018-04-05 Newton Howard Biological co-processor (bcp)
CN108469507A (en) * 2018-03-13 2018-08-31 北京工业大学 A kind of water outlet BOD flexible measurement methods based on Self organizing RBF Neural Network
CN109242194A (en) * 2018-09-25 2019-01-18 东北大学 A kind of thickener underflow concentration prediction method based on mixed model
JP2019040414A (en) * 2017-08-25 2019-03-14 日本電信電話株式会社 Learning device and learning method

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
YU YL ET AL: "A Homotopy Iterative Hard Thresholding Algorithm With Extreme Learning Machine for scene Recognition" *
CI NENGDA: "Compressed sensing DOA estimation in a vehicle-mounted millimeter-wave radar and communication integrated ***" *


Similar Documents

Publication Publication Date Title
CN108469507B (en) Effluent BOD soft measurement method based on self-organizing RBF neural network
CN111354423B (en) Method for predicting ammonia nitrogen concentration of effluent of self-organizing recursive fuzzy neural network based on multivariate time series analysis
CN111291937A (en) Method for predicting quality of treated sewage based on combination of support vector classification and GRU neural network
CN109657790B (en) PSO-based recursive RBF neural network effluent BOD prediction method
CN106022954B (en) Multiple BP neural network load prediction method based on grey correlation degree
CN109344971B (en) Effluent ammonia nitrogen concentration prediction method based on adaptive recursive fuzzy neural network
CN112949894B (en) Output water BOD prediction method based on simplified long-short-term memory neural network
CN114037163A (en) Sewage treatment effluent quality early warning method based on dynamic weight PSO (particle swarm optimization) optimization BP (Back propagation) neural network
CN104680015A (en) Online soft measurement method for sewage treatment based on quick relevance vector machine
CN111242380A (en) Lake (reservoir) eutrophication prediction method based on artificial intelligence algorithm
CN109599866B (en) Prediction-assisted power system state estimation method
CN110837886A (en) Effluent NH4-N soft measurement method based on ELM-SL0 neural network
CN112989704A (en) DE algorithm-based IRFM-CMNN effluent BOD concentration prediction method
CN115660165A (en) Modular neural network effluent ammonia nitrogen concentration multi-step prediction method based on double-layer PSO
CN109408896B (en) Multi-element intelligent real-time monitoring method for anaerobic sewage treatment gas production
CN110991616B (en) Method for predicting BOD of effluent based on pruning feedforward small-world neural network
CN114330815A (en) Ultra-short-term wind power prediction method and system based on improved GOA (generic object oriented architecture) optimized LSTM (least Square TM)
CN110542748B (en) Knowledge-based robust effluent ammonia nitrogen soft measurement method
CN113111576A (en) Mixed coding particle swarm-long and short term memory neural network based soft measurement method for ammonia nitrogen in effluent
CN116306803A (en) Method for predicting BOD concentration of outlet water of ILSTM (biological information collection flow) neural network based on WSFA-AFE
Varkeshi et al. Predicting the performance of Gorgan wastewater treatment plant using ANN-GA, CANFIS, and ANN models
CN116432832A (en) Water quality prediction method based on XGBoost-LSTM prediction model
CN110909492A (en) Sewage treatment process soft measurement method based on extreme gradient lifting algorithm
CN115905821A (en) Urban sewage treatment process state monitoring method based on multi-stage dynamic fuzzy width learning
CN112924646B (en) Effluent BOD soft measurement method based on self-adaptive pruning feedforward small-world neural network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination