CN110991616B - Method for predicting BOD of effluent based on pruning feedforward small-world neural network - Google Patents

Method for predicting BOD of effluent based on pruning feedforward small-world neural network

Info

Publication number
CN110991616B
CN110991616B
Authority
CN
China
Prior art keywords
layer
neuron
neural network
output
representing
Prior art date
Legal status
Active
Application number
CN201911211235.4A
Other languages
Chinese (zh)
Other versions
CN110991616A (en)
Inventor
李文静 (Wenjing Li)
褚明慧 (Minghui Chu)
乔俊飞 (Junfei Qiao)
Current Assignee
Beijing University of Technology
Original Assignee
Beijing University of Technology
Priority date
Filing date
Publication date
Application filed by Beijing University of Technology filed Critical Beijing University of Technology
Priority to CN201911211235.4A
Publication of CN110991616A
Application granted
Publication of CN110991616B
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G06N 3/044 Recurrent networks, e.g. Hopfield networks
    • G06N 3/08 Learning methods
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16C COMPUTATIONAL CHEMISTRY; CHEMOINFORMATICS; COMPUTATIONAL MATERIALS SCIENCE
    • G16C 20/00 Chemoinformatics, i.e. ICT specially adapted for the handling of physicochemical or structural data of chemical particles, elements, compounds or mixtures
    • G16C 20/30 Prediction of properties of chemical compounds, compositions or mixtures
    • G16C 20/70 Machine learning, data mining or chemometrics
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02A TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A 20/00 Water conservation; Efficient water supply; Efficient water use
    • Y02A 20/152 Water filtration

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computing Systems (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Software Systems (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Biophysics (AREA)
  • Molecular Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Computational Linguistics (AREA)
  • Biomedical Technology (AREA)
  • Chemical & Material Sciences (AREA)
  • Crystallography & Structural Chemistry (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Databases & Information Systems (AREA)
  • Medical Informatics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Feedback Control In General (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The method for predicting effluent BOD concentration based on a pruned feedforward small-world neural network belongs to the fields of control and water treatment; real-time prediction of BOD concentration is an important branch of the advanced manufacturing technology field. By designing a pruned feedforward small-world neural network, the invention realizes real-time, accurate measurement of BOD concentration from data acquired during the sewage treatment process, solves the problem that effluent BOD concentration is difficult to measure in real time during sewage treatment, and improves the real-time water-quality monitoring level of urban sewage treatment plants.

Description

Method for predicting BOD of effluent based on pruning feedforward small-world neural network
Technical Field:
the invention relates to a method for predicting effluent BOD based on a pruned feedforward small-world neural network. Real-time prediction of BOD concentration is an important branch of the advanced manufacturing technology field and belongs to both the control field and the water treatment field.
Background Art:
biochemical oxygen demand (Biochemical Oxygen Demand, BOD) is the amount of dissolved oxygen consumed by microorganisms decomposing organic matter in water within a specified time and is an important index for evaluating sewage quality; fast and accurate measurement of effluent BOD concentration helps control water pollution effectively. Current BOD measurement methods include the dilution-and-inoculation method and the rapid microbial-sensor method. The standard BOD analysis takes 5 days, so the measurement period is long and changes of BOD concentration in sewage cannot be reflected in real time. Meanwhile, microbial sensors suffer from high manufacturing cost, short service life and poor stability, which limits their applicability. Therefore, how to detect effluent BOD concentration at low cost and high efficiency is a difficult problem in the sewage treatment process.
Soft measurement adopts an indirect-measurement idea: easily measured variables are used to predict a hard-to-measure variable in real time through a constructed model, providing an efficient and fast solution for measuring key water-quality parameters in the sewage treatment process. The invention designs a method for predicting effluent BOD based on a pruned feedforward small-world neural network, an effective soft-measurement model with strong generalization capability.
Disclosure of Invention
The invention provides a method for predicting effluent BOD based on a pruned feedforward small-world neural network; it realizes real-time measurement of BOD concentration from data acquired during the sewage treatment process, solves the problem that effluent BOD concentration is difficult to measure in real time, and improves the real-time water-quality monitoring level of urban sewage treatment plants;
a method for soft measurement of BOD concentration of effluent based on a pruned feedforward small world neural network is characterized by comprising the following steps:
step 1: selecting auxiliary variables of a BOD prediction model of the effluent;
directly selecting the given M auxiliary variables; normalizing each auxiliary variable to [-1,1] according to formula (1), and normalizing the output variable BOD to [0,1] according to formula (2):

$$x_m = 2\,\frac{F_m-\min(F_m)}{\max(F_m)-\min(F_m)}-1 \qquad (1)$$

$$y = \frac{O-\min(O)}{\max(O)-\min(O)} \qquad (2)$$

where $F_m$ denotes the m-th auxiliary variable, $O$ denotes the output variable, and $x_m$ and $y$ denote the m-th auxiliary variable and the output variable after normalization, respectively; $\min(F_m)$ and $\max(F_m)$ denote the minimum and maximum of the m-th auxiliary variable, and $\min(O)$ and $\max(O)$ denote the minimum and maximum of the output variable;
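The normalization of formulas (1)-(2), and the inverse transform needed later in step 4, can be sketched in Python/NumPy as follows (an illustrative sketch; the function names are assumptions, not part of the method):

```python
import numpy as np

def normalize_features(F):
    """Scale each auxiliary-variable column of F to [-1, 1] (formula (1))."""
    Fmin, Fmax = F.min(axis=0), F.max(axis=0)
    return 2.0 * (F - Fmin) / (Fmax - Fmin) - 1.0

def normalize_target(O):
    """Scale the effluent BOD output variable to [0, 1] (formula (2))."""
    return (O - O.min()) / (O.max() - O.min())

def denormalize_target(y, O_min, O_max):
    """Inverse of formula (2), used in step 4 to recover BOD in mg/L."""
    return y * (O_max - O_min) + O_min
```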
step 2: designing a feedforward small world neural network model;
step 2.1: designing the rewiring scheme of the feedforward small-world neural network model;
the feedforward small-world neural network is constructed according to the Watts-Strogatz rewiring rule. The specific construction process is as follows: first, an L-layer feedforward neural network with regular connections is constructed; then, with reconnection probability p, a connection is randomly selected from the model, disconnected at its tail end and reconnected to another neuron in the model, where p is selected empirically from the range (0, 1). If the new connection already exists, another new neuron is randomly selected for connection; neurons in the same layer cannot be connected to each other;
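A minimal sketch of this rewiring procedure, assuming the network is stored as a set of (source, target) neuron pairs and that a mapping from neuron to layer index is available; any additional constraint on the rewired target (e.g. preserving the feedforward direction) is an implementation choice not fixed by the text:

```python
import random

def rewire(connections, layer_of, p, seed=0):
    """Watts-Strogatz-style rewiring of a regularly connected feedforward net
    (step 2.1). connections is a set of (src, dst) neuron-id pairs, layer_of
    maps neuron id -> layer index, and p is the reconnection probability."""
    rng = random.Random(seed)
    neurons = list(layer_of)
    rewired = set(connections)
    for (src, dst) in list(connections):
        if rng.random() >= p:
            continue
        rewired.discard((src, dst))          # disconnect at the tail end
        new_dst = rng.choice(neurons)
        # if the candidate connection already exists, or would link neurons
        # of the same layer, keep drawing another neuron
        while (new_dst == src or layer_of[new_dst] == layer_of[src]
               or (src, new_dst) in rewired):
            new_dst = rng.choice(neurons)
        rewired.add((src, new_dst))
    return rewired
```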
step 2.2: designing a topological structure of a feedforward small-world neural network model;
the designed feedforward small-world neural network topology has L layers in total, comprising an input layer, hidden layers and an output layer; the computation of each layer is as follows:
(1) input layer: this layer has M neurons representing the M input auxiliary variables; the input of the input layer is $x^{(1)}=[x_1^{(1)},x_2^{(1)},\ldots,x_M^{(1)}]$, where $x_m^{(1)}$ denotes the m-th input auxiliary variable, m = 1, 2, ..., M; the layer output equals its input, so the output of the m-th input-layer neuron is:

$$x_m^{(1)} = x_m, \quad m = 1,2,\ldots,M \qquad (3)$$
(2) hidden layer: the sigmoid function is adopted as the activation function of the hidden layers; the input and output of the j-th neuron of the l-th layer of the neural network are defined in formulas (4) and (5), respectively:

$$net_j^{(l)} = \sum_{s=1}^{l-1}\sum_{i=1}^{n_s} w_{ij}^{(s,l)}\, x_i^{(s)} \qquad (4)$$

$$x_j^{(l)} = f\big(net_j^{(l)}\big) = \frac{1}{1+e^{-net_j^{(l)}}} \qquad (5)$$

where $n_u$ denotes the number of neurons in the u-th layer of the neural network, and $w_{ij}^{(s,l)}$ denotes the connection weight between the i-th neuron of the s-th layer and the j-th neuron of the l-th layer;
(3) output layer: the output layer contains a single neuron, whose output is:

$$\hat{y} = \sum_{l=2}^{L-1}\sum_{j=1}^{n_l} v_j^{(l)}\, x_j^{(l)} \qquad (6)$$

where $v_j^{(l)}$ denotes the connection weight between the j-th neuron of the l-th layer of the neural network and the output neuron, and $n_l$ denotes the number of neurons in the l-th layer;
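A sketch of the forward computation of formulas (3)-(6), assuming cross-layer connections are stored per layer pair as in the rewiring sketch above; the dictionary layout is an assumption for illustration:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(x, W, V):
    """Forward pass of the feedforward small-world network (formulas (3)-(6)).

    x : normalized input vector of the M auxiliary variables (layer 1)
    W : dict {(s, l): (n_s, n_l) weight matrix} for connections from layer s
        to hidden layer l; rewiring may create entries with s < l - 1
    V : dict {l: (n_l,) weight vector} connecting hidden layer l to the
        single output neuron
    """
    outputs = {1: np.asarray(x)}                 # formula (3): output = input
    last_hidden = max(l for (_, l) in W)
    for l in range(2, last_hidden + 1):
        # formula (4): net input accumulated over every connected earlier
        # layer (assumes each layer keeps at least one incoming connection)
        net = sum(W[(s, l)].T @ outputs[s] for s in range(1, l) if (s, l) in W)
        outputs[l] = sigmoid(net)                # formula (5)
    # formula (6): weighted sum over hidden layers wired to the output neuron
    return sum(float(V[l] @ outputs[l]) for l in V)
```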
step 3: designing the pruning algorithm of the feedforward small-world neural network;
step 3.1: defining the performance index function:

$$E = \frac{1}{2}\sum_{q=1}^{Q}\big(d_q-\hat{y}_q\big)^2 \qquad (7)$$

where Q is the number of samples, $d_q$ is the desired output value of the q-th sample, and $\hat{y}_q$ is the predicted output value of the q-th sample;
step 3.2: carrying out parameter correction by adopting a batch gradient descent algorithm;
(1) the output-weight correction of the output layer is as shown in formulas (8)-(10):

$$v_j^{(l)}(t+1) = v_j^{(l)}(t) + \Delta v_j^{(l)}(t) \qquad (8)$$

$$\Delta v_j^{(l)}(t) = -\eta_v\,\frac{\partial E}{\partial v_j^{(l)}(t)} \qquad (9)$$

$$\frac{\partial E}{\partial v_j^{(l)}(t)} = -\sum_{q=1}^{Q}\big(d_q-\hat{y}_q\big)\, x_j^{(l)}(q) \qquad (10)$$

where $v_j^{(l)}(t)$ and $v_j^{(l)}(t+1)$ denote the connection weight between the j-th neuron of the l-th layer of the neural network and the output neuron at times t and t+1, respectively, $\Delta v_j^{(l)}(t)$ denotes the change of that connection weight at time t, and $\eta_v$ denotes the learning rate of the output-layer weight correction; $\eta_v$ is selected empirically from the range (0, 0.1];
(2) the weight correction of the hidden layers is as shown in formulas (11)-(13):

$$w_{ij}^{(s,l)}(t+1) = w_{ij}^{(s,l)}(t) + \Delta w_{ij}^{(s,l)}(t) \qquad (11)$$

$$\Delta w_{ij}^{(s,l)}(t) = -\eta_w\,\frac{\partial E}{\partial w_{ij}^{(s,l)}(t)} \qquad (12)$$

$$\frac{\partial E}{\partial w_{ij}^{(s,l)}(t)} = -\sum_{q=1}^{Q}\delta_j^{(l)}(q)\, x_i^{(s)}(q) \qquad (13)$$

where $w_{ij}^{(s,l)}(t)$ and $w_{ij}^{(s,l)}(t+1)$ denote the connection weight between the i-th neuron of the s-th layer and the j-th neuron of the l-th layer at times t and t+1, respectively, $\Delta w_{ij}^{(s,l)}(t)$ denotes the change of that connection weight at time t, $\delta_j^{(l)}(q)$ is the error signal back-propagated to the j-th neuron of the l-th layer for the q-th sample, and $\eta_w$ denotes the learning rate of the hidden-layer weight correction; $\eta_w$ is selected empirically from the range (0, 0.1];
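A sketch of the batch output-weight update of formulas (8)-(10); the hidden-layer update of formulas (11)-(13) follows the same pattern with back-propagated error signals. The default learning rate is the embodiment value 0.0003 given later:

```python
import numpy as np

def update_output_weights(V, hidden_out, d, y_hat, eta_v=0.0003):
    """Batch gradient-descent update of the output weights (formulas (8)-(10)).

    hidden_out : dict {l: (Q, n_l) matrix of hidden outputs over the batch}
    d, y_hat   : desired and predicted outputs over the batch, shape (Q,)
    eta_v      : output-layer learning rate
    """
    err = np.asarray(d) - np.asarray(y_hat)          # (d_q - yhat_q)
    for l in V:
        # dE/dv_j = -sum_q (d_q - yhat_q) x_j^(l)(q)   -> formula (10)
        V[l] = V[l] + eta_v * (err @ hidden_out[l])   # formulas (8)-(9)
    return V
```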
Step 3.3: inputting training sample data, updating output weights of an implicit layer and an output layer according to formulas (8) - (13) in step 3.2, wherein Iter represents training iteration times, the iteration times are increased once every time the weights are updated, if the iteration times of the training process can be divided by a learning step length constant tau, step 3.4 is executed, otherwise, step 3.6 is skipped, wherein tau is selected according to experience, and the value range is an integer in the range of [10,100 ];
step 3.4: calculating the Katz centrality and the normalized Katz centrality of all hidden-layer neurons; Katz centrality is defined as shown in formula (14):

$$K_g = \sum_{k=1}^{\infty}\sum_{h}\alpha^{k}\big(W^{k}\big)_{gh} \qquad (14)$$

where $(W^{k})_{gh}$ denotes the (g, h) entry of the k-th power of the connection-weight matrix, and α denotes the attenuation factor; the value of α must satisfy $0<\alpha<1/\lambda_{max}$, where $\lambda_{max}$ denotes the maximum eigenvalue of the network adjacency matrix, and α is selected empirically from the range (0, 0.1]; a larger Katz centrality indicates a more important node, and vice versa;
the normalized Katz centrality is defined as shown in formula (15), where $K_j^{(s)}$ denotes the Katz centrality of the j-th neuron of the s-th layer of the neural network and $\hat{K}_j^{(s)}$ denotes its normalized Katz centrality; let $\bar{K}^{(s)}$ denote the average normalized Katz centrality of all neurons of the s-th layer, and let θ be a preset threshold parameter selected empirically from the range [0.9, 1]; if the normalized Katz centrality of a neuron satisfies $\hat{K}_j^{(s)} < \theta\,\bar{K}^{(s)}$, the neuron is regarded as unimportant; the set of unimportant hidden-layer neurons in the s-th layer is denoted $A_s$, and the set of remaining neurons in that layer is denoted $B_s$;
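Formula (14) can be evaluated in closed form, since the infinite sum of matrix powers converges for α < 1/λmax. A sketch, with the normalization of formula (15) assumed to be division by the layer sum (the original formula is not reproduced here):

```python
import numpy as np

def katz_centrality(W, alpha):
    """Katz centrality of every neuron (formula (14)).

    W     : (n, n) weighted adjacency matrix, W[g, h] the weight from g to h
    alpha : attenuation factor, 0 < alpha < 1 / lambda_max(W)
    """
    n = W.shape[0]
    # sum_{k>=1} (alpha W)^k = (I - alpha W)^{-1} - I when alpha < 1/lambda_max
    S = np.linalg.inv(np.eye(n) - alpha * W) - np.eye(n)
    K = S @ np.ones(n)            # K_g = sum_k sum_h alpha^k (W^k)_{gh}
    return K / K.sum()            # normalization by layer sum is an assumption
```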
Step 3.5: computing set A s And set B s The correlation coefficient between hidden layer neurons in (2) is defined as shown in formula (16):
wherein,and->Respectively representing the output values of the ith neuron and the jth neuron of the ith layer of the neural network when the qth training sample is input; />And->Respectively representing the input of all samples +.>And->Average value of (2); sigma (sigma) i Sum sigma j Respectively representing the input of all samples +.>And->Standard deviation of (2); will set A s Hidden layer neurons in (denoted as neuron a, a e A) s ) The neuron with the highest correlation coefficient (named neuron B, b.epsilon.B) s ) Combining to generate a new neuron c, wherein the connection weight between the neuron c and the neuron of the forward network layer is constructed according to the reconnection rule of Watt-Strogatz and the reconnection probability p, wherein p is selected empirically and has the value range of (0, 1) so as to ensure the small worldwide performance of the network, and the output of the neuron c is shown as a formula (17):
$$x_c^{(s)} = f\Big(\sum_{u=1}^{s-1}\sum_{i=1}^{n_u} w_{ic}^{(u,s)}\, x_i^{(u)}\Big) \qquad (17)$$

where $w_{ic}^{(u,s)}$ denotes the connection weight between the i-th neuron of the u-th layer of the neural network and neuron c of the s-th layer;

the connection weights between neuron c and the neurons of the succeeding network layers are set by the pruning algorithm as shown in formulas (18)-(19):

$$w_{cj} = \frac{w_{aj}\,\bar{x}_a^{(s)} + w_{bj}\,\bar{x}_b^{(s)}}{\bar{x}_c^{(s)}} \qquad (18)$$

$$v_c = \frac{v_a\,\bar{x}_a^{(s)} + v_b\,\bar{x}_b^{(s)}}{\bar{x}_c^{(s)}} \qquad (19)$$

where $w_{aj}$, $w_{bj}$ and $w_{cj}$ denote the connection weights between neurons a, b and c of the s-th layer of the neural network and neuron j in a succeeding hidden layer, respectively; $v_a$, $v_b$ and $v_c$ denote the connection weights between neurons a, b and c of the s-th layer and the output neuron, respectively; and $\bar{x}_a^{(s)}$, $\bar{x}_b^{(s)}$ and $\bar{x}_c^{(s)}$ denote the output values of neurons a, b and c of the s-th layer; after merging according to formulas (17)-(19), the procedure jumps back to step 3.3;
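A sketch of the neuron-selection and weight-merging step; the Pearson correlation follows formula (16), and the merged backward weight follows the contribution-preserving form assumed above for formulas (18)-(19):

```python
import numpy as np

def most_correlated(outputs, a, B):
    """Pick the neuron b in B most correlated with the unimportant neuron a
    (formula (16)); outputs is a (Q, n_s) matrix of hidden outputs over the
    training batch, columns indexed by neuron."""
    pearson = lambda i, j: np.corrcoef(outputs[:, i], outputs[:, j])[0, 1]
    return max(B, key=lambda b: pearson(a, b))

def merged_backward_weight(w_a, w_b, xbar_a, xbar_b, xbar_c):
    """Backward weight of the merged neuron c, assuming formulas (18)-(19)
    preserve the combined downstream contribution of a and b."""
    return (w_a * xbar_a + w_b * xbar_b) / xbar_c
```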
step 3.6: calculating the training RMSE; if the RMSE is smaller than the desired training error $RMSE_d$, or the number of iterations reaches the maximum number of iterations $Iter_{max}$, the computation stops, where $Iter_{max}$ is selected empirically from the range [5000, 10000]; otherwise the procedure jumps to step 3.3; the RMSE is defined as shown in formula (20):

$$RMSE = \sqrt{\frac{1}{Q}\sum_{q=1}^{Q}\big(d_q-\hat{y}_q\big)^2} \qquad (20)$$
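The stopping test of step 3.6 then reduces to a few lines; the defaults follow the embodiment values given later ($RMSE_d$ = 0.05, $Iter_{max}$ = 10000):

```python
import numpy as np

def should_stop(d, y_hat, iter_count, rmse_d=0.05, iter_max=10000):
    """Stopping test of step 3.6 using the RMSE of formula (20)."""
    rmse = np.sqrt(np.mean((np.asarray(d) - np.asarray(y_hat)) ** 2))
    return rmse < rmse_d or iter_count >= iter_max
```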
step 4: predicting BOD of the effluent;
the test sample data are taken as the input of the trained pruned feedforward small-world neural network, and the network output is de-normalized to obtain the predicted value of effluent BOD.
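Step 4 amounts to running the trained forward pass and inverting formula (2); a sketch, with forward_fn standing for the trained network (an assumed callable):

```python
def predict_bod(forward_fn, X_test, O_min, O_max):
    """Apply the trained network to each test sample and de-normalize the
    [0, 1] output back to an effluent BOD value in mg/L (step 4)."""
    return [forward_fn(x) * (O_max - O_min) + O_min for x in X_test]
```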
Compared with the prior art, the invention has the following obvious advantages and beneficial effects:
(1) Aiming at the problems that the measurement period of the key water-quality parameter BOD is long in the current sewage treatment process and a mechanistic mathematical model is difficult to determine, the invention provides a pruned feedforward small-world neural network model to realize real-time measurement of effluent BOD; the method features good real-time performance, high precision, good stability and strong generalization capability.
(2) Aiming at the problems that the traditional small-world neural network has a large, fixed structure that easily becomes oversized and time-consuming, the importance of hidden-layer neurons is measured by Katz centrality and a pruning algorithm is proposed to determine the number of hidden-layer neurons, thereby avoiding an oversized network that would require more computation time and storage space.
Drawings
FIG. 1 is a diagram of the topology of a neural network of the present invention;
FIG. 2 is a graph of training Root Mean Square Error (RMSE) variation for the BOD concentration prediction method of the present invention;
FIGS. 3 and 4 show the changes of the hidden-layer node counts during training of the BOD concentration prediction method of the invention;
FIG. 5 is a graph showing the result of predicting BOD concentration of the effluent of the present invention;
FIG. 6 is a graph showing the BOD concentration prediction error of the effluent of the present invention.
Detailed Description
The invention provides a method for predicting effluent BOD based on a pruned feedforward small-world neural network; it realizes real-time measurement of BOD concentration from data acquired during the sewage treatment process, solves the problem that effluent BOD concentration is difficult to measure in real time, and improves the real-time water-quality monitoring level of urban sewage treatment plants;
the experimental data come from the 2011 water-quality analysis data of a sewage plant, containing 365 groups of data with ten water-quality variables: (1) effluent total nitrogen concentration; (2) effluent ammonia nitrogen concentration; (3) influent total nitrogen concentration; (4) influent BOD concentration; (5) influent ammonia nitrogen concentration; (6) effluent phosphate concentration; (7) biochemical MLSS concentration; (8) biochemical tank DO concentration; (9) influent phosphate concentration; (10) influent COD concentration. All 365 groups of samples were divided into two parts: 265 groups used as training samples and the remaining 100 groups used as test samples;
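Assuming arrays X (the ten auxiliary variables) and y (measured effluent BOD) hold the 365 samples in their original order, the split reads:

```python
# 265 of the 365 samples for training, the remaining 100 for testing
X_train, y_train = X[:265], y[:265]
X_test,  y_test  = X[265:], y[265:]
```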
a soft measurement method of BOD concentration of effluent based on a truncated feedforward small world neural network comprises the following steps:
step 1: selecting auxiliary variables of a BOD prediction model of the effluent;
directly selecting the given M auxiliary variables; normalizing each auxiliary variable to [-1,1] according to formula (1), and normalizing the output variable BOD to [0,1] according to formula (2):

$$x_m = 2\,\frac{F_m-\min(F_m)}{\max(F_m)-\min(F_m)}-1 \qquad (1)$$

$$y = \frac{O-\min(O)}{\max(O)-\min(O)} \qquad (2)$$

where $F_m$ denotes the m-th auxiliary variable, $O$ denotes the output variable, and $x_m$ and $y$ denote the m-th auxiliary variable and the output variable after normalization, respectively; $\min(F_m)$ and $\max(F_m)$ denote the minimum and maximum of the m-th auxiliary variable, and $\min(O)$ and $\max(O)$ denote the minimum and maximum of the output variable;
in this embodiment, the given 10 auxiliary variables are directly selected, i.e., M = 10; the 10 auxiliary variables are: (1) effluent total nitrogen concentration; (2) effluent ammonia nitrogen concentration; (3) influent total nitrogen concentration; (4) influent BOD concentration; (5) influent ammonia nitrogen concentration; (6) effluent phosphate concentration; (7) biochemical MLSS concentration; (8) biochemical tank DO concentration; (9) influent phosphate concentration; (10) influent COD concentration;
step 2: designing a feedforward small world neural network model;
step 2.1: designing the rewiring scheme of the feedforward small-world neural network model;
the feedforward small-world neural network is constructed according to the Watts-Strogatz rewiring rule. The specific construction process is as follows: first, an L-layer feedforward neural network with regular connections is constructed, with initial weights generated randomly in [-1, 1]; then, with reconnection probability p, a connection is randomly selected from the model, disconnected at its tail end and reconnected to another neuron in the model, where p is selected empirically from the range (0, 1). If the new connection already exists, another new neuron is randomly selected for connection, the newly generated weight again lying in [-1, 1]; neurons in the same layer cannot be connected to each other. In this embodiment p is 0.5;
step 2.2: designing a topological structure of a feedforward small-world neural network model;
the designed feedforward small-world neural network topology has L layers in total, comprising an input layer, hidden layers and an output layer; the computation of each layer is as follows:
(1) input layer: this layer has M neurons representing the M input auxiliary variables; the input of the input layer is $x^{(1)}=[x_1^{(1)},x_2^{(1)},\ldots,x_M^{(1)}]$, where $x_m^{(1)}$ denotes the m-th input auxiliary variable, m = 1, 2, ..., M; the layer output equals its input, so the output of the m-th input-layer neuron is:

$$x_m^{(1)} = x_m, \quad m = 1,2,\ldots,M \qquad (3)$$
(2) hidden layer: the sigmoid function is adopted as the activation function of the hidden layers; the input and output of the j-th neuron of the l-th layer of the neural network are defined in formulas (4) and (5), respectively:

$$net_j^{(l)} = \sum_{s=1}^{l-1}\sum_{i=1}^{n_s} w_{ij}^{(s,l)}\, x_i^{(s)} \qquad (4)$$

$$x_j^{(l)} = f\big(net_j^{(l)}\big) = \frac{1}{1+e^{-net_j^{(l)}}} \qquad (5)$$

where $n_u$ denotes the number of neurons in the u-th layer of the neural network, and $w_{ij}^{(s,l)}$ denotes the connection weight between the i-th neuron of the s-th layer and the j-th neuron of the l-th layer;
(3) output layer: the output layer contains a single neuron, whose output is:

$$\hat{y} = \sum_{l=2}^{L-1}\sum_{j=1}^{n_l} v_j^{(l)}\, x_j^{(l)} \qquad (6)$$

where $v_j^{(l)}$ denotes the connection weight between the j-th neuron of the l-th layer of the neural network and the output neuron, and $n_l$ denotes the number of neurons in the l-th layer;
in this embodiment there are two hidden layers, each containing 40 initial neurons;
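A sketch of this initial topology before rewiring and pruning, assuming "regular connection" means full connectivity between adjacent layers, with weights drawn uniformly from [-1, 1] as in step 2.1:

```python
import numpy as np

rng = np.random.default_rng(0)
layer_sizes = [10, 40, 40]        # input layer and the two initial hidden layers

# regular adjacent-layer connections with initial weights drawn from [-1, 1]
W = {(l, l + 1): rng.uniform(-1.0, 1.0, size=pair)
     for l, pair in enumerate(zip(layer_sizes, layer_sizes[1:]), start=1)}

# the single output neuron initially wired to the last hidden layer;
# Watts-Strogatz rewiring may later attach it to earlier hidden layers
V = {3: rng.uniform(-1.0, 1.0, size=40)}
```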
step 3: designing the pruning algorithm of the feedforward small-world neural network;
step 3.1: defining the performance index function:

$$E = \frac{1}{2}\sum_{q=1}^{Q}\big(d_q-\hat{y}_q\big)^2 \qquad (7)$$

where Q is the number of samples, $d_q$ is the desired output value of the q-th sample, and $\hat{y}_q$ is the predicted output value of the q-th sample;
step 3.2: carrying out parameter correction by adopting a batch gradient descent algorithm;
(1) the output-weight correction of the output layer is as shown in formulas (8)-(10):

$$v_j^{(l)}(t+1) = v_j^{(l)}(t) + \Delta v_j^{(l)}(t) \qquad (8)$$

$$\Delta v_j^{(l)}(t) = -\eta_v\,\frac{\partial E}{\partial v_j^{(l)}(t)} \qquad (9)$$

$$\frac{\partial E}{\partial v_j^{(l)}(t)} = -\sum_{q=1}^{Q}\big(d_q-\hat{y}_q\big)\, x_j^{(l)}(q) \qquad (10)$$

where $v_j^{(l)}(t)$ and $v_j^{(l)}(t+1)$ denote the connection weight between the j-th neuron of the l-th layer of the neural network and the output neuron at times t and t+1, respectively, $\Delta v_j^{(l)}(t)$ denotes the change of that connection weight at time t, and $\eta_v$ denotes the learning rate of the output-layer weight correction; $\eta_v$ is selected empirically from the range (0, 0.1]; in this embodiment $\eta_v$ is 0.0003;
(2) the weight correction of the hidden layers is as shown in formulas (11)-(13):

$$w_{ij}^{(s,l)}(t+1) = w_{ij}^{(s,l)}(t) + \Delta w_{ij}^{(s,l)}(t) \qquad (11)$$

$$\Delta w_{ij}^{(s,l)}(t) = -\eta_w\,\frac{\partial E}{\partial w_{ij}^{(s,l)}(t)} \qquad (12)$$

$$\frac{\partial E}{\partial w_{ij}^{(s,l)}(t)} = -\sum_{q=1}^{Q}\delta_j^{(l)}(q)\, x_i^{(s)}(q) \qquad (13)$$

where $w_{ij}^{(s,l)}(t)$ and $w_{ij}^{(s,l)}(t+1)$ denote the connection weight between the i-th neuron of the s-th layer and the j-th neuron of the l-th layer at times t and t+1, respectively, $\Delta w_{ij}^{(s,l)}(t)$ denotes the change of that connection weight at time t, $\delta_j^{(l)}(q)$ is the error signal back-propagated to the j-th neuron of the l-th layer for the q-th sample, and $\eta_w$ denotes the learning rate of the hidden-layer weight correction; $\eta_w$ is selected empirically from the range (0, 0.1]; in this embodiment $\eta_w$ is 0.0003;
step 3.3: inputting the training sample data and updating the weights of the hidden layers and the output layer according to formulas (8)-(13) of step 3.2; Iter denotes the number of training iterations and is incremented each time the weights are updated; if Iter is divisible by the learning-step constant τ, step 3.4 is executed, otherwise the procedure jumps to step 3.6; τ is selected empirically as an integer in the range [10, 100]; in this embodiment τ is 20;
step 3.4: calculating the Katz centrality and the normalized Katz centrality of all hidden-layer neurons; Katz centrality is defined as shown in formula (14):

$$K_g = \sum_{k=1}^{\infty}\sum_{h}\alpha^{k}\big(W^{k}\big)_{gh} \qquad (14)$$

where $(W^{k})_{gh}$ denotes the (g, h) entry of the k-th power of the connection-weight matrix, and α denotes the attenuation factor; the value of α must satisfy $0<\alpha<1/\lambda_{max}$, where $\lambda_{max}$ denotes the maximum eigenvalue of the network adjacency matrix, and α is selected empirically from the range (0, 0.1]; in this embodiment α is 0.01; a larger Katz centrality indicates a more important node, and vice versa;
the normalized Katz centrality is defined as shown in formula (15), where $K_j^{(s)}$ denotes the Katz centrality of the j-th neuron of the s-th layer of the neural network and $\hat{K}_j^{(s)}$ denotes its normalized Katz centrality; let $\bar{K}^{(s)}$ denote the average normalized Katz centrality of all neurons of the s-th layer, and let θ be a preset threshold parameter selected empirically from the range [0.9, 1]; in this embodiment θ is 0.93; if the normalized Katz centrality of a neuron satisfies $\hat{K}_j^{(s)} < \theta\,\bar{K}^{(s)}$, the neuron is regarded as unimportant; the set of unimportant hidden-layer neurons in the s-th layer is denoted $A_s$, and the set of remaining neurons in that layer is denoted $B_s$;
Step 3.5: computing set A s And set B s The correlation coefficient between hidden layer neurons in (2) is defined as shown in formula (16):
wherein,and->Respectively representing the output values of the ith neuron and the jth neuron of the ith layer of the neural network when the qth training sample is input; />And->Respectively representing the input of all samples +.>And->Average value of (2); sigma (sigma) i Sum sigma j Respectively representing the input of all samples +.>And->Standard deviation of (2); will set A s Hidden layer neurons in (denoted as neuron a, a e A) s ) The neuron with the highest correlation coefficient (named neuron B, b.epsilon.B) s ) Combining to generate a new neuron c, wherein the connection weight between the neuron c and the neuron of the forward network layer is constructed according to the reconnection rule of Watt-Strogatz and the reconnection probability p to ensure the small worldwide property of the network, wherein p is selected according to experience and has a value range of (0, 1), in the embodiment, p takes 0.5, and the output of the neuron c is shown as a formula (17):
$$x_c^{(s)} = f\Big(\sum_{u=1}^{s-1}\sum_{i=1}^{n_u} w_{ic}^{(u,s)}\, x_i^{(u)}\Big) \qquad (17)$$

where $w_{ic}^{(u,s)}$ denotes the connection weight between the i-th neuron of the u-th layer of the neural network and neuron c of the s-th layer;

the connection weights between neuron c and the neurons of the succeeding network layers are set by the pruning algorithm as shown in formulas (18)-(19):

$$w_{cj} = \frac{w_{aj}\,\bar{x}_a^{(s)} + w_{bj}\,\bar{x}_b^{(s)}}{\bar{x}_c^{(s)}} \qquad (18)$$

$$v_c = \frac{v_a\,\bar{x}_a^{(s)} + v_b\,\bar{x}_b^{(s)}}{\bar{x}_c^{(s)}} \qquad (19)$$

where $w_{aj}$, $w_{bj}$ and $w_{cj}$ denote the connection weights between neurons a, b and c of the s-th layer of the neural network and neuron j in a succeeding hidden layer, respectively; $v_a$, $v_b$ and $v_c$ denote the connection weights between neurons a, b and c of the s-th layer and the output neuron, respectively; and $\bar{x}_a^{(s)}$, $\bar{x}_b^{(s)}$ and $\bar{x}_c^{(s)}$ denote the output values of neurons a, b and c of the s-th layer; after merging according to formulas (17)-(19), the procedure jumps back to step 3.3;
step 3.6: calculating the training RMSE; if the RMSE is smaller than the desired training error $RMSE_d$, or the number of iterations reaches the maximum number of iterations $Iter_{max}$, the computation stops, where $Iter_{max}$ is selected empirically from the range [5000, 10000]; otherwise the procedure jumps to step 3.3; the RMSE is defined as shown in formula (20):

$$RMSE = \sqrt{\frac{1}{Q}\sum_{q=1}^{Q}\big(d_q-\hat{y}_q\big)^2} \qquad (20)$$

in this embodiment $Iter_{max}$ is 10000 and $RMSE_d$ is 0.05; the training RMSE curve is shown in FIG. 2 (X-axis: training steps; Y-axis: training RMSE in mg/L); the changes of the hidden-layer node counts during training are shown in FIGS. 3 and 4 (X-axis: training steps; Y-axis: number of hidden-layer neurons);
step 4: predicting BOD of the effluent;
the test sample data are taken as the input of the trained pruned feedforward small-world neural network, and the network output is de-normalized to obtain the predicted value of effluent BOD.
In this embodiment the prediction results are shown in FIG. 5 (X-axis: sample number; Y-axis: effluent BOD concentration in mg/L), where the solid line is the actually measured effluent BOD concentration and the dotted line is the predicted effluent BOD concentration. The error between the measured and predicted effluent BOD concentrations is shown in FIG. 6 (X-axis: sample number; Y-axis: effluent BOD prediction error in mg/L). The results show that the method for predicting effluent BOD concentration based on the pruned feedforward small-world neural network is effective.
Tables 1-23 contain the experimental data of the invention. Tables 1-11 are the training samples: effluent total nitrogen concentration, effluent ammonia nitrogen concentration, influent total nitrogen concentration, influent BOD concentration, influent ammonia nitrogen concentration, effluent phosphate concentration, biochemical MLSS concentration, biochemical tank DO concentration, influent phosphate concentration, influent COD concentration and actually measured effluent BOD concentration. Tables 12-22 are the test samples with the same eleven variables. Table 23 gives the effluent BOD values predicted by the method of the invention.
Training samples:
TABLE 1 auxiliary variable total nitrogen in effluent (mg/L)
TABLE 2 auxiliary variable ammonia nitrogen in effluent (mg/L)
TABLE 3 auxiliary variable total nitrogen in water (mg/L)
7.1360 10.5635 10.3759 10.3069 8.4245 12.8941 8.5735 8.6006 9.8132 7.6567
12.1750 7.8706 14.5415 7.5145 7.3283 8.7793 8.5030 8.5518 6.7060 10.1227
5.9246 7.9102 6.5293 8.7373 9.8952 8.5680 8.6127 11.4471 7.7494 7.7176
12.1330 14.2435 10.1667 13.8488 5.5685 7.9894 10.1884 7.2206 8.4340 8.6818
7.1448 6.9728 8.3270 8.7468 8.7197 8.1306 9.3014 11.4911 8.6276 9.0000
7.4244 8.9107 8.3771 8.5234 11.1708 6.6688 10.4897 11.9712 8.5254 8.6466
6.8428 7.0879 10.3610 10.4612 6.1020 10.2473 9.1104 6.2571 10.6915 9.7604
11.8737 8.9134 6.9315 10.9244 8.5139 7.8625 5.2353 9.7828 12.2380 11.5514
11.3794 6.1758 8.0685 7.7519 8.2715 8.5166 8.5992 8.2986 11.4078 8.6832
10.4342 10.2615 8.5748 10.0035 8.1293 10.0008 8.8321 8.6561 5.9693 7.9878
6.6877 10.1105 7.2490 7.1069 7.1840 7.4149 8.8186 14.4670 6.1210 7.1827
7.1339 7.0141 10.3245 9.2817 8.6628 7.1637 8.0074 11.5487 7.1380 11.1262
6.3824 10.7436 7.6377 8.8660 11.9976 8.2742 10.4267 8.0846 10.5093 10.0177
7.1380 14.3180 10.2520 7.8842 13.7865 10.1464 11.2074 8.8308 9.8965 14.7744
10.4308 7.0554 9.5959 7.2017 10.5195 6.2909 8.6046 11.6895 14.6160 10.5865
7.0967 10.0604 9.9155 9.2695 8.8890 10.3475 11.6313 8.4482 9.2729 9.9785
7.0513 5.2123 6.9301 10.0543 10.3353 12.1540 6.8089 8.3060 10.1755 8.6913
8.7895 9.5146 10.2791 8.1477 9.9175 13.4635 6.7182 8.2269 8.1990 8.0683
8.2904 9.3433 6.4799 10.4700 7.3540 14.1691 13.8294 8.3548 8.9391 11.0097
7.4278 11.7234 8.9743 13.0214 7.7636 5.9707 8.6019 10.3719 7.1163 10.1200
10.6156 4.8677 14.2970 7.2321 11.9245 8.9459 11.2555 10.2039 7.0019 7.2084
7.3289 6.6654 8.4055 7.8625 11.0354 8.1374 10.1850 9.9981 8.5884 7.1258
8.6981 8.2992 8.5559 14.7647 10.7700 9.1957 7.9654 8.6574 10.3915 6.2808
8.4394 6.8015 12.4264 9.0122 8.2160 6.9376 7.7765 6.8509 8.7549 8.7928
9.0481 6.5665 11.2629 12.2590 8.6493 8.6689 8.5200 6.9931 7.1793 5.6030
14.6904 10.4111 9.4909 8.8470 7.7596 7.2937 12.2170 4.5000 10.4335 9.5891
11.8303 7.5936 6.9735 8.6615 10.3096
TABLE 4 auxiliary variable BOD (mg/L) of incoming water
TABLE 5 auxiliary variable intake ammonia nitrogen (mg/L)
TABLE 6 auxiliary variable out-water phosphate (mg/L)
11.1500 9.2000 8.0250 11.4750 14.6500 13.8333 14.3750 13.5250 4.5000 13.0750
11.2750 12.9750 15.4000 14.6500 11.8500 9.5500 13.1250 13.3250 14.0250 11.5250
13.6250 14.0375 13.8250 14.6500 11.4250 13.7250 13.8250 11.4500 14.1750 14.1500
11.0750 14.8000 7.0750 11.6000 13.5750 14.1250 6.8500 10.4500 13.4250 14.0250
10.7500 11.6500 12.0250 14.1250 14.5250 13.7750 14.4750 14.3000 14.0000 8.9500
12.2000 14.3500 13.4250 13.6250 11.7500 12.7000 9.3750 11.1250 14.3250 14.4500
13.7000 10.9750 11.5750 11.5500 13.0750 11.7250 8.6500 13.6750 9.2000 11.2750
14.3500 14.4000 11.8500 11.7750 13.8250 14.2250 13.6250 14.2000 11.5750 10.8250
10.5000 13.4000 14.2125 13.8625 12.3750 13.6250 13.8250 12.7750 11.7250 14.5000
12.0250 7.5250 14.2000 6.1750 14.1000 5.6750 13.5000 14.4250 13.2500 11.6000
13.7000 6.6250 14.4500 10.9000 11.3250 14.3000 13.5500 15.2500 13.6000 10.6000
13.8500 11.4500 8.1250 10.6250 14.3750 10.6750 13.9500 11.5750 10.8500 12.0250
13.5500 11.8000 14.4750 14.6750 10.3000 13.3750 8.8500 12.8750 9.4750 5.9500
13.8500 14.9500 6.5750 14.1250 14.6000 9.0000 10.1750 10.6750 11.3250 12.2500
9.0750 11.2500 11.1250 10.5250 8.8750 12.9500 14.0000 11.4250 15.5500 7.4250
11.0500 11.4250 5.2250 10.6750 13.3750 7.9750 11.2500 10.4500 9.4750 11.3500
14.0000 13.5250 13.9000 6.1750 8.4250 11.1750 13.8000 14.4750 7.0750 13.8500
11.9750 8.2250 7.9750 14.3000 5.7250 15.3250 14.0500 14.3875 11.5500 13.2750
13.0000 10.7750 12.8250 9.2750 11.7500 14.6500 13.5222 11.8500 14.7500 14.5750
14.0000 11.1500 10.2750 14.5000 13.0250 13.8250 14.2750 11.9250 14.3500 6.6750
11.6250 13.5250 13.3667 11.5000 10.8250 13.2500 9.1250 11.6750 14.0000 14.1500
13.4500 13.9000 12.7250 13.1750 9.8500 14.2500 11.6250 5.7250 14.1000 10.8250
13.9250 12.2000 13.8250 13.2111 11.7000 10.5750 13.2250 13.9250 8.8750 13.6750
13.5500 13.9750 13.9889 14.8250 12.7250 14.0500 11.6500 14.1500 13.7000 14.6000
10.3750 13.6000 11.6500 11.6750 13.7250 9.8500 14.2250 13.7750 10.6500 13.7250
15.7000 8.9750 10.9750 14.4250 13.1250 14.2000 11.4750 13.4250 8.4250 10.9750
11.2750 13.6875 14.0250 13.8750 11.8250
TABLE 7 auxiliary variable Biochemical MLSS (mg/L)
TABLE 8 auxiliary variable Biochemical pool DO (mg/L)
TABLE 9 auxiliary variable intake phosphate (mg/L)
5.3779 6.3867 6.1319 7.9230 6.0115 6.6805 5.7779 5.4381 5.3673 4.8221
7.7000 4.9212 6.3088 5.6150 5.6186 9.3142 5.3035 5.3708 5.6469 7.5761
4.8823 13.4080 5.7389 5.8381 6.9956 5.6681 5.6611 9.1584 5.5513 5.3637
7.6150 6.2097 6.7018 8.7761 4.7867 13.8664 5.8770 5.6398 5.6894 5.7637
5.4558 5.0664 5.5867 5.6398 5.8062 5.3496 6.0611 5.9867 5.6575 9.9301
5.7389 5.9018 5.2434 5.6752 8.5637 5.4133 6.5248 7.8133 5.8062 5.7920
4.9035 5.3177 9.0097 8.0115 5.1690 7.4062 10.2381 5.6894 8.7973 6.9566
6.0150 5.9336 4.9708 8.2770 5.7000 5.4027 4.8823 5.6788 7.8274 9.5938
9.4345 4.6204 14.3248 12.4912 5.5513 5.7142 5.7389 5.1195 8.7761 5.7460
7.1513 9.8062 5.5619 10.9743 5.5832 5.6221 5.4274 5.7000 4.5000 5.4487
5.0027 6.7195 5.4876 5.3637 5.4381 5.3425 5.4982 6.2841 5.6646 5.5478
5.0381 5.1619 6.1637 6.3867 5.7920 5.5018 5.4027 8.5354 5.4487 9.2575
4.7407 9.1336 5.5619 5.8345 9.4841 5.1832 6.3230 5.0204 6.5035 5.6858
5.2080 6.2345 10.8646 5.4558 6.1566 10.1920 9.2752 8.5142 6.6416 8.4221
6.5885 5.2575 8.7619 5.5938 8.6381 5.2504 5.4982 8.2947 6.3336 10.3655
5.3531 7.6611 5.5265 6.6133 5.4097 9.4168 9.5549 8.3903 10.5885 8.8858
5.1726 4.6912 5.1159 6.7372 6.6487 7.6575 5.0593 15.7000 10.1956 5.6044
13.1442 9.3106 6.6664 14.7832 11.3637 6.1885 5.2327 15.2416 5.5442 5.0628
5.5619 6.7018 5.3319 6.5460 5.1619 6.1850 7.1431 5.6044 5.8487 5.8487
5.2858 9.7531 6.2593 6.1000 4.8717 5.2646 5.6080 7.2363 5.4239 5.8451
8.1000 4.6912 7.3743 5.4982 8.3690 5.3920 9.3673 6.7690 5.1018 5.2221
5.3389 5.7637 5.1690 4.9425 9.1159 5.6788 7.4912 6.7549 5.7460 5.4097
5.5726 5.5690 5.5088 7.6056 8.1885 6.5248 5.0027 5.6540 6.6310 4.9779
5.6540 5.7885 6.4493 5.8628 5.5159 5.8133 5.3531 5.2965 5.5513 5.8204
6.3478 4.9460 8.7619 7.8699 5.5053 9.0062 5.7425 5.1690 5.5442 5.0735
6.3584 6.6097 6.8788 5.8133 4.8823 5.2858 7.7850 4.5000 9.0274 6.5142
8.0540 11.5743 5.2965 5.4805 7.3212
TABLE 10 auxiliary variable inflow COD (mg/L)
Table 11. Actually measured effluent BOD concentration (mg/L)
Test samples:
table 12. Auxiliary variable total Nitrogen in effluent (mg/L)
12.5450 5.8739 8.4295 6.3286 7.2489 5.8362 13.0872 13.1711 6.5681 13.0872
13.2356 5.8812 9.5222 4.5000 4.5815 13.0021 6.4052 12.5061 11.5140 12.5134
11.7036 13.3936 13.0483 12.6666 12.4271 5.5237 7.4605 7.0447 8.8781 12.4307
6.9426 7.0863 12.0343 12.9280 12.2313 6.5085 11.8422 5.6331 9.5091 7.4690
7.9152 12.3869 5.5298 11.7620 6.6544 9.8702 6.2641 6.4708 11.4106 12.4988
5.0702 11.6489 13.7340 12.9204 12.5766 6.3225 7.0787 12.6362 15.7000 12.1170
6.1912 5.9930 7.6283 13.2047 13.0483 6.2033 8.1303 12.6119 12.9936 8.2021
7.1237 12.9280 5.9359 13.0495 12.7541 12.9632 5.7693 8.8708 5.4143 9.0702
12.6362 11.1857 10.8502 13.0775 7.4775 12.6301 12.7128 8.5401 13.4398 11.4726
13.2647 12.8951 12.2872 12.4404 12.5729 13.3863 8.7711 13.4605 5.8508 6.7578
Table 13 auxiliary variable ammonia nitrogen (mg/L) in the effluent
12.4818 5.0091 4.9000 6.2636 7.2909 4.8273 8.7273 12.8091 5.3909 12.1273
12.4091 5.9091 5.5182 5.7636 5.6636 12.8091 6.2455 11.6273 12.0182 12.3545
8.4273 15.6273 12.1000 14.6091 10.2636 6.3000 7.0455 6.5818 7.5182 14.7182
6.9273 6.3455 13.4091 11.6818 12.1273 4.5000 12.0818 7.9545 7.2636 4.5636
7.4727 12.4091 5.6182 13.2636 5.3909 6.1091 6.0909 5.6636 8.0364 11.6273
4.7818 8.7273 9.5545 9.6545 12.4909 6.5727 5.8364 9.5727 13.5182 9.4636
4.8455 5.6818 7.6545 11.1273 12.5182 5.4818 6.7727 12.0273 11.5909 7.9364
4.7727 11.1091 6.6455 11.5364 13.4455 11.8091 5.8636 6.3182 6.8545 6.5000
11.9000 7.1364 12.5091 11.3000 4.8364 11.5273 11.6000 7.8636 11.7182 9.1000
13.3727 11.2636 12.8636 11.5909 11.3545 12.3091 6.9909 12.3000 5.4091 5.9455
Table 14. Auxiliary variable total Nitrogen in Water (mg/L)
TABLE 15 auxiliary variable BOD (mg/L) of incoming water
5.8200 5.7800 9.7800 11.6200 8.9000 11.1400 7.5800 6.2200 8.7000 6.1000
6.6200 9.8600 12.4644 9.5400 9.1400 6.4600 11.2200 6.2200 5.7400 5.2600
5.7800 5.7800 4.8600 5.2600 5.5800 7.5400 9.1800 11.6200 10.7000 7.7400
12.9800 15.2956 5.1400 4.9000 7.2200 6.7400 5.2600 7.7000 5.1800 8.2600
9.3000 6.4200 9.3800 5.8600 8.5800 12.0600 7.8600 7.3000 9.0600 5.6600
6.7400 9.5400 6.4600 7.9160 8.1400 8.3400 9.8200 8.4200 4.7800 9.2600
6.2600 10.3000 8.7800 7.4120 6.0600 8.6200 14.0822 9.0200 5.9800 9.8200
9.3000 6.3800 8.0200 6.3000 6.3400 9.1000 9.4200 5.8600 7.3800 5.0600
4.5000 10.9800 6.9400 4.9000 8.8600 5.5000 5.6200 10.2600 5.8600 9.1800
6.1000 5.2600 6.5800 7.5800 7.4200 8.4600 6.2600 7.6200 6.4200 11.3400
Table 16 auxiliary variable intake ammonia nitrogen (mg/L)
8.1356 10.8022 8.7933 10.3933 11.1222 10.7889 10.6333 8.4378 10.8644 8.1356
8.0378 11.8689 15.7000 10.3800 9.5844 8.7578 9.8244 8.5356 7.1133 7.8244
8.1533 7.2200 7.7444 7.6778 8.3933 7.7089 15.0556 12.0378 12.0867 10.2111
11.7533 12.6333 8.7133 8.1089 7.6022 9.8733 7.1933 7.9044 7.8600 11.5756
12.7311 7.6022 10.6644 8.3311 12.6067 14.0556 12.6244 12.4778 11.0689 7.6244
12.1356 10.8867 10.2467 9.5844 8.6467 7.8022 12.5222 8.1533 7.3444 9.9933
8.7133 11.0289 10.4733 9.8156 7.6289 10.3222 12.7000 7.6467 7.5978 12.6778
10.6867 7.7667 7.2778 7.3711 8.5533 8.1356 11.8111 7.7622 7.3400 7.5933
7.6022 10.3044 7.9578 8.0911 10.1711 6.8289 7.8867 13.8778 7.4244 10.4956
10.4200 8.0378 10.6556 7.8511 7.7756 8.7756 8.4333 8.1667 8.7933 12.6067
Table 17 auxiliary variable out-water phosphate (mg/L)
14.6250 10.1500 10.9500 11.1250 14.5500 11.3500 10.7250 13.9750 11.3750 13.6250
13.4000 11.7250 13.0556 11.3250 11.4000 14.3000 10.8750 14.0000 13.4000 13.5250
12.5500 13.7250 14.0250 14.1000 13.2750 12.4500 15.1000 14.4500 10.3750 13.9250
14.4000 14.1444 13.8500 14.1000 12.9250 8.3500 13.7000 12.3250 11.6750 9.6500
12.3750 12.8250 11.5500 13.5500 9.9750 12.9000 10.4750 9.5250 9.1750 14.2250
11.8500 9.5750 13.2250 13.7750 14.1250 11.6750 10.9750 13.6000 15.5250 6.9750
9.2500 11.5750 12.0250 13.9500 14.2500 11.8750 13.6778 13.3250 13.7500 12.7250
8.2750 13.7250 12.0250 13.8500 14.1500 13.6000 11.8750 11.7000 12.5750 11.5000
13.9500 7.4000 13.7500 14.5750 6.6250 12.9250 14.1250 11.5500 13.9250 7.5250
14.0750 14.3500 13.4750 14.3000 14.1500 14.3000 11.8000 14.5500 10.9000 14.9500
TABLE 18 auxiliary variable Biochemical MLSS (mg/L)
TABLE 19 auxiliary variable biochemical pool DO (mg/L)
11.4597 9.0630 7.4959 7.9568 13.4877 8.8786 6.4358 10.0309 8.7403 9.4778
13.5337 9.1551 9.2012 8.9708 7.6342 8.2794 8.1872 9.9387 9.4317 10.9527
13.4416 12.2432 12.3354 10.4918 13.2572 13.1189 11.0449 10.6761 7.5420 13.5337
13.2111 10.1691 10.9527 12.0128 8.6481 8.5560 10.4918 12.5658 12.7502 9.0630
7.9568 12.5658 6.4358 13.0267 8.4177 9.3856 8.6481 8.0490 8.0951 13.2572
8.0490 8.8786 8.0029 8.9708 12.0588 13.1189 6.7584 8.9708 13.6259 8.0490
9.1551 9.1091 11.5058 8.8786 12.4737 7.4498 9.1551 11.4136 11.3214 7.9568
8.7403 11.5979 12.1049 10.1230 12.9807 11.2292 9.4778 12.1049 10.7222 14.0407
12.6119 7.8646 12.0588 13.8564 8.6481 13.2111 11.8284 8.4177 11.0449 9.0169
10.3074 13.7181 11.5519 13.3033 13.2111 11.1831 14.0868 12.3815 8.2333 13.3033
TABLE 20 auxiliary variable intake phosphate (mg/L)
5.8381 8.6982 9.1301 6.9177 6.1283 9.3566 11.8664 5.4239 7.7425 5.4451
5.4451 7.0735 7.8369 7.7460 7.8345 5.4416 6.7903 5.1726 4.6345 5.6823
5.5336 5.0735 5.6469 5.2292 5.6080 5.5761 6.2593 6.0717 6.6416 5.4558
6.0434 6.2180 4.9814 5.4239 4.9708 10.5460 4.8611 5.6575 5.6221 9.8381
7.0239 5.0699 8.9602 4.7690 8.8681 8.0681 6.4363 8.9566 6.5673 5.7885
8.3655 6.4823 14.4221 12.0327 5.8381 5.5584 7.5726 11.1159 6.3584 8.0327
9.6221 7.0345 6.8965 12.9496 5.3602 9.0168 6.9118 5.1230 5.4628 7.1513
9.8664 5.1053 5.6788 5.2646 5.3071 5.2965 7.1124 5.2575 5.4947 5.6398
5.1690 6.0044 5.7142 5.7920 10.5850 5.2363 5.5159 6.8965 5.2469 6.6841
5.3248 5.6540 4.5956 5.7637 5.7106 5.5088 5.0664 5.5513 8.6381 6.0186
Table 21. Auxiliary variable COD (mg/L) of inflow
9.2032 7.3698 5.6559 10.2794 13.1093 9.2431 10.8374 10.4388 9.7214 9.3626
9.6018 12.8701 9.8808 10.0801 9.1633 13.1890 11.1164 8.8843 6.0544 7.5292
9.2032 9.8808 8.0075 6.7719 9.3228 8.1270 10.4388 8.8445 11.8737 11.1164
10.3192 10.5982 12.1128 10.2794 10.5584 10.1199 7.9278 8.8843 9.2032 7.1306
8.7648 8.1669 11.9534 9.1633 10.8772 15.7000 13.2687 11.9135 9.8409 7.7683
11.0765 10.6779 10.5185 11.6744 9.8011 8.8843 10.8772 7.7683 4.8587 10.6779
9.6815 11.5548 8.9242 10.5584 8.6053 10.0801 10.3591 8.8046 7.2103 13.6274
9.9206 9.6018 9.1633 8.9242 12.1527 12.0331 14.4644 6.6125 7.0907 7.7683
7.5690 8.8046 9.8409 8.4459 8.5256 8.8445 7.7683 14.7833 8.5256 9.8409
12.3918 10.0004 9.3228 9.1633 8.2865 10.9968 8.5655 9.3626 8.0473 10.7178
Table 22. Actually measured effluent BOD concentration (mg/L)
11.1429 11.6714 13.1286 12.8571 13.8429 14.5429 12.3143 10.9000 13.3857 10.9143
10.8000 12.6857 14.1000 13.8000 13.8143 10.3000 12.7429 10.2429 10.1286 10.2857
11.4286 11.0429 10.7143 10.7714 11.5143 11.4857 12.6714 14.5857 13.0857 12.2286
14.9571 15.5000 10.3857 10.2857 11.0286 12.1000 10.3143 11.4429 11.5714 12.6143
13.0000 11.1143 14.2857 10.1571 14.0000 13.9000 12.1143 14.0857 12.7286 10.8286
13.9000 12.5000 12.1714 12.6600 12.6000 10.8857 13.1000 12.8000 11.9000 12.5286
11.8857 12.7286 12.8000 12.5200 10.8000 12.9286 14.9000 10.6143 10.9857 13.2000
14.4000 11.1000 11.2286 11.0000 10.2714 10.6571 12.6429 11.7714 11.5286 11.6000
10.2000 12.6286 12.2429 11.7143 14.6571 11.1429 11.2000 13.1429 10.8000 12.7714
10.6000 11.4571 11.2571 11.4000 11.3000 11.2857 11.8571 11.4000 11.9714 11.9857
TABLE 23 effluent BOD concentration (mg/L) predicted by the soft-measurement method of the invention
10.9587 12.4779 13.3882 13.5493 12.9277 14.3140 12.4474 10.8082 13.2564 10.9683
11.1841 13.2436 13.8942 13.2937 13.2559 10.8312 13.4366 10.8581 10.6723 10.7437
11.4622 10.6948 10.7798 10.5419 11.2111 11.5180 13.4736 13.7765 13.2609 11.4760
14.1395 14.4762 10.6665 10.8532 10.7924 13.1953 10.6136 11.5691 11.4409 13.3273
12.6630 10.8584 14.0394 10.7962 13.8171 14.1084 12.5939 13.3541 12.3291 11.0501
13.0923 12.3768 12.2838 12.6805 11.4128 11.7002 13.9789 12.4416 10.7326 12.5194
12.7084 13.2320 12.1601 12.6555 10.9330 13.6297 14.0882 11.3101 10.9983 13.1563
14.0852 10.9683 11.7059 10.8075 11.0923 11.5350 13.2023 11.6394 11.4022 11.6044
10.7674 12.8016 11.1348 11.0762 14.2194 10.8501 11.0021 13.4111 10.8473 12.2415
10.9474 11.1059 10.9798 11.5382 11.4708 11.3660 11.3216 11.2816 12.5421 12.6423

Claims (1)

1. A method for soft measurement of BOD concentration of effluent based on a pruned feedforward small world neural network is characterized by comprising the following steps:
step 1: selecting auxiliary variables of a BOD prediction model of the effluent;
directly selecting the given M auxiliary variables; normalizing each auxiliary variable to [-1,1] according to formula (1), and normalizing the output variable BOD to [0,1] according to formula (2):

$$x_m = 2\,\frac{F_m-\min(F_m)}{\max(F_m)-\min(F_m)}-1 \qquad (1)$$

$$y = \frac{O-\min(O)}{\max(O)-\min(O)} \qquad (2)$$

where $F_m$ denotes the m-th auxiliary variable, $O$ denotes the output variable, and $x_m$ and $y$ denote the m-th auxiliary variable and the output variable after normalization, respectively; $\min(F_m)$ and $\max(F_m)$ denote the minimum and maximum of the m-th auxiliary variable, and $\min(O)$ and $\max(O)$ denote the minimum and maximum of the output variable;
step 2: designing a feedforward small world neural network model;
step 2.1: designing a feedforward small-world neural network model wiring mode;
constructing the feedforward small-world neural network according to the Watts-Strogatz rewiring rule; the specific construction process is as follows: first, an L-layer feedforward neural network with regular connections is constructed; then, with reconnection probability p, a connection is randomly selected from the model, disconnected at its tail end and reconnected to another neuron in the model, the value range of p being (0, 1); if the new connection already exists, another new neuron is randomly selected for connection; neurons in the same layer cannot be connected to each other;
step 2.2: designing a topological structure of a feedforward small-world neural network model;
the designed feedforward small-world neural network topology has L layers in total, comprising an input layer, hidden layers and an output layer; the computation of each layer is as follows:
(1) input layer: this layer has M neurons representing the M input auxiliary variables; the input of the input layer is $x^{(1)}=[x_1^{(1)},x_2^{(1)},\ldots,x_M^{(1)}]$, where $x_m^{(1)}$ denotes the m-th input auxiliary variable, m = 1, 2, ..., M; the layer output equals its input, so the output of the m-th input-layer neuron is:

$$x_m^{(1)} = x_m, \quad m = 1,2,\ldots,M \qquad (3)$$
(2) hidden layer: the sigmoid function is adopted as the activation function of the hidden layers; the input and output of the j-th neuron of the l-th layer of the neural network are defined in formulas (4) and (5), respectively:

$$net_j^{(l)} = \sum_{s=1}^{l-1}\sum_{i=1}^{n_s} w_{ij}^{(s,l)}\, x_i^{(s)} \qquad (4)$$

$$x_j^{(l)} = f\big(net_j^{(l)}\big) = \frac{1}{1+e^{-net_j^{(l)}}} \qquad (5)$$

where $n_u$ denotes the number of neurons in the u-th layer of the neural network, $w_{ij}^{(s,l)}$ denotes the connection weight between the i-th neuron of the s-th layer and the j-th neuron of the l-th layer, and $f(\cdot)$ is the sigmoid function;
(3) output layer: the output layer contains a single neuron, whose output is:

$$\hat{y} = \sum_{l=2}^{L-1}\sum_{j=1}^{n_l} v_j^{(l)}\, x_j^{(l)} \qquad (6)$$

where $v_j^{(l)}$ denotes the connection weight between the j-th neuron of the l-th layer of the neural network and the output neuron, and $n_l$ denotes the number of neurons in the l-th layer;
step 3: designing the pruning algorithm of the feedforward small-world neural network;
step 3.1: defining the performance index function:

$$E = \frac{1}{2}\sum_{q=1}^{Q}\big(d_q-\hat{y}_q\big)^2 \qquad (7)$$

where Q is the number of samples, $d_q$ is the desired output value of the q-th sample, and $\hat{y}_q$ is the predicted output value of the q-th sample;
step 3.2: carrying out parameter correction by adopting a batch gradient descent algorithm;
(1) the output-weight correction of the output layer is as shown in formulas (8)-(10):

$$v_j^{(l)}(t+1) = v_j^{(l)}(t) + \Delta v_j^{(l)}(t) \qquad (8)$$

$$\Delta v_j^{(l)}(t) = -\eta_v\,\frac{\partial E}{\partial v_j^{(l)}(t)} \qquad (9)$$

$$\frac{\partial E}{\partial v_j^{(l)}(t)} = -\sum_{q=1}^{Q}\big(d_q-\hat{y}_q\big)\, x_j^{(l)}(q) \qquad (10)$$

where $v_j^{(l)}(t)$ and $v_j^{(l)}(t+1)$ denote the connection weight between the j-th neuron of the l-th layer of the neural network and the output neuron at times t and t+1, respectively, $\Delta v_j^{(l)}(t)$ denotes the change of that connection weight at time t, and $\eta_v$ denotes the learning rate of the output-layer weight correction, with value range (0, 0.1];
(2) the weight correction of the hidden layers is as shown in formulas (11)-(13):

$$w_{ij}^{(s,l)}(t+1) = w_{ij}^{(s,l)}(t) + \Delta w_{ij}^{(s,l)}(t) \qquad (11)$$

$$\Delta w_{ij}^{(s,l)}(t) = -\eta_w\,\frac{\partial E}{\partial w_{ij}^{(s,l)}(t)} \qquad (12)$$

$$\frac{\partial E}{\partial w_{ij}^{(s,l)}(t)} = -\sum_{q=1}^{Q}\delta_j^{(l)}(q)\, x_i^{(s)}(q) \qquad (13)$$

where $w_{ij}^{(s,l)}(t)$ and $w_{ij}^{(s,l)}(t+1)$ denote the connection weight between the i-th neuron of the s-th layer and the j-th neuron of the l-th layer at times t and t+1, respectively, $\Delta w_{ij}^{(s,l)}(t)$ denotes the change of that connection weight at time t, $\delta_j^{(l)}(q)$ is the error signal back-propagated to the j-th neuron of the l-th layer for the q-th sample, and $\eta_w$ denotes the learning rate of the hidden-layer weight correction, with value range (0, 0.1];
Step 3.3: inputting training sample data, updating output weights of an implicit layer and an output layer according to formulas (8) - (13) in step 3.2, wherein Iter represents training iteration times, the iteration times are increased once every time the weights are updated, if the iteration times in the training process can be divided by a learning step length constant tau, executing step 3.4, otherwise, jumping to step 3.6, wherein the tau value range is an integer in a [10,100] range;
step 3.4: calculating the Katz centrality and the normalized Katz centrality of all hidden-layer neurons; Katz centrality is defined as shown in formula (14):

$$K_g = \sum_{k=1}^{\infty}\sum_{h}\alpha^{k}\big(W^{k}\big)_{gh} \qquad (14)$$

where $(W^{k})_{gh}$ denotes the (g, h) entry of the k-th power of the connection-weight matrix, and α denotes the attenuation factor; the value of α must satisfy $0<\alpha<1/\lambda_{max}$, where $\lambda_{max}$ denotes the maximum eigenvalue of the network adjacency matrix, and α has the value range (0, 0.1]; a larger Katz centrality indicates a more important node, and vice versa;
the normalized Katz centrality is defined as shown in formula (15), where $K_j^{(s)}$ denotes the Katz centrality of the j-th neuron of the s-th layer of the neural network and $\hat{K}_j^{(s)}$ denotes its normalized Katz centrality; let $\bar{K}^{(s)}$ denote the average normalized Katz centrality of all neurons of the s-th layer, and let θ be a preset threshold parameter with value range [0.9, 1]; if the normalized Katz centrality of a neuron satisfies $\hat{K}_j^{(s)} < \theta\,\bar{K}^{(s)}$, the neuron is regarded as unimportant; the set of unimportant hidden-layer neurons in the s-th layer is denoted $A_s$, and the set of remaining neurons in that layer is denoted $B_s$;
Step 3.5: computing set A s And set B s The correlation coefficient between hidden layer neurons in (2) is defined as shown in formula (16):
wherein,and->Respectively representing the output values of the ith neuron and the jth neuron of the ith layer of the neural network when the qth training sample is input; />And->Respectively representing the input of all samples +.>And->Average value of (2); sigma (sigma) i Sum sigma j Respectively representing the input of all samples +.>And->Standard deviation of (2); will set A s Combining the hidden layer neuron a and the neuron b with the highest correlation coefficient to generate a new neuron c, wherein the connection weight between the neuron c and the neuron of the forward network layer is constructed according to the reconnection rule of Watt-Strogatz and the reconnection probability p, wherein the value range of p is (0, 1) so as to ensure the small worldwide of the network, and the output of the neuron c is shown as a formula (17):
$$x_c^{(s)} = f\Big(\sum_{u=1}^{s-1}\sum_{i=1}^{n_u} w_{ic}^{(u,s)}\, x_i^{(u)}\Big) \qquad (17)$$

where $w_{ic}^{(u,s)}$ denotes the connection weight between the i-th neuron of the u-th layer of the neural network and neuron c of the s-th layer;

the connection weights between neuron c and the neurons of the succeeding network layers are set by the pruning algorithm as shown in formulas (18)-(19):

$$w_{cj} = \frac{w_{aj}\,\bar{x}_a^{(s)} + w_{bj}\,\bar{x}_b^{(s)}}{\bar{x}_c^{(s)}} \qquad (18)$$

$$v_c = \frac{v_a\,\bar{x}_a^{(s)} + v_b\,\bar{x}_b^{(s)}}{\bar{x}_c^{(s)}} \qquad (19)$$

where $w_{aj}$, $w_{bj}$ and $w_{cj}$ denote the connection weights between neurons a, b and c of the s-th layer of the neural network and neuron j in a succeeding hidden layer, respectively; $v_a$, $v_b$ and $v_c$ denote the connection weights between neurons a, b and c of the s-th layer and the output neuron, respectively; and $\bar{x}_a^{(s)}$, $\bar{x}_b^{(s)}$ and $\bar{x}_c^{(s)}$ denote the output values of neurons a, b and c of the s-th layer; after merging according to formulas (17)-(19), the procedure jumps back to step 3.3;
step 3.6: calculating the training RMSE; if the RMSE is smaller than the desired training error $RMSE_d$, or the number of iterations reaches the maximum number of iterations $Iter_{max}$, the computation stops, where $Iter_{max}$ has the value range [5000, 10000]; otherwise the procedure jumps to step 3.3; the RMSE is defined as shown in formula (20):

$$RMSE = \sqrt{\frac{1}{Q}\sum_{q=1}^{Q}\big(d_q-\hat{y}_q\big)^2} \qquad (20)$$
step 4: predicting the effluent BOD;
the test sample data are taken as the input of the trained pruned feedforward small-world neural network, and the output of the neural network is denormalized to obtain the predicted value of the effluent BOD.
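Step 4 is a forward pass followed by the inverse of the normalization applied to the training targets; the sketch below assumes min-max scaling of the BOD targets to [0, 1], a common convention that this section does not itself confirm:

```python
import numpy as np

def predict_bod(forward, X_test, y_min, y_max):
    """Step 4: forward pass of the trained network plus inverse normalization.

    forward      : callable implementing the trained network's forward pass
    X_test       : normalized test-sample inputs
    y_min, y_max : BOD range used when normalizing the training targets
    """
    y_norm = np.asarray(forward(X_test))     # network output, assumed in [0, 1]
    return y_norm * (y_max - y_min) + y_min  # denormalized effluent BOD
```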
CN201911211235.4A 2019-12-02 2019-12-02 Method for predicting BOD of effluent based on pruning feedforward small-world neural network Active CN110991616B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911211235.4A CN110991616B (en) 2019-12-02 2019-12-02 Method for predicting BOD of effluent based on pruning feedforward small-world neural network


Publications (2)

Publication Number Publication Date
CN110991616A CN110991616A (en) 2020-04-10
CN110991616B (en) 2024-04-05

Family

ID=70088889

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911211235.4A Active CN110991616B (en) 2019-12-02 2019-12-02 Method for predicting BOD of effluent based on pruning feedforward small-world neural network

Country Status (1)

Country Link
CN (1) CN110991616B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112949894B (en) * 2020-12-10 2023-09-19 北京工业大学 Output water BOD prediction method based on simplified long-short-term memory neural network
CN112924646B (en) * 2021-02-21 2022-11-04 北京工业大学 Effluent BOD soft measurement method based on self-adaptive pruning feedforward small-world neural network


Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3291120B1 (en) * 2016-09-06 2021-04-21 Accenture Global Solutions Limited Graph database analysis for network anomaly detection systems

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101387632A (en) * 2008-10-17 2009-03-18 北京工业大学 Soft measurement method for biochemical oxygen demand BOD in process of sewage disposal
CN104965971A (en) * 2015-05-24 2015-10-07 北京工业大学 Ammonia nitrogen concentration soft-measuring method based on fuzzy neural network
WO2019134759A1 (en) * 2018-01-08 2019-07-11 Gregor Mendel Institute Of Molecular Plant Biology Gmbh Identification of extracellular networks of leucine-rich repeat receptor kinases
CN110011838A (en) * 2019-03-25 2019-07-12 武汉大学 A kind of method for real time tracking of dynamic network PageRank value
CN110135092A (en) * 2019-05-21 2019-08-16 江苏开放大学(江苏城市职业学院) Complicated weighting network of communication lines key node recognition methods based on half local center

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
M.J. Park et al. A Katz-centrality-based protocol design for leader-following formation of discrete-time multi-agent systems with communication delays. Journal of the Franklin Institute, 2018, Vol. 355, No. 13, pp. 6111-6131. *
Wang Yu et al. Prediction of sewage effluent water quality based on an NW-type small-world artificial neural network. Computer Measurement & Control (计算机测量与控制), 2015, Vol. 24, No. 1, pp. 61-63. *

Also Published As

Publication number Publication date
CN110991616A (en) 2020-04-10

Similar Documents

Publication Publication Date Title
CN110728401B (en) Short-term power load prediction method of neural network based on squirrel and weed hybrid algorithm
CN108469507B (en) Effluent BOD soft measurement method based on self-organizing RBF neural network
CN107358021B (en) DO prediction model establishment method based on BP neural network optimization
CN111354423B (en) Method for predicting ammonia nitrogen concentration of effluent of self-organizing recursive fuzzy neural network based on multivariate time series analysis
CN109828089B (en) DBN-BP-based water quality parameter nitrous acid nitrogen online prediction method
CN109657790B (en) PSO-based recursive RBF neural network effluent BOD prediction method
CN110824915B (en) GA-DBN network-based intelligent monitoring method and system for wastewater treatment
CN109344971B (en) Effluent ammonia nitrogen concentration prediction method based on adaptive recursive fuzzy neural network
CN112183719A (en) Intelligent detection method for total nitrogen in effluent based on multi-objective optimization-fuzzy neural network
CN102854296A (en) Sewage-disposal soft measurement method on basis of integrated neural network
CN112884056A (en) Optimized LSTM neural network-based sewage quality prediction method
CN110991616B (en) Method for predicting BOD of effluent based on pruning feedforward small-world neural network
CN114037163A (en) Sewage treatment effluent quality early warning method based on dynamic weight PSO (particle swarm optimization) optimization BP (Back propagation) neural network
CN112989704B (en) IRFM-CMNN effluent BOD concentration prediction method based on DE algorithm
CN111242380A (en) Lake (reservoir) eutrophication prediction method based on artificial intelligence algorithm
CN116542382A (en) Sewage treatment dissolved oxygen concentration prediction method based on mixed optimization algorithm
CN113448245A (en) Deep learning-based dissolved oxygen control method and system in sewage treatment process
CN113111576B (en) Mixed coding particle swarm-long-short-term memory neural network-based effluent ammonia nitrogen soft measurement method
CN109408896B (en) Multi-element intelligent real-time monitoring method for anaerobic sewage treatment gas production
CN114690700A (en) PLC-based intelligent sewage treatment decision optimization method and system
CN110837886A (en) Effluent NH4-N soft measurement method based on ELM-SL0 neural network
CN105372995A (en) Measurement and control method for sewage disposal system
CN110542748B (en) Knowledge-based robust effluent ammonia nitrogen soft measurement method
CN117252285A (en) Multi-index sewage water quality prediction method based on parallel CNN-GRU network
CN112924646B (en) Effluent BOD soft measurement method based on self-adaptive pruning feedforward small-world neural network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant