CN114611398B - Soft measurement method for nitrogen oxides in urban solid waste incineration process based on brain-like modularized neural network - Google Patents

Soft measurement method for nitrogen oxides in urban solid waste incineration process based on brain-like modularized neural network Download PDF

Info

Publication number
CN114611398B
CN114611398B CN202210266639.9A CN202210266639A CN114611398B CN 114611398 B CN114611398 B CN 114611398B CN 202210266639 A CN202210266639 A CN 202210266639A CN 114611398 B CN114611398 B CN 114611398B
Authority
CN
China
Prior art keywords
module
network
sample
output
node
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202210266639.9A
Other languages
Chinese (zh)
Other versions
CN114611398A (en
Inventor
蒙西
王岩
乔俊飞
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing University of Technology
Original Assignee
Beijing University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing University of Technology filed Critical Beijing University of Technology
Priority to CN202210266639.9A priority Critical patent/CN114611398B/en
Publication of CN114611398A publication Critical patent/CN114611398A/en
Application granted granted Critical
Publication of CN114611398B publication Critical patent/CN114611398B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F30/00Computer-aided design [CAD]
    • G06F30/20Design optimisation, verification or simulation
    • G06F30/27Design optimisation, verification or simulation using machine learning, e.g. artificial intelligence, neural networks, support vector machines [SVM] or training a model
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F17/00Digital computing or data processing equipment or methods, specially adapted for specific functions
    • G06F17/10Complex mathematical operations
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2119/00Details relating to the type or aim of the analysis or the optimisation
    • G06F2119/02Reliability analysis or reliability optimisation; Failure analysis, e.g. worst case scenario performance, failure mode and effects analysis [FMEA]
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02ATECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A50/00TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE in human health protection, e.g. against extreme weather
    • Y02A50/20Air quality improvement or preservation, e.g. vehicle emission control or emission reduction by using catalytic converters

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • Mathematical Physics (AREA)
  • Evolutionary Computation (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Analysis (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Artificial Intelligence (AREA)
  • Algebra (AREA)
  • Computational Mathematics (AREA)
  • Computer Hardware Design (AREA)
  • Geometry (AREA)
  • Mathematical Optimization (AREA)
  • Medical Informatics (AREA)
  • Pure & Applied Mathematics (AREA)
  • Databases & Information Systems (AREA)
  • Investigating Or Analyzing Materials By The Use Of Fluid Adsorption Or Reactions (AREA)
  • Feedback Control In General (AREA)

Abstract

The invention relates to a soft measuring method of nitrogen oxides in urban solid waste incineration process based on brain-like modularized neural network, which realizes NO X The real-time accurate acquisition of the concentration comprises the following steps: firstly, acquiring data; preprocessing the acquired data to determine an input variable and an output variable of the model; then, a brain-like modularized neural network is adopted to establish a soft measurement model; and finally, taking the test data as the input of the model, and verifying the validity of the model. The invention effectively realizes NO X The real-time accurate detection of the concentration has important theoretical significance and application value.

Description

Soft measurement method for nitrogen oxides in urban solid waste incineration process based on brain-like modularized neural network
Technical Field
The invention relates to Nitrogen Oxides (NO) in urban solid waste incineration process X ) A soft measurement method; NO based on Brain-like modularized neural network (Brain-Inspired Modular Neural Network, BIMNN) was established X Soft measurement model, realize NO X And (5) accurately acquiring the concentration in real time. Not only belongs to the field of urban solid waste treatment, but also belongs to the field of intelligent modeling.
Background
With the rapid development of Chinese economy and the continuous acceleration of the urban process, the urban solid waste yield is increased, and a plurality of cities face the crisis of 'solid waste surrounding cities'. The solid waste incineration treatment mode becomes a main mode of urban solid waste treatment in China increasingly. And NO X Is one of main pollutants generated in the urban solid waste incineration process, and seriously affects the health of people and the quality of ecological environment. Along with the increasing environmental protection and treatment requirements of China, the NOx emission control becomes one of key problems to be solved urgently in urban solid waste incineration plants, and the real-time accurate detection of NOx is one of important preconditions for improving the denitration efficiency of the urban solid waste incineration plants. Thus, NO is realized X The real-time accurate detection of the method has important theoretical significance and application value.
Disclosure of Invention
The invention aims to provide a method for generating NO in the urban solid waste incineration process based on a brain-like modularized neural network X Soft measurement method for establishing NO by adopting brain-like modularized neural network X Soft measurement model for implementing NO X And (5) accurately acquiring the concentration in real time.
The invention adopts the following technical scheme and implementation steps:
1. collecting data;
2. determining input and output variables of a model: determining an input variable of a model by adopting a maximum correlation minimum redundancy (mRMR) algorithm, wherein the output variable of the model is NO at the current moment X Concentration;
the method for determining the input variable of the model by adopting the mRMR algorithm is as follows:
given two random variables a and b, the mutual information between the two random variables is calculated as follows:
Figure BDA0003552096710000011
wherein I is mutual information between random variables a and b, and p (a) and p (b) are edge probability distributions of the random variables a and b respectively; p (a, b) is a joint distribution of random variables a and b;
first, based on mutual information, searching a feature subset S with the greatest correlation with a variable c to be measured:
Figure BDA0003552096710000021
wherein ,mi For the feature variables in the feature subset S, |s| is the number of the feature variables in the feature subset S, D is the correlation between the selected feature variable and the variable c to be measured, and if D is larger, the correlation between the selected feature variable and the variable c to be measured is higher;
considering that certain similarity exists among the selected feature variables, and eliminating the redundant features does not affect the performance of the model, therefore, the redundancy among the features is calculated, and the mutually exclusive features are found:
Figure BDA0003552096710000022
wherein ,mi ,n j For the feature variables in the feature subset S, R is redundancy among the variables in the feature subset S, and the smaller R is, the lower the redundancy is;
in the mRMR algorithm application process, the maximum correlation index D and the minimum redundancy index R are generally unified into an evaluation function Φ=d-R, and then the optimal feature subset S is determined by searching the maximum value of the evaluation function:
maxΦ(D,R),Φ=D-R (4)
3. designed for NO X A brain-like modularized neural network model for soft concentration measurement;
(1) Task decomposition
In order to measure the modularization degree of the evaluation network and simulate the characteristics of modularization of the brain network, a modularized index (modularity quality, MQ) oriented to a modularized neural network is provided; the modularization index consists of the degree of density in the modules and the degree of sparsity among the modules, wherein the degree of density in the modules is calculated as follows:
Figure BDA0003552096710000023
wherein ,JC For the density in the modules, P is the number of modules in the current network, N l To the number of samples allocated to the first module, x i To input samples, h l and rl Respectively representing the position and the action range of the core node of the first module;
the degree of sparseness between modules is calculated based on the Euclidean distance:
Figure BDA0003552096710000024
wherein ,JS For the degree of sparsity between modules, d (h l ,h s ) Representing the distance between the core node of the first module and the core node of the second module, comprehensively considering the density J in the modules C And degree of sparsity J between modules S The modularization index weighing mode is proposed as follows:
Figure BDA0003552096710000031
thus, the greater the value of MQ, the higher the modularization degree of the network, and therefore, a brain-like modularization partitioning method is proposed, and the main idea is as follows: firstly, distributing training samples through a core node, and further determining whether the training samples are distributed to a current existing module or a newly added module; the new core node of the existing module is then determined by seeking the greatest "modularization" degree of the network, and thus the modular structure build can be divided into two cases: adding a new module and updating an existing module;
(1) adding new modules
At the initial moment, the number of modules of the whole network is 0;
after the first data sample enters the network, it is set as the core node of the first sub-module:
h 1 =x 1 (8)
Figure BDA0003552096710000032
/>
Figure BDA0003552096710000033
wherein ,h1 and r1 Respectively representing the position and the action range of the core node of the first module, x 1 An input vector, d, for the first training sample max For training sample x i And x j Maximum distance between;
at time t, when the t training sample enters the network, assuming that k modules already exist, finding the core node nearest to the sample:
Figure BDA0003552096710000034
wherein ,xt Is the input vector of the t training sample, h s Representing the position, k, of the core node of the s-th module min Training sample for representing distanceThe X is t The nearest core node;
if the t training sample is not at k min In the scope of the core node, a new module is needed to learn the current sample, and the parameters of the new module corresponding to the core node are set as follows:
h k+1 =x t (12)
Figure BDA0003552096710000041
wherein ,hk+1 ,r k+1 For the position and the action range of the core node corresponding to the newly added module, x t Is the input vector for the t-th training sample,
Figure BDA0003552096710000042
the furthest distance from other core nodes to the newly added module core node is provided;
(2) optimizing existing modules
Otherwise, the sample is considered to be classified as k min In the module, in order to make the network have the optimal modularization degree, according to the formula (7), respectively calculating the modularization index value MQ of the whole network under the condition that the current sample and the original core node are respectively used as the core nodes t And
Figure BDA0003552096710000043
if it is
Figure BDA0003552096710000044
The network modularization degree of selecting the current input sample as the core node is considered to be higher, and the existing core node is replaced by the sample to become a new core node, and initial parameters are set as follows:
Figure BDA0003552096710000045
Figure BDA0003552096710000046
wherein ,
Figure BDA0003552096710000047
the new core nodes and the new action ranges of the module are respectively provided; n (N) k For the number of samples assigned to the kth module;
if it is
Figure BDA0003552096710000048
The location of the current core node +.>
Figure BDA0003552096710000049
The range of the node is only required to be adjusted>
Figure BDA00035520967100000410
And (3) the following steps:
Figure BDA00035520967100000411
wherein ,
Figure BDA00035520967100000412
is the original core node of the module;
after all training samples are compared, the samples are distributed to different sub-modules, a partition structure is formed, the modularization degree of the current network can be considered to be the largest, and then the sub-network needs to be built aiming at the task of each sub-module;
(2) Sub-network structure design
Training data set
Figure BDA00035520967100000413
Is divided into M subsets; />
wherein ,xi ,y i The input variable and the output variable of the model respectively,
Figure BDA00035520967100000414
representing the domain, V is the input vector x i N is the number of the input variable and the output variable data of the model;
an adaptive task-oriented radial basis function neural network (ATO-RBF) is adopted to construct a sub-network corresponding to each module, and the design of the sub-network comprises three parts: network structure growth, network structure pruning and network parameter adjustment;
(1) network structure growth
The center, radius, and connection weight to the output layer of the first hidden layer node of the s-th module are based on the sample with the largest absolute output
Figure BDA0003552096710000051
Setting:
Figure BDA0003552096710000052
Figure BDA0003552096710000053
Figure BDA0003552096710000054
Figure BDA0003552096710000055
wherein ,
Figure BDA0003552096710000056
for the sample with the largest absolute output, +.>
Figure BDA0003552096710000057
Respectively representing the input variable and the output variable corresponding to the sample with the largest absolute output, r s For the range of the s-th module, N s For samples assigned to the s-th moduleA number;
Figure BDA0003552096710000058
the center and the radius of the first hidden layer node of the s-th module and the connection weight value to the output layer are respectively;
at time t s Time, training error vector e (t s ) Obtained by the formula:
Figure BDA0003552096710000059
Figure BDA00035520967100000510
wherein ,yf Is the desired output of the f-th sample,
Figure BDA00035520967100000511
is the f sample at time t s Is calculated by the following formula:
Figure BDA00035520967100000512
wherein H is the number of neurons in the hidden layer,
Figure BDA00035520967100000513
is the function of the jth hidden layer node of the s-th module, +.>
Figure BDA00035520967100000514
For the connection weight of the jth hidden layer node of the s-th module to the output layer, < ->
Figure BDA00035520967100000515
The center and the radius of the j hidden layer node of the s-th module;
finding a sample with the largest difference between the desired output and the network output
Figure BDA00035520967100000516
Figure BDA00035520967100000517
Then, a RBF neuron pair is newly added
Figure BDA00035520967100000518
The initial parameters of the newly added neurons are as follows:
Figure BDA0003552096710000061
Figure BDA0003552096710000062
wherein
Figure BDA0003552096710000063
and />
Figure BDA0003552096710000064
Respectively representing the center of the newly added neuron of the s-th module and the connection weight value to the output layer; />
Figure BDA0003552096710000065
Is->
Figure BDA0003552096710000066
Input variable corresponding to each sample, +.>
Figure BDA0003552096710000067
Respectively +.>
Figure BDA0003552096710000068
The expected output sum for each sample is at t s A network output value at a moment;
existing neurons have less effect on newly added neurons when the following relationship is satisfied:
Figure BDA0003552096710000069
Figure BDA00035520967100000610
wherein
Figure BDA00035520967100000611
Is the center of the neuron closest to the newly added neuron;
the radius of the newly added neuron is defined by equations (27) and (28):
Figure BDA00035520967100000612
when neurons are newly added each time, the network parameters are adjusted through a second-order learning algorithm; when reaching the preset maximum structure J max Or desired training accuracy E 0 Ending the network structure growth process; in the experimental process J max =10,E 0 =0.0001, the training accuracy of the network was measured using Root Mean Square Error (RMSE), calculated as follows:
Figure BDA00035520967100000613
wherein ,yi and yi The expected outputs and network outputs of the ith sample, respectively;
(2) network structure pruning
To avoid redundancy in the network structure, it is proposed to measure the contribution value of hidden layer neurons based on the index of the connection weights:
Figure BDA00035520967100000614
wherein SI (j) is the contribution value of the j-th hidden layer node,
Figure BDA00035520967100000615
is the weight from the jth hidden layer node of the s-th module to the output layer;
searching for an hidden layer node with the smallest contribution value:
Figure BDA0003552096710000071
wherein ,
Figure BDA0003552096710000072
the hidden layer node with the smallest contribution value in the s-th module is J, and the J is the number of hidden layer nodes in the network;
therefore, deleting the hidden node with the smallest contribution, adjusting parameters through a second-order learning algorithm, and comparing the root mean square error value (RMSE_1) after deleting the node with the root mean square error value (RMSE_0) when not deleting the node, wherein the calculation formula of the RMSE is shown as a formula (30); if RMSE_1 is less than or equal to RMSE_0, pruning the selected node under the condition of not sacrificing the network learning capability, and then repeating the process, wherein the selected node cannot be deleted; at this time, the network structure pruning process is finished, and the sub-network construction is completed; each time neurons are deleted, a second-order learning algorithm is used for adjusting parameters;
(3) network parameter adjustment
The second order learning algorithm is as follows:
θ L+1 =θ L -(Q LL E) -1 g L (33)
wherein θ refers to parameters to be adjusted, including center, radius and connection weight, Q is a Hessian-like matrix, μ is a learning coefficient (0.01 in experiment), E is a identity matrix, g is a gradient vector, and L is the number of iteration steps (50 in experiment);
to reduce memory requirements and computation time, the computation of the Hessian-like matrix Q and gradient vector g is converted into a Hessian-like sub-matrix summation and a gradient sub-vector summation:
Figure BDA0003552096710000073
Figure BDA0003552096710000074
q z is a Hessian-like submatrix, eta z The gradient sub-vectors can be calculated by the following formula:
Figure BDA0003552096710000075
Figure BDA0003552096710000076
wherein ,ez Expected output y for sample z z Output with network prediction
Figure BDA0003552096710000077
Is the difference of j z For the jacobian vector, the following is calculated:
Figure BDA0003552096710000078
Figure BDA0003552096710000079
according to the chain derivative rule, each component in the jacobian vector in equation (39) is calculated as follows:
Figure BDA0003552096710000081
Figure BDA0003552096710000082
Figure BDA0003552096710000083
wherein
Figure BDA0003552096710000084
The center, radius and connection weight to the output layer of the jth neuron of the s-th module, x z Is the input vector of the z-th training sample;
4、NO X soft measurement of concentration;
taking the test sample data as input of the brain-like modularized neural network, and outputting the model as NO X Soft measurement of concentration; at time T, after the T test sample enters the BIMNN, searching a core node closest to the sample, and activating a sub-module to which the core node belongs:
Figure BDA0003552096710000085
wherein ,xT An input vector, h, for the T-th test sample s Is the core node of the s-th module, l act For distance from the T test sample x T The nearest core node belongs to a sub-network, A is the number of sub-network modules;
therefore, the actual output of BIMNN is the first act Output of the sub-network:
Figure BDA0003552096710000086
wherein ,
Figure BDA0003552096710000087
for the actual output of BIMNN at time T, < >>
Figure BDA0003552096710000088
Is the first act Actual output of the sub-network;
the test precision is quantitatively evaluated by adopting a Root Mean Square Error (RMSE), an average percentage error (MAPE) and a Correlation Coefficient (CC), wherein the calculation formula of the RMSE is shown as a formula (30), and the calculation formulas of the MAPE and the CC are as follows:
Figure BDA0003552096710000089
/>
Figure BDA00035520967100000810
in the formula ,yi And
Figure BDA00035520967100000811
desired output and network output of the ith sample, N T To test the number of samples.
The invention has the following obvious advantages and beneficial effects:
1 the invention establishes stable and effective NO based on good nonlinear mapping capability and generalization capability of the brain-like modularized neural network X Concentration soft measurement model, can realize NO X Accurate acquisition of concentration in real time, NO in urban solid waste incineration process X Emission control is of great importance.
Drawings
FIG. 1 is a flow chart of the present invention;
FIG. 2 is a block diagram of a brain-like modular neural network;
FIG. 3 is a block diagram of an ATO-RBF neural network;
fig. 4 is a training result diagram of the sub-network 1;
fig. 5 is a training result diagram of the subnetwork 2;
fig. 6 is a training result diagram of the subnetwork 3;
FIG. 7 is a BIMNN soft measurement model test output diagram;
fig. 8 is a BIMNN soft measurement test error chart.
Detailed Description
The present invention utilizes training data to build a training data for NO X A brain-like modularized neural network model for soft concentration measurement; verifying NO output by brain-like modularized neural network soft measurement model by using test data set X Accuracy of real-time concentration.
As an example, the validity of the method provided by the invention is verified by adopting actual data from a solid waste incineration plant in a certain Beijing city. After eliminating obvious abnormal data, 1000 groups of 96-dimensional experimental data are obtained. Based on the obtained data sample, adopting mRMR algorithm to make feature selection, selecting NO X The 20 variables with larger correlation are used as soft measurement model input variables, and are specifically shown in table 1.
TABLE 1
Figure BDA0003552096710000091
/>
Figure BDA0003552096710000101
For 1000 groups of data after dimension reduction, 750 groups of data are used for establishing a soft measurement model, and the rest 250 groups of data are used for testing the performance of the model;
(1) Based on 750 groups of training data, adopting a brain-like modularized partitioning method, dividing the 750 groups of training data into three subsets, wherein the sample number of each subset is 215, 273 and 262 respectively; correspondingly, the BIMNN is formed by 3 sub-networks, and the sub-networks are built through the sub-network building method in the step 3; fig. 4, 5, and 6 are training result diagrams of respective sub-networks, respectively, along the X-axis: training samples number, units are individual/sample, Y-axis: NO (NO) X Concentration in mg/Nm 3
(2) NO through brain-like modularized neural network based on 250 groups of test data X Concentration soft measurement, test results are shown in fig. 7, X-axis: the number of samples tested, in units of units/sample, Y-axis: NO (NO) X Concentration in mg/Nm 3 The method comprises the steps of carrying out a first treatment on the surface of the Test errors as shown in fig. 8, X-axis: training sample number, unit isSample, Y-axis: NO (NO) X Concentration test error in mg/Nm 3
(3) The test accuracy was quantitatively evaluated using the root mean square error RMSE, the average percent error MAPE, and the correlation coefficient CC, with the calculation result rmse=6.5031, mape= 3.8514, cc= 0.9777.
It will be apparent to those skilled in the art that various modifications and variations can be made to the present invention without departing from the spirit or scope of the invention. It should be noted that the above-mentioned embodiments are merely examples of the present invention, and are not intended to limit the invention, but all modifications and optimization within the spirit and scope of the present invention are included in the following claims.

Claims (1)

1. A soft measuring method of nitrogen oxides in urban solid waste incineration process based on brain-like modularized neural network is characterized by comprising the following steps:
step 1, acquiring data of an urban solid waste incineration plant, and removing abnormal data to obtain a data sample;
step 2, determining input and output variables of a model, and based on the obtained data samples, training a set of the model;
the input variables of the model training set are determined by a maximum correlation minimum redundancy mRMR algorithm, and the input variables comprise: primary air flow, secondary air flow, grate left air flow, dry grate left air flow, inlet flue gas O 2 Concentration, boiler outlet flue gas accumulation, primary combustion chamber temperature, primary combustion chamber left side temperature, primary combustion chamber right side temperature, hearth average temperature, primary combustion chamber right side flue gas temperature, urea solvent supply flow accumulation, furnace urea solution amount accumulation, active carbon storage bin feeding amount accumulation, lime feeder accumulation, hearth water supply accumulation, main steam flow accumulation, economizer water supply accumulation, boiler outlet main steam flow and boiler drum water level;
the output variable of the model training set is NO at the current moment X Concentration;
step 3, designing a brain-like modularized neural network, and establishing a soft measurement model;
step (a)4. The test set data uses the same input variable as the training set as the input of the model, and the output of the model is NO at the current moment X A concentration measurement;
in step 2, the input variable selection method based on the mRMR algorithm is as follows:
given two random variables a and b, the mutual information between the two random variables is calculated as follows:
Figure QLYQS_1
wherein I is mutual information between random variables a and b, and p (a) and p (b) are edge probability distributions of the random variables a and b respectively; p (a, b) is a joint distribution of random variables a and b;
first, based on mutual information, searching a feature subset S with the greatest correlation with a variable c to be measured:
Figure QLYQS_2
wherein ,mi S is the number of the feature variables in the feature subset S, D is the correlation between the selected feature variable and the variable c to be measured, and if D is larger, the correlation between the selected feature variable and the variable c to be measured is higher;
calculating redundancy among features, and finding a mutual exclusion feature:
Figure QLYQS_3
wherein ,mi ,n j For the feature variables in the feature subset S, R is redundancy among the variables in the feature subset S, and the smaller R is, the lower the redundancy is;
in the application process of the mRMR algorithm, unifying the maximum correlation index D and the minimum redundancy index R into an evaluation function Φ=d-R, and then determining an optimal feature subset S by searching the maximum value of the evaluation function:
maxΦ(D,R),Φ=D-R (4)
in step 3, the soft measurement model design method based on the brain-like modularized neural network is as follows:
(1) Task decomposition;
the modularized index MQ consists of the density degree in the modules and the sparsity degree among the modules, wherein the density degree in the modules is calculated as follows:
Figure QLYQS_4
/>
wherein ,JC For the density in the modules, P is the number of modules in the current network, N l To the number of samples allocated to the first module, x i To input samples, h l and rl Respectively representing the position and the action range of the core node of the first module;
the degree of sparseness between modules is calculated based on the Euclidean distance:
Figure QLYQS_5
wherein ,JS For the degree of sparsity between modules, d (h l ,h s ) Representing the distance between the core node of the first module and the core node of the second module according to the density J in the module C And degree of sparsity J between modules S The modular index is obtained by the following measurement modes:
Figure QLYQS_6
the greater the value of MQ, the greater the degree of modularity of the network, and the modular structure construction can be divided into two cases: adding a new module and updating an existing module;
(1) adding a new module:
at the initial moment, the number of modules of the whole network is 0;
after the first data sample enters the network, it is set as the core node of the first sub-module:
h 1 =x 1 (8)
Figure QLYQS_7
Figure QLYQS_8
wherein ,h1 and r1 Respectively representing the position and the action range of the core node of the first module, x 1 An input vector, d, for the first training sample max Input vector x for training samples i And x j Maximum distance between;
at time t, when the input vector of the t training sample enters the network, k modules exist, and the core node closest to the input vector of the sample is found:
Figure QLYQS_9
wherein ,xt Is the input vector of the t training sample, h s Representing the position, k, of the core node of the s-th module min Input vector x representing distance training samples t The nearest core node;
if the input vector x of the t training sample t Not at k min In the scope of the core node, a module is newly added to learn the current sample, and the parameters of the newly added module corresponding to the core node are set as follows:
h k+1 =x t (12)
Figure QLYQS_10
wherein ,hk+1 ,r k+1 The position and the action range of the core node corresponding to the newly added module,x t Is the input vector for the t-th training sample,
Figure QLYQS_11
the furthest distance from other core nodes to the newly added module core node is provided; />
(2) Optimizing existing modules:
the sample is classified as k min In the module, according to the formula (7), respectively calculating the modularized index value MQ of the whole network under the condition that the current sample and the original core node are respectively used as the core nodes t And
Figure QLYQS_12
if it is
Figure QLYQS_13
The network modularization degree of selecting the current sample as the core node is higher, and the original core node is replaced by the sample to become a new core node, and the initial parameters are set as follows:
Figure QLYQS_14
Figure QLYQS_15
wherein ,
Figure QLYQS_16
the position and the action range of a new core node of the module are respectively; n (N) k For the number of samples assigned to the kth module;
if it is
Figure QLYQS_17
The location of the current core node +.>
Figure QLYQS_18
Remain unchanged, adjust only the nodeScope of action->
Figure QLYQS_19
Figure QLYQS_20
wherein ,
Figure QLYQS_21
is the original core node of the module;
after all training samples are compared, the samples are distributed to different sub-modules, a partition structure is formed, the modularization degree of the current network is considered to be the largest, and then the sub-network is required to be built for the task of each sub-module;
(2) And (3) structural design of a sub-network:
training data set
Figure QLYQS_22
Is divided into M subsets;
wherein ,xi ,y i The input vector and the output vector of the ith training sample of the model respectively,
Figure QLYQS_23
representing the domain, V is the input vector x i N is the number of the input vector and the output vector data of the model;
constructing a sub-network corresponding to each module by adopting an adaptive task-oriented radial basis function neural network ATO-RBF, wherein the design of the sub-network comprises three parts: network structure growth, network structure pruning and network parameter adjustment;
(1) network structure growth:
the center, radius, and connection weight to the output layer of the first hidden layer node of the s-th module are based on the sample with the largest absolute output
Figure QLYQS_24
Setting:
Figure QLYQS_25
Figure QLYQS_26
Figure QLYQS_27
Figure QLYQS_28
wherein ,
Figure QLYQS_29
for the sample with the largest absolute output, +.>
Figure QLYQS_30
Respectively representing an input vector and an output vector corresponding to a sample with the largest absolute output, r s For the range of the s-th module, N s For the number of samples assigned to the s-th module; />
Figure QLYQS_31
The center and the radius of the first hidden layer node of the s-th module and the connection weight value to the output layer are respectively;
at time t s Time, training error vector e (t s ) Obtained by the formula:
Figure QLYQS_32
/>
Figure QLYQS_33
wherein ,yf Is the desired output of the f-th sample,
Figure QLYQS_34
is the f sample at time t s Is calculated by the following formula:
Figure QLYQS_35
wherein H is the number of neurons in the hidden layer,
Figure QLYQS_36
is the function of the jth hidden layer node of the s-th module, +.>
Figure QLYQS_37
For the connection weight of the jth hidden layer node of the s-th module to the output layer, < ->
Figure QLYQS_38
The center and the radius of the j hidden layer node of the s-th module; x is x f An input vector for the f training sample;
finding a sample with the largest difference between the desired output and the network output
Figure QLYQS_39
Figure QLYQS_40
Then, a RBF neuron pair is newly added
Figure QLYQS_41
The initial parameters of the newly added neurons are as follows:
Figure QLYQS_42
Figure QLYQS_43
wherein
Figure QLYQS_44
and />
Figure QLYQS_45
Respectively representing the center of the newly added neuron of the s-th module and the connection weight value to the output layer; />
Figure QLYQS_46
Is->
Figure QLYQS_47
Input vector corresponding to each sample,/>
Figure QLYQS_48
Respectively +.>
Figure QLYQS_49
The expected output sum for each sample is at t s A network output value at a moment;
Figure QLYQS_50
Figure QLYQS_51
wherein
Figure QLYQS_52
Is the center of the neuron closest to the newly added neuron; />
Figure QLYQS_53
The center of the neuron is newly added for the s-th module,
Figure QLYQS_54
newly adding the radius of the neuron for the s-th module;
the radius of the newly added neuron is set as:
Figure QLYQS_55
when neurons are newly added each time, the network parameters are adjusted through a second-order learning algorithm; when reaching the preset maximum structure J max Or desired training accuracy E 0 Ending the network structure growth process; in the experimental process J max =10,E 0 =0.0001, the training accuracy of the network is measured using root mean square error RMSE, calculated as follows:
Figure QLYQS_56
wherein ,yi And
Figure QLYQS_57
the expected outputs and network outputs of the ith sample, respectively;
(2) and (3) pruning a network structure:
measuring contribution values of hidden layer neurons based on an exponent of the connection weight:
Figure QLYQS_58
wherein SI (j) is the contribution value of the j-th hidden layer node,
Figure QLYQS_59
is the weight from the jth hidden layer node of the s-th module to the output layer;
searching for an hidden layer node with the smallest contribution value:
Figure QLYQS_60
wherein ,
Figure QLYQS_61
the hidden layer node with the smallest contribution value in the s-th module is J, and the J is the number of hidden layer nodes in the network;
deleting the hidden node with the smallest contribution, adjusting parameters through a second-order learning algorithm, and comparing the root mean square error value RMSE_1 after deleting the node with the root mean square error value RMSE_0 when not deleting the node, wherein the calculation formula of the RMSE is shown as a formula (30); if RMSE_1 is less than or equal to RMSE_0, pruning the selected node under the condition of not sacrificing the network learning capability, and then repeating the process, wherein the selected node cannot be deleted; at this time, the network structure pruning process is finished, and the sub-network construction is completed; each time neurons are deleted, a second-order learning algorithm is used for adjusting parameters;
(3) and (3) network parameter adjustment:
the second order learning algorithm is as follows:
θ L+1 =θ L -(Q LL E) -1 g L (33)
wherein θ refers to parameters to be adjusted, including center, radius and connection weight, Q is a Hessian-like matrix, μ is a learning coefficient, and 0.01 is taken; e is an identity matrix, g is a gradient vector, L is an iteration step number, and the iteration step number is set to be 50;
converting the computation of the Hessian-like matrix Q and the gradient vector g into a Hessian-like submatrix summation and a gradient submatrix summation:
Figure QLYQS_62
Figure QLYQS_63
q z is a Hessian-like submatrix, eta z Is a gradient sub-directionThe amount can be calculated from the following formula:
Figure QLYQS_64
Figure QLYQS_65
wherein ,ez Expected output y for sample z z Output with network prediction
Figure QLYQS_66
Is the difference of j z Is the jacobian vector which is the vector,
the calculation is as follows:
Figure QLYQS_67
Figure QLYQS_68
according to the chain derivative law, each component in the jacobian vector in equation (39) is calculated as follows:
Figure QLYQS_69
Figure QLYQS_70
/>
Figure QLYQS_71
wherein
Figure QLYQS_72
The s-th module respectivelyThe center, radius and connection weight to the output layer of j neurons, x z Is the input vector of the z-th training sample; />
Figure QLYQS_73
The function of the j hidden layer node of the s-th module;
in step 4, the test data is used as the input of the model, and the output of the model is NO at the current moment X A concentration measurement;
at time T, after the T test sample enters the BIMNN, searching a core node closest to the sample, and activating a sub-module to which the core node belongs:
Figure QLYQS_74
wherein ,xT An input vector, h, for the T-th test sample s Is the core node of the s-th module, l act For distance from the T test sample x T The nearest core node belongs to a sub-network, A is the number of sub-network modules;
therefore, the actual output of BIMNN is the first act Output of the sub-network:
Figure QLYQS_75
wherein ,
Figure QLYQS_76
for the actual output of BIMNN at time T, < >>
Figure QLYQS_77
Is the first act The actual output of the sub-network. />
CN202210266639.9A 2022-03-17 2022-03-17 Soft measurement method for nitrogen oxides in urban solid waste incineration process based on brain-like modularized neural network Active CN114611398B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210266639.9A CN114611398B (en) 2022-03-17 2022-03-17 Soft measurement method for nitrogen oxides in urban solid waste incineration process based on brain-like modularized neural network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210266639.9A CN114611398B (en) 2022-03-17 2022-03-17 Soft measurement method for nitrogen oxides in urban solid waste incineration process based on brain-like modularized neural network

Publications (2)

Publication Number Publication Date
CN114611398A CN114611398A (en) 2022-06-10
CN114611398B true CN114611398B (en) 2023-05-12

Family

ID=81865131

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210266639.9A Active CN114611398B (en) 2022-03-17 2022-03-17 Soft measurement method for nitrogen oxides in urban solid waste incineration process based on brain-like modularized neural network

Country Status (1)

Country Link
CN (1) CN114611398B (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112733876A (en) * 2020-10-28 2021-04-30 北京工业大学 Soft measurement method for nitrogen oxides in urban solid waste incineration process based on modular neural network
CN113077039A (en) * 2021-03-22 2021-07-06 北京工业大学 Task-driven RBF neural network-based water outlet total nitrogen TN soft measurement method
CN113780639A (en) * 2021-08-29 2021-12-10 北京工业大学 Urban solid waste incineration nitrogen oxide NOx emission prediction method based on multitask learning framework

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112733876A (en) * 2020-10-28 2021-04-30 北京工业大学 Soft measurement method for nitrogen oxides in urban solid waste incineration process based on modular neural network
CN113077039A (en) * 2021-03-22 2021-07-06 北京工业大学 Task-driven RBF neural network-based water outlet total nitrogen TN soft measurement method
CN113780639A (en) * 2021-08-29 2021-12-10 北京工业大学 Urban solid waste incineration nitrogen oxide NOx emission prediction method based on multitask learning framework

Also Published As

Publication number Publication date
CN114611398A (en) 2022-06-10

Similar Documents

Publication Publication Date Title
CN106920007B (en) PM based on second-order self-organizing fuzzy neural network2.5Intelligent prediction method
CN101404071B (en) Electronic circuit fault diagnosis neural network method based on grouping particle swarm algorithm
CN111144609A (en) Boiler exhaust emission prediction model establishing method, prediction method and device
CN111080070A (en) Urban land utilization cellular automata simulation method based on spatial error
CN112765902B (en) Soft measurement modeling method for COD concentration in rural domestic sewage treatment process based on TentFWA-GD RBF neural network
CN110826244B (en) Conjugated gradient cellular automaton method for simulating influence of rail transit on urban growth
Ning et al. GA-BP air quality evaluation method based on fuzzy theory.
CN114578087B (en) Wind speed uncertainty measurement method based on non-dominant sorting and stochastic simulation algorithm
CN112819087B (en) Method for detecting abnormality of BOD sensor of outlet water based on modularized neural network
CN117910120A (en) Buffeting response prediction method for wind-bridge system based on lightweight transducer
CN114611398B (en) Soft measurement method for nitrogen oxides in urban solid waste incineration process based on brain-like modularized neural network
CN113762602A (en) Short-term wind speed prediction method for wind power plant
Wang et al. Source term estimation with unknown number of sources using improved cuckoo search algorithm
Zain Al-Thalabi et al. Modeling and prediction using an artificial neural network to study the impact of foreign direct investment on the growth rate/a case study of the State of Qatar
Shen Research on marine water quality evaluation model based on improved harmony search algorithm by Gaussian disturbance to optimize takagi-sugeno fuzzy neural network
CN117035005B (en) Intelligent operation optimization method for urban solid waste incineration process
Jing et al. Research on genetic neural network algorithm and its application
CN117912585B (en) Optimization method for combustion chemical reaction based on deep artificial neural network
CN116796185A (en) Urban solid waste incineration process NO based on attention modularized neural network x Prediction method
Li et al. Remote sensing and artificial neural network estimation of on-road vehicle emissions
Li et al. Optimization for Boiler Based on Data Mining and Multi-Condition Combustion Model
Ye et al. Driving Cycle Condition Identification Model Based on Long Short-Term Memory Algorithm [J]
CN111709140B (en) Ship motion forecasting method based on intrinsic plasticity echo state network
CN113705932B (en) Short-term load prediction method and device
Gong et al. Research of reliability distribution model of numerical control machine tool based on ANN model and HPSO algorithm

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant