CN114971032A - Electronic nose online gas concentration prediction method based on OS-ELM - Google Patents

Electronic nose online gas concentration prediction method based on OS-ELM

Info

Publication number
CN114971032A
CN114971032A
Authority
CN
China
Prior art keywords
data, ELM, model, hidden layer, electronic nose
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210608225.XA
Other languages
Chinese (zh)
Inventor
陶洋
朱梓涵
杜黎明
谭锐
申婷婷
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chongqing University of Post and Telecommunications
Original Assignee
Chongqing University of Post and Telecommunications
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chongqing University of Post and Telecommunications filed Critical Chongqing University of Post and Telecommunications
Priority to CN202210608225.XA priority Critical patent/CN114971032A/en
Publication of CN114971032A publication Critical patent/CN114971032A/en
Pending legal-status Critical Current


Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06Q — INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00 — Administration; Management
    • G06Q10/04 — Forecasting or optimisation specially adapted for administrative or management purposes, e.g. linear programming or "cutting stock problem"
    • G06Q10/06 — Resources, workflows, human or project management; Enterprise or organisation planning; Enterprise or organisation modelling
    • G06Q10/063 — Operations research, analysis or management
    • G06Q10/0639 — Performance analysis of employees; Performance analysis of enterprise or organisation operations
    • G06Q10/06393 — Score-carding, benchmarking or key performance indicator [KPI] analysis
    • G01 — MEASURING; TESTING
    • G01N — INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N33/00 — Investigating or analysing materials by specific methods not covered by groups G01N1/00 - G01N31/00
    • G01N33/0004 — Gaseous mixtures, e.g. polluted air
    • G01N33/0009 — General constructional details of gas analysers, e.g. portable test equipment
    • G01N33/0062 — concerning the measuring method or the display, e.g. intermittent measurement or digital display
    • G01N33/0068 — using a computer specifically programmed
    • Y02A90/00 — Technologies having an indirect contribution to adaptation to climate change
    • Y02A90/10 — Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation


Abstract

The invention provides an online gas concentration prediction method for an electronic nose system based on OS-ELM, belonging to the technical field of sensors. The method addresses the high model-training cost and the insufficient accuracy that the electronic nose faces in concentration prediction. First, the online learning mechanism of the OS-ELM is exploited: after new samples are input, the existing model adapts to samples of different batches through an iterative update alone, which reduces the training cost. Second, to remedy the loss of prediction accuracy caused by ELM-family models determining the input weights and hidden-layer biases randomly, an improved PSO algorithm searches for the optimal values of these hyperparameters, guaranteeing the effectiveness of the algorithm. The invention reduces the cost of repeatedly retraining the electronic-nose gas recognition model while improving the accuracy of concentration prediction, and therefore has considerable application value in electronic nose systems.

Description

Electronic nose online gas concentration prediction method based on OS-ELM
Technical Field
The invention belongs to the technical field of sensors and discloses an online gas concentration prediction method for an electronic nose system based on OS-ELM.
Background
An electronic nose is an artificial olfaction system combining hardware and software; it comprises a sensor array, a signal-processing module, and a pattern recognition algorithm. The pattern recognition algorithm is the most critical part of an electronic nose system: many high-precision sensors cannot convert an electrical signal into an actual concentration value with a simple linear conversion formula, so the effectiveness of the pattern recognition algorithm strongly influences the result of gas recognition.
The gas identification task of an electronic nose divides into gas component identification and gas concentration prediction. Component identification is qualitative analysis: the electronic nose identifies the components of a mixed gas. Concentration prediction is quantitative analysis: the electronic nose system must predict the actual concentration of an unknown sample from the information of known samples, and this quantitative process is the more difficult one.
The gas concentration prediction algorithm of an electronic nose system faces two main problems: first, the accuracy of the prediction itself; second, the training cost of repeatedly retraining the model whenever a new batch of samples appears. In practical applications the gas concentration is neither single nor fixed, so guaranteeing prediction accuracy requires iterating the model each time new samples are added. Retraining from scratch, however, reprocesses all past samples; this redundancy of information drives up the training cost. Yet if the model is left un-updated to avoid that cost, prediction accuracy degrades as unlearned new samples accumulate.
How to reduce training cost while preserving prediction accuracy is therefore a central topic for electronic-nose gas concentration prediction. The online gas concentration prediction method disclosed in this patent obtains good results in concentration prediction while updating the original model online through an online learning mechanism instead of retraining it, reducing the training cost of the prediction process and further increasing the value of the electronic nose in practical scenarios.
Disclosure of Invention
In view of the above, the present invention provides an online gas concentration prediction method for an electronic nose system based on OS-ELM. Of the several batches of samples, the first batch is used for initial training and the later batches for online training, thereby improving prediction accuracy and reducing training cost in the gas concentration prediction of the electronic nose system.
In order to achieve the purpose, the invention provides the following technical scheme:
An online gas concentration prediction method for an electronic nose system based on OS-ELM comprises the following steps:
step 1) carrying out initial training on a model by using a first batch of samples;
further, the step 1) comprises the following steps:
step 11) input the first batch of sample data D_1;
step 12) preprocess this batch: normalize the data, whose numerical ranges and dimensions differ widely, then apply PCA to the original data to obtain the dimension-reduced sample set D_pca, whose sample feature dimension is N;
step 13) split the sample set D_pca into D_train = {x, y} and D_test = {x_test, y_test}, and feed D_train into the ELM model for training;
step 14) search the input-to-hidden-layer weights a and the hidden-layer biases β of the ELM network with the improved PSO algorithm, taking the evaluation function of the prediction result on D_test as the PSO objective function y(t), and set the maximum number of PSO iterations to n;
step 15) the inertia weight factor ω of the PSO algorithm varies with the iteration count t ∈ [0, n] during the run. It is relatively large in the early iterations and small in the later ones, so the PSO algorithm has strong global search capability early and strong local search capability late. Within ω ∈ [ω_min, ω_max], the weight follows an exponential decline:

ω_t = ω_min + (ω_max − ω_min)·e^(−t/n)

The learning factors c_1 and c_2 also affect the search capability of the PSO algorithm and therefore vary synchronously with the inertia weight ω, with c_1, c_2 ∈ [c_min, c_max]; the learning factors vary as:

c_1 = c_min + ω_t
c_2 = c_max − ω_t
the velocity of each particle in the PSO algorithm is then updated from the inertia weight factor and the learning factors:

v_i^(t+1) = ω_t·v_i^t + c_1·r_1·(pbest_i^t − x_i^t) + c_2·r_2·(gbest^t − x_i^t),  with r_1, r_2 ~ U(0, 1)

and the current position of the particle is updated:

x_i^(t+1) = x_i^t + v_i^(t+1)
judge whether the objective function value y(t) of the current particle i is better than its individual optimum pbest_i; if so, replace the current pbest_i. Judge whether the best individual solution pbest_i is better than the global optimum gbest_k; if so, replace the current gbest_k.
Judge whether the maximum iteration count n has been reached; if so, output the optimal solution for the input-to-hidden-layer weights a_best and the hidden-layer biases β_best;
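The PSO search of step 15 can be sketched in Python. This is not part of the patent: the exponential decay constant, the search bounds, the swarm size, and the velocity clamp are illustrative assumptions, and NumPy stands in for the unspecified implementation.

```python
import numpy as np

def pso_search(objective, dim, n_particles=20, n_iter=50,
               w_min=0.4, w_max=0.9, c_min=0.5, c_max=2.5,
               bounds=(-1.0, 1.0)):
    """Minimise `objective` with a PSO whose inertia weight declines
    exponentially and whose learning factors vary with it (step 15)."""
    rng = np.random.default_rng(0)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, dim))    # particle positions
    v = np.zeros((n_particles, dim))               # particle velocities
    pbest = x.copy()                               # individual optima
    pbest_val = np.array([objective(p) for p in x])
    g = pbest[np.argmin(pbest_val)].copy()         # global optimum
    g_val = float(pbest_val.min())
    for t in range(n_iter):
        # exponential decline of the inertia weight (assumed decay form)
        w = w_min + (w_max - w_min) * np.exp(-3.0 * t / n_iter)
        c1 = c_min + w        # individual learning factor declines with w
        c2 = c_max - w        # social learning factor rises
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        np.clip(v, lo - hi, hi - lo, out=v)        # keep velocities bounded
        x = np.clip(x + v, lo, hi)
        vals = np.array([objective(p) for p in x])
        better = vals < pbest_val                  # synchronous pbest update
        pbest[better], pbest_val[better] = x[better], vals[better]
        if pbest_val.min() < g_val:
            g_val = float(pbest_val.min())
            g = pbest[np.argmin(pbest_val)].copy()
    return g, g_val
```

In the patent's setting, the ELM parameters a and β would be flattened into the particle vector and `objective` would be the evaluation function of the prediction result on D_test.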
step 16) substitute a_best and β_best into the ELM model for training. With the common sigmoid function selected as g(a_i, β_i, x_j), the hidden-layer output matrix H_0 of the ELM model is computed as:

H_0 = [ g(a_i·x_j + β_i) ],  j = 1, …, N_0,  i = 1, …, L

the output data matrix is:

T_0 = [ t_1, t_2, …, t_{N_0} ]^T

so that H_0 β = T_0 can be obtained, and the initial output weight of the ELM model is calculated as:

β^(0) = P_0 H_0^T T_0,  where P_0 = (H_0^T H_0)^(−1);
Step 2) performing on-line training according to the input sample batch number;
further, the step 2) comprises the following steps:
step 21) input the K-th batch of sample data D_K;
step 22) preprocess this batch: normalize the data, whose numerical ranges and dimensions differ widely, then apply PCA to reduce the original data to feature dimension N, obtaining the dimension-reduced sample set D_pca;
step 23) substitute the optimal solution for the input-to-hidden-layer weights a_best and the hidden-layer biases β_best obtained in the initial training into the OS-ELM model, and compute the hidden-layer output matrix H_{k+1} of the OS-ELM. Letting

P_0 = (H_0^T H_0)^(−1),

P_{k+1} can be expressed as:

P_{k+1} = P_k − P_k H_{k+1}^T ( I + H_{k+1} P_k H_{k+1}^T )^(−1) H_{k+1} P_k

and the output weight after the OS-ELM iteration is solved as:

β^(k+1) = β^(k) + P_{k+1} H_{k+1}^T ( T_{k+1} − H_{k+1} β^(k) );
step 24) save the output weight β^(k+1) of the current round and wait for the input of the next batch of samples;
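Steps 23 and 24 are the recursive least-squares update at the heart of OS-ELM. A sketch under the same symbols (the matrix shapes are assumptions):

```python
import numpy as np

def os_elm_update(P_k, beta_k, H_new, T_new):
    """One OS-ELM round (steps 23-24): fold the new batch
    (H_new, T_new) into P and beta without revisiting old samples."""
    n = H_new.shape[0]
    K = np.linalg.inv(np.eye(n) + H_new @ P_k @ H_new.T)
    P_next = P_k - P_k @ H_new.T @ K @ H_new @ P_k
    beta_next = beta_k + P_next @ H_new.T @ (T_new - H_new @ beta_k)
    return P_next, beta_next
```

By the matrix-inversion lemma this recursion reproduces exactly the batch least-squares solution over all batches seen so far, which is why the model never needs retraining on past samples.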
and 3) predicting the sample test set by using the final model after iterative update.
Further, the step 3) comprises the following steps:
step 31) input the test set T of the sample data;
step 32) from the hidden-layer output matrix H_T and the final output weight β_N, the prediction result T = H_T β_N can be obtained;
Step 33) evaluating the model concentration prediction result by using the evaluation function.
Drawings
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in detail with reference to the accompanying drawings, in which:
FIG. 1 is a flow chart of the present invention.
Detailed Description
Preferred embodiments of the present invention will be described in detail below with reference to the accompanying drawings.
The invention forms a network for online prediction of gas concentration through the online learning mechanism of the OS-ELM model, so as to reduce the cost of retraining the model during concentration prediction. In addition, because ELM-family models determine the input-to-hidden-layer weights and the hidden-layer biases randomly, their prediction results are uncertain; an improved PSO algorithm with enhanced global search capability therefore searches for the optimal network parameters to raise the final prediction accuracy of the model. The method comprises the following steps: step 1) perform initial training of the model on the first batch of samples and search out the optimal network parameters; step 2) on the basis of step 1), iteratively update the output weight of the model as samples arrive, achieving online learning; and step 3) on the basis of step 2), use the updated final model to predict the test set and evaluate the concentration prediction result with an evaluation function.
Step 1) carrying out initial training on a model by using a first batch of samples;
further, the step 1) comprises the following steps:
step 11) input the first batch of sample data D_1;
step 12) preprocess this batch: normalize the data, whose numerical ranges and dimensions differ widely, which reduces their influence on the model; then apply PCA to the original data, keeping the principal components with larger contributions, which lowers noise and computational overhead; this yields the dimension-reduced sample set D_pca, whose sample feature dimension is N;
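The normalization-plus-PCA preprocessing of step 12 can be sketched as follows. Min-max normalization and a plain SVD-based PCA are assumptions here; the patent fixes neither choice.

```python
import numpy as np

def preprocess(X, n_components):
    """Step 12 sketch: min-max normalise each feature, then keep the
    top principal components of the centred data."""
    span = X.max(axis=0) - X.min(axis=0)
    Xn = (X - X.min(axis=0)) / (span + 1e-12)   # per-feature min-max
    Xc = Xn - Xn.mean(axis=0)                   # centre before PCA
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T             # dimension-reduced D_pca
```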
step 13) split the sample set D_pca into D_train = {x, y} and D_test = {x_test, y_test}, and feed D_train into the ELM model for training;
step 14) because the input-to-hidden-layer weights a and the hidden-layer biases β of ELM-family models are determined randomly, using the ELM model directly carries some uncertainty, whereas a search-type algorithm that finds the optimal solution for these parameters can safeguard the accuracy of the prediction result. The improved PSO algorithm is therefore used to search the weights a and biases β of the ELM network, taking the evaluation function of the prediction result on D_test as the PSO objective function y(t), and the maximum number of PSO iterations is set to n;
step 15) the inertia weight factor ω of the traditional PSO algorithm is fixed. If ω is set relatively large, the algorithm gains good global search capability but its late-stage local search is weak, which hinders convergence; if ω is set relatively small, the early global search is weak and the global optimum may never be found. ω should therefore decline over the course of the iterations, giving the algorithm strong early global search and strong late local search and improving its overall effect; adopting a hybrid-algorithm form instead would increase the computational overhead and forfeit the advantages of PSO. Accordingly, the inertia weight factor ω varies directly with the iteration count t ∈ [0, n] during the run, relatively large early and small late, within ω ∈ [ω_min, ω_max], and it declines exponentially (this nonlinear decline obtains a better effect than the usual linear decline):

ω_t = ω_min + (ω_max − ω_min)·e^(−t/n)

The learning factors c_1 and c_2 also affect the search capability of the PSO algorithm. Using only c_1, the algorithm presents a purely cognitive model with individual learning alone; using only c_2, it presents a purely social model; in neither case does the comprehensive search capability of PSO improve. The algorithm should rely on the individual learning factor c_1 of each particle early on, with c_1 presenting a descending trend over the iterations, and on the social learning factor c_2 late, with c_2 presenting a rising trend; both therefore vary synchronously with the inertia weight ω, with c_1, c_2 ∈ [c_min, c_max]:

c_1 = c_min + ω_t
c_2 = c_max − ω_t
The velocity of a particle can be neither too high nor too low: if too high, particles fly out of the search bounds and diverge, making the algorithm hard to converge; if too low, they converge slowly and cannot find the global optimum. The velocity of each particle in the PSO algorithm is updated from the inertia weight factor and the learning factors:

v_i^(t+1) = ω_t·v_i^t + c_1·r_1·(pbest_i^t − x_i^t) + c_2·r_2·(gbest^t − x_i^t),  with r_1, r_2 ~ U(0, 1)

and the current position of the particle is updated:

x_i^(t+1) = x_i^t + v_i^(t+1)
judge whether the objective function value y(t) of the current particle i is better than its individual optimum pbest_i; if so, replace the current pbest_i. Judge whether the best individual solution pbest_i is better than the global optimum gbest_k; if so, replace the current gbest_k. This is a synchronous updating scheme: each time the velocity and position of a particle are computed, it is determined whether the optimal positions need updating.
Judge whether the maximum iteration count n has been reached; if so, output the optimal solution for the input-to-hidden-layer weights a_best and the hidden-layer biases β_best.
step 16) substitute a_best and β_best into the ELM model for training. The advantage of the ELM is that once its parameters are determined they need no further adjustment, so computation is fast, while the optimal parameters safeguard the accuracy of the algorithm. From a_best and β_best the hidden-layer output matrix can be determined; with the common sigmoid function selected as g(a_i, β_i, x_j), the hidden-layer output matrix H_0 of the ELM model is computed as:

H_0 = [ g(a_i·x_j + β_i) ],  j = 1, …, N_0,  i = 1, …, L

the output data matrix is:

T_0 = [ t_1, t_2, …, t_{N_0} ]^T

so that H_0 β = T_0 can be obtained, and the initial output weight of the ELM model is calculated as:

β^(0) = P_0 H_0^T T_0,  where P_0 = (H_0^T H_0)^(−1).
The output weights of the OS-ELM model are updated iteratively, so the current output weight must be recorded to enable the next update. The OS-ELM can fold in and update on a whole batch of samples at once, or update the model online one sample at a time.
Step 2) performing on-line training according to the input sample batch number;
further, the step 2) comprises the following steps:
step 21) input the K-th batch of sample data D_K;
step 22) preprocess this batch: normalize the data, whose numerical ranges and dimensions differ widely, then apply PCA to reduce the original data to feature dimension N; the sample data must keep the same dimension as the first batch to ensure consistency before and after, yielding the dimension-reduced sample set D_pca;
step 23) substitute the optimal solution for the input-to-hidden-layer weights a_best and the hidden-layer biases β_best obtained in the initial training into the OS-ELM model, and compute the hidden-layer output matrix H_{k+1} of the OS-ELM. Letting

P_0 = (H_0^T H_0)^(−1),

P_{k+1} can be expressed as:

P_{k+1} = P_k − P_k H_{k+1}^T ( I + H_{k+1} P_k H_{k+1}^T )^(−1) H_{k+1} P_k

and the output weight after the OS-ELM iteration is solved as:

β^(k+1) = β^(k) + P_{k+1} H_{k+1}^T ( T_{k+1} − H_{k+1} β^(k) );
step 24) save the output weight β^(k+1) of the current round and wait for the input of the next batch of samples;
and 3) predicting the sample test set by using the final model after iterative update.
Further, the step 3) comprises the following steps:
step 31) input the test set T of the sample data;
step 32) from the hidden-layer output matrix H_T and the final output weight β_N, the prediction result T = H_T β_N can be obtained;
step 33) evaluate the model's concentration prediction result with the evaluation function; since concentration prediction is quantitative analysis, MAE, RMSE and R are used as the evaluation indices.
Finally, it is to be understood that the above embodiments are intended to illustrate rather than limit the technical solutions of the present invention. Although the invention has been described in detail through these embodiments, those skilled in the art will understand that various changes in form and detail may be made without departing from the scope of the invention as defined by the appended claims.

Claims (4)

1. An online gas concentration prediction method based on an OS-ELM electronic nose system is characterized by comprising the following steps:
step 1) carrying out initial training on a model by using a first batch of samples;
step 2) performing on-line training according to the input sample batch number;
and 3) predicting the sample test set by using the final model after iterative update.
2. The on-line gas concentration prediction method based on the OS-ELM electronic nose system of claim 1, wherein: the step 1) comprises the following steps:
step 11) input the first batch of sample data D_1;
step 12) preprocess this batch: normalize the data, whose numerical ranges and dimensions differ widely, then apply PCA to the original data to obtain the dimension-reduced sample set D_pca, whose sample feature dimension is N;
step 13) split the sample set D_pca into D_train = {x, y} and D_test = {x_test, y_test}, and feed D_train into the ELM model for training;
step 14) search the input-to-hidden-layer weights a and the hidden-layer biases β of the ELM network with the improved PSO algorithm, taking the evaluation function of the prediction result on D_test as the PSO objective function y(t), and set the maximum number of PSO iterations to n;
step 15) the inertia weight factor ω of the PSO algorithm varies with the iteration count t ∈ [0, n] during the run. It is relatively large in the early iterations and small in the later ones, so the PSO algorithm has strong global search capability early and strong local search capability late. Within ω ∈ [ω_min, ω_max], the weight follows an exponential decline:

ω_t = ω_min + (ω_max − ω_min)·e^(−t/n)

The learning factors c_1 and c_2 also affect the search capability of the PSO algorithm and therefore vary synchronously with the inertia weight ω, with c_1, c_2 ∈ [c_min, c_max]; the learning factors vary as:

c_1 = c_min + ω_t
c_2 = c_max − ω_t
the velocity of each particle in the PSO algorithm is then updated from the inertia weight factor and the learning factors:

v_i^(t+1) = ω_t·v_i^t + c_1·r_1·(pbest_i^t − x_i^t) + c_2·r_2·(gbest^t − x_i^t),  with r_1, r_2 ~ U(0, 1)

and the current position of the particle is updated:

x_i^(t+1) = x_i^t + v_i^(t+1)
judge whether the objective function value y(t) of the current particle i is better than its individual optimum pbest_i; if so, replace the current pbest_i. Judge whether the best individual solution pbest_i is better than the global optimum gbest_k; if so, replace the current gbest_k.
Judge whether the maximum iteration count n has been reached; if so, output the optimal solution for the input-to-hidden-layer weights a_best and the hidden-layer biases β_best.
step 16) substitute a_best and β_best into the ELM model for training. With the common sigmoid function selected as g(a_i, β_i, x_j), the hidden-layer output matrix H_0 of the ELM model is computed as:

H_0 = [ g(a_i·x_j + β_i) ],  j = 1, …, N_0,  i = 1, …, L

the output data matrix is:

T_0 = [ t_1, t_2, …, t_{N_0} ]^T

so that H_0 β = T_0 can be obtained, and the initial output weight of the ELM model is calculated as:

β^(0) = P_0 H_0^T T_0,  where P_0 = (H_0^T H_0)^(−1).
3. The on-line gas concentration prediction method based on the OS-ELM electronic nose system of claim 1, wherein: the step 2) comprises the following steps:
step 21) input the K-th batch of sample data D_K;
step 22) preprocess this batch: normalize the data, whose numerical ranges and dimensions differ widely, then apply PCA to reduce the original data to feature dimension N, obtaining the dimension-reduced sample set D_pca;
step 23) substitute the optimal solution for the input-to-hidden-layer weights a_best and the hidden-layer biases β_best obtained in the initial training into the OS-ELM model, and compute the hidden-layer output matrix H_{k+1} of the OS-ELM. Letting

P_0 = (H_0^T H_0)^(−1),

P_{k+1} can be expressed as:

P_{k+1} = P_k − P_k H_{k+1}^T ( I + H_{k+1} P_k H_{k+1}^T )^(−1) H_{k+1} P_k

and the output weight after the OS-ELM iteration is solved as:

β^(k+1) = β^(k) + P_{k+1} H_{k+1}^T ( T_{k+1} − H_{k+1} β^(k) );
step 24) save the output weight β^(k+1) of the current round and wait for the input of the next batch of samples.
4. The on-line gas concentration prediction method based on the OS-ELM electronic nose system of claim 1, wherein: the step 3) comprises the following steps:
step 31) input the test set T of the sample data;
step 32) from the hidden-layer output matrix H_T and the final output weight β_N, the prediction result T = H_T β_N can be obtained;
Step 33) evaluating the model concentration prediction result by using the evaluation function.
CN202210608225.XA 2022-05-31 2022-05-31 Electronic nose online gas concentration prediction method based on OS-ELM Pending CN114971032A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210608225.XA CN114971032A (en) 2022-05-31 2022-05-31 Electronic nose online gas concentration prediction method based on OS-ELM


Publications (1)

Publication Number Publication Date
CN114971032A true CN114971032A (en) 2022-08-30

Family

ID=82957561

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210608225.XA Pending CN114971032A (en) 2022-05-31 2022-05-31 Electronic nose online gas concentration prediction method based on OS-ELM

Country Status (1)

Country Link
CN (1) CN114971032A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116735146A (en) * 2023-08-11 2023-09-12 中国空气动力研究与发展中心低速空气动力研究所 Wind tunnel experiment method and system for establishing aerodynamic model
CN116735146B (en) * 2023-08-11 2023-10-13 中国空气动力研究与发展中心低速空气动力研究所 Wind tunnel experiment method and system for establishing aerodynamic model


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination