CN108062566A - An intelligent ensemble soft-sensing method based on multi-kernel latent feature extraction - Google Patents

An intelligent ensemble soft-sensing method based on multi-kernel latent feature extraction

Info

Publication number
CN108062566A
CN108062566A (application CN201711327861.0A)
Authority
CN
China
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201711327861.0A
Other languages
Chinese (zh)
Inventor
汤健 (Tang Jian)
刘卓 (Liu Zhuo)
余刚 (Yu Gang)
赵建军 (Zhao Jianjun)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing University of Technology
Original Assignee
Beijing University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing University of Technology filed Critical Beijing University of Technology
Priority to CN201711327861.0A priority Critical patent/CN108062566A/en
Publication of CN108062566A publication Critical patent/CN108062566A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 5/00 Computing arrangements using knowledge-based models
    • G06N 5/04 Inference or reasoning models
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 7/00 Computing arrangements based on specific mathematical models
    • G06N 7/02 Computing arrangements based on specific mathematical models using fuzzy logic

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Software Systems (AREA)
  • Mathematical Physics (AREA)
  • Computing Systems (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Fuzzy Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Algebra (AREA)
  • Biomedical Technology (AREA)
  • Mathematical Analysis (AREA)
  • Mathematical Optimization (AREA)
  • Pure & Applied Mathematics (AREA)
  • Computational Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computational Linguistics (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Other Investigation Or Analysis Of Materials By Electrical Means (AREA)

Abstract

The present invention discloses an intelligent ensemble soft-sensing method based on multi-kernel latent feature extraction. Features are extracted with multiple candidate kernel parameters to perform ensemble construction, yielding latent feature subsets oriented to the different kernel parameters; candidate fuzzy-inference sub-models are built using these latent feature subsets as training subsets, and a selective-ensemble fuzzy-inference master model is built with an optimization algorithm and an adaptive weighted fusion algorithm; the master model's prediction error is computed, a kernel parameter is selected, and KPLS is used to extract from the input data the latent feature set correlated with the master model's prediction error; ensemble construction is performed on these latent feature sets with the Bootstrap algorithm, yielding training subsets based on sampling of the training samples; candidate sub-models based on the kernel random weight neural network are constructed from these training subsets, and a selective-ensemble KRWNN compensation model is built with a genetic algorithm optimization toolbox and AWF; the outputs of the selective-ensemble fuzzy-inference master model and the selective-ensemble KRWNN compensation model are merged to obtain the prediction result.

Description

An intelligent ensemble soft-sensing method based on multi-kernel latent feature extraction
Technical field
The invention belongs to the technical field of industrial processes, and more particularly relates to an intelligent ensemble soft-sensing method based on multi-kernel latent feature extraction.
Background technology
In the field of complex industrial processes, some key process parameters related to product quality and safety are difficult to measure directly with instruments because of the complicated mechanisms, strong coupling and many influencing factors of the production process. At present these parameters are mainly obtained by the empirical estimates of experienced field experts or by manual timed sampling followed by laboratory assay, approaches that suffer from strong dependence on experience, low accuracy, detection lag and high time cost, and can hardly provide effective support for the optimized operation and control of industrial processes. Building soft-sensing models from offline historical data is an alternative way to solve this problem [1]. Industrial process data exhibit strong nonlinearity and collinearity; modeling with all variables not only increases model complexity but also degrades modeling accuracy and speed. In general, the number of input variables (features) exceeds the number required to build an efficient and concise model, especially for models based on spectra, frequency spectra, images and text. Meanwhile, the valuable information contained in small-sample modeling data is simultaneously uncertain and imprecise.
Feature extraction and feature selection can both effectively handle the collinearity problem of industrial process data, and each has its advantages and disadvantages. Feature selection simply picks the most important correlated features, and the discarded features may reduce the generalization performance of the estimation model [2]; feature extraction determines, in a linear or nonlinear manner, suitable low-dimensional latent features to replace the original high-dimensional features. The common feature extraction method based on principal component analysis (PCA) does not consider the correlation between the input and output data [3]. Feature extraction based on partial least squares, i.e., latent-variable projection (PLS), effectively overcomes this defect [4], and its kernel version, kernel PLS (KPLS), is a simple and efficient way to realize nonlinear feature extraction by extending the input data with nonlinear terms [5,6]. However, the kernel type and kernel parameters usually depend on the modeling data and are hard to choose rationally, and different kernel parameters make KPLS extract different latent features.
Fuzzy inference is an effective approach to nonlinear modeling problems containing uncertain and imprecise information. Reference [7] proposed a method for extracting efficient fuzzy rules from modeling data, thereby reducing the difficulty of constructing inference models. The extraction process of fuzzy rules is usually called structure identification. Many offline and online clustering strategies, such as fuzzy C-means, mountain clustering [8], subtractive clustering [9] and recursive online clustering [10], have been used for fuzzy-rule extraction, but these strategies do not consider the correlation between the input and output data spaces. Reference [11] effectively solved this problem by weighting the input space with newly designed parameters. Current fuzzy inference models mostly adopt a traditional single-model structure. Ensemble modeling can improve the generalization, validity and credibility of a model: ensemble learning integrates sub-models with diversity to obtain better prediction performance and stability than a single model. Research shows that a selective ensemble (SEN) of preferred sub-models achieves better generalization than simply integrating all sub-models or using a single model [12]. Therefore, selectively integrating multiple fuzzy-inference sub-models can also yield better prediction performance while simplifying the inference rules. Clearly, a fuzzy inference model based on knowledge rules possesses strong inference ability, but its learning and pattern-recognition abilities are weak.
Faced with small-sample data, the back-propagation neural network (BPNN) can hardly establish a prediction model of high stability. The support vector machine (SVM), based on structural risk minimization, is suited to small-sample modeling but requires considerable time to solve for the optimal solution. The random weight neural network (RWNN), although fast to solve [13,14,15], likewise suffers from unstable prediction performance when modeling small-sample data and is difficult to apply directly to high-dimensional data. Introducing kernel techniques into the RWNN structure, i.e., the kernel RWNN (KRWNN) model, can effectively overcome these problems [16]. Clearly, these data-driven modeling methods without explicit inference rules can fit the modeling data effectively.
Industrial practice shows that an expert needs to accumulate experience over a certain period before being able to estimate some process parameters effectively. From another viewpoint, while the expert's accumulated experience is still insufficient, his or her judgment carries errors that need to be compensated by fuzzy inference rules. Moreover, in the course of accumulating experience, the expert stores valuable information and discards or forgets useless experience. To some extent, these experiences correspond to the training samples that represent different operating conditions in data-driven modeling.
Summary of the invention
Starting from the viewpoint of simulating the human brain's fuzzy inference and compensation cognition mechanisms for coping with uncertain factors, the present invention proposes an intelligent ensemble soft-sensing method based on multi-kernel latent feature extraction: within an ensemble learning framework, a fuzzy-inference SEN master model based on multi-kernel latent features is built first, then a random weight neural network SEN compensation model based on latent features is built, and finally the two are merged from a master-slave perspective. In essence, the soft-sensing method fuses multi-source features and multi-condition samples in turn from a master-slave perspective, analogous to the cognitive process in which a human expert reasons from main knowledge and gradually compensates and perfects it in practice. The validity of the proposed method is verified with synthetic data.
To achieve the above object, the present invention adopts the following technical scheme that:
An intelligent ensemble soft-sensing method based on multi-kernel latent feature extraction, comprising:
Step 1: extract features with the kernel partial least squares (KPLS) algorithm using multiple candidate kernel parameters to perform ensemble construction, obtaining latent feature subsets oriented to the different kernel parameters;
Step 2: build candidate fuzzy-inference sub-models using these latent feature subsets as training subsets, and build a selective-ensemble fuzzy-inference master model with an optimization algorithm and the adaptive weighted fusion (AWF) algorithm;
Step 3: compute the master model's prediction error, select a kernel parameter, and use KPLS to extract from the input data the latent feature set correlated with the master model's prediction error;
Step 4: perform ensemble construction on these latent feature sets with the Bootstrap algorithm, obtaining training subsets based on sampling of the training samples;
Step 5: construct candidate sub-models based on the kernel random weight neural network (KRWNN) from these training subsets, and build a selective-ensemble KRWNN compensation model with the genetic algorithm optimization toolbox (GAOT) and AWF;
Step 6: merge the outputs of the selective-ensemble fuzzy-inference master model and the selective-ensemble KRWNN compensation model to obtain the prediction result of the intelligent ensemble soft-sensing model.
In essence, the soft-sensing method of the present invention fuses multi-source features and multi-condition samples in turn from a master-slave perspective, analogous to the cognitive process in which a human expert reasons from main knowledge and gradually compensates and perfects it in practice. The validity of the proposed method is verified with synthetic data.
Description of the drawings
Fig. 1: flow chart of the intelligent ensemble soft-sensing method based on multi-kernel latent feature extraction;
Fig. 2(a): clustering threshold versus master-model prediction performance (KLV = 2); left: threshold valued from 0.001 to 0.01; right: threshold valued from 0.01 to 0.1;
Fig. 2(b): clustering threshold versus master-model prediction performance (KLV = 3); left: threshold valued from 0.001 to 0.01; right: threshold valued from 0.01 to 0.1;
Fig. 2(c): clustering threshold versus master-model prediction performance (KLV = 4); left: threshold valued from 0.001 to 0.01; right: threshold valued from 0.01 to 0.1;
Fig. 2(d): clustering threshold versus master-model prediction performance (KLV = 5); left: threshold valued from 0.001 to 0.01; right: threshold valued from 0.01 to 0.1;
Fig. 3: Gaussian membership function curves of the third sub-model;
Fig. 4: clustering of the latent variables and output variable of the third sub-model;
Fig. 5: membership functions of the clustering centers of the third sub-model;
Fig. 6: training-data output errors of the fuzzy-inference master model;
Fig. 7(a): kernel radius of latent feature extraction versus prediction performance of the intelligent ensemble soft-sensing model; left: kernel radius valued from 0.01 to 0.1; right: kernel radius valued from 0.1 to 1;
Fig. 7(b): penalty parameter of the KRWNN versus prediction performance of the intelligent ensemble soft-sensing model; left: penalty parameter valued from 1 to 100; middle: from 100 to 1000; right: from 1000 to 10000;
Fig. 7(c): kernel radius of the KRWNN versus prediction performance of the intelligent ensemble soft-sensing model; left: kernel radius valued from 0.001 to 0.01; middle: from 0.01 to 0.1; right: from 0.1 to 1;
Fig. 8: prediction curves of the fuzzy master model and the intelligent ensemble soft-sensing model; left: on the training data; right: on the test data;
Fig. 9: prediction errors of the fuzzy master model and the intelligent ensemble soft-sensing model; left: on the training data; right: on the test data.
Specific embodiment
As shown in Fig. 1, the present invention provides an intelligent ensemble soft-sensing method based on multi-kernel latent feature extraction, comprising:
Step 1: extract features with the kernel partial least squares (KPLS) algorithm using multiple candidate kernel parameters to perform ensemble construction, obtaining latent feature subsets oriented to the different kernel parameters;
Step 2: build candidate fuzzy-inference sub-models using these latent feature subsets as training subsets, and build a selective-ensemble fuzzy-inference master model with an optimization algorithm and the adaptive weighted fusion (AWF) algorithm;
Step 3: compute the master model's prediction error, select a kernel parameter, and use KPLS to extract from the input data the latent feature set correlated with the master model's prediction error;
Step 4: perform ensemble construction on these latent feature sets with the Bootstrap algorithm, obtaining training subsets based on sampling of the training samples;
Step 5: construct candidate sub-models based on the kernel random weight neural network (KRWNN) from these training subsets, and build a selective-ensemble KRWNN compensation model with the genetic algorithm optimization toolbox (GAOT) and AWF;
Step 6: merge the outputs of the selective-ensemble fuzzy-inference master model and the selective-ensemble KRWNN compensation model to obtain the prediction result of the intelligent ensemble soft-sensing model.
The intelligent ensemble soft-sensing method based on multi-kernel latent feature extraction of the present invention mainly comprises: ensemble construction based on multi-kernel latent feature extraction, a selective-ensemble fuzzy inference model based on the branch-and-bound (BB) algorithm, latent feature extraction based on KPLS, ensemble construction based on Bootstrap, a selective-ensemble KRWNN model based on GA, and model output merging. The master model comprises the ensemble construction based on multi-kernel latent feature extraction and the BB-based selective-ensemble fuzzy inference model; the compensation model comprises the KPLS-based latent feature extraction, the Bootstrap-based ensemble construction and the GA-based selective-ensemble KRWNN model, as shown in Fig. 1.
In Fig. 1, x = [x_1, ..., x_p] and y denote the input and output of the industrial-process modeling object. Assuming that k samples are collected offline, the modeling data set can be expressed as {x_l, y_l}, l = 1, ..., k. The set of J candidate kernel parameters is denoted {(p_ker)_j}, j = 1, ..., J, with (p_ker)_j the j-th kernel parameter; {z_j} denotes the set of J latent features extracted with KPLS and used as training subsets for constructing the candidate fuzzy-inference sub-models, with z_j the j-th latent feature extracted with the j-th kernel parameter; z denotes the latent features extracted with kernel parameter p_ker; {z_j'}, j' = 1, ..., J', denotes the training subsets generated with Bootstrap and used to build the candidate KRWNN sub-models, with z_j' the j'-th training subset; the outputs of the master model and the compensation model are denoted separately, and ŷ denotes the output of the constructed intelligent ensemble soft-sensing model.
Fig. 1 shows that the ensembles in the master model and the compensation model differ. The ensemble in the master model is the ensemble construction based on multi-kernel latent features, realized as a selective-ensemble fuzzy inference model established with the BB algorithm and the adaptive weighted fusion (AWF) algorithm; it fuses multi-source features and can simulate main knowledge. The ensemble in the compensation model is the ensemble construction based on the Bootstrap algorithm, oriented to the latent features of the master model's prediction error, realized as a selective-ensemble KRWNN model established with the GA and AWF algorithms; it fuses multi-condition samples and can simulate supplementary knowledge. Clearly, the method can, to some extent, simulate the human brain's fuzzy inference and compensation cognition mechanisms for coping with uncertain factors.
Step 1: extract features with the kernel partial least squares (KPLS) algorithm using multiple candidate kernel parameters to perform ensemble construction, obtaining latent feature subsets oriented to the different kernel parameters. Specifically:
When latent features are extracted with KPLS, even if the same kernel function is selected (such as the common radial basis function), different kernel parameters yield different latent features. Clearly, multi-kernel latent feature extraction can be applied to the modeling data set with different kernel parameters, thereby realizing ensemble construction.
Take the j-th kernel parameter (p_ker)_j as an example. Based on (p_ker)_j, x is mapped to a high-dimensional space with the selected kernel function, and the resulting kernel matrix is denoted K_j. It is centered according to the following equation:
K̃_j = (I − (1/k) 1_k 1_k^T) K_j (I − (1/k) 1_k 1_k^T)
where I is the k-dimensional identity matrix and 1_k is a length-k vector whose entries are all 1.
A number of kernel latent variables (KLVs) are extracted by the KPLS algorithm shown in Table 1.
Table 1: The KPLS algorithm
Through the KPLS algorithm of Table 1, the low-dimensional score matrices T_j = [t_1, t_2, ..., t_h] and U_j = [u_1, u_2, ..., u_h] are obtained. The dimension of the original input matrix X is reduced to h, and the extracted features can be denoted z_j.
The process of ensemble construction based on multi-kernel latent feature extraction maps the modeling data set, through KPLS with each kernel parameter in the candidate set {(p_ker)_j}, to the J training subsets {z_j}, where J denotes the number of candidate kernel parameters, which equals both the number of latent-feature training subsets extracted through KPLS and the number of candidate fuzzy-inference sub-models.
With this ensemble construction approach, the input feature dimension and the sample size of each training subset are unchanged; only the latent features differ. Therefore, the proposed way of "manipulating the kernel parameter" to obtain latent variables for ensemble construction can be regarded as a particular form of "manipulating the input features". Since each newly generated training subset has different input features but the same output, each training subset can be regarded as information from a new source. Building SEN models with information from these different sources is analogous to a domain expert selecting valuable source information to identify or estimate a process parameter.
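The multi-kernel ensemble construction of Step 1 can be sketched as follows: one latent-feature subset is extracted per candidate kernel parameter, all over the same samples. This is a minimal illustration only; the RBF kernel choice, the simplified NIPALS-style score extraction and all names are assumptions, not the patent's implementation.

```python
import numpy as np

def rbf_kernel(X, gamma):
    # pairwise squared distances mapped through the RBF kernel
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2 * X @ X.T
    return np.exp(-gamma * d2)

def center_kernel(K):
    # K~ = (I - 1/k) K (I - 1/k), the standard kernel centering
    k = K.shape[0]
    C = np.eye(k) - np.ones((k, k)) / k
    return C @ K @ C

def kpls_scores(K, y, n_comp):
    # simplified NIPALS-style KPLS for a single output: extract unit-norm
    # score vectors t, deflating the kernel and the output at each step
    Kd = K.copy()
    Y = (y - y.mean()).reshape(-1, 1).astype(float)   # center the output
    T = []
    for _ in range(n_comp):
        t = Kd @ Y[:, 0]
        t = t / np.linalg.norm(t)
        T.append(t)
        P = np.eye(len(t)) - np.outer(t, t)           # deflation projector
        Kd = P @ Kd @ P
        Y = Y - np.outer(t, t @ Y)
    return np.column_stack(T)

rng = np.random.default_rng(0)
X = rng.normal(size=(30, 5))
y = X[:, 0] + 0.1 * rng.normal(size=30)
# one latent-feature subset z_j per candidate kernel parameter
subsets = {g: kpls_scores(center_kernel(rbf_kernel(X, g)), y, 3)
           for g in (0.01, 0.05, 0.2)}
```

Each subset keeps the original sample size but carries different latent features, which is exactly why each one can be treated as information from a new source.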
Step 2: build candidate fuzzy-inference sub-models using these latent feature subsets as training subsets, and build a selective-ensemble fuzzy-inference master model with an optimization algorithm and the adaptive weighted fusion (AWF) algorithm. Specifically:
A fuzzy-inference-based candidate sub-model is built for each training subset generated above. In the building process of the j-th candidate fuzzy-inference sub-model, L denotes the clustering threshold set when constructing the sub-model.
The whole set of J candidate sub-models is thereby obtained.
The sub-models selected for integration are called the integrated sub-models and form a subset of the candidate sub-models, where j_sel = 1, 2, ..., J_sel and J_sel denotes the ensemble size of the selective-ensemble fuzzy inference model, i.e., the number of selected integrated sub-models.
The weighting coefficients of the integrated sub-models are calculated with the AWF algorithm as follows:
w_jsel = (1/σ_jsel^2) / Σ_{jsel=1..Jsel} (1/σ_jsel^2)
where Σ w_jsel = 1 and 0 ≤ w_jsel ≤ 1; w_jsel is the weighting coefficient corresponding to the j_sel-th integrated fuzzy-inference sub-model, and σ_jsel is the standard deviation of the output values of that sub-model.
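The AWF rule above weights each integrated sub-model inversely to its error variance. A minimal sketch (the function name and example standard deviations are assumptions):

```python
import numpy as np

def awf_weights(sigmas):
    # adaptive weighted fusion: weight proportional to 1/sigma^2, normalized
    inv_var = 1.0 / np.asarray(sigmas, dtype=float) ** 2
    return inv_var / inv_var.sum()

# three integrated sub-models with standard deviations 0.5, 1.0, 2.0
w = awf_weights([0.5, 1.0, 2.0])
```

A sub-model with half the standard deviation of another receives four times its weight, and the weights sum to one, satisfying the stated constraints.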
The root mean square relative error (RMSRE) of the selective-ensemble model can be expressed as:
RMSRE = sqrt( (1/k) Σ_{l=1..k} ((y_l − ŷ_l) / y_l)^2 )
where k is the number of samples, y_l is the true value of the l-th sample, and ŷ_l is the prediction of the selective-ensemble model for the l-th sample, obtained by weighting the predictions of the integrated fuzzy inference models for that sample.
Establishing the selective-ensemble fuzzy inference model requires determining the number of integrated fuzzy-inference sub-models, selecting the integrated sub-models and determining their weighting coefficients w_jsel, which can be expressed as an optimization problem minimizing the RMSRE.
Taking maximization as the optimization objective, the problem can be converted into maximizing the reciprocal of the modeling error subject to a given threshold θ_th.
Solving this optimization problem directly requires simultaneously determining the number of integrated fuzzy-inference sub-models and selecting the sub-models and their weighting coefficients. However, the number of sub-models to integrate is not known in advance, the weighting coefficients are obtained by the weighting algorithm only after the integrated sub-models have been selected, and the optimal number of sub-models is also unknown. This complicated optimization problem is therefore decomposed into several sub-problems: (1) first fix the number of integrated fuzzy-inference sub-models; (2) then select the integrated sub-models and calculate their weighting coefficients; (3) after obtaining the optimal selective-ensemble fuzzy inference models for the different numbers of sub-models, sort them and select the one with the minimum modeling error as the final soft-sensing model.
With the weighting coefficients determined by the AWF algorithm, selecting the optimal integrated fuzzy-inference sub-models is similar to optimal feature selection. When the preferred number of features is known, only exhaustive enumeration and the BB algorithm can realize optimal feature selection. As a combinatorial optimization tool, the BB algorithm obtains the optimal subset with high computational efficiency through branching and bounding. Therefore, combining BB-based search with AWF-based weighting realizes selective-ensemble modeling that simultaneously selects the optimal integrated fuzzy inference models and calculates their weighting coefficients. The optimal sub-models are selected by running the BB and AWF algorithms repeatedly: first determine the optimal selective-ensemble fuzzy inference models for ensemble sizes 2, 3, ..., (J − 1), then sort these models and select the final fuzzy-inference master model according to modeling accuracy.
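The decomposition above can be sketched with an exhaustive subset search standing in for the BB algorithm (which prunes the same search space more efficiently): for each ensemble size from 2 to J − 1, the AWF-weighted subset with the smallest RMSRE is kept. The function names and toy data are assumptions:

```python
import numpy as np
from itertools import combinations

def rmsre(y, y_hat):
    # root mean square relative error
    return np.sqrt(np.mean(((y - y_hat) / y) ** 2))

def select_ensemble(preds, sigmas, y):
    # preds: (J, k) candidate sub-model predictions; brute force stands in for BB
    J = preds.shape[0]
    best = None
    for size in range(2, J):                     # ensemble sizes 2 .. J-1
        for idx in combinations(range(J), size):
            inv = 1.0 / np.asarray([sigmas[i] for i in idx]) ** 2
            w = inv / inv.sum()                  # AWF weights for this subset
            err = rmsre(y, w @ preds[list(idx)])
            if best is None or err < best[0]:
                best = (err, idx, w)
    return best

y = np.array([1.0, 2.0, 3.0, 4.0])
preds = np.vstack([y + np.array([0.01, -0.01, 0.01, -0.01]),
                   y + np.array([0.02, -0.02, 0.02, -0.02]),
                   y + np.array([1.0, -1.0, 1.0, -1.0])])
sigmas = np.std(preds - y, axis=1)               # per-sub-model error spread
best_err, best_idx, best_w = select_ensemble(preds, sigmas, y)
```

With these toy errors the search pairs the best sub-model with a heavily down-weighted partner rather than with its near-duplicate, illustrating how AWF weighting and subset choice interact.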
In summary, the BB- and AWF-based selective-ensemble fuzzy inference modeling algorithm is shown in Table 2.
Table 2: The BB- and AWF-based selective-ensemble fuzzy inference modeling algorithm
The output value of the final fuzzy-inference master model is calculated by:
ŷ_fuzzy = Σ_{jsel=1..Jsel} w_jsel ŷ_jsel
where ŷ_jsel denotes the output of the j_sel-th integrated fuzzy-inference sub-model.
Step 3: compute the master model's prediction error, select a kernel parameter, and use KPLS to extract from the input data the latent feature set correlated with the master model's prediction error. Specifically:
First, the prediction error of the master model is calculated as:
ê = y − ŷ_fuzzy
Then, with kernel parameter p_ker, latent feature extraction is performed with the KPLS algorithm of Table 1, substituting x and ê for the x and y of Table 1; the extracted latent features are denoted z, where the kernel matrix mapped with p_ker and centered, together with the corresponding low-dimensional score matrices T and U, is obtained by the algorithm of Table 1.
The above process thus maps the input data and the master model's prediction error to the latent feature set z.
Step 4: perform ensemble construction on these latent feature sets with the Bootstrap algorithm, obtaining training subsets based on sampling of the training samples. Specifically:
For these latent features, ensemble construction is carried out by "manipulating the training samples", with the purpose of selecting representative training samples to build the final compensation model.
Ensemble construction with the Bootstrap algorithm repeatedly resamples the latent-feature training data, where J' denotes the number of training subsets generated with Bootstrap, which equals both the number of candidate KRWNN sub-models and the GA population size.
With this ensemble construction approach, the input feature dimension and the sample size of each training subset are unchanged, but input-output sample pairs with different sequence numbers are generated; and because the sampling is with replacement, repeated input-output sample pairs exist within a training subset. Valuable samples can therefore be reused.
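The Bootstrap ensemble construction of Step 4 amounts to sampling with replacement: each subset keeps the dimensions of the original data but may repeat rows. The function name and toy data below are assumptions:

```python
import numpy as np

def bootstrap_subsets(Z, y, n_subsets, seed=0):
    # sampling with replacement: same shape as the originals, rows may repeat
    rng = np.random.default_rng(seed)
    k = Z.shape[0]
    subsets = []
    for _ in range(n_subsets):
        idx = rng.integers(0, k, size=k)         # indices drawn with replacement
        subsets.append((Z[idx], y[idx]))
    return subsets

rng = np.random.default_rng(42)
Z = rng.normal(size=(40, 3))                     # latent features
y = rng.normal(size=40)                          # master-model prediction error
subsets = bootstrap_subsets(Z, y, n_subsets=5)
```

Because the draw is with replacement, a subset almost surely contains duplicated samples while omitting others, which is what makes the resulting sub-models diverse.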
Step 5: construct candidate submodels based on the kernel random weight neural network (KRWNN) from these training subsets, and build the selective ensemble KRWNN compensation model using the genetic algorithm optimization toolbox (GAOT) and AWF. Specifically:
A KRWNN-based candidate submodel is built for each training subset generated above; the construction of the \(j'\)-th candidate KRWNN submodel is as follows:
where \(K_{KRWNN}\) and \(C_{KRWNN}\) denote the kernel parameter and the penalty parameter of the KRWNN model.
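The patent does not give the KRWNN equations in this passage; as a hedged stand-in, a candidate submodel with a kernel width and a penalty parameter can be sketched as kernel ridge regression (the class name `KernelSubmodel` and this particular formulation are assumptions, not the patent's KRWNN):

```python
import numpy as np

def rbf(A, B, width):
    # RBF kernel between the rows of A and B.
    d2 = (np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :]
          - 2.0 * A @ B.T)
    return np.exp(-d2 / (2.0 * width**2))

class KernelSubmodel:
    """Hedged stand-in for one candidate submodel: kernel ridge
    regression with kernel width k_krwnn and penalty c_krwnn."""
    def __init__(self, k_krwnn=1.0, c_krwnn=100.0):
        self.k_krwnn, self.c_krwnn = k_krwnn, c_krwnn

    def fit(self, X, y):
        self.X = X
        K = rbf(X, X, self.k_krwnn)
        # Larger c_krwnn means weaker regularization.
        self.beta = np.linalg.solve(K + np.eye(len(X)) / self.c_krwnn, y)
        return self

    def predict(self, X_new):
        return rbf(X_new, self.X, self.k_krwnn) @ self.beta

rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(30, 1))
y = np.sin(3 * X[:, 0])
model = KernelSubmodel(k_krwnn=0.5, c_krwnn=1e4).fit(X, y)
```

One such model would be fitted per Bootstrap subset, giving the \(J'\) candidate submodels.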
In this way, the set of all \(J'\) candidate KRWNN submodels can be expressed as \(S_{KRWNN}^{Can}=\{f_{KRWNN}^{can}(\cdot)_{j'}\}_{j'=1}^{J'}\), where \(S_{KRWNN}^{Can}\) denotes the set of all candidate submodels.
Building an effective SEN model requires selecting, from the candidate KRWNN submodels, integrated KRWNN submodels with different diversity and prediction accuracy and merging them. The selected integrated KRWNN submodels are denoted \(\{f_{KRWNN}^{sel}(\cdot)_{j'_{sel}}\}_{j'_{sel}=1}^{J'_{sel}}\); the relation between the integrated and candidate KRWNN submodels can therefore be expressed as \(S_{KRWNN}^{Sel}=\{f_{KRWNN}^{sel}(\cdot)_{j'_{sel}}\}_{j'_{sel}=1}^{J'_{sel}}\in S_{KRWNN}^{Can},\ J'_{sel}\le J'\), where \(S_{KRWNN}^{Sel}\) denotes the set of integrated submodels and \(J'_{sel}\) the ensemble size of the SEN model.
In theory, building an effective SEN model requires a validation data set. Here, the validation data set of the latent features extracted for the main-model prediction error is denoted correspondingly; the prediction outputs of the candidate KRWNN submodels on the validation data set, and their prediction errors, are then computed.
The correlation coefficient between the prediction errors of the \(j'\)-th and the \(s'\)-th candidate KRWNN submodels is computed, and the coefficients are collected into a correlation matrix.
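The correlation-matrix construction can be sketched with `numpy.corrcoef` on the submodels' validation-error vectors (toy values; the exact correlation formula of the patent is not reproduced here):

```python
import numpy as np

# Validation-set prediction errors of J' = 4 candidate submodels,
# one row per submodel (toy random values for illustration).
rng = np.random.default_rng(2)
errors = rng.normal(size=(4, 50))

# Pairwise correlation coefficients between the submodels' errors.
R = np.corrcoef(errors)
```

A low off-diagonal correlation indicates diverse submodels, which is the property the GA-based selection exploits.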
Then, a random weight vector is generated for each candidate submodel; the GAOT toolbox evolves these weight vectors based on the correlation matrix to obtain optimized weight vectors. The submodels whose weights exceed the threshold \(1/J'\) are selected as the integrated KRWNN submodels, whose outputs can be expressed as:
The weights of these integrated KRWNN submodels are calculated with the adaptive weighted fusion (AWF) algorithm:
where \(\sigma_{j'_{sel}}\) is the standard deviation of the prediction output of the \(j'_{sel}\)-th integrated KRWNN submodel.
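The threshold selection and the AWF weighting can be sketched together as follows; the GA evolution itself is replaced here by a given toy weight vector, and the helper names `select_by_threshold` and `awf_weights` are assumptions:

```python
import numpy as np

def select_by_threshold(weights):
    """Keep the submodels whose (GA-evolved) weight exceeds 1/J'."""
    thr = 1.0 / len(weights)
    return [j for j, w in enumerate(weights) if w > thr]

def awf_weights(sigmas):
    """Adaptive weighted fusion: w_j = (1/sigma_j^2) / sum_i(1/sigma_i^2),
    so submodels with smaller output standard deviation weigh more."""
    inv = 1.0 / np.asarray(sigmas, dtype=float) ** 2
    return inv / inv.sum()

evolved = np.array([0.40, 0.05, 0.30, 0.25])  # toy optimized weights, J' = 4
chosen = select_by_threshold(evolved)          # submodels kept in the ensemble
w = awf_weights([0.1, 0.2, 0.4])               # stds of 3 selected submodels
```

The AWF weights sum to one, so the fused output is a convex combination of the selected submodels' outputs.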
The selective ensemble KRWNN model based on KPLS and GA serves as the compensation model, whose output is expressed as:
Step 6: merge the outputs of the selective ensemble fuzzy inference main model and the selective ensemble KRWNN compensation model to obtain the prediction result of the intelligent integrated soft-sensing model. Specifically:
The outputs of the main model and the compensation model are added to obtain the output of the intelligent integrated soft-sensing model:
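The main-plus-compensation combination can be illustrated with toy numbers (the values below are assumptions for illustration): the compensation model is trained on the main model's error, so adding its output shrinks the residual.

```python
import numpy as np

# Toy ground truth and a main-model prediction with systematic error.
y_true = np.array([1.0, 2.0, 3.0, 4.0])
y_fuzzy = np.array([1.2, 1.9, 3.3, 3.8])   # main (fuzzy inference) output

# The compensation model is trained on y - y_fuzzy; suppose it recovers
# 90% of that error on these samples.
y_comp = 0.9 * (y_true - y_fuzzy)

# Intelligent integrated soft-sensor output: main + compensation.
y_hat = y_fuzzy + y_comp
```

Because the compensation target is the main model's residual, the combined error is only the part of the residual the compensation model fails to capture.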
Simulation verification
The test function used to generate the simulation data is:
where \(t\in[-1,1]\) and the remaining term denotes noise, with \(i_{sy}=1,2,\dots,6\). The data are distributed over four different regions C1, C2, C3 and C4; see Table 3.
Table 3. Different regions of the simulation data
In this simulation experiment, the numbers of modeling and test samples are 240 and 120 respectively; the training set consists of 60 samples from each region, and the test set of 30 samples from each region. Kernel latent feature extraction is performed on the modeling data. The common RBF kernel function is selected; the latent-variable contribution rates under different kernel radii and different numbers of kernel latent variables (KLV) are shown in Table 4.
Table 4(a). Latent-variable contribution rates (kernel radius 0.1)
Table 4(b). Latent-variable contribution rates (kernel radius 1)
Table 4(c). Latent-variable contribution rates (kernel radius 10)
Table 4(d). Latent-variable contribution rates (kernel radius 100)
Table 4 shows that the kernel radius strongly affects the latent-variable contribution rates of the input-output data. Here the kernel radius range is taken as {0.01, 0.03, 0.05, 0.07, 0.09, 0.1, 0.3, 0.5, 0.7, 0.9, 1, 3, 5, 7, 9, 10, 30, 50, 70, 100, 300, 500, 700, 900, 1000}, 26 groups in total. Thus 26 training subsets are generated in all, from which 26 candidate fuzzy inference submodels can be constructed.
For KLV = 2, 3, 4 and 5, the relation between the clustering threshold and the main-model prediction performance is shown in Fig. 2.
As Fig. 2 shows, for KLV = 2, 4 and 5 there is an optimal point in the prediction error of the training and test data (around 0.01); clearly this value is data-dependent. For KLV = 3, the prediction error does not vary obviously with the clustering threshold, and the generalization performance on the training data is weaker than on the test data; the reason remains to be investigated further. Theoretically, the smaller the clustering threshold, the larger the number of clusters; a smaller clustering threshold therefore increases the number of rules and hence the model's complexity. Clearly, a balanced choice must be made according to the modeling object.
In this example the clustering threshold is set to 0.01 and KLV = 4. With the candidate kernel parameters selected above, 26 candidate submodels need to be generated in total; excluding the fully integrated model containing all submodels, 24 SEN models are produced. All candidate submodels and the prediction performance of the first 9 SEN fuzzy inference models are shown in Table 5 and Table 6.
Table 5. Statistics of the candidate fuzzy inference submodels
Table 6. Statistics of the SEN fuzzy inference models
As shown in Table 5, the fuzzy inference submodels ranked by test error are: 5, 6, 4, 19, 20, 18, 21, 3, 22, 7, 23, 24, 25, 8, 2, 9, 11; as shown in Table 6, the integrated fuzzy inference submodels at ensemble size 2 are 7 and 3, the selection starting from training accuracy. The SEN fuzzy inference can therefore select the submodels that fuse best. The kernel parameter of the 3rd submodel is 0.05, and the contribution rates of its extracted latent variables are shown in Table 7.
Table 7. Contribution rates of the input latent variables of the 3rd submodel of the synthetic data (kernel radius 0.03)
Figs. 3-5 show, for the 3rd submodel, the Gaussian membership function curves, the clustering groups of the latent variables and the output variable, and the membership functions.
As Fig. 3 shows, different latent variables have different membership function curves according to their data characteristics; from Fig. 4, the original data are divided into 58 groups; Fig. 5 gives the membership functions of each group for the 120 training samples.
The training-data output error of the fuzzy inference main model is calculated, as shown in Fig. 6.
The output error shown in Fig. 6 is taken as the true output value for training the compensation model. Here the number of KLVs for the compensation model's latent feature extraction is set to 5, and the number of candidate submodels of the compensation model is 40. The relation between the main modeling parameters of the compensation model (the kernel radius of the latent feature extraction, the penalty parameter of the KRWNN, and the kernel radius of the KRWNN) and the prediction performance of the intelligent integrated soft-sensing model is computed, as shown in Fig. 7.
According to Fig. 7, the selected model parameters are: kernel radius of the latent feature extraction 0.01, KRWNN penalty parameter 4000, and KRWNN kernel radius 0.009. The prediction curves and prediction errors of the final main model and of the compensated intelligent integrated soft-sensing model are shown in Fig. 8 and Fig. 9:
As Figs. 8 and 9 show, the prediction performance of the intelligent integrated soft-sensing model is clearly better than that of the fuzzy main model, especially on the training data. Because the compensation model uses Bootstrap-based ensemble construction and selects its integrated submodels with the GAOT toolbox, some random factors are introduced; the intelligent integrated soft-sensing model is therefore run 20 times here and its statistics are compared with those of the fuzzy main model, with results shown in Table 8.
Table 8. Statistics of the main model and the intelligent integrated soft-sensing model
As Table 8 shows, comparing the training errors of the main model and the intelligent integrated model, the worst error of the intelligent integrated model over 20 runs is reduced by 40%, the difference between the maximum and minimum values is 0.0032, and the variance is only 0.0007522. Comparing the test errors, the 20-run average prediction accuracy improves by 7.3%, exceeding the roughly 6% improvement in training accuracy; this indicates that the model's generalization performance does not exhibit training over-fitting, while also suggesting that the model parameters need further optimal selection. The main reasons are that the model's many learning parameters must all be selected and are strongly coupled, and that using only the difference between the true values and the main-model predictions as the teacher signal when training the compensation model can cause over-fitting of the training process.
The present invention proposes an intelligent integrated soft-sensing method based on multi-kernel latent feature extraction. Its main innovations are: both the main model and the submodels of the proposed method are selective ensemble models, built with different ensemble construction strategies; the main model uses an ensemble construction strategy based on multiple latent-variable feature subsets, while the submodel uses one based on manipulating the training samples. The soft-sensing method fuses multi-source features and multi-condition samples step by step from a main/subordinate perspective, mirroring the cognitive process by which a human expert reasons from primary knowledge and then gradually compensates and refines that reasoning in practice. The validity of the proposed method is verified with synthetic data.

Claims (7)

1. An intelligent integrated soft-sensing method based on multi-kernel latent feature extraction, characterized by comprising:
Step 1: extracting features with the kernel partial least squares (KPLS) algorithm for multiple candidate kernel parameters to perform ensemble construction, obtaining latent feature subsets for the different kernel parameters;
Step 2: building candidate fuzzy inference submodels with these latent feature subsets as training subsets, and constructing the selective ensemble fuzzy inference main model using an optimization algorithm and the adaptive weighted fusion (AWF) algorithm;
Step 3: calculating the prediction error of the main model, selecting a kernel parameter, and using KPLS to extract from the input data the latent feature set correlated with the main model's prediction error;
Step 4: performing ensemble construction on these latent feature sets with the Bootstrap algorithm, obtaining training subsets based on training-sample resampling;
Step 5: constructing candidate submodels based on the kernel random weight neural network (KRWNN) from these training subsets, and building the selective ensemble KRWNN compensation model using the genetic algorithm optimization toolbox (GAOT) and AWF;
Step 6: merging the outputs of the selective ensemble fuzzy inference main model and the selective ensemble KRWNN compensation model to obtain the prediction result of the intelligent integrated soft-sensing model.
2. The intelligent integrated soft-sensing method based on multi-kernel latent feature extraction of claim 1, characterized in that: let \(X=[x_1,\dots,x_p]\) and \(y\) denote the input and output of the industrial process modeling object; assuming \(k\) samples are collected offline, the modeling data set can be expressed as \(\{x_l,y_l\}_{l=1}^{k}\); \(\{(p_{ker})_j\}_{j=1}^{J}\) denotes the set of \(J\) candidate kernel parameters, with \((p_{ker})_j\) the \(j\)-th kernel parameter;
Step 1 is specifically:
When performing latent feature extraction with KPLS, multi-kernel latent feature extraction is applied to the modeling data set \(\{x_l,y_l\}_{l=1}^{k}\) with different kernel parameters, thereby realizing ensemble construction;
Taking the \(j\)-th kernel parameter \((p_{ker})_j\) as an example: based on \((p_{ker})_j\), \(X\) is mapped to a high-dimensional space with the selected kernel function; the resulting kernel matrix, denoted \(K^j\), is centered according to the following formula:
\[ \tilde{K}^{j}=\Big(I-\frac{1}{k}\mathbf{1}_k\mathbf{1}_k^{T}\Big)K^{j}\Big(I-\frac{1}{k}\mathbf{1}_k\mathbf{1}_k^{T}\Big) \quad (1) \]
where \(I\) is the \(k\)-dimensional identity matrix and \(\mathbf{1}_k\) is the length-\(k\) vector whose entries are all 1;
Through the KPLS algorithm, the low-dimensional score matrices \(T^j=[t_1,t_2,\dots,t_h]\) and \(U^j=[u_1,u_2,\dots,u_h]\) are obtained respectively, reducing the dimension of the original input matrix \(X\) to \(h\); the extracted features can be written as:
\[ Z^{j}=\tilde{K}^{j}U^{j}\big((T^{j})^{T}\tilde{K}^{j}U^{j}\big)^{-1}=\{(z^{j})_l\}_{l=1}^{k} \quad (2) \]
The process of ensemble construction based on multi-kernel latent feature extraction can be expressed as follows:
where \(\{(p_{ker})_j\}_{j=1}^{J}\) denotes the candidate kernel parameter set, and \(J\) denotes the number of candidate kernel parameters used, which equals the number of KPLS-extracted latent-feature training subsets and of candidate fuzzy inference submodels.
3. The intelligent integrated soft-sensing method based on multi-kernel latent feature extraction of claim 1, characterized in that step 2 is specifically:
A fuzzy-inference-based candidate submodel is built for each training subset generated above; the construction of the \(j\)-th candidate fuzzy inference submodel is as follows:
where \(L\) denotes the clustering threshold set when building the fuzzy inference submodel;
The set of all \(J\) candidate submodels can be expressed as:
\[ S_{Fuzzy}^{Can}=\{f_{Fuzzy}^{can}(\cdot)_j\}_{j=1}^{J} \quad (5) \]
where \(S_{Fuzzy}^{Can}\) denotes the set of all candidate submodels;
The selected integrated submodels are expressed as \(\{f_{Fuzzy}^{sel}(\cdot)_{j_{sel}}\}\); the relation between the integrated submodels and the candidate submodels can be expressed as:
\[ S_{Fuzzy}^{Sel}=\{f_{Fuzzy}^{sel}(\cdot)_{j_{sel}}\}_{j_{sel}=1}^{J_{sel}}\in S_{Fuzzy}^{Can},\quad J_{sel}\le J \quad (6) \]
where \(S_{Fuzzy}^{Sel}\) denotes the set of integrated submodels; \(j_{sel}=1,2,\dots,J_{sel}\), with \(J_{sel}\) the ensemble size of the selective ensemble fuzzy inference model, i.e. the number of selected integrated submodels;
The weighting coefficients of the integrated submodels are calculated with the AWF algorithm as follows:
\[ w_{j_{sel}}=1\Big/\Big((\sigma_{j_{sel}})^{2}\sum_{j_{sel}=1}^{J_{sel}}\frac{1}{(\sigma_{j_{sel}})^{2}}\Big) \quad (7) \]
where \(w_{j_{sel}}\) is the weighting coefficient corresponding to the \(j_{sel}\)-th integrated fuzzy inference submodel, and \(\sigma_{j_{sel}}\) is the standard deviation of the integrated fuzzy inference submodel's output \(\hat{y}_{j_{sel}}\);
The root-mean-square relative error (RMSRE) of the selective ensemble model can be expressed as:
\[ E_{rmsre}=\sqrt{\frac{1}{k}\sum_{l=1}^{k}\Big(\frac{y^{l}-\hat{y}^{l}}{y^{l}}\Big)^{2}}=\sqrt{\frac{1}{k}\sum_{l=1}^{k}\Big(\frac{y^{l}-\sum_{j_{sel}=1}^{J_{sel}}w_{j_{sel}}\hat{y}_{j_{sel}}^{l}}{y^{l}}\Big)^{2}} \quad (8) \]
where \(k\) is the number of samples, \(y^{l}\) is the true value of the \(l\)-th sample, \(\hat{y}^{l}\) is the selective ensemble model's prediction for the \(l\)-th sample, and \(\hat{y}_{j_{sel}}^{l}\) is the prediction of the \(j_{sel}\)-th integrated fuzzy inference model for the \(l\)-th sample;
Building the selective ensemble fuzzy inference model requires determining the number of integrated fuzzy inference submodels, selecting the integrated submodels, and determining their weighting coefficients \(w_{j_{sel}}\); this can be expressed as the following optimization problem:
\[ \begin{array}{ll} \min & E_{rmsre}=\sqrt{\dfrac{1}{k}\displaystyle\sum_{l=1}^{k}\Big(\dfrac{y^{l}-\sum_{j_{sel}=1}^{J_{sel}}w_{j_{sel}}\hat{y}_{j_{sel}}^{l}}{y^{l}}\Big)^{2}} \\[2ex] \mathrm{s.t.} & \displaystyle\sum_{j_{sel}=1}^{J_{sel}}w_{j_{sel}}=1,\quad 0\le w_{j_{sel}}\le 1,\quad 1<j_{sel}<J_{sel},\quad 1<J_{sel}\le J \end{array} \quad (9) \]
Taking maximization as the optimization objective, the above problem is converted into:
\[ \begin{array}{ll} \max & E_{rmsre}=\theta_{th}-\sqrt{\dfrac{1}{k}\displaystyle\sum_{l=1}^{k}\Big(\dfrac{y^{l}-\sum_{j_{sel}=1}^{J_{sel}}w_{j_{sel}}\hat{y}_{j_{sel}}^{l}}{y^{l}}\Big)^{2}} \\[2ex] \mathrm{s.t.} & \displaystyle\sum_{j_{sel}=1}^{J_{sel}}w_{j_{sel}}=1,\quad 0\le w_{j_{sel}}\le 1,\quad 1<j_{sel}<J_{sel},\quad 1<J_{sel}\le J \end{array} \quad (10) \]
where \(\theta_{th}\) is a given threshold;
The output value of the final fuzzy inference main model \(\hat{y}_{Fuzzy}\) is calculated by the following formula:
\[ \hat{y}_{Fuzzy}=\sum_{j_{sel}=1}^{J_{sel}}w_{j_{sel}}f_{Fuzzy}^{sel}(\cdot)_{j_{sel}}=\sum_{j_{sel}=1}^{J_{sel}}w_{j_{sel}}\hat{y}_{j_{sel}} \quad (11) \]
where \(\hat{y}_{j_{sel}}\) denotes the output of the \(j_{sel}\)-th integrated fuzzy inference submodel.
4. The intelligent integrated soft-sensing method based on multi-kernel latent feature extraction of claim 3, characterized in that step 3 is specifically:
First, the prediction error of the main model is calculated as follows:
\[ y^{\prime}=y-\hat{y}_{Fuzzy}=y-\sum_{j_{sel}=1}^{J_{sel}}w_{j_{sel}}\hat{y}_{j_{sel}} \quad (12) \]
Then, using kernel parameter \(p_{ker}\), latent feature extraction is performed on the input-output data \(\{x_l,y'_l\}_{l=1}^{k}\) with the KPLS algorithm, i.e. \(\{x_l\}_{l=1}^{k}\) and \(\{y'_l\}_{l=1}^{k}\) are substituted for \(X\) and \(y\) in Table 1; the extracted latent features are:
\[ Z=\tilde{K}U\big(T^{T}\tilde{K}U\big)^{-1}=\{z_l\}_{l=1}^{k} \quad (13) \]
where \(\tilde{K}\) denotes the kernel matrix mapped and centered based on kernel parameter \(p_{ker}\);
The above process can be expressed as the mapping from \(\{x_l,y'_l\}_{l=1}^{k}\) to \(Z\).
5. The intelligent integrated soft-sensing method based on multi-kernel latent feature extraction of claim 4, characterized in that step 4 is specifically:
For these latent features, ensemble construction is carried out by "manipulating the training samples", the purpose being to select representative training samples with which to build the final compensation model;
Ensemble construction with the Bootstrap algorithm proceeds as follows:
where \(J'\) denotes the number of training subsets generated by Bootstrap, which is also the number of candidate KRWNN submodels and the GA population size.
6. The intelligent integrated soft measurement method based on multi-kernel latent feature extraction according to claim 1, characterized in that Step 5 is specifically:
A candidate sub-model based on KRWNN is built for each of the training subsets generated above; the construction process of the j'-th candidate KRWNN sub-model is as follows:
where K_KRWNN and C_KRWNN denote the kernel parameter and the penalty parameter of the KRWNN model.
In this way, the set of all J' candidate KRWNN sub-models can be expressed as:
S_{KRWNN}^{Can} = \{ f_{KRWNN}^{can}(\cdot)_{j'} \}_{j'=1}^{J'}    (17)
where S_{KRWNN}^{Can} denotes the set of all candidate sub-models.
Building an effective SEN model requires selecting, from the candidate KRWNN sub-models, ensemble KRWNN sub-models with differing diversity and prediction accuracy and merging them; the selected ensemble KRWNN sub-models are denoted as S_{KRWNN}^{Sel}. The relation between the ensemble KRWNN sub-models and the candidate KRWNN sub-models can therefore be expressed as:
S_{KRWNN}^{Sel} = \{ f_{KRWNN}^{sel}(\cdot)_{j'_{sel}} \}_{j'_{sel}=1}^{J'_{sel}} \in S_{KRWNN}^{Can}, \quad J'_{sel} \le J'    (18)
where S_{KRWNN}^{Sel} denotes the set of ensemble sub-models, and J'_{sel} denotes the ensemble size of the SEN model.
To build an effective SEN model, a validation data set is used; the validation data set of latent features extracted with respect to the main model's prediction error is denoted as \{z^{valid}, y'^{valid}\}. The prediction outputs of the candidate KRWNN sub-models on the validation data set are expressed as:
\{ (\hat{y}_{KRWNN}^{valid})_{j'} \}_{j'=1}^{J'} = \{ f_{KRWNN}^{can}(z^{valid})_{j'} \}_{j'=1}^{J'}    (19)
The prediction error is calculated as follows:
(e_{KRWNN}^{valid})_{j'} = (\hat{y}_{KRWNN}^{\prime valid})_{j'} - y^{\prime valid}    (20)
where y'^{valid} is the prediction error of the main model on the validation data set.
The correlation coefficient between the j'-th and the s'-th candidate KRWNN sub-models is obtained using the following formula:
c_{j's'}^{valid} = \frac{1}{k^{valid}} \sum_{l=1}^{k^{valid}} e_{KRWNN}^{valid}(j', l) \cdot e_{KRWNN}^{valid}(s', l)    (21)
The resulting correlation matrix is represented as:
C_{J'}^{valid} = \begin{bmatrix} c_{11}^{valid} & c_{12}^{valid} & \cdots & c_{1J'}^{valid} \\ c_{21}^{valid} & c_{22}^{valid} & \cdots & c_{2J'}^{valid} \\ \cdots & \cdots & c_{j's'}^{valid} & \cdots \\ c_{J'1}^{valid} & c_{J'2}^{valid} & \cdots & c_{J'J'}^{valid} \end{bmatrix}_{J' \times J'}    (22)
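Collecting the validation errors of all candidate sub-models row-wise, Eqs. (21)-(22) amount to a single matrix product; an illustrative sketch:

```python
import numpy as np

def error_correlation_matrix(E):
    """Eqs. (21)-(22): E is a (J', k_valid) array of validation errors,
    one row per candidate sub-model; entry (j', s') is the mean product
    of the two models' errors over the validation samples."""
    k_valid = E.shape[1]
    return (E @ E.T) / k_valid
```

The diagonal entries are each sub-model's mean squared validation error, and off-diagonal entries measure how strongly two sub-models err together.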
Then, a random weight vector is generated for each candidate sub-model; the GAOT toolbox is then used to evolve these weight vectors on the basis of the correlation matrix C_{J'}^{valid}, yielding optimized weight vectors. The sub-models whose optimized weights exceed the threshold 1/J' are selected as ensemble KRWNN sub-models, and the outputs of these ensemble KRWNN sub-models can be expressed as:
\{ \hat{y}'_{j'_{sel}} \}_{j'_{sel}=1}^{J'_{sel}} = \{ f_{KRWNN}^{sel}(z^{valid})_{j'_{sel}} \}_{j'_{sel}=1}^{J'_{sel}}    (23)
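The GAOT toolbox itself is a MATLAB package; as a hedged stand-in, the evolution-and-threshold step can be sketched with a minimal generic genetic algorithm that minimizes the weighted error correlation w^T C w and then applies the 1/J' threshold. The population scheme, crossover by averaging, and mutation scale are all assumptions for illustration:

```python
import numpy as np

def ga_select_submodels(C, pop=40, gens=60, seed=0):
    """Evolve normalized weight vectors to minimize w^T C w, then keep
    the sub-models whose optimized weight exceeds 1/J' (sketch of the
    GAOT-based selection step)."""
    rng = np.random.default_rng(seed)
    J = C.shape[0]
    W = rng.random((pop, J))
    W /= W.sum(1, keepdims=True)                 # weights sum to 1
    for _ in range(gens):
        fit = np.einsum('ij,jk,ik->i', W, C, W)  # w^T C w, lower is better
        elite = W[np.argsort(fit)[:pop // 2]]
        # crossover: average two random elite parents, then mutate
        pa = elite[rng.integers(0, len(elite), pop)]
        pb = elite[rng.integers(0, len(elite), pop)]
        W = (pa + pb) / 2 + rng.normal(0.0, 0.01, (pop, J))
        W = np.clip(W, 1e-9, None)
        W /= W.sum(1, keepdims=True)
    best = W[np.argmin(np.einsum('ij,jk,ik->i', W, C, W))]
    selected = np.flatnonzero(best > 1.0 / J)    # threshold 1/J'
    return best, selected
```

With a diagonal C the optimum is inverse-variance weighting, so sub-models with large validation errors fall below the 1/J' threshold and are dropped.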
The weights of these ensemble KRWNN sub-models are calculated using the adaptive weighted fusion (AWF) algorithm:
w_{j'_{sel}} = 1 \Big/ \left( (\sigma_{j'_{sel}})^{2} \sum_{j'_{sel}=1}^{J'_{sel}} \frac{1}{(\sigma_{j'_{sel}})^{2}} \right)    (24)
where \sigma_{j'_{sel}} is the standard deviation of the prediction output of the j'_{sel}-th ensemble KRWNN sub-model.
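Eq. (24) is inverse-variance weighting: each sub-model's weight is proportional to 1/\sigma^2 and the weights sum to one. A sketch:

```python
import numpy as np

def awf_weights(sigmas):
    """Eq. (24): adaptive weighted fusion -- each ensemble sub-model's
    weight is inversely proportional to its prediction-output variance."""
    inv_var = 1.0 / np.asarray(sigmas, dtype=float) ** 2
    return inv_var / inv_var.sum()
```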
The KPLS- and GA-based selective ensemble KRWNN model serves as the compensation model; its output is expressed as:
\hat{y}_{KRWNN} = \sum_{j'_{sel}=1}^{J'_{sel}} w_{j'_{sel}} \hat{y}'_{j'_{sel}}    (25)
7. The intelligent integrated soft measurement method based on multi-kernel latent feature extraction according to claim 1, characterized in that Step 6 is specifically:
The output of the main model and the output of the compensation model are added to obtain the output of the intelligent integrated soft-sensing model:
\hat{y} = \hat{y}_{Fuzzy} + \hat{y}_{KRWNN} = \sum_{j_{sel}=1}^{J_{sel}} w_{j_{sel}} \hat{y}_{j_{sel}} + \sum_{j'_{sel}=1}^{J'_{sel}} w_{j'_{sel}} \hat{y}'_{j'_{sel}}    (26)
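The final combination of Eq. (26) is a pair of weighted sums; an illustrative sketch, with the sub-model outputs and weights assumed given:

```python
import numpy as np

def soft_sensor_output(y_main_subs, w_main, y_comp_subs, w_comp):
    """Eq. (26): final prediction = weighted fuzzy main-model sub-outputs
    (Eq. 12) plus weighted ensemble KRWNN compensation sub-outputs (Eq. 25)."""
    y_fuzzy = np.dot(w_main, y_main_subs)   # main-model ensemble output
    y_krwnn = np.dot(w_comp, y_comp_subs)   # compensation-model output
    return y_fuzzy + y_krwnn
```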
CN201711327861.0A 2017-12-13 2017-12-13 A kind of intelligent integrated flexible measurement method based on the potential feature extraction of multinuclear Pending CN108062566A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711327861.0A CN108062566A (en) 2017-12-13 2017-12-13 A kind of intelligent integrated flexible measurement method based on the potential feature extraction of multinuclear

Publications (1)

Publication Number Publication Date
CN108062566A true CN108062566A (en) 2018-05-22

Family

ID=62138478

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711327861.0A Pending CN108062566A (en) 2017-12-13 2017-12-13 A kind of intelligent integrated flexible measurement method based on the potential feature extraction of multinuclear

Country Status (1)

Country Link
CN (1) CN108062566A (en)

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109144035A (en) * 2018-09-27 2019-01-04 杭州电子科技大学 A kind of Monitoring of Chemical method based on supporting vector
CN109960873A (en) * 2019-03-24 2019-07-02 北京工业大学 A kind of city solid waste burning process dioxin concentration flexible measurement method
CN109960873B (en) * 2019-03-24 2021-09-10 北京工业大学 Soft measurement method for dioxin emission concentration in urban solid waste incineration process
CN111860934A (en) * 2019-04-26 2020-10-30 开利公司 Method for predicting power consumption
CN110135057A (en) * 2019-05-14 2019-08-16 北京工业大学 Solid waste burning process dioxin concentration flexible measurement method based on multilayer feature selection
CN110135057B (en) * 2019-05-14 2021-03-02 北京工业大学 Soft measurement method for dioxin emission concentration in solid waste incineration process based on multilayer characteristic selection
US11976817B2 (en) 2019-05-14 2024-05-07 Beijing University Of Technology Method for detecting a dioxin emission concentration of a municipal solid waste incineration process based on multi-level feature selection
CN112365048A (en) * 2020-11-09 2021-02-12 大连理工大学 Unmanned vehicle reconnaissance method based on opponent behavior prediction

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication
Application publication date: 20180522