CN107992895A - Boosting support vector machine learning method - Google Patents

Boosting support vector machine learning method

Info

Publication number
CN107992895A
CN107992895A
Authority
CN
China
Prior art keywords
training
classification
support vector
weights
sample
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201711318629.0A
Other languages
Chinese (zh)
Inventor
高建彬
赵俊祎
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
University of Electronic Science and Technology of China
Original Assignee
University of Electronic Science and Technology of China
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by University of Electronic Science and Technology of China
Publication of CN107992895A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/285 Selection of pattern recognition techniques, e.g. of classifiers in a multi-classifier system
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2411 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on the proximity to a decision surface, e.g. support vector machines

Landscapes

  • Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The invention discloses a Boosting support vector machine learning method in the field of artificial intelligence, comprising the following steps. Step 1: data processing, selecting the parameter γ for the initial support vector machine classifier. Step 2: weight initialization, choosing n training samples to form the total data set and initializing the weight of each training sample. Step 3: entering the loop iteration and updating the weights of all training samples. Step 4: after T loops, obtaining the final classifier H(x). By combining the support vector machine with the idea of the AdaBoost algorithm, the invention improves the learning precision of the resampling technique used in classifier design for pattern recognition, ensures stable learning ability and optimizes the classification effect, so that classification accuracy is greatly improved.

Description

Boosting support vector machine learning method
Technical field
The present invention relates to the field of artificial intelligence, and in particular to support vector machine learning and ensemble learning methods; more specifically, it relates to a Boosting support vector machine learning method.
Background technology
The support vector machine (SVM), proposed by Corinna Cortes, Vapnik and others, is a machine learning method based on statistical learning theory. As a new general-purpose machine learning method it has a concise mathematical form, standard and efficient training methods and good generalization performance, and it has been successfully applied in fields such as pattern recognition, regression estimation and probability density estimation. However, research on multi-learner learning methods for SVM is still scarce, even though multi-learner learning can effectively improve the generalization ability of learning. Studying multi-learner learning methods for SVM therefore has important theoretical significance and direct application value.
As a multi-learner learning method, ensemble learning can effectively improve the generalization ability of machine learning. Research on it began in the early 1990s, and the theory and algorithms of ensemble learning are now a hot topic in machine learning; the international authority Dietterich listed ensemble learning first among the four major research directions of machine learning. Ensemble learning techniques have been successfully applied in many fields of machine learning, such as face recognition, optical character recognition, precise image analysis, medical analysis and seismic signal classification.
In the development of ensemble learning, two important works played a key driving role. First, Hansen and Salamon solved a problem with a group of neural networks: they combined all the networks by voting, and the experiments produced an interesting phenomenon, namely that the ensemble result of the group of networks was little worse than the best individual, much better than the worst individual, and never worse in performance than the best individual network. This counter-intuitive result drew the attention of many scholars to ensemble learning, and in the early 1990s ensemble learning techniques were widely applied in many fields with good effect. Second, Schapire used the Boosting algorithm to give a constructive proof of the equivalence of weak and strong learning algorithms, a question posed by Kearns and Valiant. Because the Boosting algorithm requires the lower bound of the weak learning algorithm's generalization ability to be known in advance, and this lower bound is difficult to obtain, it cannot solve real problems. Freund and Schapire further proposed the AdaBoost algorithm, which no longer requires the lower bound to be known in advance and can be applied to practical problems very conveniently. Breiman then proposed Bagging, a technique similar to Boosting, which further promoted the development of ensemble learning.
Ensemble learning is a rapidly developing research field. In the little more than ten years since its appearance it has been widely used in fields such as speech recognition, text filtering, remote sensing information processing and medical diagnosis. In particular, after Zhou et al. proposed the concept of "selective ensemble" in 2001, it attracted great attention at home and abroad and brought ensemble learning into a brand-new stage of development, prompting people to pursue further research on ensemble learning at a deeper level and across a wider field.
The key to an effective ensemble model is that the base classifiers constructed should be both accurate and diverse. Diversity requires the base classifiers to be mutually independent; in fact, if the base classifiers are negatively correlated, the ensemble can obtain even better generalization performance. Some studies show that ensemble learning overcomes the following three problems to a certain extent, so that generalization performance is improved:
1) The statistical problem. When the number of available training samples is sufficient, some algorithms can indeed find the optimal learner. But in practice the training samples are limited, and the learning algorithm can only find many learners of equal prediction accuracy. Although the simplest one, in other words the one of lowest complexity, can be selected from them, the risk is that the selected learner's prediction accuracy on unknown samples may be very low. Combining several learners can reduce this risk.
2) The computational problem. Finding the learner that fits the training data best requires too much computation, so heuristic search methods have to be used, but the results of such learning usually remain some distance from the target. Ensemble learning is a kind of compensation for these imperfect search methods.
3) The representational problem. When the descriptive power of the learning algorithm is limited, the search range is too small to contain the target function or a good approximation of it, and the learning result is unsatisfactory. Although many learning algorithms have the uniform approximation property, this asymptotic property no longer holds when the data are limited: the search space of the learning algorithm is restricted to functions obtainable from the training data and may be much smaller than the hypothesis space considered in the asymptotic case. Ensemble learning can expand the function space and thus approximate the target function more accurately.
Boosting algorithms successively produce a series of learners during training. The training set used by each learner is a subset drawn from the total training set, and whether a sample appears in that subset depends on the performance of the learners produced before it: samples that the existing learners misjudge appear in the new training subset with larger probability. This makes the learners produced later focus on handling the samples that are harder for the existing learners to distinguish.
Ensemble learning is a new machine learning paradigm in which multiple learners are used to solve the same problem; it can significantly improve the generalization ability of a learning system, and since the 1990s it has increasingly become a new hot spot in machine learning. In practical classification problems, in order to reduce the probability of loss and error, higher requirements are often placed on the classification method so as to reach the highest possible classification accuracy, for example in planetary exploration, seismic wave analysis, Web information filtering, biometric recognition, computer-aided medical diagnosis and other practical projects that require precise classification. Current ensemble learning methods, however, cannot yet meet such high-precision requirements. Based on such practical considerations, it is highly necessary to invent a high-precision ensemble learning method.
The content of the invention
The object of the present invention is to solve the problem that existing ensemble learning methods are not precise enough to meet the requirements of high-accuracy projects. The present invention provides a Boosting support vector machine learning method that combines the support vector machine with the idea of the Boosting algorithm and proposes a boosted support vector machine algorithm, which improves the learning precision of the resampling technique used in classifier design for pattern recognition, ensures stable learning ability and optimizes the classification effect, so that classification accuracy is greatly improved.
To achieve these goals, the present invention specifically adopts the following technical scheme:
A Boosting support vector machine learning method, characterized by comprising the following steps:
Step 1: data processing; select the parameter γ for the initial support vector machine classifier;
Step 2: initialize weights; choose n training samples to form the total data set [(x1,y1),…,(xn,yn)], where xi∈X, yi∈Y={-1,+1} (X is the set of x1,…,xn), and initialize the weight of each training sample as D1(i)=1/n;
Step 3: arbitrarily choose k samples from the n training samples of step 2 to form the first-round data set, train it with the parameter γ from step 1 to obtain the training data set, and enter the loop iteration; the total number of loop iterations is T and the current iteration number is t;
Step 4: obtain the classifier ht: X→{-1,+1} from the training data set and the SVM learning algorithm;
Step 5: apply the classifier ht to the total data set to predict it, mark the training samples classified correctly and incorrectly by ht, determine the error εt from the misclassified training samples, and calculate the coefficient αt of classifier ht; the calculation formula is:
αt=(1/2)ln((1-εt)/εt)
Step 6: update the weights Dt+1(i) of all training samples in the training data set according to the αt obtained in step 5; the calculation formula is:
Dt+1(i)=Dt(i)exp(-αtyiht(xi))/Zt
where Zt is the normalization factor that makes Dt+1 a distribution;
Step 7: update the value of t by t=t+1; when t≤T, return to step 4 and continue the next round of loop iteration; when t>T, end the loop;
Step 8: after T loops, the final classifier H(x) is obtained; the calculation formula is:
H(x)=sign(Σt=1..T αtht(x))
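For illustration only, the following is a minimal Python sketch of steps 2 to 8 above, assuming scikit-learn's SVC as the base support vector machine; the function name boosting_svm, the weighted resampling step and the numerical guard on the error are assumptions of this sketch, not part of the claimed method.

    # Minimal sketch of steps 2-8, assuming numpy arrays X, y with labels in {-1, +1}.
    import numpy as np
    from sklearn.svm import SVC

    def boosting_svm(X, y, T=10, gamma_=0.5, C=1.0, k=None, seed=None):
        rng = np.random.default_rng(seed)
        n = len(X)
        k = k or n
        D = np.full(n, 1.0 / n)                 # step 2: D1(i) = 1/n
        classifiers, alphas = [], []
        for t in range(T):                      # steps 3-7: T rounds of loop iteration
            idx = rng.choice(n, size=k, p=D)    # step 3: draw k samples by weight
            h = SVC(kernel="rbf", gamma=gamma_, C=C).fit(X[idx], y[idx])  # step 4
            pred = h.predict(X)                 # step 5: predict the total data set
            eps = np.clip(D[pred != y].sum(), 1e-10, 1 - 1e-10)  # weighted error
            alpha = 0.5 * np.log((1 - eps) / eps)                # coefficient alpha_t
            D = D * np.exp(-alpha * y * pred)   # step 6: reweight the samples
            D = D / D.sum()                     # normalize by Z_t
            classifiers.append(h)
            alphas.append(alpha)
        def H(X_new):                           # step 8: weighted-majority classifier
            votes = sum(a * h.predict(X_new) for a, h in zip(alphas, classifiers))
            return np.sign(votes)
        return H

A draw that happens to contain only one class would make SVC.fit fail; a production version would redraw the subset in that case.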
In the above technical scheme, the data processing in step 1 specifically includes the following steps:
Step 1.1: digitize the practical problem and convert it into a data format that the SVM can handle;
Step 1.2: normalize the converted data from step 1.1.
In the above technical scheme, step 1 selects the parameter γ by the grid method. γ is the kernel parameter used in SVM training; another parameter C (the penalty factor) is also involved, and γ and C form the parameter pair (γ, C). First a grid search is carried out over the parameters, that is, every possible parameter value is tried exhaustively; then cross-validation is performed to find the parameter pair with the highest cross-validation accuracy. The goal of parameter selection is to find a parameter pair that enables the classifier to predict unknown data accurately.
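As an illustration of this selection procedure, the following is a hedged sketch of the exhaustive grid search plus cross-validation using scikit-learn's GridSearchCV; the grid values and the names X_train and y_train are assumptions of the sketch, not values prescribed by the patent.

    # Grid search over the parameter pair (gamma, C) with 5-fold cross-validation.
    from sklearn.model_selection import GridSearchCV
    from sklearn.svm import SVC

    param_grid = {
        "gamma": [2 ** p for p in range(-8, 4)],  # candidate kernel parameters
        "C": [2 ** p for p in range(-4, 8)],      # candidate penalty factors
    }
    search = GridSearchCV(SVC(kernel="rbf"), param_grid, cv=5)
    search.fit(X_train, y_train)  # the digitized, normalized data from step 1
    best_gamma = search.best_params_["gamma"]
    best_C = search.best_params_["C"]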
In the Boosting algorithm, learning in each iteration is carried out according to the current sample distribution weights ωi,j, and the most effective weak classifier is chosen from the weak classifier set according to the minimum-error principle εj=Σi ωi,j|hj(xi)-yi|. If a training sample is misclassified in the current iteration, its distribution weight is increased; otherwise its weight is reduced. Increasing the weights of misclassified samples in this way forces the algorithm to learn these misclassified samples selectively in the following iterations. In general, when the training error of the weak classifier selected at each iteration is below 50%, the training error of the strong classifier declines exponentially with the number of iterations and tends to zero. The whole process is as follows:
(1) First obtain the first weak classifier h1 by learning from N training samples;
(2) Combine the samples misclassified by h1 with other new data into a new set of N training samples, and obtain the second weak classifier h2 by learning from this set;
(3) Add the samples misclassified by both h1 and h2 to other new data to form another new set of N training samples, and obtain the third weak classifier h3 by learning from this set;
(4) Finally obtain the boosted strong classifier hfinal=MajorityVote(h1,h2,h3); that is, the class a sample is assigned to is decided by the majority vote of h1, h2 and h3, as in the sketch below.
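A minimal sketch of the majority vote of step (4), assuming h1, h2 and h3 expose a scikit-learn-style predict() returning labels in {-1, +1}:

    import numpy as np

    def majority_vote(h1, h2, h3, X_new):
        # With three voters in {-1, +1} the summed vote is odd and never zero.
        votes = h1.predict(X_new) + h2.predict(X_new) + h3.predict(X_new)
        return np.sign(votes)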
The Boosting algorithm can strengthen the generalization ability of a given algorithm, but it has two shortcomings: the method requires the lower bound of the weak learner's learning accuracy to be known, which is difficult to achieve in practical problems; and the method may cause later learners to concentrate too much on a few particularly difficult samples, leading to unstable performance. For the implementation of the Boosting algorithm there are also two difficulties:
(1) how to adjust the training set so that weak classifiers can be trained on it;
(2) how to combine the trained weak classifiers into a strong classifier.
For the above two problems, the AdaBoost algorithm makes the following adjustments:
(1) training data chosen by weighting replace randomly selected training data, so that the focus of training is concentrated on the training data that are harder to classify;
(2) when the weak classifiers are joined together, weighted voting replaces average voting: weak classifiers with good classification performance receive larger weights, and classifiers with poor classification performance receive smaller weights.
Unlike the Boosting algorithm, the AdaBoost algorithm does not need to know in advance the lower bound of the weak learning algorithm's accuracy, that is, the weak classifier error; the classification precision of the final strong classifier depends on the classification precision of all the weak classifiers, so the potential of weak classifier algorithms can be exploited deeply.
In the AdaBoost algorithm, different training sets are realized by adjusting the weight corresponding to each sample. At the beginning every sample has the same weight, i.e. U1(i)=1/n (i=1,…,n), where n is the number of samples, and a weak classifier h1 is trained under this sample distribution. For the samples misclassified by h1 the corresponding weights are increased, and for correctly classified samples the weights are reduced, so that the misclassified samples stand out and a new sample distribution U2 is obtained. Under the new sample distribution the weak classifier is trained again to obtain h2. And so on: after T rounds, T weak classifiers are obtained, and these T weak classifiers are superposed with certain weights to give the strong classifier finally wanted.
The support vector machine is built on the VC-dimension concept of statistical learning theory and the structural risk minimization principle. Using limited sample information, it seeks a trade-off between model complexity (the learning precision on the specific training samples) and learning ability (the ability to identify arbitrary samples without error) in order to obtain the best generalization ability.
The support vector machine seeks the optimal solution under the available information rather than the optimal solution as the sample size tends to infinity. By converting the construction of the optimal separating hyperplane into a quadratic optimization problem, a global optimum is obtained, which avoids the local-extremum problem that is unavoidable in neural network methods. The solution of the optimization problem is a group of support vectors that determine the structure of the support vector machine and the boundary between the classes; the other samples play no role in classification, unlike neural networks and similar algorithms, which usually use the statistical information of all or most samples. Since the samples appear in the model only in the form of dot products, it is easy to generalize from a linear model to a nonlinear one: the data are mapped by a nonlinear function to a high-dimensional feature space, and a linear discriminant function constructed in that space realizes a nonlinear discriminant function in the original space. The explicit construction of the mapping function is cleverly avoided; its concrete form need not be known. The support vector machine also has a clear geometric meaning, so the model structure and the structured learning method can be chosen according to its geometric properties.
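As a small illustration of the kernel idea just described: the text does not prescribe a specific kernel, but the RBF kernel sketched below is one common choice consistent with the γ parameter selected in step 1.

    import numpy as np

    def rbf_kernel(x, z, gamma=0.5):
        # K(x, z) = exp(-gamma * ||x - z||^2): the inner product in the implicit
        # high-dimensional feature space, computed without constructing the map.
        diff = np.asarray(x) - np.asarray(z)
        return np.exp(-gamma * np.dot(diff, diff))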
Beneficial effects of the present invention are as follows:
1. The present invention combines the ideas of the support vector machine and the AdaBoost method and proposes the new classification technique of boosted support vector machines. The support vector machine avoids the conventional procedure from induction to deduction and greatly simplifies common classification and regression problems, while the AdaBoost method is easy to implement and can reduce the generalization error rate of the support vector machine. By incorporating the AdaBoost method, the method of the invention greatly reduces the generalization error rate and improves learning precision; moreover, it can be applied not only to two-class classification but also to multi-class classification tasks, has excellent learning performance and can reach a higher classification effect.
2. The present invention proposes a boosted support vector machine algorithm that combines the ideas of SVM and AdaBoost. SVM has a fairly complete theoretical foundation and good learning and classification performance, and reaches a good classification effect especially on smaller data sets, while the AdaBoost ensemble method has the advantages of being simple to implement, flexible and easy to understand. Considering that classification accuracy is often affected by a few samples, the boosted support vector machine algorithm focuses on analyzing the misclassified samples, so that misclassified samples are learned multiple times, the model adapts better to these easily misclassified samples, a more efficient classification effect is reached and the generalization ability of the learner is improved.
3. The present invention uses SVM as the basic classifier of AdaBoost. Among weak classifiers SVM has a very outstanding classification effect, and AdaBoost can combine the weak classifiers into a strong classifier by weighted majority voting: the weights of weak classifiers with small classification error rates are increased so that they play a larger role in the vote, and the weights of weak classifiers with large classification error rates are reduced so that they play a smaller role. Through the combination of SVM and AdaBoost, a better classification effect can be reached and classification accuracy improved.
Brief description of the drawings
Fig. 1 is the flow chart of the present invention applied to two-class classification problems.
Fig. 2 is the flow chart of the present invention applied to multi-class classification problems.
Embodiment
In order that those skilled in the art may better understand the present invention, it is described in further detail below with reference to the accompanying drawings and the following embodiments.
Embodiment 1
As shown in Fig. 1, the present embodiment proposes a Boosting support vector machine learning method comprising the following steps:
Step 1: data processing; select the parameter γ for the initial support vector machine classifier using the grid method;
Specifically, the data processing in step 1 includes the following two steps:
Step 1.1: digitize the practical problem and convert it into a data format that the SVM can handle;
Step 1.2: normalize the converted data from step 1.1;
Step 2: initialize weights; choose n training samples to form the total data set [(x1,y1),…,(xn,yn)], where xi∈X, yi∈Y={-1,+1} (X is the set of x1,…,xn), and initialize the weight of each training sample as D1(i)=1/n;
Step 3: arbitrarily choose k samples from the n training samples of step 2 to form the first-round data set, train it with the parameter γ from step 1 to obtain the training data set, and enter the loop iteration; the total number of loop iterations is T and the current iteration number is t;
Step 4: obtain the classifier ht: X→{-1,+1} from the training data set and the SVM learning algorithm;
Step 5: apply the classifier ht to the total data set to predict it, mark the training samples classified correctly and incorrectly by ht, determine the error εt from the misclassified training samples, and calculate the coefficient αt of classifier ht; the calculation formula is:
αt=(1/2)ln((1-εt)/εt)
Step 6: update the weights Dt+1(i) of all training samples in the training data set according to the αt obtained in step 5; the calculation formula is:
Dt+1(i)=Dt(i)exp(-αtyiht(xi))/Zt
where Zt is the normalization factor that makes Dt+1 a distribution;
Step 7: update the value of t by t=t+1; when t≤T, return to step 4 and continue the next round of loop iteration; when t>T, end the loop;
Step 8: after T loops, the final classifier H(x) is obtained; the calculation formula is:
H(x)=sign(Σt=1..T αtht(x))
In the present embodiment the parameter γ is selected by the grid method. γ is the kernel parameter used in SVM training; another parameter C (the penalty factor) is also involved, and γ and C form the parameter pair (γ, C). A grid search is first carried out over the parameters, that is, every possible parameter value is tried exhaustively. Since the goal of parameter selection is to find a parameter pair that enables the classifier to predict unknown data accurately, cross-validation is then performed on the parameter values to obtain the parameter pair with the highest cross-validation accuracy.
The present embodiment uses SVM as the basic classifier of AdaBoost. Among weak classifiers SVM has a very outstanding classification effect, and AdaBoost combines the weak classifiers into a strong classifier by weighted majority voting: the weights of weak classifiers with small classification error rates are increased so that they play a larger role in the vote, and the weights of weak classifiers with large classification error rates are reduced so that they play a smaller role. Through the combination of SVM and AdaBoost, a better classification effect can be reached, the generalization error rate is greatly reduced and the learning precision is improved.
Embodiment 2
As shown in Fig. 2, on the basis of embodiment 1 the present embodiment proposes a method for solving multi-class classification problems with the boosted support vector machine. The initial SVM solves two-class classification problems and cannot be used directly for multi-class classification, but it can be generalized effectively to multi-class problems; such algorithms are collectively called "multi-class support vector machines" and can be roughly divided into two major classes:
(1) construct a series of binary classifiers in a certain way and combine them to realize multi-class classification;
(2) merge the parameter solutions of multiple classification surfaces into one optimization problem and realize multi-class classification "at one stroke" by solving that optimization problem.
Although the second class of methods looks concise, the number of variables in its optimization problem is far larger than in the first class, its training speed is slower and it has no advantage in classification precision; when the number of training samples is very large this problem becomes even more prominent. For this reason the first class of methods is more commonly used. The method applied in the present invention uses loop iteration: after selecting samples of k classes, k(k-1) SVM learners are obtained by repeated training according to the one-against-one method, and all samples are then predicted with these k(k-1) classifiers, as in the sketch below.
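For illustration, a hedged sketch of this one-against-one scheme, reusing the boosting_svm sketch given under embodiment 1; classes is assumed to be a list of the k labels, training one learner per ordered class pair mirrors the k(k-1) count in the text, and the voting and label handling are assumptions of the sketch.

    import numpy as np
    from itertools import permutations

    def one_vs_one(X, y, classes, **boost_kwargs):
        # Train one boosted-SVM learner per ordered class pair: k*(k-1) learners.
        machines = {}
        for a, b in permutations(classes, 2):
            mask = (y == a) | (y == b)
            y_bin = np.where(y[mask] == a, 1, -1)  # relabel the pair as {+1, -1}
            machines[(a, b)] = boosting_svm(X[mask], y_bin, **boost_kwargs)
        def predict(X_new):
            tally = np.zeros((len(X_new), len(classes)))
            for (a, b), H in machines.items():     # every pairwise learner votes
                pred = H(X_new)
                tally[:, classes.index(a)] += (pred == 1)
                tally[:, classes.index(b)] += (pred == -1)
            return np.array([classes[i] for i in tally.argmax(axis=1)])
        return predict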
The above are only preferred embodiments of the present invention and are not intended to limit the invention. The scope of patent protection of the invention is defined by the claims; any equivalent structural change made using the contents of the specification and the accompanying drawings of the present invention shall likewise fall within the scope of the present invention.

Claims (3)

1. A Boosting support vector machine learning method, characterized by comprising the following steps:
Step 1: data processing; select the parameter γ for the initial support vector machine classifier;
Step 2: initialize weights; choose n training samples to form the total data set [(x1,y1),…,(xn,yn)], where xi∈X, yi∈Y={-1,+1}, and initialize the weight of each training sample as D1(i)=1/n;
Step 3: arbitrarily choose k samples from the n training samples of step 2 to form the first-round data set, train it with the parameter γ from step 1 to obtain the training data set, and carry out loop iteration; the total number of loop iterations is T and the current iteration number is t;
Step 4: obtain the classifier ht: X→{-1,+1} from the training data set and the SVM learning algorithm;
Step 5: apply the classifier ht to the total data set to predict it, mark the training samples classified correctly and incorrectly by ht, determine the error εt from the misclassified training samples, and calculate the coefficient αt of classifier ht; the calculation formula is:
αt=(1/2)ln((1-εt)/εt)
Step 6: update the weights Dt+1(i) of all training samples in the training data set according to the αt obtained in step 5; the calculation formula is:
Dt+1(i)=Dt(i)exp(-αtyiht(xi))/Zt
where Zt is the normalization factor that makes Dt+1 a distribution;
Step 7: update the value of t by t=t+1; when t≤T, return to step 4 and continue the next round of loop iteration; when t>T, end the loop;
Step 8: after T loops, the final classifier H(x) is obtained; the calculation formula is:
H(x)=sign(Σt=1..T αtht(x))
2. The Boosting support vector machine learning method according to claim 1, characterized in that the data processing in step 1 specifically includes the following steps:
Step 1.1: digitize the practical problem and convert it into a data format that the SVM can handle;
Step 1.2: normalize the converted data from step 1.1.
3. The Boosting support vector machine learning method according to claim 1 or 2, characterized in that step 1 selects the parameter γ using the grid method.
CN201711318629.0A 2017-10-19 2017-12-12 Boosting support vector machine learning method Pending CN107992895A (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN2017109803822 2017-10-19
CN201710980382 2017-10-19

Publications (1)

Publication Number Publication Date
CN107992895A (en) 2018-05-04

Family

ID=62037396

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711318629.0A Boosting support vector machine learning method 2017-10-19 2017-12-12 (pending)

Country Status (1)

Country Link
CN (1) CN107992895A (en)

Cited By (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108961468A (en) * 2018-06-27 2018-12-07 大连海事大学 A kind of ship power system method for diagnosing faults based on integrated study
CN109255094A (en) * 2018-08-10 2019-01-22 重庆邮电大学 Commercial truck quality estimation method based on SVR_Adaboost innovatory algorithm
CN109255094B (en) * 2018-08-10 2022-12-27 重庆邮电大学 Commercial truck quality estimation method based on SVR-Adaboost improved algorithm
CN109299555B (en) * 2018-09-30 2020-02-04 上海机电工程研究所 Anti-interference performance evaluation method and system for infrared imaging seeker
CN109299555A (en) * 2018-09-30 2019-02-01 上海机电工程研究所 Infrared Imaging Seeker anti-jamming performance evaluation method and system
CN109472302A (en) * 2018-10-29 2019-03-15 中国石油大学(华东) A kind of support vector machine ensembles learning method based on AdaBoost
CN111209998A (en) * 2018-11-06 2020-05-29 航天信息股份有限公司 Training method and device of machine learning model based on data type
CN111209998B (en) * 2018-11-06 2023-08-18 航天信息股份有限公司 Training method and device of machine learning model based on data type
CN110009111A (en) * 2019-03-29 2019-07-12 电子科技大学 The method of optimal training set is generated in a kind of machine learning inverse process
CN110146817A (en) * 2019-05-13 2019-08-20 上海博强微电子有限公司 The diagnostic method of lithium battery failure
CN110111012A (en) * 2019-05-13 2019-08-09 中南大学 A kind of contact net load recognition methods based on stable state characteristics of image
CN112819495A (en) * 2019-11-18 2021-05-18 南京财经大学 User shopping intention prediction method based on random polynomial kernel
CN111782042A (en) * 2020-06-30 2020-10-16 西安电子科技大学 Electroencephalogram identity authentication method based on ensemble learning
CN112612897A (en) * 2020-12-30 2021-04-06 湖北大学 Wikipedia concept dependency relationship identification method
CN112612897B (en) * 2020-12-30 2023-06-20 湖北大学 Wikipedia concept dependency relationship identification method
CN116432871A (en) * 2023-06-13 2023-07-14 北京化工大学 Bus dispatching optimization method based on AdaBoost algorithm

Similar Documents

Publication Publication Date Title
CN107992895A (en) Boosting support vector machine learning method
Johnson et al. Survey on deep learning with class imbalance
Elkano et al. Enhancing multiclass classification in FARC-HD fuzzy classifier: On the synergy between n-dimensional overlap functions and decomposition strategies
Liu et al. Recognizing human actions by attributes
Xia et al. An efficient and adaptive granular-ball generation method in classification problem
CN106355192A (en) Support vector machine method based on chaos and grey wolf optimization
CN104573669A (en) Image object detection method
Lin et al. Machine learning templates for QCD factorization in the search for physics beyond the standard model
CN105787557A (en) Design method of deep nerve network structure for computer intelligent identification
Chen et al. Fuzzy rule weight modification with particle swarm optimisation
Cao et al. A PSO-based cost-sensitive neural network for imbalanced data classification
Fong et al. A novel feature selection by clustering coefficients of variations
CN110298434A (en) A kind of integrated deepness belief network based on fuzzy division and FUZZY WEIGHTED
Narayanan et al. A study on the approximation of clustered data to parameterized family of fuzzy membership functions for the induction of fuzzy decision trees
Kamruzzaman et al. ERANN: An algorithm to extract symbolic rules from trained artificial neural networks
Xu et al. Classifier ensemble based on multiview optimization for high-dimensional imbalanced data classification
CN108664562B (en) The text feature selection method of particle group optimizing
CN109948589A (en) Facial expression recognizing method based on quantum deepness belief network
Wang et al. Interpret neural networks by extracting critical subnetworks
Aličković et al. Data mining techniques for medical data classification
Jia et al. Latent task adaptation with large-scale hierarchies
Patidar et al. Decision tree C4.5 algorithm and its enhanced approach for educational data mining
CN109800854A (en) A kind of Hydrophobicity of Composite Insulator grade determination method based on probabilistic neural network
Cárdenas et al. Multiobjective genetic generation of fuzzy classifiers using the iterative rule learning
Purnomo et al. Synthesis ensemble oversampling and ensemble tree-based machine learning for class imbalance problem in breast cancer diagnosis

Legal Events

Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20180504)