CN101923650A - Random forest classification method and classifiers based on comparison mode - Google Patents

Random forest classification method and classifiers based on comparison mode

Info

Publication number
CN101923650A
CN101923650A CN201010265846XA CN201010265846A
Authority
CN
China
Prior art keywords
classification
data
symbol
differentiation
random forest
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201010265846XA
Other languages
Chinese (zh)
Inventor
王亦洲
王亮
高文
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Peking University
Original Assignee
Peking University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Peking University filed Critical Peking University
Priority to CN201010265846XA priority Critical patent/CN101923650A/en
Publication of CN101923650A publication Critical patent/CN101923650A/en
Pending legal-status Critical Current

Landscapes

  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a random forest classification method and classifier based on contrast patterns. The method comprises the following steps: each weak classifier constituting the random forest classifies the input data, and the output of each weak classifier is encoded and quantized into a symbol; the symbols output by all weak classifiers are collected into a symbol set, which is taken as the input of a learned set of decision rules, and a decision score is computed for each class from the current symbol set and the decision rules; the class with the highest score is selected as the final decision for the input data. The invention improves the precision of the original random forest classifier.

Description

Random forest classification method and classifier based on contrast patterns
Technical field
The present invention relates to the fields of pattern recognition and computer vision, and in particular to a random forest classification method and classifier based on contrast patterns.
Background technology
Because the random forest classifier is fast to train and to apply, gives unbiased estimates of prediction error, and is simple to use, it has in recent years been widely applied to computer vision tasks such as content-based image classification, image annotation, and action recognition. A random forest classifier is composed of a set of weak classifiers (classification trees). For the data to be classified, the random forest aggregates the outputs of all weak classifiers and determines the final class by voting.
As pattern classification problems in computer vision grow more complex, the precision of a single classifier deteriorates. Image and video data usually require feature vectors of very high dimension, so the proportion of feature dimensions covered by each weak classifier becomes very small, and the precision of a weak classifier varies markedly across classes. Improving the effectiveness of the weak classifiers, analyzing the statistical properties of their outputs, and building more effective decision rules therefore become the key to strengthening the random forest classifier.
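As an illustration of the voting step described above, the following minimal Python sketch (the function name and toy labels are ours, not from the patent) aggregates the class labels predicted by the individual classification trees by majority vote:

```python
from collections import Counter

def forest_vote(tree_predictions):
    """Return the class label predicted by the most trees (majority vote)."""
    return Counter(tree_predictions).most_common(1)[0][0]

# Five trees vote; "cat" wins with three votes.
print(forest_vote(["cat", "dog", "cat", "cat", "bird"]))  # prints: cat
```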
Summary of the invention
The object of the present invention is to provide a random forest classification method and classifier based on contrast patterns. With the present invention, discriminative patterns can be mined from the output sets of the weak classifiers, and these patterns can be used to build more effective classification rules.
In one aspect, the invention discloses a random forest classification method based on contrast patterns. The method is based on a random forest classifier comprising a plurality of weak classifiers, and comprises the following steps: a quantization step, in which each weak classifier constituting the random forest classifies the input data and the output of each weak classifier is encoded and quantized into a symbol; a decision-score computation step, in which the symbols output by all the weak classifiers form a symbol set, this set is taken as the input of a learned set of decision rules, and a decision score for each class is computed from the current symbol set and the decision rules; and a class decision step, in which the class with the highest decision score is selected as the final decision for the input data.
In the above classification method, preferably, in the decision-score computation step, the set of decision rules consists of discriminative patterns that distinguish between data classes, obtained by a data-mining method that analyzes the correlations and regularities among the symbols output by the plurality of weak classifiers.
In the above classification method, preferably, in the decision-score computation step, the decision score of each class is obtained as follows: step A, given training data of N classes (N > 1), for each class i, take the training data of class i as positive examples and the training data of all other classes as negative examples; step B, feed the positive and negative examples to the random forest classifier, quantize the outputs of all weak classifiers for each datum, and convert them into a symbol set; step C, use an emerging pattern mining method to mine, from these symbol sets, patterns that significantly distinguish positive from negative examples, each pattern being a symbol set p; step D, convert each mined pattern p into a corresponding decision rule P and determine the decision score this rule contributes for class i.
In the above classification method, preferably, in step B, the symbol sets of the positive examples are denoted PS = {ps_1, ps_2, …, ps_J} and those of the negative examples NS = {ns_1, ns_2, …, ns_K}, where J and K are the numbers of positive and negative examples, J > 1, K > 1. In step C, each symbol set p satisfies the conditions
sp_p = |{s ∈ PS : p ⊆ s}| / |PS| ≥ θ_sp,
gr_p = sp_p / ( |{s ∈ NS : p ⊆ s}| / |NS| ) ≥ θ_gr,
where θ_sp and θ_gr (θ_gr > 1) are pre-specified thresholds. In step D, the decision score of a datum x for class i is computed as
Sc_i(x) = (1/Z_i) · Σ_{P ∈ Φ_i} w_P · δ(p ⊆ xs),
where Φ_i is the set of all decision rules mined for class i, Z_i is the regularization term for class i, and
δ(p ⊆ xs) = 1 if p ⊆ xs, and 0 otherwise,
where xs is the encoded symbol set of the datum x.
In another aspect, the invention also discloses a random forest classifier based on contrast patterns. The random forest classifier comprises a plurality of weak classifiers and further comprises: a quantization module, configured to classify the input data with each weak classifier constituting the random forest, encode the output of each weak classifier, and quantize it into a symbol; a decision-score computation module, configured to form the symbols output by all the weak classifiers into a symbol set, take this set as the input of a learned set of decision rules, and compute the decision score of each class from the current symbol set and the decision rules; and a class decision module, configured to select the class with the highest decision score as the final decision for the input data.
In the above classifier, preferably, in the decision-score computation module, the set of decision rules consists of discriminative patterns that distinguish between data classes, obtained by a data-mining method that analyzes the correlations and regularities among the symbols output by the plurality of weak classifiers.
In the above classifier, preferably, the decision-score computation module comprises: module A, configured to, given training data of N classes (N > 1), for each class i take the training data of class i as positive examples and the training data of all other classes as negative examples; module B, configured to feed the positive and negative examples to the random forest classifier, quantize the outputs of all weak classifiers for each datum, and convert them into a symbol set; module C, configured to use an emerging pattern mining method to mine, from these symbol sets, patterns that significantly distinguish positive from negative examples, each pattern being a symbol set p; and module D, configured to convert each mined pattern p into a corresponding decision rule P and determine the decision score this rule contributes for class i.
In the above classifier, preferably, in module B, the symbol sets of the positive examples are denoted PS = {ps_1, ps_2, …, ps_J} and those of the negative examples NS = {ns_1, ns_2, …, ns_K}, where J and K are the numbers of positive and negative examples, J > 1, K > 1. In module C, each symbol set p satisfies the conditions
sp_p = |{s ∈ PS : p ⊆ s}| / |PS| ≥ θ_sp,
gr_p = sp_p / ( |{s ∈ NS : p ⊆ s}| / |NS| ) ≥ θ_gr,
where θ_sp and θ_gr (θ_gr > 1) are pre-specified thresholds. In module D, the decision score of a datum x for class i is computed as
Sc_i(x) = (1/Z_i) · Σ_{P ∈ Φ_i} w_P · δ(p ⊆ xs),
where Φ_i is the set of all decision rules mined for class i, Z_i is the regularization term for class i, and
δ(p ⊆ xs) = 1 if p ⊆ xs, and 0 otherwise,
where xs is the encoded symbol set of the datum x.
Compared with the prior art, the present invention has the following advantages:
First, the present invention quantizes and encodes the outputs of the weak classifiers, so that the random forest converts each input datum into a symbol set; many numerical analysis methods can then be applied directly to analyze the inherent regularities of the random forest output.
Second, the present invention uses an efficient, fast data-mining algorithm that can, in a very short time, find patterns with good discriminative power in the large-scale quantized and encoded output sets of the random forest.
Third, by analyzing the correlations among the weak classifiers, the present invention builds decision rules that bind the outputs of several weak classifiers together, strengthening the discriminative power of the original individual weak classifiers.
Fourth, the present invention can efficiently find correlations among the weak classifier outputs from the outputs of a large number of data samples and design decision rules from them; it is therefore suitable for complex classification problems with high-dimensional data that require many weak classifiers to build the random forest.
Description of drawings
Fig. 1 is a flow chart of an embodiment of the random forest classification method based on contrast patterns according to the present invention;
Fig. 2 is a framework diagram of the random forest classifier based on contrast patterns;
Fig. 3 is a schematic diagram of the symbol encoding of the weak classifier outputs;
Fig. 4 is a structural block diagram of the random forest classifier based on contrast patterns.
Embodiment
To make the above objects, features, and advantages of the present invention clearer, the present invention is described in further detail below with reference to the drawings and specific embodiments.
In the present invention, the outputs of the weak classifiers are quantized. Based on these quantized outputs, we analyze their distribution, search for combinations of values with strong discriminative power, and form discriminative patterns; based on these patterns, decision rules are designed, the input data are then judged according to the rules, and the final classification result is output.
With reference to Fig. 1, which is a flow chart of an embodiment of the random forest classification method based on contrast patterns according to the present invention, the method comprises the following steps:
Quantization step 110: each weak classifier constituting the random forest classifies the input data; the output of each weak classifier is encoded and quantized into a symbol. Decision-score computation step 120: the quantized outputs of all the weak classifiers are converted into a symbol set, which is taken as the input of the learned set of decision rules; the decision score of each class is computed from the current symbol set and the decision rules. Class decision step 130: the class with the highest decision score is selected as the final decision for the input data.
The above method is further explained below.
For an input datum, the decision result of each weak classifier is quantized and encoded. When quantizing, the present invention maps any output of any classifier, together with the identity of that classifier, to a uniquely determined integer value: no other weak classifier and output share the same encoded value. Then, for each input datum, the output of each weak classifier in the random forest can be encoded as a symbol by this quantization, so the output of the random forest for an input datum can be described by a symbol set. Next, emerging pattern mining is used to find discriminative patterns in the integer sets of the training data, i.e., patterns that occur frequently in the integer sets of positive examples and rarely in those of negative examples. Finally, each mined pattern is converted into a decision rule, and these rules make the final judgment on the class of the input data.
Be illustrated below by an example.
With reference to Fig. 2, a given input datum x is classified by the following steps:
(1) Each weak classifier constituting the random forest classifies x, and the output of each weak classifier is encoded, i.e., quantized into a symbol (here we use integer values as an example). This quantization process is illustrated in Fig. 2. For a given input datum x, each non-leaf node of a classification tree routes the input datum to its left or right child according to its decision condition. As shown in Fig. 2, classification tree 1 finally assigns the input datum x to the leaf node with index 7, so the output of classification tree 1 for x can be formed by concatenating the tree index with the leaf index, namely 107. Similarly, classification tree M finally assigns x to node 6, so the output of tree M for x is encoded as M06. See also Fig. 3.
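The tree-and-leaf encoding just described can be sketched as follows. This is a minimal illustration under the assumption that each tree reports the index of the leaf a sample lands in; the function names and the two-digit padding convention are ours, chosen to match the patent's "tree 1, leaf 7 → 107" example:

```python
def leaf_symbol(tree_index: int, leaf_index: int) -> str:
    # Concatenate the tree number with the zero-padded leaf number,
    # e.g. tree 1, leaf 7 -> "107", matching the example in the text.
    return f"{tree_index}{leaf_index:02d}"

def encode(x, forest):
    # forest: a list of callables, each mapping a sample x to its leaf index.
    # The returned symbol set describes the forest's output for x.
    return {leaf_symbol(t + 1, tree(x)) for t, tree in enumerate(forest)}

# Two stub "trees" that always route a sample to a fixed leaf:
forest = [lambda x: 7, lambda x: 6]
print(sorted(encode(None, forest)))  # prints: ['107', '206']
```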
(2) Following the quantization step, the outputs of all weak classifiers are converted into a symbol set, which is taken as the input of the learned set of decision rules; each decision rule contributes, from the current symbol set, a decision score Sc_i(x) for each class i.
The decision rules are learned in the following steps: i) First, given training data of N classes, for each class i we take the training data of class i as positive examples and the training data of the other classes as negative examples. ii) The positive and negative examples are fed to the random forest classifier, and the outputs of all weak classifiers for each datum are converted into a symbol set by the method of the quantization step. The symbol sets of the positive examples are denoted PS = {ps_1, ps_2, …, ps_J} and those of the negative examples NS = {ns_1, ns_2, …, ns_K}, where J and K are the numbers of positive and negative examples. iii) Emerging pattern mining is used to mine, from these symbol sets, patterns that significantly distinguish positive from negative examples. Each pattern is a symbol set p satisfying
sp_p = |{s ∈ PS : p ⊆ s}| / |PS| ≥ θ_sp,
gr_p = sp_p / ( |{s ∈ NS : p ⊆ s}| / |NS| ) ≥ θ_gr,
where |A| denotes the number of elements in a set A, and θ_sp and θ_gr (θ_gr > 1) are pre-specified thresholds determined by cross-validation, choosing the values that perform best on the training data; in our experiments we use θ_sp = 0.6 and θ_gr = 5, though different problems may call for different choices. From these conditions we can conclude that a mined pattern occurs with a certain minimum frequency among the positive examples, and occurs more frequently among the positive examples than among the negative examples. iv) Each mined pattern p is converted into a corresponding decision rule P, whose decision score for class i is weighted by
w_P = sp_p × gr_p / (gr_p + 1),
where sp_p is the frequency with which pattern p occurs among the positive examples, and gr_p is the ratio of its frequency among the positive examples to its frequency among the negative examples.
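The support and growth-rate conditions of step iii) can be checked with a brute-force sketch like the one below. A real emerging pattern miner uses pruned search over the pattern lattice; this exhaustive version over short patterns is only illustrative, and the function names and toy data are ours:

```python
from itertools import combinations

def support(pattern, symbol_sets):
    """Fraction of symbol sets that contain every symbol in the pattern."""
    return sum(1 for s in symbol_sets if pattern <= s) / len(symbol_sets)

def mine_patterns(PS, NS, theta_sp=0.6, theta_gr=5.0, max_len=2):
    alphabet = sorted(set().union(*PS))
    mined = []
    for r in range(1, max_len + 1):
        for combo in combinations(alphabet, r):
            p = frozenset(combo)
            sp = support(p, PS)           # frequency among positive examples
            if sp < theta_sp:
                continue
            neg = support(p, NS)
            gr = sp / neg if neg > 0 else float("inf")  # growth rate
            if gr >= theta_gr:
                mined.append((p, sp, gr))
    return mined

PS = [{"107", "206"}, {"107", "205"}, {"107", "206"}]
NS = [{"108", "206"}, {"109", "207"}]
print([sorted(p) for p, sp, gr in mine_patterns(PS, NS)])
# prints: [['107'], ['107', '206']]
```

Note that "206" alone is frequent among the positives but also appears among the negatives, so it fails the growth-rate test, while "107" (alone and with "206") never appears among the negatives and is mined.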
Thus, given a datum x and its corresponding symbol set xs, its decision score Sc_i(x) for class i is computed as
Sc_i(x) = (1/Z_i) · Σ_{P ∈ Φ_i} w_P · δ(p ⊆ xs),
where δ(p ⊆ xs) = 1 if p ⊆ xs and 0 otherwise, Φ_i is the set of all decision rules mined for class i, and Z_i is the regularization term for class i. In practice, Z_i is taken to be the median of the decision scores of all training data of class i.
(3) The class with the highest decision score is selected as the final decision for the input datum x.
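The rule weight, the score formula, and the final decision step above can be sketched as follows. This is a minimal illustration with our own names; when a pattern never occurs among the negative examples, gr_p is infinite, and we take gr_p/(gr_p + 1) in the limit as 1:

```python
import math

def pattern_weight(sp_p, gr_p):
    # w_P = sp_p * gr_p / (gr_p + 1); tends to sp_p as gr_p grows.
    if math.isinf(gr_p):
        return sp_p
    return sp_p * gr_p / (gr_p + 1.0)

def score(xs, rules, Z_i):
    # rules: (pattern, sp_p, gr_p) triples mined for one class.
    # A rule fires when its pattern is a subset of the symbol set xs.
    return sum(pattern_weight(sp, gr) for p, sp, gr in rules if p <= xs) / Z_i

def classify(xs, rules_per_class, Z):
    # Step (3): pick the class with the highest decision score.
    return max(rules_per_class, key=lambda i: score(xs, rules_per_class[i], Z[i]))

rules = {1: [(frozenset({"107"}), 0.6, 5.0)],   # fires on the symbol set below
         2: [(frozenset({"999"}), 0.9, 9.0)]}   # does not fire
print(classify({"107", "206"}, rules, {1: 1.0, 2: 1.0}))  # prints: 1
```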
In another aspect, the present invention also provides a random forest classifier based on contrast patterns. The classifier comprises a plurality of weak classifiers and further comprises:
a quantization module 40, configured to classify the input data with each weak classifier constituting the random forest, encode the output of each weak classifier, and quantize it into an integer value; a decision-score computation module 41, configured to convert the quantized integer values output by all the weak classifiers into an encoded symbol set, take this set as the input of the learned set of decision rules, and compute the decision score of each class from the current encoded set and the decision rules; and a class decision module 42, configured to select the class with the highest decision score as the final decision for the input data.
In the decision-score computation module 41, the set of decision rules consists of discriminative patterns that distinguish between data classes, obtained by a data-mining method that analyzes the correlations and regularities among the integer values output by the plurality of weak classifiers.
More specifically, the decision-score computation module comprises four submodules A, B, C, and D:
Submodule A is configured to, given training data of N classes (N > 1), for each class i take the training data of class i as positive examples and the training data of the other N − 1 classes as negative examples. Submodule B is configured to feed the positive and negative examples to the random forest classifier, quantize the outputs of all weak classifiers for each datum, and convert them into a value set. Submodule C is configured to use an emerging pattern mining method to mine, from these value sets, patterns that significantly distinguish positive from negative examples, each pattern being a value set p. Submodule D is configured to convert each mined pattern p into a corresponding decision rule P and determine the decision score this rule contributes for class i.
The principle of the above classifier is the same as that of the classification method; for related details, refer to the description of the classification method, which is not repeated here.
The random forest classification method and classifier based on contrast patterns provided by the present invention have been described above in detail. Specific examples have been used herein to explain the principles and embodiments of the present invention; the above description of the embodiments is only intended to help understand the method of the present invention and its core ideas. Meanwhile, those of ordinary skill in the art may, according to the ideas of the present invention, make changes to the specific embodiments and the scope of application. In summary, the contents of this description should not be construed as limiting the present invention.

Claims (8)

1. A random forest classification method based on contrast patterns, characterized in that the method is based on a random forest classifier comprising a plurality of weak classifiers, and comprises the following steps:
a quantization step: each weak classifier constituting the random forest classifies the input data, and the output of each weak classifier is encoded and quantized into a symbol;
a decision-score computation step: the symbols output by all the weak classifiers form a symbol set, this set is taken as the input of a learned set of decision rules, and a decision score for each class is computed from the current symbol set and the decision rules;
a class decision step: the class with the highest decision score is selected as the final decision for the input data.
2. The classification method according to claim 1, characterized in that,
in the decision-score computation step, the set of decision rules consists of discriminative patterns that distinguish between data classes, obtained by a data-mining method that analyzes the correlations and regularities among the symbols output by the plurality of weak classifiers.
3. The classification method according to claim 2, characterized in that, in the decision-score computation step, the decision score of each class is obtained as follows:
step A: given training data of N classes (N > 1), for each class i, take the training data of class i as positive examples and the training data of all other classes as negative examples;
step B: feed the positive and negative examples to the random forest classifier, quantize the outputs of all weak classifiers for each datum, and convert them into a symbol set;
step C: use an emerging pattern mining method to mine, from these symbol sets, patterns that significantly distinguish positive from negative examples, each pattern being a symbol set p;
step D: convert each mined pattern p into a corresponding decision rule P and determine the decision score this rule contributes for class i.
4. The classification method according to claim 3, characterized in that,
in step B, the symbol sets of the positive examples are denoted PS = {ps_1, ps_2, …, ps_J} and those of the negative examples NS = {ns_1, ns_2, …, ns_K}, where J and K are the numbers of positive and negative examples, J > 1, K > 1; and,
in step C, each symbol set p satisfies the conditions
sp_p = |{s ∈ PS : p ⊆ s}| / |PS| ≥ θ_sp,
gr_p = sp_p / ( |{s ∈ NS : p ⊆ s}| / |NS| ) ≥ θ_gr,
where θ_sp and θ_gr (θ_gr > 1) are pre-specified thresholds;
in step D, the decision score of a datum x for class i is computed as
Sc_i(x) = (1/Z_i) · Σ_{P ∈ Φ_i} w_P · δ(p ⊆ xs),
where Φ_i is the set of all decision rules mined for class i, Z_i is the regularization term for class i, and
δ(p ⊆ xs) = 1 if p ⊆ xs, and 0 otherwise,
where xs is the encoded symbol set of the datum x.
5. A random forest classifier based on contrast patterns, the random forest classifier comprising a plurality of weak classifiers, characterized in that it further comprises:
a quantization module, configured to classify the input data with each weak classifier constituting the random forest, encode the output of each weak classifier, and quantize it into a symbol;
a decision-score computation module, configured to form the symbols output by all the weak classifiers into a symbol set, take this set as the input of a learned set of decision rules, and compute the decision score of each class from the current symbol set and the decision rules;
a class decision module, configured to select the class with the highest decision score as the final decision for the input data.
6. The classifier according to claim 5, characterized in that,
in the decision-score computation module, the set of decision rules consists of discriminative patterns that distinguish between data classes, obtained by a data-mining method that analyzes the correlations and regularities among the symbols output by the plurality of weak classifiers.
7. The classifier according to claim 6, characterized in that the decision-score computation module comprises:
module A, configured to, given training data of N classes (N > 1), for each class i, take the training data of class i as positive examples and the training data of all other classes as negative examples;
module B, configured to feed the positive and negative examples to the random forest classifier, quantize the outputs of all weak classifiers for each datum, and convert them into a symbol set;
module C, configured to use an emerging pattern mining method to mine, from these symbol sets, patterns that significantly distinguish positive from negative examples, each pattern being a symbol set p;
module D, configured to convert each mined pattern p into a corresponding decision rule P and determine the decision score this rule contributes for class i.
8. The classifier according to claim 7, characterized in that,
in module B, the symbol sets of the positive examples are denoted PS = {ps_1, ps_2, …, ps_J} and those of the negative examples NS = {ns_1, ns_2, …, ns_K}, where J and K are the numbers of positive and negative examples, J > 1, K > 1; and,
in module C, each symbol set p satisfies the conditions
sp_p = |{s ∈ PS : p ⊆ s}| / |PS| ≥ θ_sp,
gr_p = sp_p / ( |{s ∈ NS : p ⊆ s}| / |NS| ) ≥ θ_gr,
where θ_sp and θ_gr (θ_gr > 1) are pre-specified thresholds;
in module D, the decision score of a datum x for class i is computed as
Sc_i(x) = (1/Z_i) · Σ_{P ∈ Φ_i} w_P · δ(p ⊆ xs),
where Φ_i is the set of all decision rules mined for class i, Z_i is the regularization term for class i, and
δ(p ⊆ xs) = 1 if p ⊆ xs, and 0 otherwise,
where xs is the encoded symbol set of the datum x.
CN201010265846XA 2010-08-27 2010-08-27 Random forest classification method and classifiers based on comparison mode Pending CN101923650A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201010265846XA CN101923650A (en) 2010-08-27 2010-08-27 Random forest classification method and classifiers based on comparison mode

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201010265846XA CN101923650A (en) 2010-08-27 2010-08-27 Random forest classification method and classifiers based on comparison mode

Publications (1)

Publication Number Publication Date
CN101923650A true CN101923650A (en) 2010-12-22

Family

ID=43338573

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201010265846XA Pending CN101923650A (en) 2010-08-27 2010-08-27 Random forest classification method and classifiers based on comparison mode

Country Status (1)

Country Link
CN (1) CN101923650A (en)


Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102221655A (en) * 2011-06-16 2011-10-19 河南省电力公司济源供电公司 Random-forest-model-based power transformer fault diagnosis method
CN102221655B (en) * 2011-06-16 2013-08-07 河南省电力公司济源供电公司 Random-forest-model-based power transformer fault diagnosis method
CN103473231A (en) * 2012-06-06 2013-12-25 深圳先进技术研究院 Classifier building method and system
CN104391970A (en) * 2014-12-04 2015-03-04 深圳先进技术研究院 Attribute subspace weighted random forest data processing method
CN104391970B (en) * 2014-12-04 2017-11-24 深圳先进技术研究院 A kind of random forest data processing method of attribute subspace weighting
CN106096661A (en) * 2016-06-24 2016-11-09 中国科学院电子学研究所苏州研究院 Zero sample image sorting technique based on relative priority random forest
CN106096661B (en) * 2016-06-24 2019-03-01 中国科学院电子学研究所苏州研究院 The zero sample image classification method based on relative priority random forest
CN107292296A (en) * 2017-08-04 2017-10-24 西南大学 A kind of human emotion wake-up degree classifying identification method of use EEG signals
CN110108992A (en) * 2019-05-24 2019-08-09 国网湖南省电力有限公司 Based on cable partial discharge fault recognition method, system and the medium for improving random forests algorithm
CN110108992B (en) * 2019-05-24 2021-07-23 国网湖南省电力有限公司 Cable partial discharge fault identification method and system based on improved random forest algorithm

Similar Documents

Publication Publication Date Title
CN102622373B (en) Statistic text classification system and statistic text classification method based on term frequency-inverse document frequency (TF*IDF) algorithm
CN106845717B (en) Energy efficiency evaluation method based on multi-model fusion strategy
CN101923650A (en) Random forest classification method and classifiers based on comparison mode
CN102194013A (en) Domain-knowledge-based short text classification method and text classification system
CN104965867A (en) Text event classification method based on CHI feature selection
CN110442568A (en) Acquisition methods and device, storage medium, the electronic device of field label
CN104391835A (en) Method and device for selecting feature words in texts
CN101256631B (en) Method and apparatus for character recognition
CN101183430A (en) Handwriting digital automatic identification method based on module neural network SN9701 rectangular array
CN101876987A (en) Overlapped-between-clusters-oriented method for classifying two types of texts
CN101604322A (en) A kind of decision level text automatic classified fusion method
CN104598586A (en) Large-scale text classifying method
CN105975518A (en) Information entropy-based expected cross entropy feature selection text classification system and method
Elyassami et al. Road crashes analysis and prediction using gradient boosted and random forest trees
CN112733936A (en) Recyclable garbage classification method based on image recognition
CN115811440B (en) Real-time flow detection method based on network situation awareness
CN102004796B (en) Non-retardant hierarchical classification method and device of webpage texts
CN110909542A (en) Intelligent semantic series-parallel analysis method and system
Singh et al. Feature selection based classifier combination approach for handwritten Devanagari numeral recognition
CN116633601A (en) Detection method based on network traffic situation awareness
CN101673305A (en) Industry sorting method, industry sorting device and industry sorting server
CN106557983B (en) Microblog junk user detection method based on fuzzy multi-class SVM
Wang et al. Text categorization rule extraction based on fuzzy decision tree
CN103207893A (en) Classification method of two types of texts on basis of vector group mapping
CN109919463A (en) Technical journal manuscript quality evaluation system based on SVM learning model

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C12 Rejection of a patent application after its publication
RJ01 Rejection of invention patent application after publication

Application publication date: 20101222