CN103559542A - Extension neural network pattern recognition method based on priori knowledge


Info

Publication number
CN103559542A
CN103559542A (application CN201310532381.3A)
Authority
CN
China
Legal status: Pending (assumed; not a legal conclusion)
Application number
CN201310532381.3A
Other languages
Chinese (zh)
Inventor
周玉
王亭岭
宫贺
陈建明
熊军华
Current Assignee
North China University of Water Resources and Electric Power
Original Assignee
North China University of Water Resources and Electric Power
Priority date
Filing date
Publication date
Application filed by North China University of Water Resources and Electric Power filed Critical North China University of Water Resources and Electric Power
Priority to CN201310532381.3A priority Critical patent/CN103559542A/en
Publication of CN103559542A publication Critical patent/CN103559542A/en

Landscapes

  • Image Analysis (AREA)

Abstract

The invention discloses an extension neural network pattern recognition method based on priori knowledge. The method comprises the following steps: (1) prepare a training sample set and a knowledge base; (2) determine the initial weights of the extension neural network from the training samples and the prior knowledge; (3) train the extension neural network with the training samples; if the training process converges or the total error rate reaches a preset value, stop training and save the trained weight vectors of the extension neural network, otherwise continue training; (4) use the trained extension neural network to perform pattern recognition until all objects to be recognized have been identified. Driven jointly by prior knowledge and training samples, the method guides and completes the training and learning of the extension neural network; it lightens the network's learning burden, effectively improves its performance, shortens training time, and raises recognition accuracy.

Description

Extension neural network pattern recognition method based on prior knowledge
Technical field
The present invention relates to the field of pattern recognition, and in particular to an extension neural network pattern recognition method driven jointly by prior knowledge and training samples.
Background art
The extension neural network (Extension Neural Network, ENN) is an organic combination of extension theory and neural network techniques. It is a new network type following the fuzzy neural network, the genetic neural network and the evolutionary neural network; its emergence and development not only broaden the applications of extension theory itself, but also promote the further development of neural network techniques and intelligent computation. At present, the two-weight extension neural network proposed by M.H. Wang ("Extension neural network and its applications", Neural Networks, 2003, vol. 16, no. 5) is the most widely used extension neural network model; it is remarkably effective in classification, recognition and clustering problems whose feature values lie within interval ranges. Compared with traditional neural networks, the extension neural network has strong generalization ability, good plasticity and a short training time, adapts quickly to new information, and follows certain rules in practical operation; the two-weight extension neural network classifier has therefore found applications in pattern recognition, detection, fault diagnosis and related fields.
The most important index for measuring the classification and recognition performance of a neural network is its generalization ability, and one of the most important factors affecting generalization ability is the training samples: high-quality training samples yield high-quality network performance. However, obtaining high-quality training samples is not easy. Although the observation data (sample data) and the unseen data come from the same distribution, the usually limited amount of observation data cannot characterize the underlying distribution well. In addition, even when the sample data are sufficient in quantity, they may contain data carrying little information, or large amounts of redundant, highly similar data. Meanwhile, various kinds of noise may be introduced during sample acquisition, especially for training samples obtained in harsh environments. When training sample quality is poor, the performance of the extension neural network declines sharply, so the practical problem of how to improve the performance of the extension neural network under poor-sample conditions must be solved.
In practical applications, high-quality sample data are sometimes hard to obtain. On the other hand, although some systems are so complex that their internal structure cannot be fully understood, one usually has some understanding of certain process mechanisms and knows some prior information about these processes. It is therefore desirable to make full use of this information to establish the connection between the observation data and the unseen data, and to build models in combination with empirical modelling methods. Adding prior knowledge may be the only way for machine learning to achieve generalization ability under finite data. Much research shows that using prior knowledge to impose constraints on a neural network before training helps improve network performance. Hence, the "optimal" classifier for a specific learning problem can only be learned by incorporating the prior knowledge of that problem.
Summary of the invention
The object of the present invention is to provide an extension neural network pattern recognition method based on prior knowledge that effectively improves the performance of the extension neural network; even with poor-quality training samples, or when applied in complex system environments, the prior-knowledge-based extension neural network retains outstanding classification and recognition performance.
The present invention adopts the following technical solution:
An extension neural network pattern recognition method based on prior knowledge comprises the following steps: (1) prepare a training sample set and a knowledge base; the training sample set is the observation data already obtained; suppose the training sample set is X = {X_1^p, X_2^p, ..., X_{N_p}^p}, where N_p is the total number of samples in the set and the i-th sample is X_i^p = {x_i1^p, x_i2^p, ..., x_in^p}, with n the total number of features in the sample feature vector and p the class label of the i-th sample; the knowledge base stores the information already known about the concrete object; the knowledge is embodied in the extension neural network weights as the classical-domain extreme values of each feature of the feature vector, l_kj = <l_kj^L, l_kj^U>, where l_kj denotes the quantitative range of the j-th feature attribute of the k-th pattern;
(2) determine the initial weights of the extension neural network from the training samples and the prior knowledge;
(3) train the extension neural network with the training samples; if the training process converges or the total error rate reaches a preset value, stop training and save the trained weight vectors of the extension neural network; otherwise continue training;
(4) use the trained extension neural network to perform pattern recognition until all objects to be recognized have been identified.
Step (2) is specifically as follows: first determine the number n of input-layer neurons and the number n_c of output-layer neurons of the extension neural network; n equals the number of feature values in the feature vector, and n_c equals the number of pattern classes; the input layer and the output layer are connected by pairs of weights, one weight representing the lower bound of the classical domain of a feature and the other the upper bound of the classical domain of the corresponding feature; the two weights connecting the j-th input node and the k-th output node are denoted w_kj^L and w_kj^U respectively, and the initial weights of the extension neural network are obtained from formulas (2)-(3), which take the extreme values of the classical domains determined jointly by the training samples and the prior ranges l_kj:

w_kj^L = min{ min_i x_ij^k, l_kj^L }   (2)
w_kj^U = max{ max_i x_ij^k, l_kj^U }   (3)
Step (3) specifically comprises the following steps, where the initial center of each pattern class is:

Z_k = {z_k1, z_k2, ..., z_kn}   (4)
z_kj = (w_kj^L + w_kj^U)/2, k = 1, 2, ..., n_c; j = 1, 2, ..., n   (5)
(31) read in the i-th sample and the safety-state pattern class label p corresponding to this sample, i = 1, 2, ..., N_p;
(32) use formula (6) to compute the extension distance between sample X_i^p and the k-th safety-state pattern class:

ED_ik = Σ_{j=1}^{n} [ ( |x_ij^p - z_kj| - (w_kj^U - w_kj^L)/2 ) / |(w_kj^U - w_kj^L)/2| + 1 ], k = 1, 2, ..., n_c, with z_kj = (w_kj^U + w_kj^L)/2   (6)
(33) find k* such that ED_ik* = min_k { ED_ik }; if k* = p, go to step (35); otherwise go to step (34) and adjust the network weights;
(34) adjust the weights corresponding to the p-th and k*-th pattern classes according to formulas (7)-(10): (a) the p-th class center is adjusted as in formula (7), and the k*-th class center as in formula (8):

z_pj^new = z_pj^old + η(x_ij^p - z_pj^old)   (7)
z_k*j^new = z_k*j^old - η(x_ij^p - z_k*j^old)   (8)

(b) the p-th class weights are adjusted as in formula (9), and the k*-th class weights as in formula (10):

w_pj^L(new) = w_pj^L(old) + η(x_ij^p - z_pj^old), w_pj^U(new) = w_pj^U(old) + η(x_ij^p - z_pj^old)   (9)
w_k*j^L(new) = w_k*j^L(old) - η(x_ij^p - z_k*j^old), w_k*j^U(new) = w_k*j^U(old) - η(x_ij^p - z_k*j^old)   (10)
where z_k*j^new and z_k*j^old denote the k*-th class center after and before adjustment; z_pj^new and z_pj^old denote the p-th class center after and before adjustment; w_pj^L(new) and w_pj^U(new) denote the p-th class weights after adjustment, and w_pj^L(old) and w_pj^U(old) the weights before adjustment; w_k*j^L(new) and w_k*j^U(new) denote the k*-th class weights after adjustment, and w_k*j^L(old) and w_k*j^U(old) the weights before adjustment; η denotes the learning rate;
(35) execute steps (31) to (34) in a loop; when all samples have been classified, one learning epoch ends and step (36) is entered;
(36) if the training process converges or the total error rate reaches the preset value, stop training; otherwise return to step (31).
The total error rate in step (3) is E_T = N_m/N_p, where N_m is the number of misclassified samples and N_p is the number of samples.
The present invention proposes an extension neural network pattern recognition method based on prior knowledge: effective prior knowledge is embedded into the neural network weights, and the learning of the extension neural network is guided under the joint drive of prior knowledge and training samples, thereby completing its training. The prior-knowledge-based extension neural network effectively lightens the learning burden of the network, and the performance of the extension neural network (learning performance, generalization ability, fault tolerance, etc.) is significantly improved.
By using prior knowledge to constrain the network weights before training, the present invention eliminates the adverse effects that noisy or incomplete samples bring to learning. Even in practical engineering, where training sample quality is not high and system environments are complex, the extension neural network retains outstanding classification and recognition performance, which is conducive to improving extension neural network performance. The invention therefore has good application value and can be applied to industrial fault diagnosis, working-environment safety-state detection, image processing and similar fields.
Brief description of the drawings
Fig. 1 is a structural diagram of the extension neural network;
Fig. 2 is a model diagram of the extension neural network driven jointly by data and prior knowledge;
Fig. 3 is a schematic diagram of the extension distance between a point and an interval;
Fig. 4 is a flow chart of the method of the present invention.
Embodiment
The invention discloses an extension neural network pattern recognition method based on prior knowledge which, as shown in Fig. 4, comprises the following steps:
(1) Prepare a training sample set and a knowledge base. The training sample set is the observation data already obtained; suppose the training sample set is X = {X_1^p, X_2^p, ..., X_{N_p}^p}, where N_p is the total number of samples in the set and the i-th sample is X_i^p = {x_i1^p, x_i2^p, ..., x_in^p}, with n the total number of features in the sample feature vector and p the class label of the i-th sample. The knowledge base stores the prior information already known about the concrete object; the knowledge is embodied in the extension neural network weights as the classical-domain extreme values of each feature of the feature vector, l_kj = <l_kj^L, l_kj^U>, where l_kj denotes the quantitative range of the j-th feature attribute of the k-th pattern.
In practice, we often have some understanding of the interval ranges of certain features of the objects to be recognized. For example, in a coal-mine environment safety-state recognition system, carbon monoxide content is an important indicator: when the carbon monoxide content is between 0.0006% and 0.0012%, the mine can be considered to be in a safe state, while a content between 0.0018% and 0.0024% requires a warning, indicating that the coal-mine environment is in a rather dangerous state. Accumulating and obtaining this kind of information is comparatively easy.
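The knowledge base described above can be sketched in code. The following minimal Python sketch stores the classical-domain ranges l_kj per pattern class and checks whether a reading falls inside a stored range; the dictionary layout, function name and class labels are illustrative assumptions, not from the patent, while the carbon monoxide percentages are the ones quoted in the coal-mine example above.

```python
# Illustrative knowledge base: for each pattern class k and feature j,
# the prior classical-domain range l_kj as a (lower, upper) interval.
# The CO percentage ranges come from the coal-mine example in the text;
# the labels "safe"/"warning" and the layout are assumptions.
knowledge_base = {
    "safe":    {"co_content": (0.0006, 0.0012)},
    "warning": {"co_content": (0.0018, 0.0024)},
}

def in_classical_domain(kb, label, feature, value):
    """Return True if `value` lies inside the stored classical domain."""
    lo, hi = kb[label][feature]
    return lo <= value <= hi
```

For instance, a CO reading of 0.0008% falls inside the stored "safe" range, while 0.0020% falls inside the "warning" range instead.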
(2) Determine the structure of the extension neural network, and determine its initial weights from the training samples and the prior knowledge.
Specifically: first determine the number n of input-layer neurons and the number n_c of output-layer neurons of the extension neural network; n equals the number of feature values in the feature vector, and n_c equals the number of pattern classes; the input layer and the output layer are connected by pairs of weights, one weight representing the lower bound of the classical-domain range of a feature and the other the upper bound of the classical-domain range of the corresponding feature. Fig. 1 is a schematic diagram of the extension neural network structure, and Fig. 2 shows the extension neural network model driven jointly by training samples and prior knowledge.
The matter-element model of each pattern is established from the training samples and the prior knowledge, as in formula (1):

R_k = [ N_k, c_1, V_k1; c_2, V_k2; ...; c_n, V_kn ], k = 1, 2, ..., n_c   (1)

In extension theory, c_j denotes the j-th feature of the k-th pattern N_k, and V_kj denotes the classical-domain range of the k-th pattern with respect to the j-th feature index; the classical-domain range is determined jointly by the training samples and the prior knowledge. The two weights connecting the j-th input node and the k-th output node are denoted w_kj^L and w_kj^U respectively, and the initial weights of the extension neural network are obtained from formulas (2)-(3):

w_kj^L = min{ min_i x_ij^k, l_kj^L }   (2)
w_kj^U = max{ max_i x_ij^k, l_kj^U }   (3)
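Equations (2)-(3) appear only as images in the source, so the exact initialisation rule cannot be reproduced verbatim; the sketch below assumes one plausible reading of the surrounding text — the classical domain of each weight pair is taken as the envelope of the per-class sample extremes and the prior range l_kj, so that the domain is "determined jointly by the training samples and the prior knowledge". All function and variable names are illustrative.

```python
def init_weights(samples_by_class, prior_domains):
    """samples_by_class: {class k: list of feature vectors};
    prior_domains: {class k: list of (lower, upper) prior ranges, one per feature}.
    Returns {class k: (w_lo list, w_hi list)} -- the two weights per feature."""
    weights = {}
    for k, samples in samples_by_class.items():
        n = len(samples[0])
        w_lo, w_hi = [], []
        for j in range(n):
            column = [x[j] for x in samples]
            prior_lo, prior_hi = prior_domains[k][j]
            # Assumed reading of eqs. (2)-(3): the classical domain covers
            # both the sample extremes and the prior range, so neither
            # source of knowledge is discarded.
            w_lo.append(min(min(column), prior_lo))
            w_hi.append(max(max(column), prior_hi))
        weights[k] = (w_lo, w_hi)
    return weights

def centers(weights):
    """Initial class centres, eq. (5): z_kj = (w_kj^L + w_kj^U) / 2."""
    return {k: [(lo + hi) / 2.0 for lo, hi in zip(w_lo, w_hi)]
            for k, (w_lo, w_hi) in weights.items()}
```

With one 1-D class whose samples span 0.0008-0.0010 and prior range 0.0006-0.0012, the initial weights become the prior bounds and the centre lands at 0.0009.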
(3) Train the extension neural network with the training samples. The initial center of each pattern class is:

Z_k = {z_k1, z_k2, ..., z_kn}   (4)
z_kj = (w_kj^L + w_kj^U)/2, k = 1, 2, ..., n_c; j = 1, 2, ..., n   (5)

Training specifically comprises the following steps:
(31) Read in the i-th sample and the safety-state pattern class label p corresponding to this sample, i = 1, 2, ..., N_p;
(32) Use formula (6) to compute the extension distance between sample X_i^p and the k-th safety-state pattern class:

ED_ik = Σ_{j=1}^{n} [ ( |x_ij^p - z_kj| - (w_kj^U - w_kj^L)/2 ) / |(w_kj^U - w_kj^L)/2| + 1 ], k = 1, 2, ..., n_c, with z_kj = (w_kj^U + w_kj^L)/2   (6)
Fig. 3 is a schematic diagram of the extension distance between a point x and an interval <w^L, w^U>. The extension distance is used to judge the degree of similarity between a tested point and a class center: when ED = 0, the object under test coincides with the class center and is completely similar; the larger the extension distance, the smaller the similarity.
As formula (6) shows, even erroneous sample inputs do not disturb the training and learning of the extension neural network. The main reason is that the initial weights embed the known information about the system; this information effectively excludes data that depart from the classical-domain ranges, so that such samples cannot wrongly adjust the initial weights. It thus guides the learning of the extension neural network, not only lightening its training burden but also reducing training time and computational loss.
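The extension distance of equation (6) and its behaviour on out-of-domain points can be illustrated directly. In this sketch (the function name and the test values are illustrative), a point at the class centre scores 0, a point on the classical-domain boundary scores 1 per feature, and a far outlier scores much more — the mechanism by which samples violating the prior classical domain are kept from dominating training.

```python
# Extension distance of eq. (6): for each feature, the distance of the
# point from the class centre is offset by the half-width of the classical
# domain and normalised by it, then shifted by 1.  A point at the centre
# scores 0, a point on the interval boundary scores 1, and points far
# outside score much more.
def extension_distance(x, w_lo, w_hi):
    ed = 0.0
    for xj, lo, hi in zip(x, w_lo, w_hi):
        z = (lo + hi) / 2.0          # class centre, eq. (5)
        half = (hi - lo) / 2.0       # half-width of the classical domain
        ed += (abs(xj - z) - half) / abs(half) + 1.0
    return ed
```

For a 1-D class with classical domain [0, 1]: the centre 0.5 scores 0, the boundary 1.0 scores 1, and an outlier such as 5.0 scores far higher, so it would never win the argmin of step (33).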
(33) Find k* such that ED_ik* = min_k { ED_ik }; if k* = p, go to step (35); otherwise go to step (34) and adjust the network weights.
(34) Adjust the weights corresponding to the p-th and k*-th pattern classes according to formulas (7)-(10): (a) the p-th class center is adjusted as in formula (7), and the k*-th class center as in formula (8):

z_pj^new = z_pj^old + η(x_ij^p - z_pj^old)   (7)
z_k*j^new = z_k*j^old - η(x_ij^p - z_k*j^old)   (8)

(b) the p-th class weights are adjusted as in formula (9), and the k*-th class weights as in formula (10):

w_pj^L(new) = w_pj^L(old) + η(x_ij^p - z_pj^old), w_pj^U(new) = w_pj^U(old) + η(x_ij^p - z_pj^old)   (9)
w_k*j^L(new) = w_k*j^L(old) - η(x_ij^p - z_k*j^old), w_k*j^U(new) = w_k*j^U(old) - η(x_ij^p - z_k*j^old)   (10)
where z_k*j^new and z_k*j^old denote the k*-th class center after and before adjustment; z_pj^new and z_pj^old denote the p-th class center after and before adjustment; w_pj^L(new) and w_pj^U(new) denote the p-th class weights after adjustment, and w_pj^L(old) and w_pj^U(old) the weights before adjustment; w_k*j^L(new) and w_k*j^U(new) denote the k*-th class weights after adjustment, and w_k*j^L(old) and w_k*j^U(old) the weights before adjustment; η denotes the learning rate;
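The adjustment of step (34) can be sketched as follows. The in-place list representation and the function name are illustrative assumptions, but the update terms follow equations (7)-(10): class p is pulled toward the sample and class k* is pushed away, each by η times the sample's distance to the respective old centre.

```python
# Weight adjustment of step (34), eqs. (7)-(10): when a sample x of true
# class p is nearest to the wrong class k*, the centre and both interval
# weights of class p move toward x, and those of class k* move away.
# z, w_lo, w_hi map class labels to per-feature lists; updated in place.
def adjust(x, p, k_star, z, w_lo, w_hi, eta):
    for j, xj in enumerate(x):
        delta_p = eta * (xj - z[p][j])        # eqs. (7) and (9), old centre
        z[p][j] += delta_p
        w_lo[p][j] += delta_p
        w_hi[p][j] += delta_p
        delta_k = eta * (xj - z[k_star][j])   # eqs. (8) and (10), old centre
        z[k_star][j] -= delta_k
        w_lo[k_star][j] -= delta_k
        w_hi[k_star][j] -= delta_k
```

Because the class centre and both interval bounds shift by the same amount, the width of each classical domain is preserved by a single adjustment; only its position moves.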
(35) Execute steps (31) to (34) in a loop. When all samples have been classified, one learning epoch ends and step (36) is entered;
(36) If the training process converges, or the total error rate E_T reaches the preset value, stop training. Here E_T = N_m/N_p, where N_m is the number of misclassified samples and can be obtained during training, and N_p is the number of samples. Otherwise return to step (31).
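Steps (31)-(36) together form one training loop, sketched below with the distance of equation (6) and the updates of equations (7)-(10) inlined so the function is self-contained; the data layout, parameter names, defaults and stopping target are illustrative assumptions.

```python
# Sketch of the training loop of steps (31)-(36): classify each sample by
# minimum extension distance, adjust weights on misclassification, and stop
# when the total error rate E_T = N_m / N_p reaches the preset target.
def train(samples, labels, z, w_lo, w_hi, eta=0.1, target=0.0, max_epochs=100):
    def ed(x, k):                    # extension distance, eq. (6)
        total = 0.0
        for j, xj in enumerate(x):
            half = (w_hi[k][j] - w_lo[k][j]) / 2.0
            total += (abs(xj - z[k][j]) - half) / abs(half) + 1.0
        return total
    for _ in range(max_epochs):
        miss = 0
        for x, p in zip(samples, labels):
            k_star = min(z, key=lambda k: ed(x, k))   # step (33)
            if k_star == p:
                continue
            miss += 1                                  # step (34): adjust
            for j, xj in enumerate(x):
                dp = eta * (xj - z[p][j])
                z[p][j] += dp; w_lo[p][j] += dp; w_hi[p][j] += dp
                dk = eta * (xj - z[k_star][j])
                z[k_star][j] -= dk; w_lo[k_star][j] -= dk; w_hi[k_star][j] -= dk
        if miss / len(samples) <= target:              # step (36): E_T check
            break
    return miss / len(samples)
```

On the toy 1-D data in the test below, the initial weights already separate the two classes, so the loop stops after the first epoch with E_T = 0.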
(4) Use the trained extension neural network to perform pattern recognition on the objects to be identified. The extension distance is used to measure the distance between the object under test and each safety pattern; find k* such that ED_k* = min_k { ED_k }, and output k* to indicate that the safety pattern of the object under test is k*; repeat until all objects to be identified have been recognized.
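The recognition of step (4) reduces to an argmin over extension distances. The sketch below uses illustrative 1-D weights built from the coal-mine CO ranges of the embodiment; the class names and numeric values are assumptions for demonstration only.

```python
# Step (4): assign an object to the class k* with the smallest extension
# distance (eq. (6)).  The weights below are illustrative 1-D values
# matching the coal-mine CO example, not taken from the patent.
def classify(x, z, w_lo, w_hi):
    def ed(k):
        total = 0.0
        for j, xj in enumerate(x):
            half = (w_hi[k][j] - w_lo[k][j]) / 2.0
            total += (abs(xj - z[k][j]) - half) / abs(half) + 1.0
        return total
    return min(z, key=ed)

z    = {"safe": [0.0009], "warning": [0.0021]}      # class centres
w_lo = {"safe": [0.0006], "warning": [0.0018]}      # lower weights
w_hi = {"safe": [0.0012], "warning": [0.0024]}      # upper weights
```

A CO reading of 0.0008% is assigned to "safe" and one of 0.0022% to "warning", matching the interval ranges given in the embodiment.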

Claims (4)

1. An extension neural network pattern recognition method based on prior knowledge, characterized by comprising the following steps:
(1) prepare a training sample set and a knowledge base; the training sample set is the observation data already obtained; suppose the training sample set is X = {X_1^p, X_2^p, ..., X_{N_p}^p}, where N_p is the total number of samples in the set and the i-th sample is X_i^p = {x_i1^p, x_i2^p, ..., x_in^p}, with n the total number of features in the sample feature vector and p the class label of the i-th sample; the knowledge base stores the information already known about the concrete object; the knowledge is embodied in the extension neural network weights as the classical-domain extreme values of each feature of the feature vector, l_kj = <l_kj^L, l_kj^U>, where l_kj denotes the quantitative range of the j-th feature attribute of the k-th pattern;
(2) determine the initial weights of the extension neural network from the training samples and the prior knowledge;
(3) train the extension neural network with the training samples; if the training process converges or the total error rate reaches a preset value, stop training and save the trained weight vectors of the extension neural network; otherwise continue training;
(4) use the trained extension neural network to perform pattern recognition until all objects to be recognized have been identified.
2. The extension neural network pattern recognition method based on prior knowledge according to claim 1, characterized in that step (2) is specifically as follows: first determine the number n of input-layer neurons and the number n_c of output-layer neurons of the extension neural network; n equals the number of feature values in the feature vector, and n_c equals the number of pattern classes; the input layer and the output layer are connected by pairs of weights, one weight representing the lower bound of the classical domain of a feature and the other the upper bound of the classical domain of the corresponding feature; the two weights connecting the j-th input node and the k-th output node are denoted w_kj^L and w_kj^U respectively, and the initial weights of the extension neural network are obtained from formulas (2)-(3):

w_kj^L = min{ min_i x_ij^k, l_kj^L }   (2)
w_kj^U = max{ max_i x_ij^k, l_kj^U }   (3)
3. The extension neural network pattern recognition method based on prior knowledge according to claim 1, characterized in that step (3) specifically comprises the following steps, where the initial center of each pattern class is:

Z_k = {z_k1, z_k2, ..., z_kn}   (4)
z_kj = (w_kj^L + w_kj^U)/2, k = 1, 2, ..., n_c; j = 1, 2, ..., n   (5)

(31) read in the i-th sample and the safety-state pattern class label p corresponding to this sample, i = 1, 2, ..., N_p;
(32) use formula (6) to compute the extension distance between sample X_i^p and the k-th safety-state pattern class:

ED_ik = Σ_{j=1}^{n} [ ( |x_ij^p - z_kj| - (w_kj^U - w_kj^L)/2 ) / |(w_kj^U - w_kj^L)/2| + 1 ], k = 1, 2, ..., n_c, with z_kj = (w_kj^U + w_kj^L)/2   (6)

(33) find k* such that ED_ik* = min_k { ED_ik }; if k* = p, go to step (35); otherwise go to step (34) and adjust the network weights;
(34) adjust the weights corresponding to the p-th and k*-th pattern classes according to formulas (7)-(10): (a) the p-th class center is adjusted as in formula (7), and the k*-th class center as in formula (8):

z_pj^new = z_pj^old + η(x_ij^p - z_pj^old)   (7)
z_k*j^new = z_k*j^old - η(x_ij^p - z_k*j^old)   (8)

(b) the p-th class weights are adjusted as in formula (9), and the k*-th class weights as in formula (10):

w_pj^L(new) = w_pj^L(old) + η(x_ij^p - z_pj^old), w_pj^U(new) = w_pj^U(old) + η(x_ij^p - z_pj^old)   (9)
w_k*j^L(new) = w_k*j^L(old) - η(x_ij^p - z_k*j^old), w_k*j^U(new) = w_k*j^U(old) - η(x_ij^p - z_k*j^old)   (10)

where z_k*j^new and z_k*j^old denote the k*-th class center after and before adjustment; z_pj^new and z_pj^old denote the p-th class center after and before adjustment; w_pj^L(new) and w_pj^U(new) denote the p-th class weights after adjustment, and w_pj^L(old) and w_pj^U(old) the weights before adjustment; w_k*j^L(new) and w_k*j^U(new) denote the k*-th class weights after adjustment, and w_k*j^L(old) and w_k*j^U(old) the weights before adjustment; η denotes the learning rate;
(35) execute steps (31) to (34) in a loop; when all samples have been classified, one learning epoch ends and step (36) is entered;
(36) if the training process converges, or the total error rate reaches the preset value, stop training; otherwise return to step (31).
4. The extension neural network pattern recognition method based on prior knowledge according to claim 3, characterized in that the total error rate in step (3) is E_T = N_m/N_p, where N_m is the number of misclassified samples and N_p is the number of samples.
CN201310532381.3A 2013-10-31 2013-10-31 Extension neural network pattern recognition method based on priori knowledge Pending CN103559542A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201310532381.3A CN103559542A (en) 2013-10-31 2013-10-31 Extension neural network pattern recognition method based on priori knowledge


Publications (1)

Publication Number Publication Date
CN103559542A true CN103559542A (en) 2014-02-05

Family

ID=50013783



Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105989375A (en) * 2015-01-30 2016-10-05 富士通株式会社 Classifier, classification device and classification method for classifying handwritten character images
CN104569035A (en) * 2015-02-04 2015-04-29 神华集团有限责任公司 Method for acquiring critical property parameters of coal liquefaction oil
CN107533664A (en) * 2015-03-26 2018-01-02 英特尔公司 Pass through the neural network classification of decomposition
CN107533664B (en) * 2015-03-26 2023-04-28 英特尔公司 Classification by decomposed neural networks
CN106228185A (en) * 2016-07-20 2016-12-14 武汉盈力科技有限公司 A kind of general image classifying and identifying system based on neutral net and method
CN106228185B (en) * 2016-07-20 2019-10-15 武汉盈力科技有限公司 A kind of general image classifying and identifying system neural network based and method
CN108197703A (en) * 2018-03-12 2018-06-22 中国矿业大学(北京) The coal rock detection method of dynamic Compensation Fuzzy Neural Networks
CN109325536A (en) * 2018-09-25 2019-02-12 南京审计大学 A kind of biomimetic pattern recognition method and its device
CN109325536B (en) * 2018-09-25 2019-09-17 南京审计大学 A kind of biomimetic pattern recognition method and its device
CN112699924A (en) * 2020-12-22 2021-04-23 安徽卡思普智能科技有限公司 Method for identifying lateral stability of vehicle


Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20140205)