CN107024331B - Neural-network-based online detection method for train motor vibration - Google Patents
Neural-network-based online detection method for train motor vibration
- Publication number: CN107024331B (application CN201710208538.5A)
- Authority
- CN
- China
- Prior art keywords
- layer
- input
- network
- fault
- algorithm
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Expired - Fee Related
Classifications
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01M—TESTING STATIC OR DYNAMIC BALANCE OF MACHINES OR STRUCTURES; TESTING OF STRUCTURES OR APPARATUS, NOT OTHERWISE PROVIDED FOR
- G01M7/00—Vibration-testing of structures; Shock-testing of structures
- G01M7/02—Vibration-testing by means of a shake table
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/06—Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons
- G06N3/061—Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons using biological neurons, e.g. biological neurons connected to an integrated circuit
Abstract
The invention discloses a neural-network-based online detection method for train motor vibration. It uses a six-layer convolutional neural network, selects acoustic features of the vibration signal as fault symptoms, and updates the weights with the Levenberg-Marquardt (LM) algorithm and a cross-entropy cost; the six-layer network comprises an input layer, hidden layers, and an output layer. Because the method diagnoses fault types with a neural network and the machine learns to identify fault types itself, no fault-archive database has to be built, fewer sensors need to be installed, and system reliability increases. As time passes and data accumulate, the self-identified fault types become increasingly precise, and the detection performance is good.
Description
Technical field
The invention belongs to the field of neural-network technology, and in particular concerns sensor-based perception of a monitored target, a neural-network fault-recognition mechanism, and automatic handling of a complex fault-analysis workflow; more particularly, it relates to an online detection method for train motor vibration based on a neural network.
Background art
Railway is an important strategic infrastructure of the country, the main artery of the national economy, and a popular means of transport. It occupies a backbone position in the integrated transportation system and plays a vital role in the national economy. In recent years, the rapid development of the national economy has placed ever stricter requirements on railway transportation. After many years of construction, high-speed rail is rapidly turning toward upgrading existing lines; China's high-speed-rail operating mileage accounts for about 50% of the world's total, half of the global figure. Against the background of China's rapid railway development, 350 km/h high-speed EMUs will be deployed in large numbers on future passenger-dedicated lines, and their safety directly affects the lives of passengers. As the core equipment of the high-speed-train traction drive, the traction motor is the "intermediate link" that converts electrical energy into mechanical energy: in traction operation it generates the power that moves the train, and in braking it acts as a generator to realize the EMU's regenerative braking force. Because the traction motor is suspended on the bogie, it usually works in a dusty environment with sharply varying ambient temperature, and its load changes frequently as the train's operating conditions constantly vary. This harsh working environment and special structure make traction-drive faults frequent on high-speed trains; their operational safety directly affects the safety of the entire train, and an unforeseen failure may cause major casualties, huge economic losses, and social impact. Actively developing detection and diagnosis technology for high-speed trains therefore not only contributes to the safe and reliable operation of EMUs but also provides the necessary technical support for reforming their maintenance. Condition monitoring and fault diagnosis of high-speed-train traction motors is an urgent and difficult topic in modern railway development. Traditional equipment fault diagnosis builds fault archives and a condition-information library, and the diagnostic process divides into signal detection, fault-feature extraction, equipment-state identification, and forecast decision, where:
Signal detection: according to the device and target being diagnosed, condition signals that are easy to monitor and acquire are selected, and a fault-condition information library is built; this constitutes the initial mode.
Fault-feature extraction: the acquired fault-condition signals are processed and fault features are extracted, laying the foundation for fault identification.
Equipment-state identification: using database technology, theoretical analysis, and reference to past failures, a fault-archive library is built as the reference mode; the mode under test is compared with the reference mode to decide whether the equipment has failed.
Forecast decision: after diagnostic analysis, if the equipment state is normal the above procedure is repeated; otherwise a fault exists, its condition is located, and a trend analysis is made.
From the points above it can be seen that traditional fault diagnosis relies on large amounts of heterogeneous data, classified into a fault-archive database according to known fault types. This requires many sensors monitoring different physical quantities and requires all fault types to be known in advance. Only when these conditions are met can state identification be performed, the occurrence and type of a fault be analyzed, and the diagnostic decision finally be made according to the fault type. In reality, people cannot know every fault type, so anomalous data appear that the system has no way to classify; unrecognized states inevitably distort the diagnostic decision and lead to false alarms. To reduce the false-alarm rate, more sensors must be added to identify faults from multiple angles, yet more sensors collecting more data also increase the false-alarm rate. Traditional fault diagnosis therefore needs many sensors, and a large number of sensors in turn reduces the reliability of the whole system. For these reasons, high-speed trains urgently need a monitoring method that reduces the number of sensors required and does not need prior knowledge of all fault types.
Summary of the invention
The present invention diagnoses faults with a neural-network method: the machine identifies fault types itself, so no fault-archive database has to be built, which greatly reduces the number of sensors to install and improves reliability. As time passes and data accumulate, the self-identified fault types become ever more precise. This solves the high false-alarm rate and large sensor count of conventional monitoring methods, reduces overall maintenance cost and maintenance time, and helps maintenance staff find faulty components quickly and accurately.
The present invention uses an improved neural-network training algorithm, Levenberg-Marquardt (LM). The algorithm is a combination of gradient descent and the Gauss-Newton method: it has the local convergence of Gauss-Newton and the global properties of gradient descent. Because it exploits approximate second-derivative information, it is substantially better than standard gradient descent in both number of training iterations and accuracy. The LM algorithm is a fast algorithm using standard numerical-optimization techniques, and unlike quasi-Newton methods it does not need to compute the Hessian matrix.
The technical scheme of the invention is as follows: an online detection method for train motor vibration based on a neural network, characterized in that a six-layer convolutional neural network is used, acoustic features of the vibration signal are selected as fault symptoms, and the LM algorithm and cross entropy are used; the six-layer convolutional network comprises an input layer, hidden layers, and an output layer. The method comprises the following steps:
1) determine the number of input-layer neurons (M), letting P denote the network's input sample vector;
2) determine the number of hidden-layer neurons (J), chosen empirically;
3) determine the number of output-layer neurons, determined by the fault types;
4) determine the network functions;
5) test with part of the data and build a 24-500-500-500-1024-4 convolutional neural network;
6) feed in the data and run the neural-network computation.
The present invention uses a six-layer convolutional neural network. The input layer has 24 units, and the input data is a 20*800 frequency matrix. The output layer has 4 units corresponding to 4 typical conditions: normal operation, stator fault, rolling-bearing fault, and rotor fault. The hidden layers have 500-500-500-1024 units, and the network is trained with the cross-entropy algorithm and the improved LM algorithm. During learning the global error is set to E = 10^-5; the network's initial values and weight corrections are generated uniformly at random within a fixed range. The network functions in step 4) comprise the neuron training function LM, the learning functions LM and cross entropy, and the functions min and max that bound the range of the input-vector elements. The neural network in step 5) includes an input port that accepts one 20*800 input matrix at a time, and an input-layer-to-hidden-layer weight module ω. The neural-network computation in step 6) updates the weights with the LM and cross-entropy algorithms.
The LM algorithm comprises:
1) specify the allowable training error and initialize the weight vector W^(0);
2) compute the network output and the error vector E(W^(0));
3) compute the Jacobian J(W) of the error vector with respect to the network weights;
4) iterate until a local minimum W_k = W(k) is reached.
The LM computation proceeds as follows. Let $w_k \in \mathbb{R}^n$ denote the network weight vector at the $k$-th iteration; the new weight vector $w_{k+1}$ is obtained from the rule

$$w_{k+1} = w_k + \Delta w_k, \qquad \Delta w_k = -\left(J^T(w_k)J(w_k) + \mu I\right)^{-1} J^T(w_k)\, e(w_k),$$

where $\mu$ is the damping factor and $e(w)$ the error vector. In the formulas, $t$ and $o$ are the desired output and the actual output of the output layer respectively (likewise below), and $E(w)$ is the error energy function

$$E(w) = \tfrac{1}{2}\sum_{i=1}^{m} (t_i - o_i)^2 .$$

This formula is for a single-output network; for a multi-output network the squared errors of all output units are accumulated, i.e. the summation extends from $m$ to $m \cdot n$ terms. Let $\xi$ index the $\xi$-th element of the weight vector $w$; the Jacobian matrix is

$$J(w) = \left[\frac{\partial e_i}{\partial w_\xi}\right].$$

If the $\xi$-th element represents a weight $v_{ij}$ between output node $i$ and hidden node $j$, taking the partial derivative gives

$$\frac{\partial e_i}{\partial v_{ij}} = d_i\, b_j, \qquad d_i = -(o_i - t_i)\cdot t_i\cdot(1 - t_i),$$

where $b_j$ is the output of the corresponding hidden node. If the $\xi$-th element represents the threshold of output node $i$, the partial derivative is $d_i$. If the $\xi$-th element represents a weight between the input layer and the hidden layer, the chain rule through the hidden node gives the partial derivative $d_i\, V_{ji}\, b_j(1-b_j)\, a$, where $V_{ji}$ is the corresponding output-layer weight and $a$ is the input of the corresponding input-layer node. If the $\xi$-th element represents the threshold of a hidden node, the derivative is $d_i\, V_{ji}\, b_j(1-b_j)$.
The cross-entropy algorithm is as follows. Cross entropy stands to entropy as covariance stands to variance. Entropy examines the expectation of a single information source (distribution):

$$H(p) = -\sum_i p_i \log p_i ,$$

while cross entropy examines the expectation across two distributions:

$$H(p, q) = -\sum_i p_i \log q_i .$$

The cross-entropy cost function is

$$L(x, z) = -\sum_{k=1}^{d} \left[x_k \log z_k + (1 - x_k)\log(1 - z_k)\right],$$

where $x$ denotes the original signal and $z$ the reconstructed signal, both expressed as vectors of length $d$; the expression can easily be rewritten in the form of vector inner products.
The cross-entropy cost function is introduced into the neural network to make up for the defect that the derivative of a sigmoid-type function saturates easily. To solve the resulting drop in parameter-update efficiency, the cross-entropy cost function replaces the traditional squared-error function.
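The entropy and cross-entropy expectations discussed above can be checked numerically; the sketch below (distributions chosen arbitrarily for illustration) also exhibits Gibbs' inequality, $H(p, q) \ge H(p)$ with equality only when the two distributions coincide.

```python
import numpy as np

def entropy(p):
    # Expectation of a single distribution's information content
    return -np.sum(p * np.log(p))

def cross_entropy(p, q):
    # Expectation of q's information content under distribution p
    return -np.sum(p * np.log(q))

p = np.array([0.7, 0.2, 0.1])   # "true" distribution (illustrative numbers)
q = np.array([0.5, 0.3, 0.2])   # model distribution

print(entropy(p), cross_entropy(p, q), cross_entropy(p, p))
```

Minimizing the cross entropy of a model against fixed targets therefore drives the model distribution toward the target distribution.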
For a neuron with multiple inputs and a single output, as shown in Fig. 3, the loss function is defined as

$$L = -\left[t \ln a + (1 - t)\ln(1 - a)\right],$$

where

$$a = \sigma(z), \qquad z = \sum_j w_j x_j + b .$$

The final derivation gives

$$\frac{\partial L}{\partial w_j} = x_j\left(\sigma(z) - t\right),$$

which avoids the problem of $\sigma'(z)$ participating in the parameter update and hurting update efficiency.
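The cancellation of $\sigma'(z)$ claimed above can be verified directly: for the sigmoid cross-entropy loss, the analytic gradient with respect to the pre-activation $z$ is simply $\sigma(z) - t$, which a central-difference check confirms. The numeric values below are illustrative.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def ce_loss(z, t):
    # Cross-entropy loss of a single sigmoid neuron
    a = sigmoid(z)
    return -(t * np.log(a) + (1 - t) * np.log(1 - a))

z, t = 2.0, 1.0
a = sigmoid(z)

analytic = a - t                     # gradient without any sigma'(z) factor
h = 1e-6                             # central-difference check
numeric = (ce_loss(z + h, t) - ce_loss(z - h, t)) / (2 * h)
print(analytic, numeric)
```

With a squared-error loss the gradient would instead carry a factor $\sigma'(z) = a(1-a)$, which vanishes when the neuron saturates; the cross-entropy loss removes exactly that factor.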
The LM algorithm flow is shown in Fig. 1. 24 groups of acoustic data are input, and features are extracted by a 5*5 convolutional layer with ReLU activation. A max-pooling layer reduces the data volume, after which a second 5*5 convolutional layer with ReLU activation extracts further features. After another max-pooling step the result is flattened into a 1-dimensional matrix and fed as input to the fully connected classifier, which finally outputs the probabilities of the 4 fault types. During training, the output is compared with the reference answer, the loss function LOSS is evaluated with the LM and cross-entropy algorithms, and the weights of each layer are updated by back-propagating the result through the network. The final training result is shown in Fig. 2.
The present invention selects acoustic features of the vibration signal as fault symptoms: a 20*800 segment of the acoustic signal is intercepted as the feature matrix, its spectral peaks serve as fault symptoms after normalization, and the 4 typical faults that occur most often in the field serve as the network outputs, forming a six-layer convolutional neural network whose input feature vectors are obtained by preprocessing the collected sample data.
The effect of the invention: fault types are diagnosed with a convolutional-neural-network method, the machine identifies fault types itself without a fault-archive database, the number of sensors to install is reduced, and system reliability increases. As time passes and data accumulate, the self-identified fault types become increasingly precise.
Detailed description of the invention
Fig. 1 is the convolutional-neural-network architecture diagram of the present invention.
Fig. 2 is the neural-network computation error-result plot of the present invention.
Fig. 3 is the structure diagram of a neuron with multiple inputs and a single output.
Specific embodiment
The present invention is described in more detail below with reference to the accompanying drawings.
An online detection method for train motor vibration based on a neural network.
The structure of the network is first defined as: a single input layer, a single output layer, and 4 hidden layers.
Concrete operations are as follows:
1) Determine the number of input-layer neurons (M), letting P denote the network's input sample vector; the input layer needs 24 neurons, i.e. M = 24.
2) Determine the number of hidden-layer neurons (J).
Since there is no fixed method for determining the number of hidden-layer neurons, it is generally chosen from experience, but there is also a practical procedure for determining it. Make the number of hidden units variable, or start with plenty of hidden units, and through learning prune the hidden units that contribute nothing, until no more can be pruned. Equally, training can start with rather few neurons; if learning fails after a certain number of attempts, the number of hidden units is increased, until a reasonably sized hidden layer is reached.
3) Determine the number of output-layer neurons.
The number of output-layer neurons is determined by the fault types. The present invention covers four conditions: normal operation, stator fault, rolling-bearing fault, and rotor fault, so the output layer needs 4 outputs. The target outputs of the network are shown in the following table:
Fault type | Target output |
Normal operation | 0 0 |
Stator fault | 0 1 |
Rolling-bearing fault | 1 0 |
Rotor fault | 1 1 |
This yields the target matrix T of size 4 × 2, i.e. T = [0 0; 0 1; 1 0; 1 1].
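The target matrix T from the table can be written down directly; the sketch below only restates the table in NumPy form (the English fault labels are translations used for illustration).

```python
import numpy as np

# Target output matrix T: one 2-bit code per fault class, as in the table.
faults = ["normal operation", "stator fault",
          "rolling-bearing fault", "rotor fault"]
T = np.array([[0, 0],
              [0, 1],
              [1, 0],
              [1, 1]])
print(T.shape)                         # (4, 2)
print(dict(zip(faults, T.tolist())))
```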
4) Determine the network functions.
The hidden-layer training functions of the network are LM and cross entropy; they run comparatively fast and are well suited to large and medium networks. The learning function takes the default cross entropy.
5) Test with part of the data: first build a 24-500-500-500-1024-4 neural network, where input is the input port accepting one 20 × 800 input matrix at a time, step is the step-size module, and ω is the hidden-layer weight module; the weights are updated by back-propagation with LM and cross entropy.
The implementation uses Google's open-source deep-learning framework TensorFlow, defining the relevant parameters according to the neural-network structure above. First the input-node parameter is defined: the collected acoustic signal [20, 800] is converted into a one-dimensional matrix [1200], so the number of input nodes is 1200. The output is classified into 4 classes according to fault type, so there are 4 output nodes. According to the available computing power, the number of hidden-layer nodes is set to 500.
input_node = 1200
output_node = 4
h_node = 500
Weights and biases are declared as variables through TensorFlow functions; these variables are created when training the neural network and reloaded from the saved model at test time, and the regularization loss of the variables must also be added to the loss function.
When setting the weights, their values are initialized to follow a (truncated) Gaussian distribution, reducing the chance that the network falls into a local optimum during optimization.
def get_weight_variable(shape, regularizer):
    weights = tf.get_variable('weights', shape,
        initializer=tf.truncated_normal_initializer(stddev=0.1))
    if regularizer is not None:
        tf.add_to_collection('losses', regularizer(weights))
    return weights
The forward pass of the neural network is then defined. The hidden-layer weight-matrix dimensions are determined by the numbers of input nodes and hidden-layer nodes, with regularization applied. The bias is set as a constant, its dimension determined by the number of hidden-layer nodes, so it is a one-dimensional matrix. The input is then multiplied with the weight matrix, the bias is added, and the whole result passes through the ReLU activation.
weights = get_weight_variable([input_node, h_node], regularizer)
biases = tf.Variable(tf.zeros([h_node]))
h_layer = tf.nn.relu(tf.matmul(input_tensor, weights) + biases)
This section of code defines the forward-propagation algorithm of the neural network and can be called directly for both training and testing. Next, the weight corrections of the back-propagation pass are defined.
Since the data volume is large, passing it in at once would be limited by the machine's computing power and storage capacity, so the data set must be limited: only 100 groups of data are read in at a time, the next group is read after each training pass, and 30000 batches are cycled through in total.
batch_size = 100
training_steps = 30000
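The batching scheme just described (read 100 groups at a time and wrap around the data set) can be sketched framework-independently; the helper name and the stand-in data below are assumptions of this illustration.

```python
import numpy as np

batch_size = 100
training_steps = 30000            # values from the embodiment

def next_batch(data, step, batch_size):
    """Return the slice of `data` for training step `step`,
    cycling back to the start when the end is reached."""
    start = (step * batch_size) % len(data)
    return data[start:start + batch_size]

data = np.arange(1000).reshape(1000, 1)   # stand-in for the real data set
first = next_batch(data, 0, batch_size)
wrap = next_batch(data, 10, batch_size)   # step 10 wraps back to the start
print(first.shape, wrap[0, 0])
```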
The loss function, learning rate, and associated training process are then defined. The learning rate decreases gradually as training proceeds, preventing failure to converge. The loss function is computed with the cross-entropy and LM algorithms; the cross-entropy result of each group is then computed and averaged.
b_end = tf.multiply(b, 1 - b)
lm_e = tf.matmul(d, y) * b_end
train_lm = tf.multiply(lm_e, lm_a ** k)
variable_averages = tf.train.ExponentialMovingAverage(learning_decay,
    global_step)
cross_entropy = tf.nn.sparse_softmax_cross_entropy_with_logits(
    logits=y, labels=tf.argmax(y_, 1))
cross_entropy_mean = tf.reduce_mean(cross_entropy)
loss = cross_entropy_mean + tf.add_n(tf.get_collection('losses')) + train_lm
Finally, the optimization function for each layer's updates is defined; the whole neural network is optimized with gradient descent, and the evaluation function is the loss function defined above.
train_step = tf.train.GradientDescentOptimizer(learning_rate).minimize(loss, global_step=global_step)
The data is fed in and the neural-network computation runs; the error result is shown in Fig. 2.
The figure shows that the algorithm converges quickly in the early phase and gradually stabilizes after the error drops to 0.1; at 450-500 milliseconds the error approaches 0, indicating that the system has reached stability and the detection performance is good. At this point the training of the neural network is finished, and fault types can be predicted fairly well.
The convolutional-neural-network architecture is shown in Fig. 1:
24 groups of acoustic data are input, and features are extracted by a 5*5 convolutional layer with ReLU activation. A max-pooling layer reduces the data volume, after which a second 5*5 convolutional layer with ReLU activation extracts further features. After another max-pooling step the result is flattened into a 1-dimensional matrix and fed as input to the fully connected classifier, which finally outputs the probabilities of the 4 fault types. During training, the output is compared with the reference answer, the loss function LOSS is evaluated with the LM and cross-entropy algorithms, and the weights of each layer are updated by back-propagating the result through the network. The final training result is shown in Fig. 2.
The technical solution of the present invention has been described in detail above. Obviously, the invention is not limited to the described content: based on the content above, those skilled in the art can make a variety of corresponding changes, but any change equivalent or similar to the invention falls within its scope of protection.
Claims (8)
1. An online detection method for train motor vibration based on a neural network, characterized in that a six-layer convolutional neural network is used, acoustic features of the vibration signal are selected as fault symptoms, and the LM algorithm and cross entropy are used; the six-layer convolutional network comprises an input layer, hidden layers, and an output layer; the method comprises the following steps:
1) determine the number of input-layer neurons (M), letting P denote the network's input sample vector;
2) determine the number of hidden-layer neurons (J), chosen empirically;
3) determine the number of output-layer neurons, determined by the fault types;
4) determine the network functions;
5) test with part of the data and build a 24-500-500-500-1024-4 convolutional neural network;
6) feed in the data and run the neural-network computation;
the input layer has 24 units corresponding to 20*800 acoustic-frequency feature values, and the output layer has 4 units corresponding to 4 typical faults.
2. The method according to claim 1, characterized in that the input layer is a single layer, the hidden layers number 4, and the output layer is a single layer.
3. The method according to claim 1, characterized in that the number (M) of input-layer neurons in step 1) is M = 24, and the number of output-layer neurons is 4.
4. The method according to claim 1, characterized in that the fault types in step 3) are the 4 types: normal operation, stator fault, rolling-bearing fault, and rotor fault.
5. The method according to claim 1, characterized in that the network functions in step 4) comprise the neuron training function LM, the learning functions LM and cross entropy, and the functions min and max bounding the range of the input-vector elements.
6. The method according to claim 1, characterized in that the convolutional neural network in step 5) comprises an input port accepting one 20*800 input matrix at a time, and an input-layer-to-hidden-layer weight module ω.
7. The method according to claim 1, characterized in that the neural-network computation in step 6) updates the weights with the LM and cross-entropy algorithms.
8. The method according to claim 7, characterized in that the LM algorithm comprises:
1) specify the allowable training error and initialize the weight vector W^(0);
2) compute the network output and the error vector E(W^(0));
3) compute the Jacobian J(W) of the error vector with respect to the network weights;
4) iterate until a local minimum W_k = W(k) is reached.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710208538.5A | 2017-03-31 | 2017-03-31 | Neural-network-based online detection method for train motor vibration |
Publications (2)
Publication Number | Publication Date |
---|---|
CN107024331A | 2017-08-08 |
CN107024331B | 2019-07-12 |
Family
ID=59526721
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710208538.5A (patent CN107024331B, Expired - Fee Related) | | 2017-03-31 | 2017-03-31 |
Country Status (1)
Country | Link |
---|---|
CN | CN107024331B (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108334936A | 2018-01-30 | 2018-07-27 | 华中科技大学 | Fault prediction method based on transfer convolutional neural networks |
Families Citing this family (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108304960A | 2017-12-29 | 2018-07-20 | 中车工业研究院有限公司 | A fault-diagnosis method for rail-transit equipment |
CN108280746B | 2018-02-09 | 2022-05-24 | 艾凯克斯(嘉兴)信息科技有限公司 | Product design method based on bidirectional recurrent neural networks |
CN108230121B | 2018-02-09 | 2022-06-10 | 艾凯克斯(嘉兴)信息科技有限公司 | Product design method based on recurrent neural networks |
CN108596470A | 2018-04-19 | 2018-09-28 | 浙江大学 | A power-equipment defect-text processing method based on the TensorFlow framework |
CN108899048A | 2018-05-10 | 2018-11-27 | 广东省智能制造研究所 | An acoustic-data classification method based on signal time-frequency decomposition |
CN109816094A | 2019-01-03 | 2019-05-28 | 山东省科学院海洋仪器仪表研究所 | Nonlinear temperature compensation of an optical dissolved-oxygen sensor based on the neural-network L-M algorithm |
US20210049833A1 | 2019-08-12 | 2021-02-18 | Micron Technology, Inc. | Predictive maintenance of automotive powertrain |
CN112710486B | 2019-10-24 | 2022-01-25 | 广东美的白色家电技术创新中心有限公司 | Equipment fault detection method, device, and computer storage medium |
CN111397700B | 2020-03-02 | 2021-12-10 | 西北工业大学 | Wall-mounted fault detection method for a Coriolis mass flowmeter |
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
DE4124501A1 (en) * | 1991-07-24 | 1993-01-28 | Dieter Prof Dr Ing Barschdorff | Neuronal network esp. for testing multiple-phase electric motor - has three neuronal layers for classifying input attribute vectors using class function |
JPH11237432A (en) * | 1998-02-24 | 1999-08-31 | Fujikura Ltd | Partial discharge discrimination method |
CN104680233A (en) * | 2014-10-28 | 2015-06-03 | 芜湖杰诺瑞汽车电器***有限公司 | Wavelet neural network-based engine failure diagnosing method |
CN105510038A (en) * | 2015-12-31 | 2016-04-20 | 北京金风科创风电设备有限公司 | Wind turbine generator fault monitoring method and device |
Non-Patent Citations (2)
Title |
---|
Research on motor fault diagnosis based on wavelet analysis and neural networks; Liu Yajun; China Master's Theses Full-text Database (electronic journal); 2009-11-15; Chapter 5 |
Motor fault diagnosis based on neural networks; Wang Xinyu; China Master's Theses Full-text Database (electronic journal); 2013-07-15; Chapters 3-5 |
Also Published As
Publication number | Publication date |
---|---|
CN107024331A (en) | 2017-08-08 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| GR01 | Patent grant | |
| CF01 | Termination of patent right due to non-payment of annual fee | Granted publication date: 20190712 |