CN109490814A - Metering automation terminal fault diagnostic method based on deep learning and Support Vector data description - Google Patents

Metering automation terminal fault diagnostic method based on deep learning and Support Vector data description

Info

Publication number
CN109490814A
CN109490814A (application CN201811046099.3A)
Authority
CN
China
Prior art keywords
layer
data
fault
sample
automation terminal
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201811046099.3A
Other languages
Chinese (zh)
Other versions
CN109490814B (en)
Inventor
陈俊
李捷
周毅波
李刚
韦杏秋
何涌
张智勇
何艺
唐志涛
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Electric Power Research Institute of Guangxi Power Grid Co Ltd
Original Assignee
Electric Power Research Institute of Guangxi Power Grid Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Electric Power Research Institute of Guangxi Power Grid Co Ltd filed Critical Electric Power Research Institute of Guangxi Power Grid Co Ltd
Priority to CN201811046099.3A priority Critical patent/CN109490814B/en
Publication of CN109490814A publication Critical patent/CN109490814A/en
Application granted granted Critical
Publication of CN109490814B publication Critical patent/CN109490814B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01R - MEASURING ELECTRIC VARIABLES; MEASURING MAGNETIC VARIABLES
    • G01R35/00 - Testing or calibrating of apparatus covered by the other groups of this subclass
    • G01R35/04 - Testing or calibrating of apparatus covered by the other groups of this subclass of instruments for measuring time integral of power or current
    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01R - MEASURING ELECTRIC VARIABLES; MEASURING MAGNETIC VARIABLES
    • G01R35/00 - Testing or calibrating of apparatus covered by the other groups of this subclass
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/24 - Classification techniques
    • G06F18/241 - Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2411 - Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on the proximity to a decision surface, e.g. support vector machines
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/04 - Architecture, e.g. interconnection topology
    • G06N3/045 - Combinations of networks
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/08 - Learning methods
    • G06N3/084 - Backpropagation, e.g. using gradient descent
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/08 - Learning methods
    • G06N3/088 - Non-supervised learning, e.g. competitive learning

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Software Systems (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Mathematical Physics (AREA)
  • Computational Linguistics (AREA)
  • Health & Medical Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Test And Diagnosis Of Digital Computers (AREA)

Abstract

The invention discloses a metering automation terminal fault diagnosis method based on deep learning and support vector data description (SVDD), and relates to the technical field of electric power metering fault diagnosis. In the method, a deep belief network (DBN) model from deep learning extracts features from the fault data collected by the metering automation terminal, and SVDD is used for fault diagnosis and classification. The DBN model obtains high-level feature representations directly from the low-level raw signals through layer-by-layer greedy training, avoiding the manual work of feature extraction and selection, effectively eliminating the complexity and uncertainty introduced by traditional hand-crafted feature extraction and selection, and making the diagnosis process more intelligent. The invention classifies and identifies samples with SVDD, which effectively improves the accuracy and efficiency of the multi-class classification problem in metering automation terminal fault diagnosis.

Description

Metering automation terminal fault diagnosis method based on deep learning and support vector data description
Technical field
The invention belongs to the technical field of electric power metering fault diagnosis, and in particular relates to a metering automation terminal fault diagnosis method based on deep learning and support vector data description.
Background technique
The current mainstream detection methods for metering automation terminals include terminal acquisition testing (meter readings, three-phase voltage, three-phase current, three-phase power), communication protocol testing and anomaly detection. The related techniques of traditional metering automation terminal fault diagnosis are comparatively simple, require a large amount of manual operation and data processing, and the efficiency of fault diagnosis is low, making it difficult to guarantee the accuracy, speed and reliability of fault diagnosis.
Deep learning is currently developing rapidly in the field of fault diagnosis, but some traditional deep learning approaches have the following disadvantages:
1. Conventional methods perform fault diagnosis with a single support vector machine (SVM). Its advantage lies in solving small-sample problems, but it has difficulty with the large fault sample sets and high fault feature dimensionality of metering automation terminal data.
2. Another fault diagnosis approach builds an observer with a BP neural network, establishing a nonlinear input-output mapping from fault data to fault causes with mass data, and uses it to assess the state of the metering automation terminal. Its disadvantage is that traditional shallow neural networks suffer from gradient decay, overfitting and local minima, which greatly degrades the fault diagnosis performance.
3. The extreme learning machine (ELM) is used for intelligent diagnosis. The ELM method trains quickly but is unstable and, being a shallow machine learning method with limited learning ability, its accuracy is difficult to improve further once it reaches a certain level; it also requires the fault data samples to be accurate and complete.
Summary of the invention
In view of the deficiencies of the prior art, the present invention provides a metering automation terminal fault diagnosis method based on deep learning and support vector data description.
The present invention solves the above technical problem by the following technical solution: a metering automation terminal fault diagnosis method based on deep learning and support vector data description, including the following steps:
Step (1): acquisition of sample data;
Voltage data, current data, local communication module read/write data flow, remote communication module data flow and switching input/output state data of the metering automation terminal are acquired in batches, with the same number of samples in every batch; after the collected data are normalized and preprocessed, they are divided into fault training samples and fault test samples;
Step (2): establishment of the DBN model;
A deep belief network (DBN) model with more than one hidden layer is established; the number of input-layer nodes of the DBN model is determined according to the sample dimensionality of the fault training samples and fault test samples of step (1), and the DBN model is trained without supervision using the fault training samples; the number of output-layer nodes of the DBN model is determined according to the fault types of the metering automation terminal, and the connection weights and bias parameters of the DBN model are obtained by unsupervised layer-by-layer greedy training; the connection weights are then fine-tuned to obtain the reference features of each fault type;
Step (3): fault diagnosis;
The bandwidth of the support vector data description (Support Vector Domain Description, SVDD) model of each fault type is established using the reference features of step (2), the hypersphere bandwidth radius of each fault is weighted and normalized, and the fault type of the metering automation terminal is then discriminated, realizing fault diagnosis of the metering automation terminal.
Further, in step (2), the training of the DBN model includes two parts: one part performs unsupervised layer-by-layer training of the restricted Boltzmann machines (Restricted Boltzmann Machine, RBM), and the other part fine-tunes the DBN model with the back-propagation algorithm so that the network structure of the DBN model reaches its optimum.
Further, the specific training steps of the DBN model include the following sub-steps:
Step (2.1): the fault training samples are taken as the input of the DBN model; a given training sample is fed to the visible layer nodes of the first-layer RBM, all hidden layer nodes are activated using the joint probability distribution function of the RBM, and the visible layer nodes are then regenerated from the activations of the hidden layer nodes; next, the conditional distribution of the visible layer data is computed with the contrastive divergence algorithm to obtain the hidden layer data, the conditional distribution of the hidden layer data is used in turn to compute the visible layer data, the visible layer data is reconstructed, and the RBM model parameters are adjusted and updated;
Step (2.2): the output of the hidden layer of the first-layer RBM is taken as the visible-layer input of the second-layer RBM, until a stable state is reached;
Step (2.3): step (2.2) is repeated up to the last-layer RBM, completing the optimization of the RBM parameters θ = (w_ij, a_i, b_j), where a_i is the bias of the i-th node of the visible layer, b_j is the bias of the j-th node of the hidden layer, and w_ij is the connection weight between the i-th node of the visible layer and the j-th node of the hidden layer;
Step (2.4): after the training of the last-layer RBM hidden layer is completed, back-propagation training is carried out on the fault types output by the last hidden layer of the DBN model; the error between the fault type predicted in training and the actual fault type of the training sample is back-propagated layer by layer to fine-tune the connection weights of every layer of the DBN model, and the original data sample is reconstructed with minimum error, so as to obtain the essential features of the original metering automation terminal data samples; these essential features are taken as the reference features of the metering automation terminal fault types.
Further, in step (2.1), the joint probability distribution function of the RBM is:

$$P(v, h \mid \theta) = \frac{1}{Z(\theta)} \exp\left(-E(v, h \mid \theta)\right)$$

where Z(θ) is the normalization factor, E(v, h | θ) is the RBM energy function, h denotes the hidden layer neurons, and v denotes the visible layer neurons.
Further, in step (2.1), the contrastive divergence learning algorithm is:

$$\Delta w_{ij} = \varepsilon\left(\langle v_i h_j\rangle_{\mathrm{data}} - \langle v_i h_j\rangle_{\mathrm{model}}\right)$$
$$\Delta a_i = \varepsilon\left(\langle v_i\rangle_{\mathrm{data}} - \langle v_i\rangle_{\mathrm{model}}\right)$$
$$\Delta b_j = \varepsilon\left(\langle h_j\rangle_{\mathrm{data}} - \langle h_j\rangle_{\mathrm{model}}\right)$$

Because ⟨·⟩_model is difficult to compute, the contrastive divergence algorithm is used to reduce the amount of computation, giving the improved learning rule:

$$\Delta w_{ij} = \varepsilon\left(\langle v_i h_j\rangle_{\mathrm{data}} - \langle v_i h_j\rangle_{1}\right)$$
$$\Delta a_i = \varepsilon\left(\langle v_i\rangle_{\mathrm{data}} - \langle v_i\rangle_{1}\right)$$
$$\Delta b_j = \varepsilon\left(\langle h_j\rangle_{\mathrm{data}} - \langle h_j\rangle_{1}\right)$$

where ⟨·⟩_1 denotes the expectation over the reconstructed samples obtained by one step of Gibbs sampling; ε is the learning rate, representing the step size of each parameter adjustment; h_j is a hidden layer neuron and v_i is a visible layer neuron.
Further, in step (3), the specific steps for discriminating the fault type of the metering automation terminal include:
Step (3.1): a minimum hypersphere containing the fault target training samples is constructed in the high-dimensional space obtained by kernel mapping; using the fault test sample data divided in step (1), test data x falling outside the hypersphere are regarded as non-target classes, while test data falling inside the hypersphere or on its boundary are regarded as the fault target class;
Assuming the sample set of fault training sample reference features is X = {x_1, x_2, ..., x_n}, x_i ∈ R^n, the Lagrangian function is established:

$$L(r, a, \xi, \alpha, \beta) = r^2 + C\sum_i \xi_i - \sum_i \alpha_i\left(r^2 + \xi_i - \left\|\phi(x_i) - a\right\|^2\right) - \sum_i \beta_i \xi_i$$

where α_i and β_i are the Lagrange multipliers, ξ_i (ξ_i ≥ 0) are the slack variables, C denotes the penalty factor, φ(x_i) is the nonlinear mapping function that maps the original space to the high-dimensional space, a is the hypersphere center, and r is the hypersphere radius;
Step (3.2): taking the partial derivatives of the Lagrangian of step (3.1) with respect to a, ξ_i and r and setting them to zero gives:

$$\sum_i \alpha_i = 1, \qquad a = \sum_i \alpha_i \phi(x_i), \qquad \alpha_i = C - \beta_i$$

Through optimization of the above, the optimal hypersphere classification problem is converted into its dual form:

$$\max_{\alpha}\; \sum_i \alpha_i K(x_i, x_i) - \sum_{i,j} \alpha_i \alpha_j K(x_i, x_j)$$

where K(x_i, x_j) is the kernel function, which maps the inner product of the fault data into the kernel function space, subject to the constraints

$$\sum_i \alpha_i = 1, \qquad 0 \le \alpha_i \le C$$
According to the KKT conditions, using a boundary support vector x_k that satisfies the constraints, the hypersphere radius is determined as:

$$r^2 = K(x_k, x_k) - 2\sum_i \alpha_i K(x_i, x_k) + \sum_{i,j} \alpha_i \alpha_j K(x_i, x_j)$$
Step (3.3): the support vector data description (SVDD) is determined, i.e. the support vectors satisfying 0 ≤ α_i ≤ C, and the hypersphere radius is the distance from any such boundary support vector to the center; if the distance from a test data point to the center of a certain fault hypersphere is less than or equal to its radius r, the test point belongs to that fault data type, achieving the classification of metering automation terminal fault types.
Compared with the prior art, in the metering automation terminal fault diagnosis method based on deep learning and support vector data description provided by the present invention, feature extraction is performed on the fault data acquired by the metering automation terminal with the DBN model, and fault diagnosis and classification are performed with SVDD; the DBN model can obtain high-level feature representations directly from the low-level raw signals through layer-by-layer greedy training, avoiding the manual work of feature extraction and selection, effectively eliminating the complexity and uncertainty brought by traditional manual feature extraction and feature selection, and enhancing the intelligence of the diagnosis process;
A traditional SVM binary classifier, when handling a multi-class classification problem such as fault separation, has to be converted into a one-versus-rest or one-versus-one form, and these conversions lead to repeated use of training samples; the present invention uses support vector data description (SVDD) to classify and identify samples, which effectively improves the accuracy and efficiency of the multi-class classification problem in metering automation terminal fault diagnosis.
Brief description of the drawings
In order to explain the technical solution of the present invention more clearly, the drawings needed in the description of the embodiment are briefly introduced below. Obviously, the drawings in the following description show only one embodiment of the present invention; for those of ordinary skill in the art, other drawings can also be obtained from these drawings without creative effort.
Fig. 1 shows the network structure of the DBN model of the present invention and its training process;
Fig. 2 is the flow chart of the SVDD algorithm of the present invention for classifying metering automation terminal faults.
Specific embodiment
The technical solution of the present invention is described clearly and completely below with reference to the drawings in the embodiment of the present invention. Obviously, the described embodiment is only a part of the embodiments of the present invention, not all of them. Based on the embodiment of the present invention, all other embodiments obtained by those of ordinary skill in the art without creative effort shall fall within the protection scope of the present invention.
A metering automation terminal fault diagnosis method based on deep learning and support vector data description provided by the present invention includes the following steps:
(1) The voltage data, current data, local communication module read/write data flow, remote communication module data flow and switching input/output state data of the metering automation terminal are acquired in batches using the AC sampling module, the local communication module, the remote communication module and the input/output module, with the same number of samples in every batch; after the collected data are normalized and preprocessed, they are divided into fault training samples and fault test samples.
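For illustration only, the preprocessing of step (1) might be sketched as follows; the function names `min_max_normalize` and `build_samples`, the min-max scaling and the 80/20 split ratio are placeholders not specified by the patent, and the five signal groups are assumed to be already available as numeric arrays.

```python
import numpy as np

def min_max_normalize(batch: np.ndarray) -> np.ndarray:
    """Scale every feature column of one acquisition batch into [0, 1]."""
    lo, hi = batch.min(axis=0), batch.max(axis=0)
    return (batch - lo) / np.where(hi > lo, hi - lo, 1.0)

def build_samples(batches, train_ratio=0.8, seed=0):
    """Normalize each batch, stack all batches, and split into training / test sets."""
    data = np.vstack([min_max_normalize(b) for b in batches])
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(data))
    cut = int(train_ratio * len(data))
    return data[idx[:cut]], data[idx[cut:]]  # fault training samples, fault test samples
```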
(2) A deep belief network (Deep Belief Network, DBN) model with more than one hidden layer is established; the number of input-layer nodes of the DBN model is determined according to the sample dimensionality of the fault training samples and fault test samples of step (1), and the DBN model is trained without supervision using the fault training samples; the number of output-layer nodes of the DBN model is determined according to the fault types of the metering automation terminal, and the connection weights and bias parameters of the DBN model are obtained by unsupervised layer-by-layer greedy training; the connection weights are fine-tuned to obtain the reference features of each fault type, as shown in Fig. 1.
DBN is a typical deep learning method. It forms more abstract high-level representations by combining low-level features and discovers distributed representations of the data; its motivation is to build a neural network connection structure that models the human brain, characterizing the input data in a distributed way through a multilayer perceptron with multiple hidden layers of nonlinear operations. A DBN simulates the way the human brain processes external signals and is a multi-hidden-layer neural network composed of multiple RBMs (restricted Boltzmann machines); its core is optimization by the layer-by-layer greedy learning algorithm. Compared with other traditional fault diagnosis methods, its advantage is that it can get rid of the dependence on extensive signal processing techniques and diagnostic experience, completing adaptive extraction of fault features and intelligent diagnosis of health status. An RBM is a neural perceptron composed of a visible layer and a hidden layer, with bidirectional full connections between the neurons of the visible layer and the hidden layer. In an RBM, every pair of connected neurons has a weight w representing the connection strength, and every neuron has its own bias coefficient, a_i for visible layer neurons and b_j for hidden layer neurons. In this way, the energy of an RBM can be expressed by the following function:

$$E(v, h \mid \theta) = -\sum_i a_i v_i - \sum_j b_j h_j - \sum_i \sum_j v_i w_{ij} h_j$$
Because the state distribution of an RBM obeys the canonical distribution, the joint probability distribution of any configuration of the visible layer and the hidden layer is:

$$P(v, h \mid \theta) = \frac{1}{Z(\theta)} \exp\left(-E(v, h \mid \theta)\right), \qquad Z(\theta) = \sum_{v,h} \exp\left(-E(v, h \mid \theta)\right)$$

where Z(θ) is the normalization factor, also called the partition function, h denotes the hidden layer neurons, and v denotes the visible layer neurons.
In an RBM, given the states of the visible layer nodes, the probability that a hidden neuron h_j is activated is:

$$P(h_j = 1 \mid v) = \sigma\left(b_j + \sum_i w_{ij} v_i\right)$$
Since the connections are bidirectional, a visible layer neuron can likewise be activated by the hidden neurons, with probability:

$$P(v_i = 1 \mid h) = \sigma\left(a_i + \sum_j w_{ij} h_j\right)$$

where σ is the sigmoid function.
Neurons within the same layer are independent of one another, so the probability densities also satisfy independence, and therefore:

$$P(h \mid v) = \prod_j P(h_j \mid v), \qquad P(v \mid h) = \prod_i P(v_i \mid h)$$
The training of the DBN model includes two parts: one part performs unsupervised layer-by-layer training of the restricted Boltzmann machines (Restricted Boltzmann Machine, RBM), and the other part fine-tunes the DBN model with the back-propagation algorithm so that the network structure of the DBN model reaches its optimum. The specific training steps include the following sub-steps:
(2.1) The fault training samples are taken as the input of the DBN model; a given training sample is fed to the visible layer nodes of the first-layer RBM, all hidden layer nodes are activated using the joint probability distribution function of the RBM, and the visible layer nodes are then regenerated from the activations of the hidden layer nodes; next, the conditional distribution of the visible layer data is computed with the contrastive divergence algorithm to obtain the hidden layer data, the conditional distribution of the hidden layer data is used in turn to compute the visible layer data, the visible layer data is reconstructed, and the RBM model parameters are adjusted and updated.
The learning algorithm for the RBM parameters θ = (w_ij, a_i, b_j) is:

$$\Delta w_{ij} = \varepsilon\left(\langle v_i h_j\rangle_{\mathrm{data}} - \langle v_i h_j\rangle_{\mathrm{model}}\right)$$
$$\Delta a_i = \varepsilon\left(\langle v_i\rangle_{\mathrm{data}} - \langle v_i\rangle_{\mathrm{model}}\right)$$
$$\Delta b_j = \varepsilon\left(\langle h_j\rangle_{\mathrm{data}} - \langle h_j\rangle_{\mathrm{model}}\right)$$

where Δw_ij denotes the update of the connection weight between the i-th node of the visible layer and the j-th node of the hidden layer, Δa_i and Δb_j denote the updates of the bias parameters of the i-th visible node and the j-th hidden node respectively, ⟨·⟩_data is the expectation under the training data distribution, and ⟨·⟩_model is the expectation under the distribution defined by the RBM model after reconstruction. Because ⟨·⟩_model is difficult to compute, the contrastive divergence algorithm is used to reduce the amount of computation, giving the improved learning rule:

$$\Delta w_{ij} = \varepsilon\left(\langle v_i h_j\rangle_{\mathrm{data}} - \langle v_i h_j\rangle_{1}\right)$$
$$\Delta a_i = \varepsilon\left(\langle v_i\rangle_{\mathrm{data}} - \langle v_i\rangle_{1}\right)$$
$$\Delta b_j = \varepsilon\left(\langle h_j\rangle_{\mathrm{data}} - \langle h_j\rangle_{1}\right)$$

where ⟨·⟩_1 denotes the expectation over the reconstructed samples obtained by one step of Gibbs sampling; ε is the learning rate, representing the step size of each parameter adjustment; h_j is a hidden layer neuron and v_i is a visible layer neuron.
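A compact sketch of one CD-1 parameter update consistent with these rules is given below; the class name `RBM`, the use of NumPy and the learning rate value are assumptions, and mini-batching, momentum and weight decay are omitted for brevity.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class RBM:
    def __init__(self, n_visible, n_hidden, lr=0.05, seed=0):
        rng = np.random.default_rng(seed)
        self.w = 0.01 * rng.standard_normal((n_visible, n_hidden))  # w_ij
        self.a = np.zeros(n_visible)   # visible biases a_i
        self.b = np.zeros(n_hidden)    # hidden biases  b_j
        self.lr = lr
        self.rng = rng

    def cd1_update(self, v0):
        """One CD-1 step: positive phase, one Gibbs reconstruction, parameter update."""
        ph0 = sigmoid(self.b + v0 @ self.w)                  # P(h = 1 | v0)
        h0 = (self.rng.random(ph0.shape) < ph0).astype(float)
        v1 = sigmoid(self.a + h0 @ self.w.T)                 # reconstruction, i.e. the <.>_1 sample
        ph1 = sigmoid(self.b + v1 @ self.w)
        self.w += self.lr * (np.outer(v0, ph0) - np.outer(v1, ph1))
        self.a += self.lr * (v0 - v1)
        self.b += self.lr * (ph0 - ph1)
        return v1
```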
(2.2) The output of the hidden layer of the first-layer RBM is taken as the visible-layer input of the second-layer RBM, until a stable state is reached.
(2.3) Step (2.2) is repeated up to the last-layer RBM, completing the optimization of the RBM parameters θ = (w_ij, a_i, b_j), where a_i is the bias of the i-th node of the visible layer, b_j is the bias of the j-th node of the hidden layer, and w_ij is the connection weight between the i-th node of the visible layer and the j-th node of the hidden layer.
(2.4) After the training of the last-layer RBM hidden layer is completed, back-propagation training is carried out on the fault types output by the last hidden layer of the DBN model; the error between the fault type predicted in training and the actual fault type of the training sample is back-propagated layer by layer to fine-tune the connection weights of every layer of the DBN model, and the original data sample is reconstructed with minimum error, so as to obtain the essential features of the original metering automation terminal data samples; these essential features are taken as the reference features of the metering automation terminal fault types.
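As a minimal illustration of the greedy stacking in steps (2.1) to (2.3), reusing the hypothetical `RBM` class and `sigmoid` helper sketched above: the layer sizes and epoch count are placeholders, and the supervised back-propagation fine-tuning of step (2.4) is only indicated, not implemented.

```python
def pretrain_dbn(train_x, layer_sizes=(64, 32, 16), epochs=10):
    """Greedy layer-by-layer pre-training: each RBM is trained on the previous layer's output."""
    rbms, data = [], train_x
    for n_hidden in layer_sizes:
        rbm = RBM(n_visible=data.shape[1], n_hidden=n_hidden)
        for _ in range(epochs):
            for v in data:
                rbm.cd1_update(v)
        rbms.append(rbm)
        # the hidden activations of this RBM become the visible input of the next one
        data = sigmoid(rbm.b + data @ rbm.w)
    # `data` now holds the top-layer features; supervised back-propagation fine-tuning
    # against the fault-type labels would follow here to obtain the reference features.
    return rbms, data
```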
(3) The bandwidth of the SVDD (Support Vector Domain Description) model of each fault type is established using the reference features of step (2), the hypersphere bandwidth radius of each fault is weighted and normalized, and the fault type of the metering automation terminal is then discriminated, realizing fault diagnosis of the metering automation terminal.
As shown in Fig. 2, the specific steps for discriminating the fault type of the metering automation terminal include:
(3.1) A minimum hypersphere containing the fault target training samples is constructed in the high-dimensional space obtained by kernel mapping; using the fault test sample data divided in step (1), test data x falling outside the hypersphere are all regarded as non-target classes, while test data falling inside the hypersphere or on its boundary are regarded as the fault target class. The hypersphere is the classifier, and the vectors lying on the hypersphere are the support vectors. In fault diagnosis, a corresponding fault hypersphere is obtained by training on the data of each fault and serves as the fault pattern library for recognizing faults.
Assuming the sample set of fault training sample reference features is X = {x_1, x_2, ..., x_n}, x_i ∈ R^n, the Lagrangian function is established:

$$L(r, a, \xi, \alpha, \beta) = r^2 + C\sum_i \xi_i - \sum_i \alpha_i\left(r^2 + \xi_i - \left\|\phi(x_i) - a\right\|^2\right) - \sum_i \beta_i \xi_i$$

where α_i and β_i are the Lagrange multipliers, ξ_i (ξ_i ≥ 0) are the slack variables, C denotes the penalty factor, φ(x_i) is the nonlinear mapping function that maps the original space to the high-dimensional space, a is the hypersphere center, and r is the hypersphere radius;
(3.2) Taking the partial derivatives of the Lagrangian of step (3.1) with respect to a, ξ_i and r and setting them to zero gives:

$$\sum_i \alpha_i = 1, \qquad a = \sum_i \alpha_i \phi(x_i), \qquad \alpha_i = C - \beta_i$$

Through optimization of the above, the optimal hypersphere classification problem is converted into its dual form:

$$\max_{\alpha}\; \sum_i \alpha_i K(x_i, x_i) - \sum_{i,j} \alpha_i \alpha_j K(x_i, x_j)$$

where K(x_i, x_j) is the kernel function, which maps the inner product of the fault data into the kernel function space, subject to the constraints

$$\sum_i \alpha_i = 1, \qquad 0 \le \alpha_i \le C$$
According to the KKT conditions, using a boundary support vector x_k that satisfies the constraints, the hypersphere radius is determined as:

$$r^2 = K(x_k, x_k) - 2\sum_i \alpha_i K(x_i, x_k) + \sum_{i,j} \alpha_i \alpha_j K(x_i, x_j)$$
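Given dual coefficients α already obtained from the optimization above (how they are solved for is not shown here), the radius and the kernel-space distance of a new point to the sphere center follow directly from these expressions. In the sketch below, the RBF kernel, its width `gamma` and the helper names are assumptions, not parameters prescribed by the patent.

```python
import numpy as np

def rbf_kernel(a, b, gamma=0.5):
    """K(a, b) = exp(-gamma * ||a - b||^2) for the row vectors of a and b."""
    d2 = np.sum(a**2, 1)[:, None] + np.sum(b**2, 1)[None, :] - 2 * a @ b.T
    return np.exp(-gamma * d2)

def sphere_radius_sq(X_sv, alpha, k=0, gamma=0.5):
    """r^2 from a boundary support vector index k, the support vectors X_sv and coefficients alpha."""
    K = rbf_kernel(X_sv, X_sv, gamma)
    return K[k, k] - 2 * alpha @ K[:, k] + alpha @ K @ alpha

def dist_sq_to_center(z, X_sv, alpha, gamma=0.5):
    """Squared kernel-space distance of test points z to the hypersphere center."""
    Kzz = np.ones(len(z))                       # for the RBF kernel, K(z, z) = 1
    Kzx = rbf_kernel(z, X_sv, gamma)
    center_term = alpha @ rbf_kernel(X_sv, X_sv, gamma) @ alpha
    return Kzz - 2 * Kzx @ alpha + center_term
```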
(3.3) The support vector data description (SVDD) is determined, i.e. the support vectors satisfying 0 ≤ α_i ≤ C, and the hypersphere radius is the distance from any such boundary support vector to the center; if the distance from a test data point to the center of a certain fault hypersphere is less than or equal to its radius r, the test point belongs to that fault data type, achieving the classification of metering automation terminal fault types. The fault diagnosis method of the invention can automatically determine whether the metering automation terminal is a load control terminal, a dedicated transformer terminal or a concentrator, improves the accuracy, validity and real-time performance of metering automation terminal fault diagnosis, performs fault diagnosis and localization rapidly and accurately, can further reduce manual intervention, and raises the level of automation and intelligence of fault diagnosis.
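Building on the hypothetical helpers above, the multi-sphere decision of step (3.3) could be sketched as follows; normalizing each distance by that sphere's own radius is one plausible reading of the weighted normalization mentioned in the text, not a prescription from the patent.

```python
import numpy as np

def diagnose(test_x, spheres, gamma=0.5):
    """spheres: {fault_type: (X_sv, alpha, r_sq)}. Assign each test point to the
    fault hypersphere it falls into, comparing radius-normalized distances."""
    faults = list(spheres)
    ratios = np.column_stack([
        dist_sq_to_center(test_x, X_sv, alpha, gamma) / r_sq
        for X_sv, alpha, r_sq in (spheres[f] for f in faults)
    ])
    best = ratios.argmin(axis=1)
    return [faults[i] if ratios[n, i] <= 1.0 else None  # None: outside every fault sphere
            for n, i in enumerate(best)]
```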
The above disclosure is only a specific embodiment of the present invention, but the protection scope of the present invention is not limited to it; any variation or substitution that can readily be conceived by a person skilled in the art within the technical scope disclosed by the present invention shall be covered by the protection scope of the present invention.

Claims (6)

1. A metering automation terminal fault diagnosis method based on deep learning and support vector data description, characterized by including the following steps:
Step (1): the acquisition of sample data;
Voltage data, current data, local communication module read/write data flow, remote communication module data flow and switching input/output state data of the metering automation terminal are acquired in batches, with the same number of samples in every batch; after the collected data are normalized and preprocessed, they are divided into fault training samples and fault test samples;
Step (2): the foundation of DBN model;
A DBN model with more than one hidden layer is established; the number of input-layer nodes of the DBN model is determined according to the sample dimensionality of the fault training samples and fault test samples of step (1), and the DBN model is trained without supervision using the fault training samples; the number of output-layer nodes of the DBN model is determined according to the fault types of the metering automation terminal, and the connection weights and bias parameters of the DBN model are obtained by unsupervised layer-by-layer greedy training; the connection weights are fine-tuned to obtain the reference features of each fault type;
Step (3): fault diagnosis;
The bandwidth of the SVDD model of each fault type is established using the reference features of step (2), the hypersphere bandwidth radius of each fault is weighted and normalized, and the fault type of the metering automation terminal is then discriminated, realizing fault diagnosis of the metering automation terminal.
2. The metering automation terminal fault diagnosis method as claimed in claim 1, characterized in that in step (2) the training of the DBN model includes two parts: one part performs unsupervised layer-by-layer training of the RBMs, and the other part fine-tunes the DBN model with the back-propagation algorithm so that the network structure of the DBN model reaches its optimum.
3. The metering automation terminal fault diagnosis method as claimed in claim 2, characterized in that the specific training steps of the DBN model include the following sub-steps:
Step (2.1): the fault training samples are taken as the input of the DBN model; a given training sample is fed to the visible layer nodes of the first-layer RBM, all hidden layer nodes are activated using the joint probability distribution function of the RBM, and the visible layer nodes are then regenerated from the activations of the hidden layer nodes; next, the conditional distribution of the visible layer data is computed with the contrastive divergence algorithm to obtain the hidden layer data, the conditional distribution of the hidden layer data is used in turn to compute the visible layer data, the visible layer data is reconstructed, and the RBM model parameters are adjusted and updated;
Step (2.2): the output of the hidden layer of the first-layer RBM is taken as the visible-layer input of the second-layer RBM, until a stable state is reached;
Step (2.3): step (2.2) is repeated up to the last-layer RBM, completing the optimization of the RBM parameters θ = (w_ij, a_i, b_j), where a_i is the bias of the i-th node of the visible layer, b_j is the bias of the j-th node of the hidden layer, and w_ij is the connection weight between the i-th node of the visible layer and the j-th node of the hidden layer;
Step (2.4): after the training of the last-layer RBM hidden layer is completed, back-propagation training is carried out on the fault types output by the last hidden layer of the DBN model; the error between the fault type predicted in training and the actual fault type of the training sample is back-propagated layer by layer to fine-tune the connection weights of every layer of the DBN model, and the original data sample is reconstructed with minimum error, so as to obtain the essential features of the original metering automation terminal data samples; these essential features are taken as the reference features of the metering automation terminal fault types.
4. The metering automation terminal fault diagnosis method as claimed in claim 3, characterized in that in step (2.1) the joint probability distribution function of the RBM is:

$$P(v, h \mid \theta) = \frac{1}{Z(\theta)} \exp\left(-E(v, h \mid \theta)\right)$$

where Z(θ) is the normalization factor, E(v, h | θ) is the RBM energy function, h denotes the hidden layer neurons, and v denotes the visible layer neurons.
5. The metering automation terminal fault diagnosis method as claimed in claim 3, characterized in that in step (2.1) the contrastive divergence learning algorithm is:

$$\Delta w_{ij} = \varepsilon\left(\langle v_i h_j\rangle_{\mathrm{data}} - \langle v_i h_j\rangle_{1}\right)$$
$$\Delta a_i = \varepsilon\left(\langle v_i\rangle_{\mathrm{data}} - \langle v_i\rangle_{1}\right)$$
$$\Delta b_j = \varepsilon\left(\langle h_j\rangle_{\mathrm{data}} - \langle h_j\rangle_{1}\right)$$

where Δw_ij denotes the update of the connection weight between the i-th node of the visible layer and the j-th node of the hidden layer, Δa_i and Δb_j denote the updates of the bias parameters of the i-th visible node and the j-th hidden node respectively, ⟨·⟩_data is the expectation under the training data distribution, ⟨·⟩_1 is the expectation over the reconstructed samples obtained by one step of Gibbs sampling; ε is the learning rate, representing the step size of each parameter adjustment; h_j is a hidden layer neuron and v_i is a visible layer neuron.
6. The metering automation terminal fault diagnosis method as claimed in claim 1, characterized in that in step (3) the specific steps for discriminating the fault type of the metering automation terminal include:
Step (3.1): a minimum hypersphere containing the fault target training samples is constructed in the high-dimensional space obtained by kernel mapping; using the fault test sample data divided in step (1), test data x falling outside the hypersphere are regarded as non-target classes, while test data falling inside the hypersphere or on its boundary are regarded as the fault target class;
Assuming the sample set of fault training sample reference features is X = {x_1, x_2, ..., x_n}, x_i ∈ R^n, the Lagrangian function is established:

$$L(r, a, \xi, \alpha, \beta) = r^2 + C\sum_i \xi_i - \sum_i \alpha_i\left(r^2 + \xi_i - \left\|\phi(x_i) - a\right\|^2\right) - \sum_i \beta_i \xi_i$$

where α_i and β_i are the Lagrange multipliers, ξ_i (ξ_i ≥ 0) are the slack variables, C denotes the penalty factor, φ(x_i) is the nonlinear mapping function that maps the original space to the high-dimensional space, a is the hypersphere center, and r is the hypersphere radius;
Step (3.2): taking the partial derivatives of the Lagrangian of step (3.1) with respect to a, ξ_i and r and setting them to zero gives:

$$\sum_i \alpha_i = 1, \qquad a = \sum_i \alpha_i \phi(x_i), \qquad \alpha_i = C - \beta_i$$

Through optimization of the above, the optimal hypersphere classification problem is converted into its dual form:

$$\max_{\alpha}\; \sum_i \alpha_i K(x_i, x_i) - \sum_{i,j} \alpha_i \alpha_j K(x_i, x_j)$$

where K(x_i, x_j) is the kernel function, which maps the inner product of the fault data into the kernel function space, subject to the constraints

$$\sum_i \alpha_i = 1, \qquad 0 \le \alpha_i \le C$$
According to the KKT conditions, using a boundary support vector x_k that satisfies the constraints, the hypersphere radius is determined as:

$$r^2 = K(x_k, x_k) - 2\sum_i \alpha_i K(x_i, x_k) + \sum_{i,j} \alpha_i \alpha_j K(x_i, x_j)$$
Step (3.3): the support vector data description (SVDD) is determined, i.e. the support vectors satisfying 0 ≤ α_i ≤ C, and the hypersphere radius is the distance from any such boundary support vector to the center; if the distance from a test data point to the center of a certain fault hypersphere is less than or equal to its radius r, the test point belongs to that fault data type, achieving the classification of metering automation terminal fault types.
CN201811046099.3A 2018-09-07 2018-09-07 Metering automation terminal fault diagnosis method based on deep learning and support vector data description Active CN109490814B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811046099.3A CN109490814B (en) 2018-09-07 2018-09-07 Metering automation terminal fault diagnosis method based on deep learning and support vector data description

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811046099.3A CN109490814B (en) 2018-09-07 2018-09-07 Metering automation terminal fault diagnosis method based on deep learning and support vector data description

Publications (2)

Publication Number Publication Date
CN109490814A true CN109490814A (en) 2019-03-19
CN109490814B CN109490814B (en) 2021-02-26

Family

ID=65690661

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811046099.3A Active CN109490814B (en) 2018-09-07 2018-09-07 Metering automation terminal fault diagnosis method based on deep learning and support vector data description

Country Status (1)

Country Link
CN (1) CN109490814B (en)

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110222914A (en) * 2019-07-02 2019-09-10 国家电网有限公司 A kind of concentrator that accuracy rate is high operation prediction technique
CN110220725A (en) * 2019-05-30 2019-09-10 河海大学 A kind of wheel for metro vehicle health status prediction technique integrated based on deep learning and BP
CN110568082A (en) * 2019-09-02 2019-12-13 北京理工大学 cable wire breakage distinguishing method based on acoustic emission signals
CN110879377A (en) * 2019-11-22 2020-03-13 国网新疆电力有限公司电力科学研究院 Metering device fault tracing method based on deep belief network
CN110991121A (en) * 2019-11-19 2020-04-10 西安理工大学 Air preheater rotor deformation soft measurement method based on CDBN-SVR
CN111753889A (en) * 2020-06-11 2020-10-09 浙江浙能技术研究院有限公司 Induced draft fan fault identification method based on CNN-SVDD
CN112067053A (en) * 2020-09-07 2020-12-11 北京理工大学 Multi-strategy joint fault diagnosis method for minority class identification
CN112184037A (en) * 2020-09-30 2021-01-05 华中科技大学 Multi-modal process fault detection method based on weighted SVDD
CN113205506A (en) * 2021-05-17 2021-08-03 上海交通大学 Three-dimensional reconstruction method for full-space information of power equipment
CN113341347A (en) * 2021-06-02 2021-09-03 云南大学 Dynamic fault detection method for distribution transformer based on AOELM
CN113486950A (en) * 2021-07-05 2021-10-08 华能国际电力股份有限公司上安电厂 Intelligent pipe network water leakage detection method and system

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103489004A (en) * 2013-09-30 2014-01-01 华南理工大学 Method for achieving large category image identification of deep study network
CN104268627A (en) * 2014-09-10 2015-01-07 天津大学 Short-term wind speed forecasting method based on deep neural network transfer model
CN104616033A (en) * 2015-02-13 2015-05-13 重庆大学 Fault diagnosis method for rolling bearing based on deep learning and SVM (Support Vector Machine)
CN106980873A (en) * 2017-03-09 2017-07-25 南京理工大学 Fancy carp screening technique and device based on deep learning
CN107463937A (en) * 2017-06-20 2017-12-12 大连交通大学 A kind of tomato pest and disease damage automatic testing method based on transfer learning
US9875237B2 * 2013-03-14 2018-01-23 Microsoft Technology Licensing, LLC Using human perception in building language understanding models
CN108010029A (en) * 2017-12-27 2018-05-08 江南大学 Fabric defect detection method based on deep learning and support vector data description
US10063582B1 (en) * 2017-05-31 2018-08-28 Symantec Corporation Securing compromised network devices in a network

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9875237B2 * 2013-03-14 2018-01-23 Microsoft Technology Licensing, LLC Using human perception in building language understanding models
CN103489004A (en) * 2013-09-30 2014-01-01 华南理工大学 Method for achieving large category image identification of deep study network
CN104268627A (en) * 2014-09-10 2015-01-07 天津大学 Short-term wind speed forecasting method based on deep neural network transfer model
CN104616033A (en) * 2015-02-13 2015-05-13 重庆大学 Fault diagnosis method for rolling bearing based on deep learning and SVM (Support Vector Machine)
CN106980873A (en) * 2017-03-09 2017-07-25 南京理工大学 Fancy carp screening technique and device based on deep learning
US10063582B1 (en) * 2017-05-31 2018-08-28 Symantec Corporation Securing compromised network devices in a network
CN107463937A (en) * 2017-06-20 2017-12-12 大连交通大学 A kind of tomato pest and disease damage automatic testing method based on transfer learning
CN108010029A (en) * 2017-12-27 2018-05-08 江南大学 Fabric defect detection method based on deep learning and support vector data description

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
FENG JIA et al.: "Deep neural networks: A promising tool for fault characteristic mining and intelligent diagnosis of rotating machinery with massive data", Mechanical Systems and Signal Processing *

Cited By (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110220725A (en) * 2019-05-30 2019-09-10 河海大学 A kind of wheel for metro vehicle health status prediction technique integrated based on deep learning and BP
CN110222914A (en) * 2019-07-02 2019-09-10 国家电网有限公司 A kind of concentrator that accuracy rate is high operation prediction technique
CN110568082A (en) * 2019-09-02 2019-12-13 北京理工大学 cable wire breakage distinguishing method based on acoustic emission signals
CN110991121B (en) * 2019-11-19 2023-12-29 西安理工大学 CDBN-SVR-based soft measurement method for deformation of air preheater rotor
CN110991121A (en) * 2019-11-19 2020-04-10 西安理工大学 Air preheater rotor deformation soft measurement method based on CDBN-SVR
CN110879377A (en) * 2019-11-22 2020-03-13 国网新疆电力有限公司电力科学研究院 Metering device fault tracing method based on deep belief network
CN111753889A (en) * 2020-06-11 2020-10-09 浙江浙能技术研究院有限公司 Induced draft fan fault identification method based on CNN-SVDD
CN112067053A (en) * 2020-09-07 2020-12-11 北京理工大学 Multi-strategy joint fault diagnosis method for minority class identification
CN112184037A (en) * 2020-09-30 2021-01-05 华中科技大学 Multi-modal process fault detection method based on weighted SVDD
CN113205506A (en) * 2021-05-17 2021-08-03 上海交通大学 Three-dimensional reconstruction method for full-space information of power equipment
CN113205506B (en) * 2021-05-17 2022-12-27 上海交通大学 Three-dimensional reconstruction method for full-space information of power equipment
CN113341347A (en) * 2021-06-02 2021-09-03 云南大学 Dynamic fault detection method for distribution transformer based on AOELM
CN113341347B (en) * 2021-06-02 2022-05-03 云南大学 Dynamic fault detection method for distribution transformer based on AOELM
CN113486950A (en) * 2021-07-05 2021-10-08 华能国际电力股份有限公司上安电厂 Intelligent pipe network water leakage detection method and system
CN113486950B (en) * 2021-07-05 2023-06-16 华能国际电力股份有限公司上安电厂 Intelligent pipe network water leakage detection method and system

Also Published As

Publication number Publication date
CN109490814B (en) 2021-02-26

Similar Documents

Publication Publication Date Title
CN109490814A (en) Metering automation terminal fault diagnostic method based on deep learning and Support Vector data description
CN109492822B (en) Air pollutant concentration time-space domain correlation prediction method
CN109102126A (en) One kind being based on depth migration learning theory line loss per unit prediction model
CN110095744A (en) A kind of electronic mutual inductor error prediction method
CN106991666B (en) A kind of disease geo-radar image recognition methods suitable for more size pictorial informations
CN108520301A (en) A kind of circuit intermittent fault diagnostic method based on depth confidence network
CN108537337A (en) Lithium ion battery SOC prediction techniques based on optimization depth belief network
WO2021257128A2 (en) Quantum computing based deep learning for detection, diagnosis and other applications
Miao et al. A novel real-time fault diagnosis method for planetary gearbox using transferable hidden layer
CN110414718A (en) A kind of distribution network reliability index optimization method under deep learning
CN114266301A (en) Intelligent power equipment fault prediction method based on graph convolution neural network
CN112596016A (en) Transformer fault diagnosis method based on integration of multiple one-dimensional convolutional neural networks
CN116738339A (en) Multi-classification deep learning recognition detection method for small-sample electric signals
CN115603446A (en) Power distribution station area operation monitoring system based on convolution neural network and cloud edge synergistic effect
CN115757103A (en) Neural network test case generation method based on tree structure
CN111190072A (en) Centralized meter reading system diagnosis model establishing method, fault diagnosis method and fault diagnosis device
CN113901621A (en) SVM power distribution network topology identification method based on artificial fish swarm algorithm optimization
CN109214500A (en) A kind of transformer fault recognition methods based on integrated intelligent algorithm
CN113033898A (en) Electrical load prediction method and system based on K-means clustering and BI-LSTM neural network
CN112836876A (en) Power distribution network line load prediction method based on deep learning
CN117009841A (en) Model training method, motor fault diagnosis method and microcontroller
CN116520074A (en) Active power distribution network fault positioning method and system based on cloud edge cooperation
CN116565877A (en) Automatic voltage partition control method based on spectral cluster analysis
CN116167465A (en) Solar irradiance prediction method based on multivariate time series ensemble learning
CN114707613B (en) Layered depth strategy gradient network-based power grid regulation and control method

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant