CN110542819A - transformer fault type diagnosis method based on semi-supervised DBNC - Google Patents

transformer fault type diagnosis method based on semi-supervised DBNC

Info

Publication number
CN110542819A
CN110542819A
Authority
CN
China
Prior art keywords
data
network
samples
dbnc
training
Prior art date
Legal status
Granted
Application number
CN201910910452.6A
Other languages
Chinese (zh)
Other versions
CN110542819B (en)
Inventor
张英
张靖
赵靓玮
贺毅
Current Assignee
Guizhou University
Guizhou Power Grid Co Ltd
Original Assignee
Guizhou University
Guizhou Power Grid Co Ltd
Priority date
Filing date
Publication date
Application filed by Guizhou University, Guizhou Power Grid Co Ltd filed Critical Guizhou University
Priority to CN201910910452.6A priority Critical patent/CN110542819B/en
Publication of CN110542819A publication Critical patent/CN110542819A/en
Application granted granted Critical
Publication of CN110542819B publication Critical patent/CN110542819B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01R - MEASURING ELECTRIC VARIABLES; MEASURING MAGNETIC VARIABLES
    • G01R31/00 - Arrangements for testing electric properties; Arrangements for locating electric faults; Arrangements for electrical testing characterised by what is being tested not provided for elsewhere
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/21 - Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 - Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/04 - Architecture, e.g. interconnection topology
    • G06N3/045 - Combinations of networks


Abstract

The invention discloses a transformer fault type diagnosis method based on semi-supervised DBNC, comprising the following steps: select a sample data set; divide the sample data into an unlabeled pre-training set, a labeled set, test set 1, and test set 2; state-code the fault types; establish a transformer fault diagnosis model based on a deep belief network classifier; initialize the parameters of each layer of the model; train each bottom-layer RBM with contrastive divergence; fine-tune the whole network's parameters through back propagation so that its classification performance is globally optimal; store the trained network and verify its classification performance with the sample data of test set 1. The method addresses a key difficulty of applying deep learning networks to transformer fault data: usually only a small number of complete data samples can be obtained, and acquiring a large number of complete labeled samples is very difficult and consumes substantial manpower and material resources.

Description

transformer fault type diagnosis method based on semi-supervised DBNC
Technical Field
The invention belongs to the field of transformer fault diagnosis technology, and particularly relates to a transformer fault type diagnosis method based on semi-supervised DBNC.
Background
The power transformer is an important device for voltage transformation and power distribution in the power system, and its safety and reliability are closely related to the stability of the power system. However, faults caused by manufacturing defects, human factors, weather influences, and the like do occur, so diagnosing transformer faults and predicting their development trends has long attracted close attention. Most power transformers in China are oil-immersed. In the initial stage of a transformer fault, the gas formed dissolves in the oil; when the fault energy becomes large, free gas is formed. Dissolved gas analysis (DGA) in oil is therefore a principal means of diagnosing transformer faults.
At present, DGA-based power transformer fault diagnosis methods fall into two groups: traditional fault diagnosis methods and intelligent diagnosis methods. The traditional approach mainly uses the IEC three-ratio method. This method is widely applied in transformer fault diagnosis, and its accuracy is satisfactory when the ratios lie far from the interval boundary points; near a judgment boundary, however, the three-ratio method yields inaccurate judgments or even misjudgments. Facing these shortcomings of the traditional ratio method, researchers have taken DGA as the feature quantity and carried out extensive research on intelligent fault diagnosis methods: constructing a Bayesian network by a 3-step method and combining it with the DGA three-ratio method for transformer fault diagnosis; performing cluster analysis on dissolved-gas data with the fuzzy ISODATA method; applying a BP network to diagnose transformer faults; and diagnosing the transformer fault type with a support vector machine and its improved algorithms. However, these are all shallow machine learning methods with limited learning ability, and it is difficult to raise their diagnosis accuracy beyond a certain level. The prior art further proposes analyzing and processing a large amount of transformer fault data with a deep learning network to diagnose the transformer fault type. Deep learning, however, requires accurate and complete samples to obtain satisfactory results; usually only a small number of complete data samples can be obtained, and acquiring a large number of complete labeled samples is very difficult and consumes substantial manpower and material resources.
Disclosure of Invention
The technical problem to be solved by the invention is as follows: to provide a transformer fault type diagnosis method based on semi-supervised DBNC, addressing the problem that the prior art diagnoses transformer fault types by analyzing a large amount of transformer fault data with a deep learning network, yet deep learning requires accurate and complete samples to obtain satisfactory results; usually only a small number of complete data samples can be obtained, and acquiring a large number of complete labeled samples is very difficult and consumes substantial manpower and material resources.
The technical scheme of the invention is as follows:
a transformer fault type diagnosis method based on semi-supervised DBNC comprises the following steps:
Step 1, selecting a sample data set and normalizing it;
Since the thresholds for dissolved gas content in transformer oil are the same for voltage classes of 220 kV and below, all data selected by the invention are dissolved-gas content values for transformer oil at voltage classes of 220 kV and below. Each group of sample data is 1 × 8-dimensional, comprising CH4, C2H6, C2H4, C2H2, total hydrocarbons, H2, CO, and CO2. Because transformer faults develop and change over time, and because false fault data caused by monitoring equipment often exist, extracting a single monitoring record for fault diagnosis is of little significance. The sample data selected by the method are transformer oil chromatography monitoring data from three consecutive monitorings; each group of sample data is 3 × 8-dimensional and is converted into a 1 × 24-dimensional network input. All data are normalized to values in [0, 1] to avoid large differences in magnitude between sample dimensions. The normalization adopts the max-min method:
x' = (x - xmin) / (xmax - xmin)
where xmin is the minimum value in the data sequence and xmax is the maximum value in the data sequence.
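The normalization and reshaping of step 1 can be sketched as follows; the gas values below are illustrative placeholders, not real monitoring data:

```python
import numpy as np

def minmax_normalize(x):
    """Scale a data sequence to [0, 1] with the max-min method,
    x' = (x - xmin) / (xmax - xmin)."""
    x = np.asarray(x, dtype=float)
    xmin, xmax = x.min(), x.max()
    return (x - xmin) / (xmax - xmin)

# Three consecutive 1x8 oil-chromatography records (illustrative values):
# CH4, C2H6, C2H4, C2H2, total hydrocarbons, H2, CO, CO2
sample = np.array([[10.2, 3.1, 4.7, 0.0, 18.0, 25.3, 310.0, 2100.0],
                   [11.0, 3.3, 5.1, 0.1, 19.5, 26.0, 320.0, 2150.0],
                   [12.5, 3.4, 5.6, 0.2, 21.7, 27.1, 335.0, 2230.0]])

# 3x8 record -> normalized 1x24 network input
flat = minmax_normalize(sample).reshape(1, 24)
```

Whether to normalize per sequence, per gas, or per record is a design choice the patent does not spell out; the sketch normalizes the whole 3 × 8 block as one sequence.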
Step 2, dividing the sample data into an unlabeled pre-training set, a labeled set, test set 1, and test set 2;
Step 3, state-coding the fault types;
Step 4, establishing a transformer fault diagnosis model based on a deep belief network classifier;
Step 5, initializing the parameters of each layer of the model;
Step 6, training each bottom-layer RBM with contrastive divergence;
Step 7, fine-tuning the whole network's parameters through back propagation so that the network classification performance reaches the global optimum;
Step 8, storing the trained network and verifying its classification performance with the sample data of test set 1;
Step 9, adding the samples in test set 1 whose confidence is higher than the threshold to the pre-training set, deleting those samples from test set 1, and judging whether the data in the test set are used up: if so, executing step 10; if not, returning to step 8;
Step 10, storing the trained semi-supervised DBNC network;
Step 11, testing test set 2 with the stored semi-supervised DBNC network to obtain the classification result.
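The state coding of step 3 can be sketched as one-hot vectors. The five state names below are hypothetical placeholders standing in for Table 1; the actual code/state pairs are those given in the patent's Table 1:

```python
import numpy as np

# Hypothetical state labels (stand-ins for the patent's Table 1)
STATES = ["normal",
          "low-medium temperature overheating",
          "high temperature overheating",
          "low-energy discharge",
          "high-energy discharge"]

def encode_state(name):
    """One-hot state coding: a length-5 vector with a single 1."""
    vec = np.zeros(len(STATES))
    vec[STATES.index(name)] = 1.0
    return vec
```

A Softmax output layer with five units then matches this coding directly.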
In step 4, the method for establishing the transformer fault diagnosis model based on the deep belief network classifier is as follows:
Step 4.1, let DN = {xl+1, xl+2, …, xn} be the unlabeled pre-training set and DL = {(x1, y1), (x2, y2), …, (xl, yl)} be the labeled set; the algorithm flow is as follows:
Step 4.2, initializing a classifier with DL;
Step 4.3, randomly selecting data samples from DN and classifying them with the classifier; then selecting the samples whose confidence is higher than the threshold, putting them into the labeled set, and retraining the classifier with the enlarged labeled set; samples whose confidence is lower than the threshold are returned to the unlabeled pre-training set;
Step 4.4, repeating steps 4.2-4.3 until the stopping condition is met; the stopping condition is that DN is exhausted.
The sample data of the pre-training set and the labeled set are obtained by balancing the original data with a generative adversarial network (GAN). The GAN works as follows: its framework contains a pair of adversarial models, a discriminator and a generator. The discriminator judges whether data are real, outputting 1 if the input sample is real and 0 if it is fake; the generator approximates the distribution of the real data as closely as possible, so that the discriminator cannot judge whether the generator's output samples are real. When Nash equilibrium is reached between generator and discriminator, the GAN's objective is achieved.
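A rough illustration of this adversarial game, under strong simplifying assumptions: the "real" data is a 1-D toy distribution (not the patent's DGA data), the discriminator is a logistic unit, and the generator is linear. The alternating gradient updates follow the standard GAN objective with the non-saturating generator loss:

```python
import numpy as np

rng = np.random.default_rng(0)
sig = lambda x: 1.0 / (1.0 + np.exp(-x))

# Illustrative minority-class samples (1-D stand-in for a DGA feature)
real = rng.normal(4.0, 0.5, size=(256, 1))

g_w, g_b = 1.0, 0.0    # generator: z -> g_w * z + g_b
d_w, d_b = 0.1, 0.0    # discriminator: logistic regression
lr = 0.05

for step in range(500):
    z = rng.normal(size=(64, 1))
    fake = g_w * z + g_b
    x = real[rng.integers(0, len(real), 64)]
    # Discriminator ascent on log D(x) + log(1 - D(G(z)))
    dx, df = sig(d_w * x + d_b), sig(d_w * fake + d_b)
    d_w += lr * ((1 - dx) * x - df * fake).mean()
    d_b += lr * ((1 - dx) - df).mean()
    # Generator ascent on log D(G(z)) (non-saturating loss)
    df = sig(d_w * (g_w * z + g_b) + d_b)
    g_w += lr * ((1 - df) * d_w * z).mean()
    g_b += lr * ((1 - df) * d_w).mean()

# Synthetic samples that could supplement a scarce fault class
synthetic = g_w * rng.normal(size=(1000, 1)) + g_b
```

As the patent notes, such generated samples would be used only for training; testing stays on real data.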
The invention has the beneficial effects that:
To solve the problem that the few available complete data samples cannot meet the training requirements of a DBNC network while a large number of unlabeled field data samples go unused, the invention provides a transformer fault diagnosis method based on a semi-supervised DBNC (deep belief network classifier) network. A self-training algorithm from semi-supervised learning enlarges the training set, so the DBNC network converges better, and a generative adversarial network mitigates the imbalance of the original data. Simulation analysis on an actual data set shows that the semi-supervised DBNC network can exploit unlabeled data samples and improve transformer fault diagnosis performance. With the arrival of the era of power big data, the semi-supervised learning algorithm can better meet the demands of the times, and the method solves the problem that the prior art diagnoses transformer fault types by analyzing large amounts of fault data with a deep learning network while deep learning requires accurate and complete samples: usually only a small number of complete data samples can be obtained, and acquiring a large number of complete labeled samples is very difficult and consumes substantial manpower and material resources.
Detailed Description
The invention selects samples with high confidence using the DBNC network, thereby enlarging the number of training samples.
Deep belief network classifier
The network structure of the deep belief network classifier is formed by stacking an input layer, several restricted Boltzmann machines (RBMs), and a top classification layer. The top classifier is a Softmax classifier, which outputs not only the classification result but also the probability of each class, making it well suited to nonlinear multi-class problems.
When the deep belief network classifier is used to process a multi-class problem, the training process is divided into two stages: pre-training and fine-tuning.
(1) In the pre-training stage, a layer-by-layer training method initializes the connection weights and biases between the layers of the network; this is an unsupervised learning process.
Taking a single RBM as an example, the network structure comprises two layers, a visible layer v and a hidden layer h, and the energy of a single RBM can be expressed as:
E(v, h | θ) = -Σi ai vi - Σj bj hj - Σi Σj vi Wij hj  (1)
In the formula: r is the number of visible-layer units; t is the number of hidden-layer units; vi is the value of the i-th visible unit; hj is the value of the j-th hidden unit; Wij is the connection weight between the i-th visible unit and the j-th hidden unit; a is the bias vector of the visible layer; b is the bias vector of the hidden layer. For compactness, the parameters Wij, a, and b are collectively denoted θ. From equation (1), the joint probability distribution of (v, h) can be expressed as:
P(v, h | θ) = e^(-E(v, h | θ)) / Z(θ)  (2)
In the formula: Z(θ) = Σ_{v,h} e^(-E(v, h | θ)) is called the normalization factor. Further, the likelihood function of P(v | θ) can be expressed as:
P(v | θ) = (1 / Z(θ)) Σ_h e^(-E(v, h | θ))  (3)
To solve for the parameter θ, equation (3) must be maximized by gradient methods. However, the direct calculation is very complicated and costly, so to make it easier the logarithm of equation (3) is maximized instead. The key step is the partial derivative of log P(v | θ) with respect to θ:
∂ log P(v | θ) / ∂θ = ⟨-∂E/∂θ⟩_{P(h | v, θ)} - ⟨-∂E/∂θ⟩_{P(v, h | θ)}  (4)
In the formula: ⟨·⟩_P denotes the mathematical expectation with respect to the distribution P; P(h | v, θ) is the distribution of the hidden layer given the visible-layer training sample v; P(v, h | θ) is the joint distribution of the v-layer and the h-layer. Denoting P(h | v, θ) by D and P(v, h | θ) by M, the partial derivatives of log P(v | θ) with respect to the parameters in the single-sample case are:
∂ log P(v | θ) / ∂Wij = ⟨vi hj⟩_D - ⟨vi hj⟩_M
∂ log P(v | θ) / ∂ai = ⟨vi⟩_D - ⟨vi⟩_M
∂ log P(v | θ) / ∂bj = ⟨hj⟩_D - ⟨hj⟩_M  (5)
In the formula: ⟨·⟩_D is the expectation over the data set and ⟨·⟩_M the expectation under the model distribution, which is hard to compute exactly since unbiased samples are difficult to obtain in practice. Therefore, when approximately sampling reconstructed data and updating the network parameters θ, the invention adopts the contrastive divergence (CD) algorithm. Taking a training sample x0 as an example, the specific steps of the CD algorithm are as follows:
Step 1: initialize the network parameters, including the initial visible-layer state v0 = x0, the parameters θ, and the maximum number of iterations used when training each RBM;
Step 2: for each hidden-layer unit in the network, sample h0j ∈ {0, 1} from P(h0j = 1 | v0) = σ(bj + Σi v0i Wij), where σ(x) = 1 / (1 + e^(-x)) is the sigmoid function;
Step 3: for each visible-layer unit in the network, sample v1i ∈ {0, 1} from P(v1i = 1 | h0) = σ(ai + Σj Wij h0j);
Step 4: for each hidden-layer unit in the network, compute P(h1j = 1 | v1) = σ(bj + Σi v1i Wij);
Step 5: update the parameters W, a, and b according to the following three equations, where ρ is the learning rate:
W ← W + ρ[P(h0 = 1 | v0)v0ᵀ - P(h1 = 1 | v1)v1ᵀ]  (11)
a ← a + ρ(v0 - v1)  (12)
b ← b + ρ[P(h0 = 1 | v0) - P(h1 = 1 | v1)]  (13)
Step 6: judge whether the network reconstruction error of each RBM meets the set accuracy requirement; if not, repeat steps 2 to 5; if so, stop.
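A minimal sketch of CD-1 along the lines of steps 1-6; the 7-unit hidden layer and learning rate are illustrative, and averaging the updates over a mini-batch of samples is an assumption (a common variant) rather than the patent's exact procedure:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_rbm_cd1(data, t_hidden=7, rho=0.1, epochs=10, seed=0):
    """One-step contrastive divergence for a single RBM.
    `data` is an (n, r) array of binary visible vectors."""
    rng = np.random.default_rng(seed)
    n, r = data.shape
    W = 0.01 * rng.standard_normal((r, t_hidden))
    a = np.zeros(r)          # visible bias
    b = np.zeros(t_hidden)   # hidden bias
    for _ in range(epochs):
        v0 = data
        p_h0 = sigmoid(v0 @ W + b)                  # P(h0 = 1 | v0)
        h0 = (rng.random(p_h0.shape) < p_h0).astype(float)
        p_v1 = sigmoid(h0 @ W.T + a)                # P(v1 = 1 | h0)
        v1 = (rng.random(p_v1.shape) < p_v1).astype(float)
        p_h1 = sigmoid(v1 @ W + b)                  # P(h1 = 1 | v1)
        W += rho * (v0.T @ p_h0 - v1.T @ p_h1) / n  # eq. (11)
        a += rho * (v0 - v1).mean(axis=0)           # eq. (12)
        b += rho * (p_h0 - p_h1).mean(axis=0)       # eq. (13)
    return W, a, b
```

In a DBN, this routine would be applied layer by layer, each trained RBM's hidden activations becoming the next RBM's visible data.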
(2) The fine-tuning stage is a supervised learning process. In this stage, the data labels are compared with the corresponding network predictions, and the resulting error is propagated from the top layer down to the bottom layer to adjust the parameters of every unit of the whole network, so that the DBNC network reaches its best classification performance; the process is similar to back propagation in a BP neural network.
Therefore, the technical scheme of the invention is as follows:
A transformer fault type diagnosis method based on semi-supervised DBNC comprises the following steps:
Step 1, selecting a sample data set and normalizing it;
Step 2, dividing the sample data into an unlabeled pre-training set, a labeled set, test set 1, and test set 2;
Step 3, state-coding the fault types;
Step 4, establishing a transformer fault diagnosis model based on a deep belief network classifier;
Step 5, initializing the parameters of each layer of the model;
Step 6, training each bottom-layer RBM with contrastive divergence;
Step 7, fine-tuning the whole network's parameters through back propagation so that the network classification performance reaches the global optimum;
Step 8, storing the trained network and verifying its classification performance with the sample data of test set 1;
Step 9, adding the samples in test set 1 whose confidence is higher than the threshold to the pre-training set, deleting those samples from test set 1, and judging whether the data in the test set are used up: if so, executing step 10; if not, returning to step 8;
Step 10, storing the trained semi-supervised DBNC network;
Step 11, testing test set 2 with the stored semi-supervised DBNC network to obtain the classification result.
The method for selecting the sample data set and normalizing it in step 1 is as follows:
Each group of sample data is 1 × 8-dimensional, comprising CH4, C2H6, C2H4, C2H2, total hydrocarbons, H2, CO, and CO2;
The selected sample data are transformer oil chromatography monitoring data from three consecutive monitorings; each group is 3 × 8-dimensional and is converted into a 1 × 24-dimensional network input. All data are normalized to values in [0, 1] by the max-min method:
x' = (x - xmin) / (xmax - xmin)
where xmin is the minimum value in the data sequence and xmax is the maximum value.
In step 4, the method for establishing the transformer fault diagnosis model based on the deep belief network classifier is as follows:
Step 4.1, let DN = {xl+1, xl+2, …, xn} be the unlabeled pre-training set and DL = {(x1, y1), (x2, y2), …, (xl, yl)} be the labeled set; the algorithm flow is as follows:
Step 4.2, initializing a classifier with DL;
Step 4.3, randomly selecting data samples from DN and classifying them with the classifier; then selecting the samples whose confidence is higher than the threshold, putting them into the labeled set, and retraining the classifier with the enlarged labeled set; samples whose confidence is lower than the threshold are returned to the unlabeled pre-training set;
Step 4.4, repeating steps 4.2-4.3 until the stopping condition is met; the stopping condition is that DN is exhausted.
In transformer oil chromatography monitoring, the amounts of normal-state and abnormal-state sample data differ greatly; in the monitoring data obtained by the method, the ratio of normal to abnormal data is about 50:1. Because of this data imbalance, the classification performance of the DBNC network is unsatisfactory, with serious overfitting and non-convergence. To solve the imbalanced-data problem, a generative adversarial network (GAN) is selected for data balancing.
The sample data of the pre-training set and the labeled set are obtained by balancing the original data with a generative adversarial network. The GAN works as follows: its framework contains a pair of adversarial models, a discriminator and a generator. The discriminator judges whether data are real, outputting 1 if the input sample is real and 0 if it is fake; the generator approximates the distribution of the real data as closely as possible, so that the discriminator cannot judge whether the generator's output samples are real. When Nash equilibrium is reached between generator and discriminator, the GAN's objective is achieved. By learning and generating each type of data, the amount of fault data is supplemented so that the data become balanced; however, the generated samples are used only for network training, and network testing is performed uniformly with real data.
In the simulation part of the invention, the computer runs Windows 10 (64-bit) with an Intel i7-6500U CPU and 8 GB of memory; the modeling and simulation platform is MATLAB R2014a. Table 1 shows the five transformer states and their corresponding codes. 5800 groups of sampled data were obtained: 5000 labeled groups were used to train the network, with each of the 5 data types occupying 1000 groups; 400 unlabeled groups were used to expand the training set, and another 400 unlabeled groups were used to test the network's classification performance. Of the 800 unlabeled groups, each of the 5 data types occupies 160 groups.
TABLE 1 output result code and transformer state correspondence
Three parameters are difficult to determine when constructing the DBNC network: the number of hidden layers, the number of hidden-layer units, and the number of pre-training iterations.
First, the number of hidden layers is commonly determined by increasing it one layer at a time starting from a single hidden layer. The classification accuracy reaches its maximum when the number of hidden layers is 2; beyond that, test-set accuracy tends to drop because of overfitting. Therefore, the number of RBM layers is set to 2.
Second, there is no literature consensus on how to determine the number of RBM units in each layer. Empirically, the number of hidden nodes is related to the numbers of input-layer and output-layer nodes as:
Nh = λ(Ni + No)  (15)
where Nh is the number of hidden-layer nodes, Ni the number of input-layer nodes, No the number of output-layer nodes, and λ a constant, generally taken as 2/3. Although 24 nodes are actually input, the input is formed by splicing three groups of 1 × 8-dimensional samples, so Ni here should be 8; the output has five classes, so No = 5, the corresponding Nh value is about 8.7, and rounding gives Nh ≈ 8-9. In fact, the classification accuracy begins to stabilize when the number of RBM units per layer is about 6, fewer than originally predicted. Fewer hidden-layer units effectively reduce overfitting but may not converge well; more units may converge well but waste capacity during iteration. As a compromise, the number of hidden-layer units is set to 7.
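As a quick check of equation (15) with the values used here:

```python
# Hidden-unit estimate Nh = lambda * (Ni + No) with the values from the text
lam, Ni, No = 2 / 3, 8, 5
Nh = lam * (Ni + No)   # about 8.67, i.e. roughly 8-9 before the compromise to 7
```

This reproduces the text's value of about 8.7 hidden units before the empirical reduction to 7.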
Third, regarding the number of pre-training iterations: in the training of each RBM layer, the visible layer first passes its data to the hidden layer, and in the reconstruction phase the activation state of the hidden layer becomes the input of the reverse pass. The hidden layer's activation state is multiplied by the corresponding weights, in the same way as when the visible layer passes information to the hidden layer; the resulting sums, plus the bias term of each corresponding visible-layer node, constitute the reconstruction, and the reconstruction result r approximates the original input.
Since the weights of each RBM layer are initialized randomly, the error between the reconstruction and the original input tends to be large at first. The difference between the reconstruction result r and the original input is called the reconstruction error; it is propagated back and forth with the RBM weights during iteration until the error meets the accuracy requirement. The average reconstruction error of each RBM layer remains essentially constant after 10 iterations, so training of each RBM layer is stopped after 10 iterations.
The classifier in self-training learning is replaced by a DBNC network. 100 experiments were carried out with a variable learning rate and a confidence threshold of 0.9, observing the average classification accuracy on the 400 groups of test data as the training data increase. The transformer fault diagnosis accuracies of a BP neural network, an SVM, and the DBNC method before and after the semi-supervised improvement were compared as the training-set size grows from 500 to 5000; the results are shown in Table 2.
TABLE 2 comparison of Classification accuracy before and after three algorithms improvement
As can be seen from Table 2, the average accuracies of the BP and SVM algorithms stop increasing after reaching a certain value, while the average accuracy of the standard DBNC algorithm, though initially below theirs, does not stagnate as the training samples increase. The main reason is that both the BP neural network and the SVM belong to the category of shallow learning networks, whose inherent learning and expansion abilities are very limited; moreover, shallow networks are ill-suited to training on large sample sets and generalize weakly. By contrast, thanks to the complex multi-hidden-layer topology of the DBNC and the network stability brought by the layer-by-layer training algorithm, the DBNC network can train on, analyze, and learn from large data volumes, so the larger the training sample volume, the more beneficial it is to the DBNC's feature learning.
To solve the problem that the few available complete data samples cannot meet the training requirements of a DBNC network while a large number of unlabeled field data samples go unused, the invention discloses a transformer fault diagnosis method based on a semi-supervised DBNC (deep belief network classifier) network. A self-training algorithm from semi-supervised learning enlarges the training set, so the DBNC network converges better, and a generative adversarial network mitigates the imbalance of the original data. Simulation analysis on an actual data set shows that the semi-supervised DBNC network can exploit unlabeled data samples and improve transformer fault diagnosis performance. With the arrival of the era of power big data, the semi-supervised learning algorithm can better meet the demands of the times.

Claims (4)

1. A transformer fault type diagnosis method based on a semi-supervised DBNC, comprising the following steps:
Step 1, selecting a sample data set and carrying out normalization processing on it;
Step 2, dividing the sample data into an unlabeled pre-training set, a labeled set, test set 1 and test set 2;
Step 3, carrying out state coding of the fault types;
Step 4, establishing a transformer fault diagnosis model based on a deep belief network classifier;
Step 5, initializing the parameters of each layer of the model;
Step 6, training each bottom-layer RBM using contrastive divergence;
Step 7, optimizing the parameters of the whole network through back-propagation so that the network's classification performance reaches a global optimum;
Step 8, saving the trained network and verifying its classification performance with the sample data of test set 1;
Step 9, adding the samples in test set 1 whose confidence is higher than the threshold to the pre-training set and deleting those samples from test set 1; judging whether the data in test set 1 are used up, executing Step 10 if so, and returning to Step 8 if not;
Step 10, saving the trained semi-supervised DBNC network;
Step 11, testing test set 2 with the saved semi-supervised DBNC network to obtain the classification result.
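Step 6's layer-wise pre-training can be sketched as follows: a minimal NumPy implementation of contrastive divergence (CD-1) for a single Bernoulli RBM. The 1 × 24 input size matches the normalized gas vectors described in claim 2, but the 16-unit hidden layer, learning rate and epoch count are illustrative assumptions, not values from the patent.

```python
import numpy as np

rng = np.random.default_rng(42)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_rbm_cd1(data, n_hidden=16, lr=0.05, epochs=50):
    """Train one Bernoulli RBM with CD-1 (a single Gibbs step), the
    layer-wise pre-training applied to each RBM in the DBN stack."""
    n_visible = data.shape[1]
    W = rng.normal(0, 0.01, (n_visible, n_hidden))
    b_v = np.zeros(n_visible)   # visible bias
    b_h = np.zeros(n_hidden)    # hidden bias
    for _ in range(epochs):
        v0 = data
        # positive phase: hidden activations / samples given the data
        p_h0 = sigmoid(v0 @ W + b_h)
        h0 = (rng.random(p_h0.shape) < p_h0).astype(float)
        # negative phase: one Gibbs step (reconstruction of the visibles)
        p_v1 = sigmoid(h0 @ W.T + b_v)
        p_h1 = sigmoid(p_v1 @ W + b_h)
        # CD-1 update: <v h>_data - <v h>_reconstruction
        n = v0.shape[0]
        W += lr * (v0.T @ p_h0 - p_v1.T @ p_h1) / n
        b_v += lr * (v0 - p_v1).mean(axis=0)
        b_h += lr * (p_h0 - p_h1).mean(axis=0)
    return W, b_v, b_h

# Toy usage: values in [0, 1], standing in for the normalized 1x24 gas vectors.
X = rng.random((100, 24))
W, b_v, b_h = train_rbm_cd1(X)
hidden_rep = sigmoid(X @ W + b_h)   # features fed to the next RBM / classifier
```

Stacking several such RBMs, each trained on the previous layer's `hidden_rep`, and adding a classification layer fine-tuned by back-propagation (Step 7) gives the DBNC structure the claim describes.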
2. The semi-supervised-DBNC-based transformer fault type diagnosis method as claimed in claim 1, wherein the method of selecting the sample data set and normalizing it in Step 1 comprises:
each monitored reading is a 1 × 8-dimensional vector of the eight features CH4, C2H6, C2H4, C2H2, total hydrocarbons, H2, CO and CO2;
the selected sample data are transformer oil chromatography monitoring data from three consecutive readings, so each group of sample data is 3 × 8-dimensional and is reshaped into a 1 × 24-dimensional network input; all data are normalized to values in [0, 1] using the max-min method:

x' = (x - xmin) / (xmax - xmin)

where xmin is the minimum value in the data sequence and xmax is the maximum value in the data sequence.
3. The semi-supervised-DBNC-based transformer fault type diagnosis method as claimed in claim 1, wherein the method of establishing the transformer fault diagnosis model based on the deep belief network classifier in Step 4 comprises:
Step 4.1, letting DN = {xl+1, xl+2, …, xn} be the unlabeled pre-training set and DL = {(x1, y1), (x2, y2), …, (xl, yl)} be the labeled set; the algorithm flow is as follows:
Step 4.2, initializing a classifier with DL;
Step 4.3, randomly selecting data samples from DN and classifying them with the classifier; then putting the samples whose confidence is higher than the threshold into the labeled set and retraining the classifier with it; returning the samples whose confidence is lower than the threshold to the unlabeled pre-training set;
Step 4.4, repeating Steps 4.2–4.3 until the stop condition is met; the stop condition is that DN is used up.
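The self-training flow of Steps 4.1–4.4 can be sketched as follows. For brevity the sketch substitutes scikit-learn's `LogisticRegression` for the DBNC classifier, and the 0.9 confidence threshold and two-blob toy data are illustrative assumptions:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def self_train(clf, X_labeled, y_labeled, X_unlabeled, threshold=0.9):
    """Self-training: repeatedly pseudo-label high-confidence unlabeled
    samples and retrain, until DN is used up or nothing clears the
    confidence threshold."""
    X_l, y_l = X_labeled.copy(), y_labeled.copy()
    pool = X_unlabeled.copy()
    clf.fit(X_l, y_l)                        # Step 4.2: initialize with DL
    while len(pool) > 0:
        proba = clf.predict_proba(pool)      # Step 4.3: classify DN samples
        conf = proba.max(axis=1)
        keep = conf >= threshold             # high-confidence pseudo-labels
        if not keep.any():                   # no sample clears the threshold
            break
        y_pseudo = clf.classes_[proba[keep].argmax(axis=1)]
        X_l = np.vstack([X_l, pool[keep]])
        y_l = np.concatenate([y_l, y_pseudo])
        pool = pool[~keep]                   # low-confidence samples stay in DN
        clf.fit(X_l, y_l)                    # retrain on the enlarged DL
    return clf

# Toy usage on two well-separated Gaussian blobs.
rng = np.random.default_rng(0)
X_l = np.vstack([rng.normal(-2, 0.5, (10, 2)), rng.normal(2, 0.5, (10, 2))])
y_l = np.array([0] * 10 + [1] * 10)
X_u = np.vstack([rng.normal(-2, 0.5, (50, 2)), rng.normal(2, 0.5, (50, 2))])
model = self_train(LogisticRegression(), X_l, y_l, X_u)
```

In the patent's method the same loop wraps the DBNC itself, so the pseudo-labeled samples directly enlarge the set available for layer-wise pre-training and fine-tuning.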
4. The semi-supervised-DBNC-based transformer fault type diagnosis method as claimed in claim 1, wherein: the sample data of the pre-training set and the labeled set are obtained by feeding the original data to a generative adversarial network for data balance processing; the generative adversarial network method comprises the following: the generative adversarial network framework contains a pair of adversarial models, a discriminator and a generator; the discriminator judges whether data are real, outputting 1 if the input sample is real and 0 if it is fake; the generator approximates the distribution of the real data as closely as possible, so that the discriminator cannot tell whether the generator's output samples are real; when the generator and the discriminator reach a Nash equilibrium, the goal of the generative adversarial network is achieved.
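The adversarial game described above can be illustrated with a deliberately tiny one-dimensional GAN. The linear generator, logistic discriminator, learning rate and Gaussian "real" distribution are all toy assumptions, not the network the patent trains:

```python
import numpy as np

rng = np.random.default_rng(7)
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

# Toy setup: the scarce "real" minority-class feature is 1-D Gaussian near 4.0.
def real_batch(n):
    return rng.normal(4.0, 0.5, n)

# Generator G(z) = a*z + b and discriminator D(x) = sigmoid(w*x + c),
# kept tiny so the alternating adversarial updates stay readable.
a, b = 1.0, 0.0        # generator parameters
w, c = 0.1, 0.0        # discriminator parameters
lr = 0.05

for step in range(2000):
    xr = real_batch(64)                       # real samples
    z = rng.normal(0.0, 1.0, 64)
    xf = a * z + b                            # fake samples from the generator
    # --- discriminator: ascend log D(real) + log(1 - D(fake)) ---
    dr, df = sigmoid(w * xr + c), sigmoid(w * xf + c)
    w += lr * np.mean((1 - dr) * xr - df * xf)
    c += lr * np.mean((1 - dr) - df)
    # --- generator: ascend log D(fake) (non-saturating loss) ---
    df = sigmoid(w * (a * z + b) + c)
    a += lr * np.mean((1 - df) * w * z)
    b += lr * np.mean((1 - df) * w)

# Synthetic samples that could pad out an under-represented fault class.
fake = a * rng.normal(0.0, 1.0, 1000) + b
```

At equilibrium the generator's output distribution overlaps the real one and D(x) hovers near 0.5; in the patent's use case the generated samples rebalance the under-represented fault classes before training.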
CN201910910452.6A 2019-09-25 2019-09-25 Transformer fault type diagnosis method based on semi-supervised DBNC Active CN110542819B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910910452.6A CN110542819B (en) 2019-09-25 2019-09-25 Transformer fault type diagnosis method based on semi-supervised DBNC

Publications (2)

Publication Number Publication Date
CN110542819A true CN110542819A (en) 2019-12-06
CN110542819B CN110542819B (en) 2022-03-22

Family

ID=68714496

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910910452.6A Active CN110542819B (en) 2019-09-25 2019-09-25 Transformer fault type diagnosis method based on semi-supervised DBNC

Country Status (1)

Country Link
CN (1) CN110542819B (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107644235A (en) * 2017-10-24 2018-01-30 广西师范大学 Image automatic annotation method based on semi-supervised learning
CN108170994A (en) * 2018-01-29 2018-06-15 河海大学 A kind of oil-immersed electric reactor method for diagnosing faults based on two-way depth network
CN109214416A (en) * 2018-07-23 2019-01-15 华南理工大学 A kind of multidimensional information fusion Diagnosis Method of Transformer Faults based on deep learning
CN109298258A (en) * 2018-09-18 2019-02-01 四川大学 In conjunction with the Diagnosis Method of Transformer Faults and system of RVM and DBN
CN109813542A (en) * 2019-03-15 2019-05-28 中国计量大学 The method for diagnosing faults of air-treatment unit based on production confrontation network
CN110146792A (en) * 2019-05-17 2019-08-20 西安工程大学 Based on the partial discharge of transformer map generation method for generating confrontation network


Non-Patent Citations (6)

* Cited by examiner, † Cited by third party
Title
LIN, J et al.: "Prediction Method for Power Transformer Running State Based on LSTM_DBN Network", Energies *
LIU Kun et al.: "X-ray image classification algorithm based on semi-supervised generative adversarial networks", Acta Optica Sinica *
LYU Chengcheng: "Research on ensemble learning algorithms for incremental NIR semi-supervised SVR", China Master's Theses Full-text Database, Information Science and Technology *
HONG Yang et al.: "A survey of deep convolutional generative adversarial networks", Proceedings of the 18th China Annual Conference on System Simulation Technology and its Applications *
SHI Xin: "Research on transformer fault diagnosis technology based on deep learning", China Master's Theses Full-text Database, Engineering Science and Technology II *
SHI Xin et al.: "Power transformer fault classification modeling based on deep belief network", Power System Protection and Control *

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111366820A (en) * 2020-03-09 2020-07-03 广东电网有限责任公司电力科学研究院 Pattern recognition method, device, equipment and storage medium for partial discharge signal
CN111539486A (en) * 2020-05-12 2020-08-14 国网四川省电力公司电力科学研究院 Transformer fault diagnosis method based on Dropout deep confidence network
CN111982514A (en) * 2020-08-12 2020-11-24 河北工业大学 Bearing fault diagnosis method based on semi-supervised deep belief network
CN113239469A (en) * 2021-06-15 2021-08-10 南方科技大学 Structure optimization method, device, equipment and storage medium for vehicle body parts
CN115935807A (en) * 2021-06-28 2023-04-07 山东华科信息技术有限公司 Diagnostic model training method based on graph Markov neural network
CN115935807B (en) * 2021-06-28 2024-06-14 山东华科信息技术有限公司 Diagnostic model training method based on graph Markov neural network
CN113884290A (en) * 2021-09-28 2022-01-04 江南大学 Voltage regulator fault diagnosis method based on self-training semi-supervised generation countermeasure network
CN113884290B (en) * 2021-09-28 2022-08-02 江南大学 Voltage regulator fault diagnosis method based on self-training semi-supervised generation countermeasure network
CN115127192B (en) * 2022-05-20 2024-01-23 中南大学 Semi-supervised water chilling unit fault diagnosis method and system based on graph neural network
CN115127192A (en) * 2022-05-20 2022-09-30 中南大学 Semi-supervised water chilling unit fault diagnosis method and system based on graph neural network
CN114994464A (en) * 2022-08-01 2022-09-02 四川中电启明星信息技术有限公司 Distribution network hidden danger identification method based on generation countermeasure network
CN117390520B (en) * 2023-12-08 2024-04-16 惠州市宝惠电子科技有限公司 Transformer state monitoring method and system
CN117390520A (en) * 2023-12-08 2024-01-12 惠州市宝惠电子科技有限公司 Transformer state monitoring method and system

Also Published As

Publication number Publication date
CN110542819B (en) 2022-03-22

Similar Documents

Publication Publication Date Title
CN110542819B (en) Transformer fault type diagnosis method based on semi-supervised DBNC
CN111860982A (en) Wind power plant short-term wind power prediction method based on VMD-FCM-GRU
CN105930901B (en) A kind of Diagnosis Method of Transformer Faults based on RBPNN
CN108051660A (en) A kind of transformer fault combined diagnosis method for establishing model and diagnostic method
CN104155574A (en) Power distribution network fault classification method based on adaptive neuro-fuzzy inference system
CN110689069A (en) Transformer fault type diagnosis method based on semi-supervised BP network
WO2023005700A1 (en) Model-data-hybrid-driven electrical grid reliability rapid calculation method and device
CN110689068B (en) Transformer fault type diagnosis method based on semi-supervised SVM
CN114167180B (en) Oil-filled electrical equipment fault diagnosis method based on graph attention neural network
Mustafa et al. Fault identification for photovoltaic systems using a multi-output deep learning approach
CN108879732A (en) Transient stability evaluation in power system method and device
CN115563563A (en) Fault diagnosis method and device based on transformer oil chromatographic analysis
CN110879373A (en) Oil-immersed transformer fault diagnosis method with neural network and decision fusion
CN109711435A (en) A kind of support vector machines on-Line Voltage stability monitoring method based on genetic algorithm
CN110705831A (en) Power angle instability mode pre-judgment model construction method after power system fault and application thereof
CN115618732A (en) Nuclear reactor digital twin key parameter autonomous optimization data inversion method
CN116842337A (en) Transformer fault diagnosis method based on LightGBM (gallium nitride based) optimal characteristics and COA-CNN (chip on board) model
CN116562121A (en) XGBoost and FocalLoss combined cable aging state assessment method
CN116562114A (en) Power transformer fault diagnosis method based on graph convolution neural network
CN114266396A (en) Transient stability discrimination method based on intelligent screening of power grid characteristics
CN114595883A (en) Oil-immersed transformer residual life personalized dynamic prediction method based on meta-learning
CN114021758A (en) Operation and maintenance personnel intelligent recommendation method and device based on fusion of gradient lifting decision tree and logistic regression
CN113033898A (en) Electrical load prediction method and system based on K-means clustering and BI-LSTM neural network
CN117421571A (en) Topology real-time identification method and system based on power distribution network
CN111061708A (en) Electric energy prediction and restoration method based on LSTM neural network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant