CN110309010B - Partial discharge network training method and device for phase resolution of power equipment - Google Patents

Partial discharge network training method and device for phase resolution of power equipment

Info

Publication number
CN110309010B
CN110309010B
Authority
CN
China
Prior art keywords
training
neural network
partial discharge
sample
data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910485801.4A
Other languages
Chinese (zh)
Other versions
CN110309010A (en)
Inventor
贾骏
杨景刚
胡成博
刘洋
徐阳
张照辉
路永玲
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
State Grid Corp of China SGCC
Southeast University
Electric Power Research Institute of State Grid Jiangsu Electric Power Co Ltd
Original Assignee
State Grid Corp of China SGCC
Southeast University
Electric Power Research Institute of State Grid Jiangsu Electric Power Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by State Grid Corp of China SGCC, Southeast University, Electric Power Research Institute of State Grid Jiangsu Electric Power Co Ltd
Priority to CN201910485801.4A
Publication of CN110309010A
Application granted
Publication of CN110309010B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 11/00 Error detection; Error correction; Monitoring
    • G06F 11/07 Responding to the occurrence of a fault, e.g. fault tolerance
    • G06F 11/0703 Error or fault processing not based on redundancy, i.e. by taking additional measures to deal with the error or fault not making use of redundancy in operation, in hardware, or in data representation
    • G06F 11/079 Root cause analysis, i.e. error or fault diagnosis
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/24 Classification techniques
    • G06F 18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q 50/00 Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q 50/06 Energy or water supply
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y04 INFORMATION OR COMMUNICATION TECHNOLOGIES HAVING AN IMPACT ON OTHER TECHNOLOGY AREAS
    • Y04S SYSTEMS INTEGRATING TECHNOLOGIES RELATED TO POWER NETWORK OPERATION, COMMUNICATION OR INFORMATION TECHNOLOGIES FOR IMPROVING THE ELECTRICAL POWER GENERATION, TRANSMISSION, DISTRIBUTION, MANAGEMENT OR USAGE, i.e. SMART GRIDS
    • Y04S 10/00 Systems supporting electrical power generation, transmission or distribution
    • Y04S 10/50 Systems or methods supporting the power network operation or management, involving a certain degree of interaction with the load-side end user applications
    • Y04S 10/52 Outage or fault management, e.g. fault detection or location

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Health & Medical Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Business, Economics & Management (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Computational Linguistics (AREA)
  • Economics (AREA)
  • Biophysics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Quality & Reliability (AREA)
  • Public Health (AREA)
  • Water Supply & Treatment (AREA)
  • Human Resources & Organizations (AREA)
  • Marketing (AREA)
  • Primary Health Care (AREA)
  • Strategic Management (AREA)
  • Tourism & Hospitality (AREA)
  • General Business, Economics & Management (AREA)
  • Supply And Distribution Of Alternating Current (AREA)

Abstract

The invention discloses a partial discharge network training method and device for power equipment phase resolution, wherein the method comprises the following steps: acquiring a phase-resolved partial discharge map of a partial discharge measurement signal to form an original detection sample set of power equipment partial discharge, and preprocessing the original detection samples; reprocessing the preprocessed detection samples with a whitening mechanism, and inputting part of the detection samples as training data into the neural network input layer; training the neural network, and optimizing its output according to the loss function of the neural network; and predicting the partial discharge fault classification of the power equipment using the remaining detection samples as test data. According to the method, the whitening mechanism is first applied to the samples, which reduces their dimensionality, removes redundant data and prevents the overfitting problem of the neural network during training; the loss function of the neural network is improved, raising the accuracy of the neural network trained to diagnose partial discharge defects of power equipment.

Description

Partial discharge network training method and device for phase resolution of power equipment
Technical Field
The invention relates to the technical field of power equipment fault diagnosis, in particular to a partial discharge network training method and device for power equipment phase resolution.
Background
In modern power systems, partial discharge detection is an effective method for diagnosing faults of power equipment, but due to the complexity of partial discharge data, traditional data classification methods struggle to achieve high diagnostic precision. At present, neural network algorithms can discriminate partial discharge faults effectively to a certain extent; however, because partial discharge training samples contain excessive redundant data and are three-dimensional, the neural network model tends to overfit, and the discrimination accuracy for partial discharge faults is low.
Disclosure of Invention
The purpose of the invention is as follows: in order to overcome the defects of the prior art, the invention provides a partial discharge network training method for power equipment phase resolution, which can solve the overfitting problem of partial discharge training samples during training and improve the accuracy and efficiency of partial discharge fault type discrimination; it further provides a training device for diagnosing partial discharge defects of power equipment.
The technical scheme is as follows: the invention discloses a partial discharge network training method for power equipment phase resolution, which comprises the following steps:
(1) acquiring a phase-resolved partial discharge map of a partial discharge measurement signal to form an original detection sample set D of partial discharge of the power equipment, and preprocessing the original detection sample;
(2) a whitening mechanism is adopted to reprocess the preprocessed detection samples, and part of the detection samples are used as training data to be input into a neural network input layer;
(3) training the neural network, and optimizing the output of the neural network according to the loss function of the neural network;
(4) and predicting the partial discharge fault classification of the power equipment by using the rest detection samples as test data.
Further, comprising:
in the step (1), the original detection sample set is D = {d1, d2, ..., di, ..., dn}, where n is the total number of collected samples and 1 ≤ i ≤ n; each original detection sample is represented by three-dimensional data di = (θ, t, Q), where θ is the phase angle, t is time and Q is the defect rate, and θ, t and Q lie along the x-axis, y-axis and z-axis directions respectively.
Further, comprising:
in the step (2), the step of reprocessing the preprocessed detection sample by using a whitening mechanism comprises the following steps:
(21) respectively obtaining the average value of the three features θ, t and Q over all training samples, and subtracting from every sample the average value of the corresponding feature to obtain training samples DataAdjust (m × 3) with zero mean and equal variance, where m is the number of training samples;
(22) a feature covariance matrix C (3 × 3) of the training samples DataAdjust is obtained, which is expressed as:

C = [ cov(θ, θ)  cov(θ, t)  cov(θ, Q)
      cov(t, θ)  cov(t, t)  cov(t, Q)
      cov(Q, θ)  cov(Q, t)  cov(Q, Q) ]

wherein cov(·, ·) is the covariance between the corresponding features;
(23) respectively solving the eigenvalues (3 × 1) and eigenvectors (3 × 3) of the covariance matrix, sorting the eigenvalues in descending order, selecting the largest 2 eigenvalues, and taking the 2 corresponding eigenvectors as column vectors to form an eigenvector matrix;
(24) projecting the training sample points onto the selected two-dimensional plane, which is spanned by the principal eigenvector u1 of the covariance matrix and the secondary eigenvector u2 of the covariance matrix orthogonal to the principal eigenvector;
(25) selecting the eigenvector with the largest eigenvalue as the one-dimensional projection basis, and mapping the training samples onto the new basis to obtain the dimension-reduced training samples ODData.
Further, comprising:
in the step (3), the loss function S(φ; x, y) of the neural network is expressed as:

[formula shown only as an image in the original document; not reproduced here]

where φ is the parameter set of the neural network training, x and y are respectively the input and output data of the neural network, y' is the manually labeled value of the sample, η is a penalty coefficient with η ∈ [0, ∞), and P(·) is a penalty term.
Further, comprising:
the penalty term calculation formula is as follows:
Figure BDA0002085344280000023
where Γ (×) is represented as:
Figure BDA0002085344280000024
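Because the formulas for S, P(·) and Γ(·) appear only as images in the source, their exact form is unknown; the sketch below only illustrates the generic structure described in the text — a data-fit term plus an η-weighted penalty term — with an assumed squared-error fit and an assumed L2 penalty standing in for P(φ) (function and parameter names are likewise assumptions):

```python
import numpy as np

def penalized_loss(y, y_label, phi, eta=0.1):
    """Illustrative S(phi; x, y): squared error plus eta * P(phi) (assumed forms)."""
    y = np.asarray(y, dtype=float)
    y_label = np.asarray(y_label, dtype=float)
    data_term = np.mean((y - y_label) ** 2)        # fit between output y and label y'
    penalty = sum(np.sum(w ** 2) for w in phi)     # assumed P(phi): sum of squared weights
    return data_term + eta * penalty               # eta in [0, inf) trades fit vs. penalty
```

With η = 0 the penalty vanishes and the loss reduces to the plain data-fit term, matching the stated range η ∈ [0, ∞).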
an partial discharge network training device for phase resolution of power equipment, comprising:
the acquisition module is used for acquiring a phase-resolved partial discharge map of a partial discharge measurement signal to form an original detection sample set D of partial discharge of the power equipment and preprocessing the original detection sample;
the data whitening module is used for reprocessing the preprocessed detection samples by adopting a whitening mechanism and inputting part of the detection samples into a neural network input layer as training data;
the model training module is used for training the neural network and optimizing the output of the neural network according to the loss function of the neural network;
and the prediction module is used for predicting the partial discharge fault classification of the power equipment by using the detection samples of the rest parts as test data.
Further, comprising:
the acquisition module further comprises a sample representation unit for representing the original detection sample set D = {d1, d2, ..., di, ..., dn}, where n is the total number of collected samples and 1 ≤ i ≤ n; each original detection sample is represented by three-dimensional data di = (θ, t, Q), where θ is the phase angle, t is time and Q is the defect rate, and θ, t and Q lie along the x-axis, y-axis and z-axis directions respectively.
Further, comprising:
the data whitening module further comprises:
the data normalization unit is used for respectively obtaining the average values of the three features θ, t and Q over all training samples, and subtracting from every sample the average value of the corresponding feature to obtain training samples DataAdjust (m × 3) with zero mean and equal variance, where m is the number of training samples;
a covariance matrix calculation unit for calculating the feature covariance matrix C (3 × 3) of the training samples DataAdjust, expressed as:

C = [ cov(θ, θ)  cov(θ, t)  cov(θ, Q)
      cov(t, θ)  cov(t, t)  cov(t, Q)
      cov(Q, θ)  cov(Q, t)  cov(Q, Q) ]

wherein cov(·, ·) is the covariance between the corresponding features;
the two-dimensional plane selection unit is used for respectively solving the eigenvalues (3 × 1) and eigenvectors (3 × 3) of the covariance matrix, sorting the eigenvalues in descending order, selecting the largest 2 eigenvalues, and taking the 2 corresponding eigenvectors as column vectors to form an eigenvector matrix;
a two-dimensional data mapping unit for projecting the training sample points onto the selected two-dimensional plane, which is spanned by the principal eigenvector u1 of the covariance matrix and the secondary eigenvector u2 of the covariance matrix orthogonal to the principal eigenvector;
and the one-dimensional data mapping unit is used for selecting the eigenvector with the largest eigenvalue as the one-dimensional projection basis, and mapping the training samples onto the new basis to obtain the dimension-reduced training samples ODData.
Further, comprising:
the loss function S(φ; x, y) of the neural network in the model training module is expressed as:

[formula shown only as an image in the original document; not reproduced here]

where φ is the parameter set of the neural network training, x and y are respectively the input and output data of the neural network, y' is the manually labeled value of the sample, η is a penalty coefficient with η ∈ [0, ∞), and P(·) is a penalty term.
Further, comprising:
the penalty term calculation formula is as follows:
Figure BDA0002085344280000042
where Γ (×) is represented as:
Figure BDA0002085344280000043
has the advantages that: according to the method, the whitening mechanism is firstly carried out on the sample, so that the dimensionality of the sample is reduced, redundant data of the sample is removed, and the overfitting problem of a neural network in training is prevented; the loss function of the neural network is improved, the training accuracy of the neural network for diagnosing the partial discharge defects of the power equipment is improved, and therefore the accuracy of the partial discharge fault diagnosis of the power equipment is improved.
Drawings
FIG. 1 is a flow chart of a training method according to the present invention;
FIG. 2 is a profile of a test sample according to the present invention;
FIG. 3 is a flow diagram of a method of a whitening mechanism according to the present invention;
FIG. 4 is a schematic diagram of a neural network according to the present invention;
FIG. 5 is a graph of neural network iteration and error relationships according to the present invention;
FIG. 6 is a schematic structural diagram of an exercise device according to the present invention;
fig. 7 is a schematic structural diagram of an electronic device according to the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be obtained by a person skilled in the art without making any creative effort based on the embodiments in the present invention, belong to the protection scope of the present invention.
Referring to fig. 1, there is provided a training method for partial discharge defect diagnosis of an electric power device, including:
s11, collecting the phase-resolved partial discharge map of the partial discharge measurement signal to form a power equipment partial discharge original detection sample set D, and preprocessing the original detection sample.
As shown in fig. 2, the partial discharge maps of the power equipment are collected in time series, and from these discharge maps the original detection sample set D = {d1, d2, ..., di, ..., dn} is formed, where n is the total number of collected samples and 1 ≤ i ≤ n.
Each original detection sample is represented by three-dimensional data di = (θ, t, Q), where θ is the phase angle, t is time and Q is the defect rate; θ, t and Q lie along the x-axis, y-axis and z-axis directions respectively.
The data preprocessing comprises: cleaning the data and deleting incomplete data and erroneous samples.
And S12, reprocessing the preprocessed detection sample by adopting a whitening mechanism, and inputting part of the detection sample as training data to a neural network input layer.
In this embodiment, 80% of the collected samples are used as training samples, and 20% of the collected samples are used as test samples.
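The cleaning step and the 80/20 split above can be sketched as follows (an illustrative Python/NumPy sketch, not the patent's implementation; the function name, the NaN-based notion of "incomplete", and the fixed shuffle seed are assumptions):

```python
import numpy as np

def clean_and_split(samples, train_frac=0.8, seed=0):
    """Delete incomplete samples, then split the remainder into train/test sets."""
    samples = np.asarray(samples, dtype=float)
    # data cleaning: drop samples with missing (NaN) entries
    samples = samples[~np.isnan(samples).any(axis=1)]
    # shuffle, then take 80% as training samples and 20% as test samples
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(samples))
    n_train = int(train_frac * len(samples))
    return samples[idx[:n_train]], samples[idx[n_train:]]
```

For example, with nine complete (θ, t, Q) samples and one incomplete sample, this yields seven training samples and two test samples.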
As shown in fig. 3, the method for reprocessing the pre-processed detection sample by using the whitening mechanism includes:
s121, respectively calculating the average values of three characteristics theta, t and Q corresponding to all training samples, subtracting the average value of the corresponding characteristics from all the samples to obtain training samples DataAdjust (m x 3) with zero average value and equal variance, wherein m is the number of the training samples;
s122 calculates a feature covariance matrix C (3 × 3) of the training sample DataAdjust, which is expressed as:
Figure BDA0002085344280000051
wherein cov () is a covariance matrix between the eigenvalues, the formula of covariance is:
Figure BDA0002085344280000052
wherein, p and q are two characteristic values respectively, and m is the total number of samples.
On the diagonal of C are the variances of the three features, and the off-diagonal entries are the covariances. Covariance measures how much two variables change together.
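As a numerical cross-check of this covariance description (illustrative NumPy code with made-up (θ, t, Q) values, not patent data; the m − 1 denominator is the usual sample-covariance convention and is an assumption here):

```python
import numpy as np

# made-up zero-mean (theta, t, Q) training samples, one row per sample (m x 3)
DataAdjust = np.array([[ 0.5,  1.0,  2.0],
                       [-0.5, -1.0, -2.0],
                       [ 1.5,  0.0,  1.0],
                       [-1.5,  0.0, -1.0]])
m = DataAdjust.shape[0]

# C[p][q] = sum_i (p_i - mean_p) * (q_i - mean_q) / (m - 1)
centered = DataAdjust - DataAdjust.mean(axis=0)
C = centered.T @ centered / (m - 1)
```

The diagonal of C holds the per-feature variances, and C agrees with NumPy's built-in `np.cov(DataAdjust.T)`.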
S123, respectively calculating the eigenvalues (3 × 1) and eigenvectors (3 × 3) of the covariance matrix, sorting the eigenvalues in descending order, selecting the largest 2 eigenvalues, and taking the 2 corresponding eigenvectors as column vectors to form an eigenvector matrix;
s124, projecting the training sample points toOn the selected two-dimensional plane, the two-dimensional plane is composed of principal eigenvectors u of the covariance matrix 1 And a sub-eigenvector u of a covariance matrix orthogonal to the principal eigenvector 2 Forming;
s125, selecting the eigenvectors vector with the largest eigenvalue value as a one-dimensional projection basis, and mapping the training sample to a new basis to obtain the training sample ODData after dimension reduction.
The implementation method can be implemented by Matlab, and the core code is as follows:

base_3to2 = eigVector_sort(:, 1:2)';  % 3-D reduced to 2-D (rows are the two principal eigenvectors)
base_3to1 = eigVector_sort(:, 1)';    % 3-D reduced to 1-D (the principal eigenvector)
% draw the 3-D to 2-D projection plane
A = base_3to2(1, :);  % new basis vector i-hat (relative to the original basis)
B = base_3to2(2, :);  % new basis vector j-hat (relative to the original basis)
C = [0 0 0];
syms x y z;
D = [ones(4, 1), [[x, y, z]; A; B; C]];  % from analytic geometry, det(D) = 0 is the plane equation
detd = det(D);
z = solve(detd, z);
ezmesh(z, [-2, 2, -2, 2]);
data_proj3_2 = (base_3to2 * data')';  % projection onto the new 2-D basis coordinates
data_proj3_1 = (base_3to1 * data')';  % projection onto the new 1-D basis coordinates
data_proj_respect_to_orienbasis3_2 = [];  % new-basis coordinates expressed in the original basis
data_proj_respect_to_orienbasis3_1 = [];  % new-basis coordinates expressed in the original basis
for i = 1:m
    data_proj_respect_to_orienbasis3_2 = [data_proj_respect_to_orienbasis3_2;
        data_proj3_2(i, 1) * base_3to2(1, :) + data_proj3_2(i, 2) * base_3to2(2, :)];
    data_proj_respect_to_orienbasis3_1 = [data_proj_respect_to_orienbasis3_1;
        data_proj3_1(i) * base_3to1(1, :)];
end
% plot the data after the 3-D to 2-D reduction
h_plot3_2 = plot3(gca, data_proj_respect_to_orienbasis3_2(:, 1), ...
    data_proj_respect_to_orienbasis3_2(:, 2), data_proj_respect_to_orienbasis3_2(:, 3), ...
    'o', 'MarkerSize', 5, 'MarkerEdgeColor', 'g', 'MarkerFaceColor', 'g');
% plot the dashed projection lines for the 3-D to 2-D reduction
hl3_2 = [];
hl3_1 = [];
for i = 1:m
    dd = [data_proj_respect_to_orienbasis3_2(i, :); data(i, :)];
    hl3_2(i) = plot3(gca, dd(:, 1), dd(:, 2), dd(:, 3), '-.', 'markersize', 10);
end
% plot the base line for the 3-D to 1-D reduction
h_zhixian = plot3(gca, data_proj_respect_to_orienbasis3_1(:, 1), ...
    data_proj_respect_to_orienbasis3_1(:, 2), data_proj_respect_to_orienbasis3_1(:, 3), ...
    'color', 'r', 'linewidth', 2);
% plot the data after the 3-D to 1-D reduction
h_plot3_1 = plot3(gca, data_proj_respect_to_orienbasis3_1(:, 1), ...
    data_proj_respect_to_orienbasis3_1(:, 2), data_proj_respect_to_orienbasis3_1(:, 3), ...
    'o', 'MarkerSize', 5, 'MarkerEdgeColor', 'r', 'MarkerFaceColor', 'r');
S13, training the neural network, and optimizing the output of the neural network according to the loss function of the neural network;
the input layer neural network of the power device phase resolved partial discharge defect samples can be executed by a single machine and a single GPU, can be executed by multiple machines and GPUs, or can be executed by multiple machines and multiple GPUs in parallel, as shown in FIG. 4. The method comprises the following specific steps:
s131, defining a class of a BP neural network, and setting network related parameters;
s132 instantiates the neural network to construct a BP neural network with 3 input dimensions and 2 output dimensions, 3 hidden layers and 10 nodes in each hidden layer;
s133, when initializing the BP neural network, initializing the weight, the weight momentum and the error initial value of each layer of network nodes;
s134 introduces learning training data: setting sample data, taking a corresponding training data set as input, setting target data, setting the target data as fault classification, setting iteration for 500 times, and checking the sample data according to a training result;
s135, continuously calculating the output node data layer by layer forwards in the iterative process, and simultaneously calculating reverse modification weight values layer by layer until the set lowest error is reached or the target iteration is finished;
s14 predicts the power equipment partial discharge fault classification using the remaining test samples as test data, as shown in fig. 5.
Based on a concept similar to that of the training method above, referring to fig. 6, a partial discharge network training apparatus for phase resolution of power equipment includes:
the acquisition module 21 is configured to acquire a phase-resolved partial discharge map of the partial discharge measurement signal to form an original detection sample set D for partial discharge of the power equipment, and perform preprocessing on the original detection sample;
the data whitening module 22 is configured to reprocess the preprocessed detection samples by using a whitening mechanism, and input a part of the detection samples as training data to the neural network input layer;
the model training module 23 is configured to train the neural network and optimize an output of the neural network according to a loss function of the neural network;
and the prediction module 24 is used for predicting the partial discharge fault classification of the power equipment by using the rest detection samples as test data.
Further, comprising:
the acquisition module 21 further comprises a sample representation unit for representing the original detection sample set D = {d1, d2, ..., di, ..., dn}, where n is the total number of collected samples and 1 ≤ i ≤ n; each original detection sample is represented by three-dimensional data di = (θ, t, Q), where θ is the phase angle, t is time and Q is the defect rate, and θ, t and Q lie along the x-axis, y-axis and z-axis directions respectively.
Further, comprising:
the data whitening module 22 further comprises:
the data normalization unit is used for respectively obtaining the average values of the three features θ, t and Q over all training samples, and subtracting from every sample the average value of the corresponding feature to obtain training samples DataAdjust (m × 3) with zero mean and equal variance, where m is the number of training samples;
a covariance matrix calculation unit for calculating the feature covariance matrix C (3 × 3) of the training samples DataAdjust, expressed as:

C = [ cov(θ, θ)  cov(θ, t)  cov(θ, Q)
      cov(t, θ)  cov(t, t)  cov(t, Q)
      cov(Q, θ)  cov(Q, t)  cov(Q, Q) ]

wherein cov(·, ·) is the covariance between the corresponding features;
the two-dimensional plane selection unit is used for respectively solving the eigenvalues (3 × 1) and eigenvectors (3 × 3) of the covariance matrix, sorting the eigenvalues in descending order, selecting the largest 2 eigenvalues, and taking the 2 corresponding eigenvectors as column vectors to form an eigenvector matrix;
a two-dimensional data mapping unit for projecting the training sample points onto the selected two-dimensional plane, which is spanned by the principal eigenvector u1 of the covariance matrix and the secondary eigenvector u2 of the covariance matrix orthogonal to the principal eigenvector;
and the one-dimensional data mapping unit is used for selecting the eigenvector with the largest eigenvalue as the one-dimensional projection basis, and mapping the training samples onto the new basis to obtain the dimension-reduced training samples ODData.
Further, comprising:
the loss function S(φ; x, y) of the neural network in the model training module 23 is expressed as:

[formula shown only as an image in the original document; not reproduced here]

where φ is the parameter set of the neural network training, x and y are respectively the input and output data of the neural network, y' is the manually labeled value of the sample, η is a penalty coefficient with η ∈ [0, ∞), and P(·) is a penalty term.
Further, comprising:
the penalty term calculation formula is as follows:
Figure BDA0002085344280000092
where Γ (×) is represented as:
Figure BDA0002085344280000093
referring to fig. 7, in an embodiment of the invention, a structural schematic diagram of an electronic device is shown.
An embodiment of the present invention provides an electronic device, which may include a processor 310 (CPU), a memory 320, an input device 330, an output device 340, and the like; the input device 330 may include a keyboard, a mouse, a touch screen, and the like, and the output device 340 may include a display device such as a liquid crystal display (LCD) or a cathode ray tube (CRT).
Memory 320 may include read-only memory (ROM) and random access memory (RAM), and provides the processor 310 with program instructions and data stored in the memory 320. In an embodiment of the present invention, the memory 320 may be used to store a program of the partial discharge network training method for power equipment phase resolution.
The processor 310 is configured to execute the steps of any one of the above-mentioned partial discharge network training methods for phase resolution of the power device according to the obtained program instructions by calling the program instructions stored in the memory 320.
Based on the above embodiments, in the embodiments of the present invention, there is provided a computer-readable storage medium on which a computer program is stored, the computer program, when being executed by a processor, implementing the partial discharge network training method for power device phase resolution in any of the above method embodiments.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present invention has been described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While preferred embodiments of the present invention have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including the preferred embodiment and all changes and modifications that fall within the scope of the invention.
It will be apparent to those skilled in the art that various modifications and variations can be made in the embodiments of the present invention without departing from the spirit or scope of the embodiments of the invention. Thus, if such modifications and variations of the embodiments of the present invention fall within the scope of the claims of the present invention and their equivalents, the present invention is also intended to encompass such modifications and variations.

Claims (4)

1. A partial discharge network training method for power equipment phase resolution is characterized by comprising the following steps:
(1) acquiring a phase-resolved partial discharge map of a partial discharge measurement signal to form a power equipment partial discharge original detection sample set D, and preprocessing an original detection sample in the original detection sample set D;
the original detection sample set D = {d_1, d_2, ..., d_i, ..., d_n}, where n is the total number of collected samples and 1 ≤ i ≤ n; each original detection sample is represented by three-dimensional data d_i = (θ, t, Q), where θ is the phase angle, t is the time, and Q is the defect rate, with θ, t, and Q arranged along the x-axis, y-axis, and z-axis directions, respectively;
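As an illustrative sketch (not from the patent), the sample set D described above can be built as an n × 3 array whose columns are the three features θ, t, and Q; the value ranges and the random data below are assumptions purely for demonstration.

```python
import numpy as np

# Hypothetical construction of the detection sample set D = {d_1, ..., d_n},
# where each d_i = (theta, t, Q) is one point of a phase-resolved PD map.
n = 5
rng = np.random.default_rng(42)
theta = rng.uniform(0.0, 360.0, size=n)      # phase angle (x-axis), degrees
t = np.sort(rng.uniform(0.0, 1.0, size=n))   # time (y-axis)
Q = rng.uniform(0.0, 1.0, size=n)            # defect rate / magnitude (z-axis)
D = np.column_stack([theta, t, Q])           # shape (n, 3): one row per sample
print(D.shape)  # (5, 3)
```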
(2) reprocessing the preprocessed detection samples with a whitening mechanism, and inputting part of the detection samples into the neural network input layer as training data;
the whitening mechanism for reprocessing the preprocessed detection samples comprises the following steps:
(21) obtaining the average value of each of the three features θ, t, and Q over all training samples, and subtracting the corresponding feature average from every sample to obtain the zero-mean, equal-variance training sample matrix DataAdjust (m × 3), where m is the number of training samples;
(22) obtaining the feature covariance matrix C (3 × 3) of the training sample DataAdjust, expressed as:

    C = | cov(θ,θ)  cov(θ,t)  cov(θ,Q) |
        | cov(t,θ)  cov(t,t)  cov(t,Q) |
        | cov(Q,θ)  cov(Q,t)  cov(Q,Q) |

wherein cov(·,·) is the covariance between the corresponding pair of features;
(23) solving for the eigenvalues (3 × 1) and eigenvectors (3 × 3) of the covariance matrix, sorting the eigenvalues in descending order, selecting the 2 largest eigenvalues, and taking their 2 corresponding eigenvectors as column vectors to form an eigenvector matrix;
(24) projecting the training sample points onto a selected two-dimensional plane, the plane being spanned by the principal eigenvector u_1 of the covariance matrix and the secondary eigenvector u_2 of the covariance matrix orthogonal to the principal eigenvector;
(25) selecting the eigenvector with the largest eigenvalue as the one-dimensional projection basis, and mapping the training samples onto this new basis to obtain the dimension-reduced training sample ODData;
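Steps (21) through (25) above can be sketched as follows; this is a minimal NumPy illustration of the claimed whitening/dimension-reduction mechanism, assuming a hypothetical sample matrix of shape (m, 3) with columns (θ, t, Q).

```python
import numpy as np

def whiten_and_reduce(D):
    # (21) subtract the per-feature mean to get zero-mean DataAdjust (m x 3)
    data_adjust = D - D.mean(axis=0)
    # (22) feature covariance matrix C (3 x 3)
    C = np.cov(data_adjust, rowvar=False)
    # (23) eigendecomposition; eigh suits the symmetric C, then sort descending
    eigvals, eigvecs = np.linalg.eigh(C)
    order = np.argsort(eigvals)[::-1]
    eigvecs = eigvecs[:, order]
    # (24) project onto the plane spanned by u_1 and u_2 (two largest eigenvalues)
    plane_2d = data_adjust @ eigvecs[:, :2]   # m x 2
    # (25) project onto the single principal eigenvector to get ODData
    od_data = data_adjust @ eigvecs[:, :1]    # m x 1
    return plane_2d, od_data

rng = np.random.default_rng(0)
D = rng.normal(size=(100, 3))                 # hypothetical sample set
p2, od = whiten_and_reduce(D)
print(p2.shape, od.shape)  # (100, 2) (100, 1)
```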
(3) training the neural network, and optimizing the output of the neural network according to the loss function of the neural network;
the loss function S(φ; x, y) of the neural network is expressed as:

[equation image FDA0003730213320000021]

wherein φ is the parameter set of the neural network training, x and y are respectively the input and output data of the neural network, y' is the manually labeled value of the sample, η is a penalty coefficient with η ∈ [0, ∞), and P(·) is a penalty term;
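The claim only states that S(φ; x, y) combines a data-fitting term over the labels y' with a penalty term P weighted by η (the exact formulas appear only as equation images in the patent). A minimal sketch of a loss of that general shape, with an assumed squared-error data term and an assumed L2 penalty, could look like this:

```python
import numpy as np

# Hedged sketch of a penalized loss S(phi; x, y) = data term + eta * P(phi).
# Both the squared-error data term and the L2 penalty are assumptions; the
# patent's actual formulas are not recoverable from the source images.
def loss(y, y_label, phi, eta=0.1):
    data_term = np.mean((y - y_label) ** 2)   # fit network output y to labels y'
    penalty = np.sum(phi ** 2)                # assumed penalty term P(phi)
    return data_term + eta * penalty

y = np.array([0.2, 0.8])        # hypothetical network output
y_label = np.array([0.0, 1.0])  # hypothetical manual labels y'
phi = np.array([0.5, -0.5])     # hypothetical network parameters
print(loss(y, y_label, phi, eta=0.0))
```

With η = 0 the penalty vanishes and only the data term remains, which is why η ∈ [0, ∞) lets the training trade off fit against regularization.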
(4) predicting the partial discharge fault classification of the power equipment by using the remaining detection samples as test data.
2. The partial discharge network training method for phase resolution of power equipment as claimed in claim 1, wherein the penalty term is calculated by the formula:
[equation image FDA0003730213320000022]

where Γ(·) is expressed as:

[equation image FDA0003730213320000023]
3. A partial discharge network training device for phase resolution of power equipment, comprising:
the acquisition module is used for acquiring a phase-resolved partial discharge map of the partial discharge measurement signal to form an original detection sample set D for partial discharge of the power equipment, and preprocessing an original detection sample in the original detection sample set D;
the acquisition module further comprises a sample representation unit for representing the original detection sample set D = {d_1, d_2, ..., d_i, ..., d_n}, where n is the total number of collected samples and 1 ≤ i ≤ n; each original detection sample is represented by three-dimensional data d_i = (θ, t, Q), where θ is the phase angle, t is the time, and Q is the defect rate, with θ, t, and Q arranged along the x-axis, y-axis, and z-axis directions, respectively;
the data whitening module is used for reprocessing the preprocessed detection samples by adopting a whitening mechanism and inputting part of the detection samples into a neural network input layer as training data;
a data whitening module, further comprising:
the data normalization unit is used for obtaining the average value of each of the three features θ, t, and Q over all training samples, and subtracting the corresponding feature average from every sample to obtain the zero-mean, equal-variance training sample matrix DataAdjust (m × 3), where m is the number of training samples;
a covariance matrix calculation unit for calculating the feature covariance matrix C (3 × 3) of the training sample DataAdjust, expressed as:

    C = | cov(θ,θ)  cov(θ,t)  cov(θ,Q) |
        | cov(t,θ)  cov(t,t)  cov(t,Q) |
        | cov(Q,θ)  cov(Q,t)  cov(Q,Q) |

wherein cov(·,·) is the covariance between the corresponding pair of features;
the two-dimensional plane selection unit is used for solving for the eigenvalues (3 × 1) and eigenvectors (3 × 3) of the covariance matrix, sorting the eigenvalues in descending order, selecting the 2 largest eigenvalues, and taking their 2 corresponding eigenvectors as column vectors to form an eigenvector matrix;
a two-dimensional data mapping unit for projecting the training sample points onto a selected two-dimensional plane, the plane being spanned by the principal eigenvector u_1 of the covariance matrix and the secondary eigenvector u_2 of the covariance matrix orthogonal to the principal eigenvector;
the one-dimensional data mapping unit is used for selecting the eigenvector with the largest eigenvalue as the one-dimensional projection basis, and mapping the training samples onto this new basis to obtain the dimension-reduced training sample ODData;
the model training module is used for training the neural network and optimizing the output of the neural network according to the loss function of the neural network;
the loss function S(φ; x, y) of the neural network is expressed as:

[equation image FDA0003730213320000031]

wherein φ is the parameter set of the neural network training, x and y are respectively the input and output data of the neural network, y' is the manually labeled value of the sample, η is a penalty coefficient with η ∈ [0, ∞), and P(·) is a penalty term;
and the prediction module is used for predicting the partial discharge fault classification of the power equipment by using the remaining detection samples as test data.
4. The partial discharge network training device for phase resolution of power equipment according to claim 3, wherein the penalty term is calculated by the formula:
[equation image FDA0003730213320000032]

where Γ(·) is expressed as:

[equation image FDA0003730213320000033]
CN201910485801.4A 2019-06-05 2019-06-05 Partial discharge network training method and device for phase resolution of power equipment Active CN110309010B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910485801.4A CN110309010B (en) 2019-06-05 2019-06-05 Partial discharge network training method and device for phase resolution of power equipment


Publications (2)

Publication Number Publication Date
CN110309010A CN110309010A (en) 2019-10-08
CN110309010B true CN110309010B (en) 2022-08-19

Family

ID=68075611

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910485801.4A Active CN110309010B (en) 2019-06-05 2019-06-05 Partial discharge network training method and device for phase resolution of power equipment

Country Status (1)

Country Link
CN (1) CN110309010B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111007365A (en) * 2019-11-25 2020-04-14 国网四川省电力公司广安供电公司 Ultrasonic partial discharge identification method and system based on neural network
CN113378960A (en) * 2021-06-25 2021-09-10 海南电网有限责任公司电力科学研究院 Training method of partial discharge detection model, detection information determining method and device
CN113780308A (en) * 2021-08-27 2021-12-10 吉林省电力科学研究院有限公司 GIS partial discharge mode identification method and system based on kernel principal component analysis and neural network
CN115267462B (en) * 2022-09-30 2022-12-23 丝路梵天(甘肃)通信技术有限公司 Partial discharge type identification method based on self-adaptive label generation

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105589037A (en) * 2016-03-16 2016-05-18 合肥工业大学 Ensemble learning-based electric power electronic switch device network fault diagnosis method
CN109272115A (en) * 2018-09-05 2019-01-25 宽凳(北京)科技有限公司 A kind of neural network training method and device, equipment, medium


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Transformer fault diagnosis based on PCA and GA-LM-BP neural network; Yu Jianli et al.; Journal of Hebei University of Technology; 2017-10-15 (No. 05); full text *

Also Published As

Publication number Publication date
CN110309010A (en) 2019-10-08

Similar Documents

Publication Publication Date Title
CN110309010B (en) Partial discharge network training method and device for phase resolution of power equipment
CN109685152B (en) Image target detection method based on DC-SPP-YOLO
CN109559338B (en) Three-dimensional point cloud registration method based on weighted principal component analysis method and M estimation
CN113096234B (en) Method and device for generating three-dimensional grid model by using multiple color pictures
CN109886307A (en) A kind of image detecting method and system based on convolutional neural networks
CN109934810B (en) Defect classification method based on improved particle swarm wavelet neural network
CN108491817A (en) A kind of event detection model training method, device and event detecting method
CN110208660B (en) Training method and device for diagnosing partial discharge defects of power equipment
CN110879982B (en) Crowd counting system and method
CN109726746B (en) Template matching method and device
CN110288017B (en) High-precision cascade target detection method and device based on dynamic structure optimization
CN110287873A (en) Noncooperative target pose measuring method, system and terminal device based on deep neural network
CN106780546B (en) The personal identification method of motion blur encoded point based on convolutional neural networks
CN108389180A (en) A kind of fabric defect detection method based on deep learning
CN112560967B (en) Multi-source remote sensing image classification method, storage medium and computing device
CN110889399A (en) High-resolution remote sensing image weak and small target detection method based on deep learning
CN109376787A (en) Manifold learning network and computer visual image collection classification method based on it
CN107292337A (en) Ultralow order tensor data filling method
CN110706208A (en) Infrared dim target detection method based on tensor mean square minimum error
CN112233200A (en) Dose determination method and device
CN111611925A (en) Building detection and identification method and device
CN111428555B (en) Joint-divided hand posture estimation method
CN114565842A (en) Unmanned aerial vehicle real-time target detection method and system based on Nvidia Jetson embedded hardware
CN104680190B (en) Object detection method and device
CN110176021A (en) In conjunction with the level set image segmentation method and system of the conspicuousness information of gamma correction

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant