CN111382862B - Method for identifying abnormal data of power system - Google Patents

Method for identifying abnormal data of power system

Info

Publication number
CN111382862B
Authority
CN
China
Prior art keywords
neural network
data
power system
training
abnormal data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201811609951.3A
Other languages
Chinese (zh)
Other versions
CN111382862A (en)
Inventor
沈力
陈硕
乔林
宋纯贺
刘树吉
王忠锋
李钊
李力刚
吕旭明
崔世界
卢彬
徐志远
周巧妮
付亚同
吴赫
冉冉
刘碧琦
胡楠
曲睿婷
徐立波
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenyang Institute of Automation of CAS
Information and Telecommunication Branch of State Grid Liaoning Electric Power Co Ltd
Original Assignee
Shenyang Institute of Automation of CAS
Information and Telecommunication Branch of State Grid Liaoning Electric Power Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenyang Institute of Automation of CAS, Information and Telecommunication Branch of State Grid Liaoning Electric Power Co Ltd filed Critical Shenyang Institute of Automation of CAS
Priority to CN201811609951.3A priority Critical patent/CN111382862B/en
Publication of CN111382862A publication Critical patent/CN111382862A/en
Application granted granted Critical
Publication of CN111382862B publication Critical patent/CN111382862B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/23 Clustering techniques
    • G06F18/232 Non-hierarchical techniques
    • G06F18/2321 Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00 Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q50/06 Energy or water supply

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Business, Economics & Management (AREA)
  • Health & Medical Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Economics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Tourism & Hospitality (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Public Health (AREA)
  • Water Supply & Treatment (AREA)
  • Probability & Statistics with Applications (AREA)
  • Human Resources & Organizations (AREA)
  • Marketing (AREA)
  • Primary Health Care (AREA)
  • Strategic Management (AREA)
  • Evolutionary Biology (AREA)
  • General Business, Economics & Management (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Supply And Distribution Of Alternating Current (AREA)

Abstract

The invention relates to a method for identifying abnormal data of a power system, which comprises: training a neural network with normal data of the power system as training samples; inputting the data to be detected into the trained neural network to obtain a residual sequence; clustering the residual sequence with an affinity propagation clustering algorithm; and judging abnormal data according to the characteristics of each category and the intra-category distance. The invention trains the neural network with a chaotic particle swarm algorithm and clusters the data with affinity propagation, which markedly reduces the amount of computation, does not depend on the sampling distribution, and effectively improves the accuracy of identifying abnormal data of the power system.

Description

Method for identifying abnormal data of power system
Technical Field
The invention relates to the field of abnormal data identification, in particular to an abnormal data identification method for an electric power system.
Background
The detection and identification of abnormal data in the power system is one of the important functions of power system state estimation; its purpose is to eliminate the small amount of occasional bad data in the measurement samples and improve the reliability of state estimation. A complex power network contains a large amount of real-time data, and the accuracy of these data determines the safety and reliability of power system operation. Bad data in the power system may lead dispatchers to make wrong decisions, thereby affecting the normal operation of the power system and even threatening the safety of the whole system.
Therefore, to ensure stable and safe operation of the power system, it is important to detect bad data and extract it from the original data for correction. The quality of the measured data is an important factor influencing the efficiency and results of power system state estimation; a small amount of bad data objectively exists in the measurements, and detecting and identifying it is an important component of state estimation. The presence of bad data may degrade the convergence of the state estimation and may even cause the state estimation to fail. Reliably detecting and correcting bad data has therefore become a challenge for state estimation applications.
A method for identifying abnormal data of the power system generally comprises data preprocessing, clustering and abnormal-data judgment. The BP neural network commonly adopted in current data preprocessing is prone to falling into local extrema, which reduces the preprocessing accuracy. For clustering, the K-means method and the gap statistic method are often employed. However, the K-means method requires the number of clusters to be preset, and if this number is set improperly the accuracy of the algorithm is seriously affected. Although the gap statistic method can determine the number of clusters automatically, it depends on the sampling distribution of the data and needs a large number of samples to obtain a stable result; it therefore suffers from complex computation and a large computational burden and is not suitable for processing massive data.
Disclosure of Invention
Aiming at the defects of the prior art, the invention provides an abnormal data identification method for an electric power system, which solves the problems of complex calculation and large calculation amount of the existing abnormal data identification algorithm.
The technical solution adopted by the invention to achieve this purpose is as follows:
A method for identifying abnormal data of an electric power system comprises the following steps:
Step 1: training a neural network on normal data of the power system.
Step 1-1: acquiring a data set X of state measurements recorded during normal operation of the electrical equipment:

X = [ x_11   x_12   ...   x_1N
      x_21   x_22   ...   x_2N
      ...
      x_M1   x_M2   ...   x_MN ]

Each row of X is one type of state measurement of the operating equipment, such as a voltage value, a current value, active power or reactive power. The matrix contains M measurements in total, each with N samples.
Step 1-2: normalizing X:
x̄_ij = (x_ij - x_jmin) / (x_jmax - x_jmin)

where x_ij is the element in row i and column j of X, and x_jmin and x_jmax are the minimum and maximum values of the j-th measurement.
Step 1-3: randomly splitting the data set X along the row direction into X_M×L and X_M×(N-L), where L : (N-L) ≈ 0.8 : 0.2; the two parts serve as the training set and the test set of the neural network respectively.
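To make the preprocessing concrete, here is a minimal NumPy sketch of the per-measurement min-max normalization of step 1-2 and the random 0.8 : 0.2 split of step 1-3; the function name and the returned extrema are assumptions introduced for the later sketches, not part of the patent.

```python
import numpy as np

def normalize_and_split(X, train_ratio=0.8, seed=0):
    """Min-max normalize each measurement (row) of X and split the N sample
    columns into a training set and a test set (roughly 0.8 : 0.2)."""
    X = np.asarray(X, dtype=float)              # X has shape (M, N)
    x_min = X.min(axis=1, keepdims=True)        # per-measurement minimum
    x_max = X.max(axis=1, keepdims=True)        # per-measurement maximum
    X_norm = (X - x_min) / (x_max - x_min)      # scale every row to [0, 1]

    rng = np.random.default_rng(seed)
    cols = rng.permutation(X.shape[1])          # shuffle sample indices
    L = int(round(train_ratio * X.shape[1]))    # training-set size
    return X_norm[:, cols[:L]], X_norm[:, cols[L:]], x_min, x_max

# The first return value has shape (M, L), the second (M, N - L); the extrema
# are kept so that the test data can later be normalized with the same values.
```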
Step 1-4: taking the i-th measurement as the target value and the other measurements as input values, and training the neural network. The neural network is a 3-layer network comprising an input layer, a hidden layer and an output layer; the input layer has M-1 neurons, the hidden layer is set to M-1 neurons, and the output layer has 1 neuron. The relationship between the neural network input and output is:
Y = g( Σ_{j=1..M-1} β_j · f( Σ_{i=1..M-1} α_ij · x_i + A_j ) + B )

where f and g are activation functions, x_i is the input of the i-th input-layer neuron, α_ij is the weight between the i-th input-layer neuron and the j-th hidden-layer neuron, β_j is the weight between the j-th hidden-layer neuron and the output node, and A_j and B are bias coefficients, initialized with the Nguyen-Widrow algorithm. The error between the neural network output and the target value is the mean square error.
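A minimal sketch of this forward pass and of the mean-square-error fitness used below follows; taking f = g = tanh follows claim 3, while the function names and parameter layout are assumptions.

```python
import numpy as np

def forward(x, alpha, A, beta, B):
    """Forward pass of the 3-layer network: x is the (M-1)-dimensional input,
    alpha the (M-1, M-1) input-to-hidden weights (alpha[i, j] links input i to
    hidden neuron j), A the hidden biases, beta the hidden-to-output weights
    and B the scalar output bias."""
    h = np.tanh(alpha.T @ x + A)     # hidden-layer activations, f = tanh
    return np.tanh(beta @ h + B)     # scalar network output, g = tanh

def mse(params, X_in, target):
    """Mean square error between the network outputs and the target measurement;
    this is the particle fitness used by the chaotic PSO training below."""
    alpha, A, beta, B = params
    preds = np.array([forward(x, alpha, A, beta, B) for x in X_in])
    return float(np.mean((preds - target) ** 2))
```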
Step 1-5: training the neural network with a particle swarm algorithm based on chaotic search.
Step 1-5-1: randomly initializing the velocity v_i and position l_i of each particle.
Step 1-5-2: calculating the fitness value f_i of each particle, i.e. the mean square error between the neural network output and the target value.
Step 1-5-3: obtaining the historical best position of each particle and the best position among all particles.
Step 1-5-4: updating the velocity and position according to the formulas:
v_id ← w·v_id + c1·r1·(L_id - l_id) + c2·r2·(L_gd - l_id)
l_id ← l_id + v_id

where i is the particle index, d is the data dimension index, c1 and c2 are constants, r1 and r2 are random numbers in the closed interval [0,1], L_id and L_gd are the historical best position of the i-th particle and the best position of all particles in dimension d, and w is the inertia weight.
Step 1-5-5: if the maximum number of iterations has not been reached, returning to step 1-5-2; if it has been reached, continuing with the following step.
Step 1-5-6: performing a chaotic local search according to the formulas below; if the solution found is better, updating the historical best position:

ω_i = (l_i - a_i) / d_i
ω_i ← μ·ω_i·(1 - ω_i)
l_i ← a_i + d_i·ω_i

where a_i and d_i are constants and μ is the attractor.
The weights and bias coefficients of the neural network are trained on the training set with the chaotic particle swarm algorithm and evaluated on the test set, until the mean square error between the neural network output on the test set and the target value no longer decreases.
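The training loop of steps 1-5-1 to 1-5-6 could look roughly as follows; in this setting the particle position is the flattened vector of network weights and biases, reshaped before evaluating the `mse` fitness from the previous sketch, and the swarm size, iteration counts, w, c1, c2, a_i, d_i and μ are illustrative values, not taken from the patent.

```python
import numpy as np

def chaotic_pso(fitness, dim, n_particles=30, iters=200, chaos_steps=50,
                w=0.7, c1=1.5, c2=1.5, a=-1.0, d=2.0, mu=4.0, seed=0):
    """Particle swarm optimisation followed by a chaotic (logistic-map) local
    search around the best particle, mirroring steps 1-5-1 to 1-5-6."""
    rng = np.random.default_rng(seed)
    pos = rng.uniform(a, a + d, (n_particles, dim))   # particle positions l_i in [a, a+d]
    vel = np.zeros((n_particles, dim))                # particle velocities v_i
    pbest = pos.copy()                                # historical best position of each particle
    pbest_f = np.array([fitness(p) for p in pos])     # fitness at those positions
    g = int(np.argmin(pbest_f))                       # index of the global best particle

    for _ in range(iters):                            # steps 1-5-2 .. 1-5-5
        r1, r2 = rng.random((2, n_particles, dim))
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (pbest[g] - pos)
        pos = pos + vel
        f = np.array([fitness(p) for p in pos])
        better = f < pbest_f                          # particles that improved
        pbest[better], pbest_f[better] = pos[better], f[better]
        g = int(np.argmin(pbest_f))

    omega = (pbest[g] - a) / d                        # map the best position into [0, 1]
    for _ in range(chaos_steps):                      # step 1-5-6: logistic-map search
        omega = mu * omega * (1.0 - omega)
        cand = a + d * omega
        f_cand = fitness(cand)
        if f_cand < pbest_f[g]:
            pbest[g], pbest_f[g] = cand, f_cand
    return pbest[g], pbest_f[g]
```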
Step 2: inputting data to be detected into the trained neural network to obtain a residual sequence;
step 2-1: acquiring a data set Y to be tested:
Y = [ y_11   y_12   ...   y_1Q
      y_21   y_22   ...   y_2Q
      ...
      y_M1   y_M2   ...   y_MQ ]

The rows of Y contain the same types of measurements as the rows of the normal data set X, but each measurement has Q samples.
Step 2-2: normalizing Y:

ȳ_ij = (y_ij - x_jmin) / (x_jmax - x_jmin)

where x_jmin and x_jmax are the minimum and maximum values of the j-th measurement in the training set.
Step 2-3: obtaining the mean square error sequence E = [e_1, e_2, ..., e_Q] with the trained neural network.
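Continuing the sketch, the residual sequence of step 2 could be computed as below, reusing the `forward` helper and the training-set extrema from the earlier sketches; `target_row` marks the measurement the network was trained to predict, and all names are illustrative.

```python
import numpy as np

def residual_sequence(Y, x_min, x_max, target_row, params):
    """Normalize the test data with the training-set extrema, feed the non-target
    measurements of each sample through the trained network (the `forward`
    helper above) and return the squared-error sequence E = [e_1, ..., e_Q]."""
    Y = np.asarray(Y, dtype=float)                    # shape (M, Q): M measurements, Q samples
    Y_norm = (Y - x_min) / (x_max - x_min)            # x_min, x_max have shape (M, 1)
    rows = [i for i in range(Y.shape[0]) if i != target_row]
    alpha, A, beta, B = params                        # trained network parameters
    E = []
    for q in range(Y.shape[1]):                       # one residual per sample
        pred = forward(Y_norm[rows, q], alpha, A, beta, B)
        E.append(float((pred - Y_norm[target_row, q]) ** 2))
    return np.array(E)
```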
Step 3: clustering the residual sequence based on an affinity propagation clustering algorithm.
step 3-1: and randomly initializing an attraction degree matrix R, an attribution degree matrix A and a damping factor lambda between 0 and 1.
Updating the attraction matrix R and the attribution matrix A
R(i,k) ← (1 - λ)·[ s(i,k) - max_{k'≠k} { A(i,k') + s(i,k') } ] + λ·R(i,k)
A(i,k) ← (1 - λ)·min{ 0, R(k,k) + Σ_{i'∉{i,k}} max{0, R(i',k)} } + λ·A(i,k),  i ≠ k
A(k,k) ← (1 - λ)·Σ_{i'≠k} max{0, R(i',k)} + λ·A(k,k)
D = R + A

where s(i,k) = ‖x_i - x_k‖² is the similarity, λ is the damping factor and D is the decision matrix.
Step 3-2: finding out the points whose diagonal elements of the decision matrix D are greater than 0 and taking them as cluster centers; the sample points are assigned to clusters according to their distance to the cluster representatives.
Step 3-3: if the number of iterations exceeds the initialized maximum number of iterations, stopping clustering; otherwise, returning to the update of the matrices R and A.
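For illustration, the clustering of step 3 can be sketched with scikit-learn's AffinityPropagation, used here as a stand-in for the update equations above (it takes the negative squared Euclidean distance as the similarity and applies the same damping idea); the parameter values are assumptions.

```python
import numpy as np
from sklearn.cluster import AffinityPropagation

def cluster_residuals(E, damping=0.9, max_iter=200):
    """Cluster the mean-square-error sequence E = [e_1, ..., e_Q] with affinity
    propagation and return a cluster label and an exemplar (cluster-centre)
    residual value for every cluster."""
    X = np.asarray(E, dtype=float).reshape(-1, 1)     # one residual per sample
    ap = AffinityPropagation(damping=damping, max_iter=max_iter, random_state=0)
    labels = ap.fit_predict(X)                        # cluster index for each residual
    centers = ap.cluster_centers_.ravel()             # exemplar residual of each cluster
    return labels, centers
```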
Step 4: judging abnormal data according to the characteristics of each category and the intra-category distance.
Step 4-1: for a category with the number of elements less than or equal to 3, all elements in the category are considered to be abnormal data.
Step 4-2: for a category with more than 3 elements, let the cluster center corresponding to the mean square error e_i of an input be c. If the distance between e_i and c exceeds the threshold determined by the intra-category distance, then abnormal values exist in some dimensions of the measurement data vector corresponding to e_i.
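A sketch of the judgment of step 4 follows: every member of a category with at most three elements is flagged, and in larger categories residuals far from their cluster center are flagged; the concrete threshold used here (k times the mean intra-category distance) is an assumed stand-in for the patent's intra-category distance criterion.

```python
import numpy as np

def flag_abnormal(E, labels, centers, k=3.0):
    """Flag residuals as abnormal: all members of a cluster with at most three
    elements, plus members of larger clusters whose distance to the cluster
    centre exceeds k times the mean intra-cluster distance (threshold assumed)."""
    E = np.asarray(E, dtype=float)
    abnormal = np.zeros(E.shape[0], dtype=bool)
    for c in np.unique(labels):
        idx = np.where(labels == c)[0]
        dist = np.abs(E[idx] - centers[c])            # distance to the cluster centre
        if idx.size <= 3:
            abnormal[idx] = True                      # small clusters are abnormal
        else:
            abnormal[idx] = dist > k * dist.mean()    # far-from-centre residuals
    return abnormal
```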
The invention has the following beneficial effects and advantages:
the chaotic particle swarm algorithm is used for neural network training, and the affine propagation clustering algorithm is used for realizing data clustering, so that the calculated amount can be obviously reduced, the sampling distribution is not depended on, and the accuracy of abnormal data identification of the power system is effectively improved.
Drawings
FIG. 1 is a flow chart of a method of the present invention;
fig. 2 is a schematic diagram of the neural network structure of the present invention.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings and examples.
In order to make the aforementioned objects, features and advantages of the present invention comprehensible, embodiments accompanied with figures are described in detail below. In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present invention. This invention may, however, be embodied in many different forms and should not be construed as limited to the embodiments set forth herein, but rather should be construed as modified in the spirit and scope of the present invention as set forth in the appended claims.
It will be understood that when an element is referred to as being "disposed on" another element, it can be directly on the other element or intervening elements may also be present. When an element is referred to as being "connected" to another element, it can be directly connected to the other element or intervening elements may also be present. The terms "front," "back," "left," "right," and the like as used herein are for illustrative purposes only and do not represent a unique embodiment.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. The terminology used in the description of the invention herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention.
Fig. 1 shows a flow chart of the method of the present invention.
The method comprises the following steps:
acquiring a normal electrical appliance operation process state measurement value data set X:
X = [ x_11   x_12   ...   x_1N
      x_21   x_22   ...   x_2N
      ...
      x_M1   x_M2   ...   x_MN ]

Each row of X is one type of state measurement of the operating equipment, such as a voltage value, a current value, active power or reactive power. The matrix contains M measurements in total, each with N samples.
Normalizing X:
x̄_ij = (x_ij - x_jmin) / (x_jmax - x_jmin)

where x_ij is the element in row i and column j of X, and x_jmin and x_jmax are the minimum and maximum values of the j-th measurement.
The data set X is randomly split along the row direction into X_M×L and X_M×(N-L), where L : (N-L) ≈ 0.8 : 0.2; the two parts serve as the training set and the test set of the neural network respectively.
The i-th measurement is taken as the target value and the other measurements as input values, and the neural network is trained.
The neural network is a 3-layer network comprising an input layer, a hidden layer and an output layer; the input layer has M-1 neurons, the hidden layer is set to M-1 neurons, and the output layer has 1 neuron.
The relationship between the neural network input and output is:

Y = g( Σ_{j=1..M-1} β_j · f( Σ_{i=1..M-1} α_ij · x_i + A_j ) + B )

where f and g are activation functions, x_i is the input of the i-th input-layer neuron, α_ij is the weight between the i-th input-layer neuron and the j-th hidden-layer neuron, β_j is the weight between the j-th hidden-layer neuron and the output node, and A_j and B are bias coefficients, initialized with the Nguyen-Widrow algorithm. The error between the neural network output and the target value is the mean square error.
The network is usually trained with a gradient back-propagation algorithm or an intelligent algorithm such as particle swarm optimization, but such training can converge to a local minimum. To address this problem, a particle swarm algorithm based on chaotic search is used to train the neural network.
The core idea of particle swarm optimization is to regard the parameter vector to be optimized as the position of a particle and the objective function to be optimized as the particle's fitness f; the fitness is improved by dynamically adjusting each particle's position between the swarm's historical best position and the particle's own historical best position. Because the basic particle swarm algorithm tends to converge to a local optimum, chaotic search is used to strengthen its search capability.
The complete chaotic particle swarm algorithm comprises the following steps:
Step 1: randomly initialize the velocity v_i and position l_i of each particle.
Step 2: calculate the fitness value f_i of each particle, i.e. the mean square error between the neural network output and the target value.
Step 3: obtain the historical best position of each particle and the best position among all particles.
Step 4: update the velocity and position according to the formulas:
v_id ← w·v_id + c1·r1·(L_id - l_id) + c2·r2·(L_gd - l_id)
l_id ← l_id + v_id

where i is the particle index, d is the data dimension index, c1 and c2 are constants, r1 and r2 are random numbers in the closed interval [0,1], L_id and L_gd are the historical best position of the i-th particle and the best position of all particles in dimension d, and w is the inertia weight.
Step 5: if the maximum number of iterations has not been reached, return to step 2; if it has been reached, continue with the following step.
Step 6: perform a chaotic local search according to the formulas below; if the solution found is better, update the historical best position:

ω_i = (l_i - a_i) / d_i
ω_i ← μ·ω_i·(1 - ω_i)
l_i ← a_i + d_i·ω_i

where a_i and d_i are constants and μ is the attractor.
The weights and bias coefficients of the neural network are trained on the training set with the chaotic particle swarm algorithm and evaluated on the test set, until the mean square error between the neural network output on the test set and the target value no longer decreases.
Acquiring a data set Y to be tested:
Y = [ y_11   y_12   ...   y_1Q
      y_21   y_22   ...   y_2Q
      ...
      y_M1   y_M2   ...   y_MQ ]

The rows of Y contain the same types of measurements as the rows of the normal data set X, but each measurement has Q samples.
Normalizing Y:

ȳ_ij = (y_ij - x_jmin) / (x_jmax - x_jmin)

where x_jmin and x_jmax are the minimum and maximum values of the j-th measurement in the training set.
The trained neural network is then used to obtain the mean square error sequence E = [e_1, e_2, ..., e_Q].
The elements of E are clustered with an affinity propagation clustering algorithm:
and randomly initializing an attraction degree matrix R, an attribution degree matrix A and a damping factor lambda between 0 and 1.
Updating the attraction matrix R and the attribution matrix A
Figure BDA0001924468460000084
Figure BDA0001924468460000085
Figure BDA0001924468460000091
D=R+A
Wherein s (i, k) ═ x | |i-xk||2D is the decision matrix for similarity.
And step 3: and finding out the point of the decision matrix D with the diagonal larger than 0 as a clustering center. And classifying the sample points according to the distance between the sample points and the clustering representative points.
And 4, step 4: and if the iteration times are larger than the initialized maximum iteration times, stopping clustering. Otherwise, returning to the step 2.
For a category with the number of elements less than or equal to 3, all elements in the category are considered to be abnormal data.
For a category with more than 3 elements, let the cluster center corresponding to the mean square error e_i of an input be c. If the distance between e_i and c exceeds the threshold determined by the intra-category distance, abnormal values exist in some dimensions of the measurement data vector corresponding to e_i.
Table 1 shows the accuracy of abnormal-data determination for different data dimensions; each test uses 1000 data samples.
TABLE 1 Abnormal-data determination accuracy for different dimensions

Data dimension                               3       5       7       9       10
BP neural network + gap statistic method     90.2%   91.4%   93.1%   93.6%   94.2%
Proposed algorithm                           94.8%   95.2%   96.6%   97.2%   98.5%
As can be seen from table 1, the proposed algorithm has significant advantages in terms of accuracy over other algorithms.

Claims (7)

1. A method for identifying abnormal data of an electric power system is characterized by comprising the following steps:
step 1: taking normal data of the power system as a training sample to train a neural network;
step 2: inputting data to be detected into the trained neural network to obtain a residual sequence;
the step of inputting the data to be detected into the trained neural network to obtain the residual sequence comprises the following steps:
step 2.1: acquiring a data set Y to be tested:
Y = [ y_11   y_12   ...   y_1Q
      y_21   y_22   ...   y_2Q
      ...
      y_M1   y_M2   ...   y_MQ ]

the data type of each row in the data set Y to be tested is the same as that of each row in the normal data set X, but each measurement has Q sampled data;
step 2.2: normalizing Y:

ȳ_hk = (y_hk - y_kmin) / (y_kmax - y_kmin)

wherein y_hk is the element in the h-th row and the k-th column of Y, y_kmin and y_kmax are the minimum and maximum values in the k-th column of Y, and ȳ_hk is the normalized value of y_hk;
step 2.3: obtaining the mean square error sequence E = [e_1, e_2, ..., e_Q] by using the trained neural network;
and step 3: clustering the residual sequence based on an affinity propagation clustering algorithm;
the clustering of the residual sequence based on the affinity propagation clustering algorithm comprises the following processes:
step 3.1: randomly initializing an attraction matrix R, an attribution matrix A and a damping factor lambda between 0 and 1; updating an attraction degree matrix R and an attribution degree matrix A:
R(i,k) ← (1 - λ)·[ s(i,k) - max_{k'≠k} { A(i,k') + s(i,k') } ] + λ·R(i,k)
A(i,k) ← (1 - λ)·min{ 0, R(k,k) + Σ_{i'∉{i,k}} max{0, R(i',k)} } + λ·A(i,k),  i ≠ k
A(k,k) ← (1 - λ)·Σ_{i'≠k} max{0, R(i',k)} + λ·A(k,k)
D = R + A

wherein i, k, i' and k' are indexes, s(i,k) = ‖x_i - x_k‖² is the similarity, D is the decision matrix, R is the attraction matrix, A is the attribution matrix and λ is the damping factor; max_{k'≠k}{ A(i,k') + s(i,k') } denotes the maximum value of A(i,k') + s(i,k') in the i-th row over all columns except the k-th column; max{0, R(i',k)} denotes the larger of 0 and R(i',k); the sum in the update of A(i,k) runs over all i' from 1 to Q except i and k, and the sum in the update of A(k,k) runs over all i' except k;
step 3.2: finding out the points whose diagonal elements in the decision matrix D are greater than 0 as clustering centers, and classifying the sample points according to the distance between the sample points and the clustering representative points;
step 3.3: if the number of iterations is larger than the initialized maximum number of iterations, stopping clustering; otherwise, returning to step 3.2;
step 4: judging abnormal data according to the characteristics of each category and the intra-category distance.
2. The method for identifying abnormal data of an electric power system according to claim 1, wherein: the method for training the neural network by taking normal data of the power system as a training sample comprises the following steps:
step 1.1: acquiring a state measurement value data set X in the normal electric appliance operation process:
X = [ x_11   x_12   ...   x_1N
      x_21   x_22   ...   x_2N
      ...
      x_M1   x_M2   ...   x_MN ]

wherein each row in X is a state measurement value of the running process of the electric appliance; the matrix contains M measurement values in total, and each measurement value has N sampled data;
step 1.2: normalizing X:

x̄_ij = (x_ij - x_jmin) / (x_jmax - x_jmin)

wherein x_ij is the element in the i-th row and the j-th column of X, x_jmin and x_jmax are the minimum and maximum values of the j-th column of X, and x̄_ij is the normalized value of x_ij;
step 1.3: randomly splitting the state measurement value data set X along the row direction into X_M×L and X_M×(N-L), wherein L : (N-L) ≈ 0.8 : 0.2, used respectively as the training set and the test set of the neural network;
step 1.4: setting the ith measurement value as a target value of neural network training, and setting the other measurement values as input values of the neural network training;
step 1.5: and training the neural network based on the particle swarm algorithm of the chaotic search.
3. The method for identifying abnormal data of an electric power system according to claim 2, wherein: the neural network is a three-layer neural network, the three layers comprise an input layer, a hidden layer and an output layer, the number of neurons of the input layer is M-1, the number of neurons of the hidden layer is set to be M-1, and the number of neurons of the output layer is 1;
the relationship between neural network inputs and outputs is:
Y = g( Σ_{j=1..M-1} β_j · f( Σ_{i=1..M-1} α_ij · w_i + A_j ) + B )

wherein Y is the output value, f and g are both hyperbolic tangent activation functions, w_i is the input of the i-th input-layer neuron, α_ij is the weight between the i-th input-layer neuron and the j-th hidden-layer neuron, β_j is the weight between the j-th hidden-layer neuron and the output node, A_j is the bias coefficient of the j-th hidden-layer neuron and B is the bias coefficient of the output-layer neuron; all weights and bias coefficients of the neural network are initialized randomly between -1 and 1; the error between the neural network output and the target value is the mean square error.
4. The method for identifying abnormal data of an electric power system according to claim 2, wherein: the chaotic search-based particle swarm algorithm for training the neural network comprises the following processes:
step 1.5.1: randomly initializing the velocity v_i and position l_i of each particle;
step 1.5.2: calculating the fitness value f_i of each particle, i.e. the mean square error between the neural network output and the target value;
step 1.5.3: acquiring the historical optimal position of each particle and the optimal positions of all the particles;
step 1.5.4: updating speed and position;
step 1.5.5: judging whether the maximum iteration number is reached currently, if so, executing a step 1.5.6, otherwise, returning to the step 1.5.2;
step 1.5.6: and performing chaotic local search, and if the searched solution is more optimal, updating the historical optimal position until the mean square error between the neural network output on the test set and the target value is not reduced any more, wherein the obtained particle position is the optimization result.
5. The method for identifying abnormal data of an electric power system according to claim 4, wherein: the update speed and position are:
v_id ← w·v_id + c1·r1·(L_id - l_id) + c2·r2·(L_gd - l_id)
l_id ← l_id + v_id

wherein: i is the particle index; d is the data dimension index; c1 and c2 are constants; r1 and r2 are random numbers in the closed interval [0,1]; L_id is the historical best position of the i-th particle in dimension d; L_gd is the best position of all particles in dimension d; w is the inertia weight; l_id and v_id are the position and velocity of the i-th particle in dimension d.
6. The method for identifying abnormal data of an electric power system according to claim 4, wherein: the chaotic local search comprises the following steps:
ω_i = (l_i - a_i) / d_i
ω_i ← μ·ω_i·(1 - ω_i)
l_i ← a_i + d_i·ω_i

wherein i is the index of the particle, a_i and d_i are constants, μ is the attractor and ω_i is the chaotic coefficient.
7. The method for identifying abnormal data of an electric power system according to claim 1, wherein: the judging abnormal data according to the characteristics of each category and the intra-category distance comprises the following steps:
if the number of the elements in a certain category is not more than 3, all the elements in the category are considered to be abnormal data;
if the number of elements in a certain category is greater than 3, let the cluster center corresponding to the mean square error e_i of an input be c; if the distance between e_i and c exceeds the threshold determined by the intra-category distance, an abnormal value exists in the measurement data vector corresponding to e_i.
CN201811609951.3A 2018-12-27 2018-12-27 Method for identifying abnormal data of power system Active CN111382862B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811609951.3A CN111382862B (en) 2018-12-27 2018-12-27 Method for identifying abnormal data of power system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811609951.3A CN111382862B (en) 2018-12-27 2018-12-27 Method for identifying abnormal data of power system

Publications (2)

Publication Number Publication Date
CN111382862A CN111382862A (en) 2020-07-07
CN111382862B true CN111382862B (en) 2021-09-14

Family

ID=71214490

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811609951.3A Active CN111382862B (en) 2018-12-27 2018-12-27 Method for identifying abnormal data of power system

Country Status (1)

Country Link
CN (1) CN111382862B (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112016248A (en) * 2020-08-31 2020-12-01 华北电力大学 Elman neural network based SCR denitration system bad data identification method
CN112260989B (en) * 2020-09-16 2021-07-30 湖南大学 Power system and network malicious data attack detection method, system and storage medium
CN113075498B (en) * 2021-03-09 2022-05-20 华中科技大学 Power distribution network traveling wave fault positioning method and system based on residual error clustering
CN113421176B (en) * 2021-07-16 2022-11-01 昆明学院 Intelligent screening method for abnormal data in student score scores
CN116881746B (en) * 2023-09-08 2023-11-14 国网江苏省电力有限公司常州供电分公司 Identification method and identification device for abnormal data in electric power system
CN118152763A (en) * 2024-05-11 2024-06-07 北京智芯微电子科技有限公司 Distribution network data sampling method and device and electronic equipment

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104361393B (en) * 2014-09-06 2018-02-27 华北电力大学 Data predication method is used for based on the improved neural network model of particle swarm optimization algorithm
US20160358075A1 (en) * 2015-06-08 2016-12-08 The Regents Of The University Of Michigan System for implementing a sparse coding algorithm
CN105224872B (en) * 2015-09-30 2018-04-13 河南科技大学 A kind of user's anomaly detection method based on neural network clustering
CN108399201B (en) * 2018-01-30 2020-05-12 武汉大学 Web user access path prediction method based on recurrent neural network

Also Published As

Publication number Publication date
CN111382862A (en) 2020-07-07

Similar Documents

Publication Publication Date Title
CN111382862B (en) Method for identifying abnormal data of power system
Fathi et al. An improvement in RBF learning algorithm based on PSO for real time applications
CN110504676B (en) Power distribution network state estimation method based on APSO-BP
CN111199252A (en) Fault diagnosis method for intelligent operation and maintenance system of power communication network
CN111783845B (en) Hidden false data injection attack detection method based on local linear embedding and extreme learning machine
CN111912611A (en) Method and device for predicting fault state based on improved neural network
CN109901064B (en) ICA-LVQ-based high-voltage circuit breaker fault diagnosis method
CN113723238A (en) Human face lightweight network model construction method and human face recognition method
Bharill et al. Enhanced cluster validity index for the evaluation of optimal number of clusters for Fuzzy C-Means algorithm
Xie et al. Fuzzy multi-attribute decision making methods based on improved set pair analysis
Gu et al. Efficient intrusion detection based on multiple neural network classifiers with improved genetic algorithm.
CN113255880A (en) Method and system for judging electricity stealing data based on improved neural network model
Purwar et al. Empirical evaluation of algorithms to impute missing values for financial dataset
Lin et al. Some continuous aggregation operators with interval-valued intuitionistic fuzzy information and their application to decision making
CN108446506B (en) Uncertain system modeling method based on interval feedback neural network
CN116680969A (en) Filler evaluation parameter prediction method and device for PSO-BP algorithm
Zhao et al. An effective model selection criterion for mixtures of Gaussian processes
Chen et al. A GPU-accelerated approximate algorithm for incremental learning of Gaussian mixture model
Soltani et al. A pso-based fuzzy c-regression model applied to nonlinear data modeling
CN114692729A (en) New energy station bad data identification and correction method based on deep learning
CN114169007A (en) Medical privacy data identification method based on dynamic neural network
Subbarayan et al. Modular neural network architecture using piece-wise linear mapping
CN113707213A (en) Protein-ligand binding site prediction method based on deep learning
Jun et al. A multi-attribute group decision-making approach based on uncertain linguistic information
Ning Network intrusion classification based on probabilistic neural network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant