CN112580700B - Data correction method, system and storage medium of electric power Internet of things meter - Google Patents

Data correction method, system and storage medium of electric power Internet of things meter

Info

Publication number: CN112580700B
Application number: CN202011410515.0A
Authority: CN (China)
Other versions: CN112580700A (Chinese, zh)
Inventors: 姜淏予, 宋晋, 陈凯, 杨文龙, 张晖
Current assignee: Hangzhou Jiasu Industrial Internet Co ltd (application filed by and originally assigned to the same)
Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis)

Classifications

    • G06F18/2415 — Classification techniques based on parametric or probabilistic models, e.g. based on likelihood ratio
    • G06F18/214 — Generating training patterns; bootstrap methods, e.g. bagging or boosting
    • G06N3/045 — Neural networks: combinations of networks
    • G06N3/047 — Neural networks: probabilistic or stochastic networks
    • G06Q50/06 — ICT specially adapted for energy or water supply
    • G16Y10/35 — IoT economic sectors: utilities, e.g. electricity, gas or water


Abstract

The invention discloses a data correction method for an electric power Internet of things meter. The method obtains, from historical meter data within a range, the time points and the first and second active power data corresponding to each time point as a first training set; calculates the corresponding first and second active power data deviation degrees in the first training set and assigns deviation degree labels; sets a PNN smoothing factor parameter, improves the PNN algorithm using the second training set features, and obtains the corrected second active power data value through an ELM; and subtracts the nameplate loss from the corrected second active power data value to obtain the final first active power data correction value at each time point. With this method, the low-voltage-side meter data can be corrected using only the high-voltage-side meter values and the nameplate loss at each time point in several days of historical data.

Description

Data correction method, system and storage medium of electric power Internet of things meter
Technical Field
The invention relates to the technical field of computers, in particular to a data correction method, a data correction system and a storage medium for an electric power Internet of things meter.
Background
With the popularization of the Internet of things and the arrival of the big-data era, Internet of things data has become a key factor affecting the accuracy of big-data analysis. For electric power Internet of things data, how to correct the deviation produced by electric meters in the power Internet of things link has become a key problem to be solved, and correcting the power data at the meter link is of great significance to the quality of power Internet of things data.
Electric power meter data are the power consumption data generated for a given transformer, including voltage, current, power data and so on. In an electric power Internet of things system, all energy consumption data are obtained by monitoring with power meters. In practice, a power meter, and especially a monitoring meter on the low-voltage side of a transformer, often exhibits abnormal behavior: even after accounting for transformer loss, the difference between the high-voltage-side and low-voltage-side measurements remains too large. For active power, the transformer nameplate loss generally lies between 0 and 3 kW as the load rate varies; at certain time points, however, the difference between the data monitored by the high- and low-voltage meters far exceeds this range, indicating an obvious meter deviation. Such meter data therefore need to be corrected, especially for the low-voltage-side meter.
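The screening implied above can be sketched in a few lines: compare the high-side/low-side gap against the nameplate loss range (0 to 3 kW per the text). The function name and the exact threshold test are illustrative assumptions, not taken from the patent.

```python
def flag_anomalies(high_kw, low_kw, max_nameplate_loss_kw=3.0):
    """Return indices of time points where the gap between the high-voltage-side
    reading and the low-voltage-side reading falls outside the nameplate loss
    range [0, max_nameplate_loss_kw], suggesting a meter deviation.
    """
    return [i for i, (g, d) in enumerate(zip(high_kw, low_kw))
            if not (0.0 <= g - d <= max_nameplate_loss_kw)]
```

For example, a 5 kW gap at one time point would be flagged while a 0.5 kW gap would not.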
At present, power meter data with deviation problems are usually corrected by simple statistics over historical data or computed from other meters. Both approaches have defects: the deviation is estimated manually, or the conditions needed to compute an accurate value are not available, so the final correction result can still deviate substantially.
Disclosure of Invention
Aiming at the defects in the prior art, the invention provides a data correction method of an electric power Internet of things meter, comprising the following steps:
s1, acquiring time points in historical meter data in a range, and first active power data and second active power data corresponding to the time points as a first training set, wherein the first active power data are active power of a low-voltage side of a transformer, and the second active power data are active power of a high-voltage side of the transformer;
s2, calculating corresponding first active power data deviation degree and second active power data deviation degree in the first training set respectively, and giving deviation degree labels;
s3, establishing a second training set and a third training set, wherein the second training set comprises time points, first active power data corresponding to the time points and the deviation degree of the first active power data, and the third training set comprises the time points, second active power data corresponding to the time points and the deviation degree of the second active power data;
s4, setting a PNN smoothing factor parameter, improving a PNN algorithm by using a second training set characteristic, acquiring a time point in a real-time meter data set to be corrected and corresponding first active power data to construct a first test set serving as a test input characteristic of the improved PNN, and calculating the deviation degree of the corrected first and second active power data corresponding to the first test set after the probability of a mode layer is output;
s5, taking the third training set as an ELM training set, adding the corrected deviation degrees of the first and second active power data and the second active power data in the real-time meter data set into the first test set to construct a second test set as an ELM test set, and obtaining a corrected second active power data value;
and S6, subtracting nameplate loss from the corrected second active power data value to obtain the final first active power data correction value under each time sequence.
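At inference time, steps S4 to S6 chain together as below. Here `pnn_classify` (the improved PNN of S4) and `elm_regress` (the ELM of S5) are hypothetical callables standing in for the trained models; the patent describes their internals separately.

```python
def correct_low_meter(times_rt, low_rt, high_rt, nameplate_loss,
                      pnn_classify, elm_regress):
    """Sketch of how steps S4-S6 chain together on real-time meter data."""
    # S4: predict corrected deviation degrees from time points + low-meter values
    dev_hat = pnn_classify(times_rt, low_rt)
    # S5: the ELM maps (time, low value, deviation degree, high value)
    # to a corrected high-meter value
    high_corr = elm_regress(times_rt, low_rt, dev_hat, high_rt)
    # S6: subtract the nameplate loss to obtain the corrected low-meter values
    return [h - nameplate_loss for h in high_corr]
```

With identity stand-ins for both models, a 2 kW nameplate loss simply shifts the high-meter series down.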
Specifically, the PNN probabilistic neural network in step S4 includes at least four layers: an input layer, a pattern layer, a summation layer, and an output layer.

The input layer takes the input vector X = [x_1, x_2, ..., x_n] and imports it into the pattern layer. The pattern layer is composed of Gaussian radial basis functions \phi(x); the number of neurons equals the number of samples in the training set, each neuron has a center, and a probability value is computed for each test set sample and output as follows:

\phi_{ij}(x) = \frac{1}{(2\pi)^{d/2}\sigma^{d}} \exp\left(-\frac{\lVert x - x_{ij}\rVert^{2}}{2\sigma^{2}}\right)

where d is the sample dimension; L_i is the number of samples of the ith class in the training set; m is the number of classes; \sigma is the smoothing factor; and \lVert x - x_{ij}\rVert is the distance from the input sample to the sample center.

The summation layer has as many neurons as there are classes. The outputs of the pattern-layer hidden neurons belonging to the same class are averaged, and the probability density function f_i of the ith class is obtained by kernel estimation:

f_i(x) = \frac{1}{L_i} \sum_{j=1}^{L_i} \phi_{ij}(x), \quad i = 1, 2, ..., m

The output layer returns the class with the highest probability: y = \arg\max_{j} f_j(x).

A weight is added between the hidden (pattern) layer and the summation layer: the posterior probability P(t \mid class) relating each training sample's time point to its class is used as the weight between the pattern layer and the summation layer of the PNN, computed by Bayes' rule as

P(t_k \mid class_i) = \frac{P(class_i \mid t_k)\, P(t_k)}{P(class_i)}

where t_k is the kth time point and class_i is the ith class.

Introducing the posterior probability w_{ij} = P(t_j \mid class_i) as the weight of each probability output, the improved summation-layer output is computed as:

f_i(x) = \frac{1}{L_i} \sum_{j=1}^{L_i} w_{ij}\, \phi_{ij}(x)

where w_{ij} is the weight coefficient of \phi_{ij}(x). After the PNN algorithm is improved in this way, the first test set is evaluated and the corresponding corrected first and second active power data deviation degrees are obtained.
Preferably, the step S5 includes:

Step S51, taking the third training set as the ELM training set, and adding the corrected first and second active power data deviation degrees and the second active power data in the real-time meter data set to the first test set to construct a second test set as the ELM test set;

Step S52, constructing an ELM model and setting the ELM parameters: the number of hidden-layer neurons is 3, and the hidden-layer activation function is the sigmoid function;

Step S53, randomly generating the ELM weights \omega_i and thresholds b_i;

Step S54, computing H from H = WI + B, where H is the hidden-layer response matrix, I is the matrix formed by the normalized training data, W is the weight matrix formed by the \omega_i, and B is the threshold vector formed by the b_i;

Step S55, computing \beta from \beta = H^{+} T, where \beta is the output weight, H^{+} is the pseudo-inverse of the hidden-layer response matrix, and T is the target vector of the training set;

Step S56, replacing I with the data to be tested I_2, computing H = WI_2 + B, and then computing the corrected high-meter output value T' = H\beta, which is the corrected second active power data value.
The invention also discloses a data correction system of the electric power Internet of things meter, which comprises:

a first training set acquisition module, used for acquiring the time points in historical meter data within a range and the first and second active power data corresponding to each time point as a first training set, wherein the first active power data are the active power on the low-voltage side of the transformer and the second active power data are the active power on the high-voltage side of the transformer;

a deviation acquisition module, used for respectively calculating the corresponding first and second active power data deviation degrees in the first training set and assigning deviation degree labels;

a second and third training set acquisition module, wherein the second training set features comprise the time points, the first active power data corresponding to each time point, and the first active power data deviation degrees, and the third training set features comprise the time points, the second active power data corresponding to each time point, and the second active power data deviation degrees;

a PNN processing module, used for setting a PNN smoothing factor parameter, improving the PNN algorithm with the second training set features, acquiring the time points in the real-time meter data set to be corrected and the corresponding first active power data to construct a first test set as the test input features of the improved PNN, and calculating the corrected first and second active power data deviation degrees corresponding to the first test set after the pattern-layer probability output;

an ELM processing module, used for taking the third training set as the ELM training set, adding the corrected first and second active power data deviation degrees and the second active power data in the real-time meter data set to the first test set to construct a second test set as the ELM test set, and obtaining the corrected second active power data value;

and a correction module, used for subtracting the nameplate loss from the corrected second active power data value to obtain the final first active power data correction value at each time point.
Preferably, the PNN probabilistic neural network in the PNN processing module includes at least four layers: an input layer, a pattern layer, a summation layer and an output layer;

wherein the input layer takes the input vector X = [x_1, x_2, ..., x_n] and imports it into the pattern layer; the pattern layer is composed of Gaussian radial basis functions \phi(x), the number of neurons equals the number of samples in the training set, each neuron has a center, and a probability value is computed for each test set sample and output as follows:

\phi_{ij}(x) = \frac{1}{(2\pi)^{d/2}\sigma^{d}} \exp\left(-\frac{\lVert x - x_{ij}\rVert^{2}}{2\sigma^{2}}\right)

where d is the sample dimension; L_i is the number of samples of the ith class in the training set; m is the number of classes; \sigma is the smoothing factor; and \lVert x - x_{ij}\rVert is the distance from the input sample to the sample center;

the summation layer has as many neurons as there are classes; the outputs of the pattern-layer hidden neurons belonging to the same class are averaged, and the probability density function f_i of the ith class is obtained by kernel estimation:

f_i(x) = \frac{1}{L_i} \sum_{j=1}^{L_i} \phi_{ij}(x)

the output layer returns the class with the highest probability: y = \arg\max_{j} f_j(x);

a weight is added between the hidden layer and the summation layer, obtained by using the posterior probability P(t \mid class) relating each training sample's time point to its class as the weight between the pattern layer and the summation layer of the PNN, computed by Bayes' rule as

P(t_k \mid class_i) = \frac{P(class_i \mid t_k)\, P(t_k)}{P(class_i)}

where t_k is the kth time point and class_i is the ith class;

introducing the posterior probability w_{ij} = P(t_j \mid class_i) as the weight of each probability output, the improved summation-layer output is computed as:

f_i(x) = \frac{1}{L_i} \sum_{j=1}^{L_i} w_{ij}\, \phi_{ij}(x)

where w_{ij} is the weight coefficient of \phi_{ij}(x), which is used to evaluate the first test set after the PNN algorithm is improved and to obtain the corresponding corrected first and second active power data deviation degrees.
The invention also discloses a data correction device of the electric power Internet of things meter, which comprises a memory, a processor and a computer program which is stored in the memory and can run on the processor, wherein the processor realizes the steps of any one of the methods when executing the computer program.
The invention also discloses a computer-readable storage medium, in which a computer program is stored, which, when being executed by a processor, carries out the steps of the method as set forth in any one of the above.
The invention provides a power meter data correction method based on the improved PNN and ELM, which can correct the low-voltage-side meter data using only the high-voltage-side meter values and the nameplate loss at each time point in several days of historical data, together with the time points of the data to be corrected and the currently problematic low-voltage-side values. Compared with manual correction, the correction deviation of this method at each time point can be greatly reduced.
Additional aspects and advantages of the invention will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the invention.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the invention and together with the description serve to explain the invention without limiting the invention. In the drawings:
fig. 1 is a schematic flow chart of a data correction method for an electric power internet of things meter according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions of the embodiments of the present invention will be clearly and completely described below with reference to the drawings of the embodiments of the present invention. It is to be understood that the embodiments described are only a few embodiments of the present invention, and not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the described embodiments of the invention without any inventive step, are within the scope of protection of the invention.
In the present invention, unless otherwise expressly specified or limited, the terms "mounted," "connected," "secured," and the like are to be construed broadly and can, for example, be fixedly connected, detachably connected, or integrally connected; can be mechanically or electrically connected; they may be connected directly or indirectly through intervening media, or they may be interconnected between two elements. The specific meanings of the above terms in the present invention can be understood by those skilled in the art according to specific situations.
In the present invention, unless otherwise expressly stated or limited, "above" or "below" a first feature means that the first and second features are in direct contact, or that the first and second features are not in direct contact but are in contact with each other via another feature therebetween. Also, the first feature being "on," "above" and "over" the second feature includes the first feature being directly on and obliquely above the second feature, or merely indicating that the first feature is at a higher level than the second feature. A first feature being "under," "below," and "beneath" a second feature includes the first feature being directly under and obliquely below the second feature, or simply meaning that the first feature is at a lesser elevation than the second feature.
Unless defined otherwise, technical or scientific terms used herein shall have the ordinary meaning as understood by one of ordinary skill in the art to which this invention belongs. The use of "first," "second," and similar terms in the description and claims of the present application do not denote any order, quantity, or importance, but rather the terms are used to distinguish one element from another. Also, the use of the terms "a" or "an" and the like do not denote a limitation of quantity, but rather denote the presence of at least one.
Fig. 1 shows a data correction method of an electric power Internet of things meter, based on an improved probabilistic neural network algorithm (PNN) and an ELM. The method can correct the low-voltage-side meter data using only the high-voltage-side values and the nameplate loss at each time point in several days of historical data, together with the time points of the data to be corrected and the currently problematic low-voltage-side values. The data correction method of this embodiment specifically comprises the following steps:

Step S1, obtaining the time points in historical meter data within a range, and the first and second active power data corresponding to each time point, as a first training set, wherein the first active power data are the active power on the low-voltage side of the transformer and the second active power data are the active power on the high-voltage side. For example, the time points in several days of historical data, together with the corresponding first and second active power data, can be acquired as the first training set.
And step S2, calculating the corresponding first active power data deviation degree and the second active power data deviation degree in the first training set respectively, and giving a deviation degree label.
Specifically, the deviation degree between the first active power data (the low-voltage-side meter) and the second active power data (the high-voltage-side meter) in the training set is calculated and a deviation degree label is assigned, as follows:

g(i) = G(i) - D(i)

where g(i) is the deviation degree label of the first and second active power data at a given time point; G(i) is the second active power data at that time; D(i) is the first active power data at that time; and i is the time point index in the several days of historical data.
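The labeling step above can be sketched as follows. The patent only states that a deviation degree is computed and a label assigned; the equal-width binning into `n_bins` categories used here is an illustrative assumption.

```python
import numpy as np

def deviation_labels(G, D, n_bins=5):
    """Compute the deviation G(i) - D(i) between high-side (G) and low-side (D)
    active power at each time point, then discretize into categorical labels.
    The binning scheme is an assumption for illustration.
    """
    G = np.asarray(G, dtype=float)
    D = np.asarray(D, dtype=float)
    deviation = G - D  # raw deviation at each time point i
    # Equal-width bins over the observed range -> integer class labels
    edges = np.linspace(deviation.min(), deviation.max(), n_bins + 1)
    labels = np.clip(np.digitize(deviation, edges[1:-1]), 0, n_bins - 1)
    return deviation, labels
```

The integer labels then serve as the class targets for the PNN classifier described in step S4.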
Step S3, a second training set and a third training set are established, where the second training set includes time points, first active power data corresponding to the time points, and a deviation degree of the first active power data, and the third training set includes time points, second active power data corresponding to the time points, and a deviation degree of the second active power data.
Step S4, setting a PNN smoothing factor parameter, improving the PNN algorithm with the second training set features, acquiring the time points in the real-time meter data set to be corrected and the corresponding first active power data to construct a first test set as the test input features of the improved PNN, and calculating the corrected first and second active power data deviation degrees corresponding to the first test set after the pattern-layer probability output. Specifically, a PNN smoothing factor parameter is set, and the time points in the data set to be tested (i.e. to be corrected) and the low-meter values at those time points (the active power of the meter on the low-voltage side of the transformer) are used as the PNN test input features.

The improved PNN probabilistic neural network comprises at least four layers: an input layer, a pattern layer, a summation layer (competition layer), and an output layer.

The first layer takes the input vector X = [x_1, x_2, ..., x_n] and is mainly responsible for importing it into the pattern layer. The second, pattern layer is mainly composed of Gaussian radial basis functions \phi(x); the number of neurons equals the number of samples in the training set, each neuron has a center, and a probability value is computed for each test set sample and output as follows:

\phi_{ij}(x) = \frac{1}{(2\pi)^{d/2}\sigma^{d}} \exp\left(-\frac{\lVert x - x_{ij}\rVert^{2}}{2\sigma^{2}}\right)

where d is the sample dimension; L_i is the number of samples of the ith class in the training set; m is the number of classes; \sigma is the smoothing factor; and \lVert x - x_{ij}\rVert is the distance from the input sample to the sample center, which may be the Euclidean distance in this embodiment.

In the summation layer (competition layer), the number of neurons normally equals the number of classes. The outputs of the pattern-layer hidden neurons belonging to the same class are averaged, and the probability density function f_i of class i is obtained by kernel estimation:

f_i(x) = \frac{1}{L_i} \sum_{j=1}^{L_i} \phi_{ij}(x), \quad i = 1, 2, ..., m

Finally, the output layer returns the class with the highest probability:

y = \arg\max_{j} f_j(x)

In this embodiment, considering the time-series characteristics of the samples of the processed object, a weight is added between the hidden (pattern) layer and the summation layer to adjust the output probabilities, reduce overlap and interleaving in the model, and improve the generalization ability of the neural network. The weighting is achieved by using the posterior probability P(t \mid class) relating each training sample's time point to its class as the weight between the pattern layer and the summation layer of the PNN, computed by Bayes' rule:

P(t_k \mid class_i) = \frac{P(class_i \mid t_k)\, P(t_k)}{P(class_i)}

where t_k is the kth time point and class_i is the ith class.

Introducing the posterior probability w_{ij} = P(t_j \mid class_i) as the weight of each probability output, the improved summation-layer output is computed as:

f_i(x) = \frac{1}{L_i} \sum_{j=1}^{L_i} w_{ij}\, \phi_{ij}(x)

where w_{ij} is the weight coefficient of \phi_{ij}(x). After the PNN algorithm is improved in this way, the first test set is evaluated and the corresponding corrected first and second active power data deviation degrees are obtained.
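A minimal sketch of the time-weighted PNN described above: a Gaussian pattern layer per training sample, and a summation layer whose per-class average is weighted by an empirical posterior over time points. The patent does not specify how P(t | class) is estimated, so the kernel density over each class's time points used here is an assumption.

```python
import numpy as np

def pnn_predict(X_train, y_train, t_train, x_test, t_test, sigma=0.1):
    """Classify x_test (observed at time t_test) with a time-weighted PNN.
    Sketch under stated assumptions; not the patent's exact implementation.
    """
    d = X_train.shape[1]
    norm = (2 * np.pi) ** (d / 2) * sigma ** d
    classes = np.unique(y_train)
    scores = []
    for c in classes:
        Xc = X_train[y_train == c]
        tc = t_train[y_train == c]
        # Pattern-layer Gaussian activations phi_ij(x)
        phi = np.exp(-np.sum((Xc - x_test) ** 2, axis=1) / (2 * sigma ** 2)) / norm
        # Assumed posterior weights P(t_test | class c): kernel density on
        # the class's time points, normalized to sum to 1
        w = np.exp(-(tc - t_test) ** 2 / (2 * sigma ** 2))
        w = w / w.sum()
        # Weighted summation-layer output f_c(x)
        scores.append(np.sum(w * phi))
    return classes[int(np.argmax(scores))]
```

The output-layer argmax over the weighted class scores matches y = argmax_j f_j(x) above.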
Step S5, taking the third training set as the ELM training set, adding the corrected first and second active power data deviation degrees and the second active power data in the real-time meter data set to the first test set to construct a second test set as the ELM test set, and obtaining the corrected second active power data value. Step S5 may specifically include the following steps.

Step S51, taking the third training set as the ELM training set, and adding the corrected first and second active power data deviation degrees and the second active power data in the real-time meter data set to the first test set to construct a second test set as the ELM test set;

Step S52, constructing an ELM model and setting the ELM parameters: the number of hidden-layer neurons is 3, and the hidden-layer activation function is the sigmoid function;

Step S53, randomly generating the ELM weights \omega_i and thresholds b_i;

Step S54, computing H from H = WI + B, where H is the hidden-layer response matrix, I is the matrix formed by the normalized training data, W is the weight matrix formed by the \omega_i, and B is the threshold vector formed by the b_i;

Step S55, computing \beta from \beta = H^{+} T, where \beta is the output weight, H^{+} is the pseudo-inverse of the hidden-layer response matrix, and T is the target vector of the training set;

Step S56, replacing I with the data to be tested I_2, computing H = WI_2 + B, and then computing the corrected high-meter output value T' = H\beta, which is the corrected second active power data value.
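Steps S52 through S56 can be sketched as below. Note one assumption: the text writes H = WI + B, while the standard ELM formulation applies the activation function on top of that affine map; this sketch uses the sigmoid activation the text names to produce H.

```python
import numpy as np

def elm_fit_predict(I_train, T_train, I_test, n_hidden=3, seed=0):
    """ELM sketch for the correction step (S52-S56): random input weights W
    and thresholds B, sigmoid hidden layer, output weights via pseudo-inverse.
    """
    rng = np.random.default_rng(seed)
    n_features = I_train.shape[1]
    W = rng.standard_normal((n_hidden, n_features))  # S53: random weights omega_i
    B = rng.standard_normal((n_hidden, 1))           # S53: random thresholds b_i
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
    H = sigmoid(W @ I_train.T + B).T                 # S54: hidden-layer response
    beta = np.linalg.pinv(H) @ T_train               # S55: beta = H^+ T
    H2 = sigmoid(W @ I_test.T + B).T                 # S56: response on test data I_2
    return H2 @ beta                                 # S56: corrected output T' = H beta
```

Because beta is the least-squares solution, the training residual is never worse than predicting zero everywhere.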
Step S6, subtracting the nameplate loss from the corrected second active power data value to obtain the final first active power data correction value at each time point. That is, the high-meter output value T' (the second active power output value) minus the nameplate loss finally gives the low-meter output correction value, i.e. the first active power correction data, at each time point.
The invention provides a power meter data correction method based on the improved PNN and ELM, which can correct the low-voltage-side meter data using only the high-voltage-side meter values and the nameplate loss at each time point in several days of historical data, together with the time points of the data to be corrected and the currently problematic low-voltage-side values.
The invention also discloses a data correction system of the electric power Internet of things meter, which comprises:
a first training set acquisition module, used for acquiring the time points within a range of historical meter data, and the first active power data and second active power data corresponding to each time point, as a first training set, wherein the first active power data are the active power at the low-voltage side of a transformer and the second active power data are the active power at the high-voltage side of the transformer;
a deviation acquisition module, used for calculating the corresponding first active power data deviation degree and second active power data deviation degree in the first training set and assigning deviation degree labels;
a second and third training set acquisition module, wherein the second training set features comprise the time points, the first active power data corresponding to each time point and the first active power data deviation degrees, and the third training set features comprise the time points, the second active power data corresponding to each time point and the second active power data deviation degrees;
a PNN processing module, used for setting the PNN smoothing factor parameter, improving the PNN algorithm with the second training set features, acquiring the time points in the real-time meter data set to be corrected and the corresponding first active power data to construct a first test set as the test input features of the improved PNN, and calculating the corrected first and second active power data deviation degrees corresponding to the first test set after the mode-layer probability output;
an ELM processing module, used for taking the third training set as the ELM training set, adding the corrected first and second active power data deviation degrees and the second active power data in the real-time meter data set to the first test set to construct a second test set as the ELM test set, and obtaining a corrected second active power data value;
and a correction module, used for subtracting the nameplate loss from the corrected second active power data value to obtain the final first active power data correction value at each time point.
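The module flow described above can be sketched as a simple orchestration, assuming the PNN and ELM stages are available as callables (all function and parameter names here are illustrative, not from the patent):

```python
def correct_meter_data(realtime, nameplate_loss, pnn, elm):
    """Hypothetical orchestration of the correction modules.

    realtime: iterable of (t, p_low, p_high) tuples from the meter;
    pnn(t, p_low) -> (dev_low, dev_high): corrected deviation degrees;
    elm(t, p_low, dev_low, dev_high, p_high) -> corrected high-side value.
    """
    corrected = []
    for t, p_low, p_high in realtime:
        # PNN processing module: corrected deviation degrees for this point
        dev_low, dev_high = pnn(t, p_low)
        # ELM processing module: corrected second (high-side) active power
        p_high_corr = elm(t, p_low, dev_low, dev_high, p_high)
        # Correction module: subtract the nameplate loss to recover the low side
        corrected.append((t, p_high_corr - nameplate_loss))
    return corrected
```

With trivial stand-ins for the two models, a point (t=1, low=10.0, high=13.0) and a nameplate loss of 2.0 yields the corrected low-side value 11.0.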
Preferably, the PNN probabilistic neural network in the PNN processing module comprises at least four layers: an input layer, a mode layer, a summation layer and an output layer;
wherein the input layer receives an input vector X = [x_1 x_2 ... x_n] and imports it into the mode layer; the mode layer is composed of Gaussian radial basis functions φ(x), the number of neurons is equal to the number of samples in the training set, and each neuron has a center; a probability value is calculated for each test set sample and output as:
φ_ij(x) = exp(−‖x − x_ij‖² / (2σ²)) / ((2π)^(d/2) · σ^d),  i = 1, …, M,  j = 1, …, L_i
wherein d is the sample dimension; L_i is the number of samples of the i-th class in the training set; M is the number of classes; σ is the smoothing factor; ‖x − x_ij‖ is the distance from the input sample to the sample center;
the summation layer contains one neuron per class; the outputs of the hidden neurons belonging to the same class in the mode layer are averaged, and the probability density function f_i of the i-th class is obtained by kernel density estimation as follows:
f_i(x) = (1 / L_i) · Σ_{j=1}^{L_i} φ_ij(x)
the class with the highest probability is obtained in the output layer, as shown in the following formula: y = argmax_j(f_j);
a weight is added between the hidden layer and the summation layer, obtained by using the time point of each training sample and the posterior probability P(t | class) related to the sample class as the weight between the mode layer and the summation layer of the PNN, as follows:
P(t_k | class_i) = n(t_k, class_i) / n(class_i)
wherein t_k is the k-th time point; class_i is the i-th class; n(t_k, class_i) is the number of class-i training samples recorded at time point t_k; n(class_i) is the total number of class-i training samples;
introducing the posterior probability P(t_k | class_i) as the weight w_ij of each probability output, the improved summation-layer output is calculated as follows:
f_i(x) = (1 / L_i) · Σ_{j=1}^{L_i} w_ij · φ_ij(x)
wherein w_ij is the weight coefficient of φ_ij(x); after the PNN algorithm is thus improved, the first test set is detected and the corresponding corrected first and second active power data deviation degrees are obtained.
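As a concrete illustration, here is a minimal NumPy sketch of the time-weighted PNN classifier described above. It assumes the posterior P(t_k | class_i) is estimated from label counts at each time point; the function and variable names are ours, not the patent's:

```python
import numpy as np

def gaussian_kernel(x, center, sigma, d):
    """Mode-layer Gaussian radial basis output phi_ij(x)."""
    dist2 = np.sum((x - center) ** 2)
    norm = (2.0 * np.pi) ** (d / 2) * sigma ** d
    return np.exp(-dist2 / (2.0 * sigma ** 2)) / norm

def improved_pnn_predict(x, train_X, train_t, train_y, sigma):
    """Classify sample x with the time-weighted summation layer.

    train_X: (N, d) training samples; train_t: time point of each sample;
    train_y: class label of each sample; sigma: smoothing factor.
    """
    d = train_X.shape[1]
    scores = {}
    for c in np.unique(train_y):
        idx = np.where(train_y == c)[0]
        # w_ij = P(t_j | class c): share of class-c samples at time point t_j
        t_vals, t_counts = np.unique(train_t[idx], return_counts=True)
        p_t = dict(zip(t_vals, t_counts / len(idx)))
        # Improved summation layer: f_c(x) = (1/L_c) * sum_j w_cj * phi_cj(x)
        s = sum(p_t[train_t[j]] * gaussian_kernel(x, train_X[j], sigma, d)
                for j in idx)
        scores[c] = s / len(idx)
    # Output layer: class with the highest weighted probability density
    return max(scores, key=scores.get)
```

On two well-separated 1-D clusters this behaves like a standard PNN, with the time-point posterior merely reweighting each pattern neuron.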
It should be noted that, in the present specification, the embodiments are described in a progressive manner; each embodiment focuses on its differences from the other embodiments, and like parts may be referred to across embodiments. Because the data correction system of the electric power Internet of things meter disclosed in the embodiments corresponds to the method disclosed in the embodiments, its description is relatively brief, and the relevant points can be found in the description of the method.
The invention also provides a data correction device of the electric power Internet of things meter, which comprises a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the computer program, implements the steps of the data correction method of the electric power Internet of things meter described in the embodiments.
The data correction device of the electric power Internet of things meter can include, but is not limited to, a processor and a memory. It will be understood by those skilled in the art that this is merely an example of the data correction device and does not constitute a limitation on it; the device may include more or fewer components than described, combine some components, or use different components; for example, it may further include input and output devices, network access devices, a bus, and the like.
The processor may be a Central Processing Unit (CPU), another general-purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, discrete hardware components, etc. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor. The processor is the control center of the data correction device of the electric power Internet of things meter and connects the various parts of the whole device through various interfaces and lines.
The memory can be used to store the computer program and/or modules; the processor implements the various functions of the data correction device of the electric power Internet of things meter by running or executing the computer program and/or modules stored in the memory and calling the data stored in the memory. The memory may mainly include a program storage area and a data storage area, where the program storage area may store an operating system, an application program required for at least one function, and the like. The memory may include high-speed random access memory, and may further include non-volatile memory, such as a hard disk, a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, a flash memory card, at least one magnetic disk storage device, a flash memory device, or other non-volatile solid-state storage device.
If the data correction method of the electric power Internet of things meter is realized in the form of a software functional unit and sold or used as an independent product, it can be stored in a computer-readable storage medium. Based on such understanding, all or part of the flow of the method according to the embodiments of the present invention may also be implemented by a computer program instructing the relevant hardware; the computer program may be stored in a computer-readable storage medium, and when executed by a processor, implements the steps of the embodiments of the data correction method of the electric power Internet of things meter. The computer program comprises computer program code, which may be in the form of source code, object code, an executable file, or some intermediate form. The computer-readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash disk, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a Read-Only Memory (ROM), a Random Access Memory (RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, and the like. It should be noted that the content of the computer-readable medium may be appropriately increased or decreased as required by legislation and patent practice in a jurisdiction; for example, in some jurisdictions, computer-readable media do not include electrical carrier signals and telecommunications signals.
Finally, it should be noted that: the above examples are only intended to illustrate the technical solution of the present invention, but not to limit it; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and the modifications or the substitutions do not make the essence of the corresponding technical solutions depart from the scope of the technical solutions of the embodiments of the present invention.
In summary, the above-mentioned embodiments are only preferred embodiments of the present invention, and all equivalent changes and modifications made in the claims of the present invention should be covered by the claims of the present invention.

Claims (5)

1. A data correction method of an electric power Internet of things meter is characterized by comprising the following steps:
s1, acquiring time points in historical meter data in a range, and first active power data and second active power data corresponding to the time points as a first training set, wherein the first active power data are active power of a low-voltage side of a transformer, and the second active power data are active power of a high-voltage side of the transformer;
s2, calculating corresponding first active power data deviation degree and second active power data deviation degree in the first training set respectively, and giving deviation degree labels;
s3, establishing a second training set and a third training set, wherein the second training set comprises time points, first active power data corresponding to the time points and the deviation degree of the first active power data, and the third training set comprises the time points, second active power data corresponding to the time points and the deviation degree of the second active power data;
s4, setting a PNN smoothing factor parameter, improving a PNN algorithm by using a second training set characteristic, acquiring a time point in a real-time meter data set to be corrected and corresponding first active power data to construct a first test set serving as a test input characteristic of the improved PNN, and calculating the deviation degree of the corrected first and second active power data corresponding to the first test set after the probability of a mode layer is output;
the PNN probabilistic neural network in the step S4 comprises at least four layers: an input layer, a mode layer, a summation layer and an output layer;
wherein the input layer receives an input vector X = [x_1 x_2 ... x_n] and imports it into the mode layer; the mode layer is composed of Gaussian radial basis functions φ(x), the number of neurons is equal to the number of samples in the training set, and each neuron has a center; a probability value is calculated for each test set sample and output as:
φ_ij(x) = exp(−‖x − x_ij‖² / (2σ²)) / ((2π)^(d/2) · σ^d),  i = 1, …, M,  j = 1, …, L_i
where d is the sample dimension, L_i is the number of samples of the i-th class in the training set, M is the number of classes, σ is the smoothing factor, and ‖x − x_ij‖ is the distance from the input sample to the sample center;
the summation layer contains one neuron per class; the outputs of the hidden neurons belonging to the same class in the mode layer are averaged, and the probability density function f_i of the i-th class is obtained by kernel density estimation as follows:
f_i(x) = (1 / L_i) · Σ_{j=1}^{L_i} φ_ij(x)
the class with the highest probability is obtained in the output layer, as shown in the following formula: y = argmax_j(f_j);
a weight is added between the hidden layer and the summation layer, obtained by using the time point of each training sample and the posterior probability P(t | class) related to the sample class as the weight between the mode layer and the summation layer of the PNN, as follows:
P(t_k | class_i) = n(t_k, class_i) / n(class_i)
where t_k is the k-th time point, class_i is the i-th class, n(t_k, class_i) is the number of class-i training samples recorded at time point t_k, and n(class_i) is the total number of class-i training samples;
introducing the posterior probability P(t_k | class_i) as the weight w_ij of each probability output, the improved summation-layer output is calculated as follows:
f_i(x) = (1 / L_i) · Σ_{j=1}^{L_i} w_ij · φ_ij(x)
where w_ij is the weight coefficient of φ_ij(x); after the PNN algorithm is thus improved, the first test set is detected and the corresponding corrected first and second active power data deviation degrees are obtained;
s5, taking the third training set as an ELM training set, adding the corrected deviation degrees of the first and second active power data and the second active power data in the real-time meter data set into the first test set to construct a second test set as an ELM test set, and obtaining a corrected second active power data value;
and S6, subtracting the nameplate loss from the corrected second active power data value to obtain the final first active power data correction value at each time point.
2. The data correction method of the electric power Internet of things meter according to claim 1, wherein the step S5 comprises:
step S51, taking the third training set as an ELM training set, adding the corrected deviation degrees of the first and second active power data and the second active power data in the real-time meter data set into the first test set to construct a second test set as an ELM test set;
step S52, constructing an ELM model, setting ELM parameters, wherein the number of hidden layers is 3, and the hidden layer activation function is sigmoid;
step S53, generating ELM weight omega randomlyiAnd a threshold value bi
Step S54, calculating H from H ═ WI + B; wherein H is a hidden layer response matrix, I is a matrix formed by normalized training data, and W is omegaiA weight matrix is formed, B is an element of BiA threshold vector;
step S55, based on
Figure FDA0003110170300000031
Calculating beta; wherein β is the output weight value,
Figure FDA0003110170300000032
responding to a pseudo-inverse matrix for a hidden layer, wherein T is a target vector of a training set;
step S56, changing I into data I to be tested2Calculating H ═ WI2And + B, calculating a corrected high count output value T 'which is the corrected second active power data value from T' ═ H β.
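For illustration, a minimal single-hidden-layer ELM sketch of steps S52–S56, applying the sigmoid activation named in step S52 to the hidden response (the class name, hidden-unit count and random seed are our assumptions, not from the patent):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class ELMSketch:
    """Minimal extreme learning machine following steps S52-S56."""

    def __init__(self, n_hidden=32, seed=0):
        self.n_hidden = n_hidden
        self.rng = np.random.default_rng(seed)

    def fit(self, X, T):
        # S53: randomly generate input weights W and thresholds B
        self.W = self.rng.normal(size=(self.n_hidden, X.shape[1]))
        self.B = self.rng.normal(size=(self.n_hidden, 1))
        # S54: hidden-layer response H = g(W I + B), with g = sigmoid (S52)
        H = sigmoid(self.W @ X.T + self.B).T          # shape (N, n_hidden)
        # S55: output weights beta = H^+ T via the Moore-Penrose pseudo-inverse
        self.beta = np.linalg.pinv(H) @ T
        return self

    def predict(self, X2):
        # S56: replace I with the test data I2 and compute T' = H' beta
        H2 = sigmoid(self.W @ X2.T + self.B).T
        return H2 @ self.beta
```

On the patent's data, X would hold the second-test-set features and T the second active power targets; a generic smoke test on a simple linear target shows the random-feature least-squares fit recovering the training data closely.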
3. A data correction system of an electric power Internet of things meter is characterized by comprising:
a first training set acquisition module, used for acquiring the time points within a range of historical meter data, and the first active power data and second active power data corresponding to each time point, as a first training set, wherein the first active power data are the active power at the low-voltage side of a transformer and the second active power data are the active power at the high-voltage side of the transformer;
a deviation acquisition module, used for calculating the corresponding first active power data deviation degree and second active power data deviation degree in the first training set and assigning deviation degree labels;
a second and third training set acquisition module, wherein the second training set features comprise the time points, the first active power data corresponding to each time point and the first active power data deviation degrees, and the third training set features comprise the time points, the second active power data corresponding to each time point and the second active power data deviation degrees;
a PNN processing module, used for setting the PNN smoothing factor parameter, improving the PNN algorithm with the second training set features, acquiring the time points in the real-time meter data set to be corrected and the corresponding first active power data to construct a first test set as the test input features of the improved PNN, and calculating the corrected first and second active power data deviation degrees corresponding to the first test set after the mode-layer probability output;
an ELM processing module, used for taking the third training set as the ELM training set, adding the corrected first and second active power data deviation degrees and the second active power data in the real-time meter data set to the first test set to construct a second test set as the ELM test set, and obtaining a corrected second active power data value;
a correction module, used for subtracting the nameplate loss from the corrected second active power data value to obtain the final first active power data correction value at each time point;
the PNN probabilistic neural network in the PNN processing module comprises at least four layers: an input layer, a mode layer, a summation layer and an output layer;
wherein the input layer receives an input vector X = [x_1 x_2 ... x_n] and imports it into the mode layer; the mode layer is composed of Gaussian radial basis functions φ(x), the number of neurons is equal to the number of samples in the training set, and each neuron has a center; a probability value is calculated for each test set sample and output as:
φ_ij(x) = exp(−‖x − x_ij‖² / (2σ²)) / ((2π)^(d/2) · σ^d),  i = 1, …, M,  j = 1, …, L_i
wherein d is the sample dimension; L_i is the number of samples of the i-th class in the training set; M is the number of classes; σ is the smoothing factor; ‖x − x_ij‖ is the distance from the input sample to the sample center;
the summation layer contains one neuron per class; the outputs of the hidden neurons belonging to the same class in the mode layer are averaged, and the probability density function f_i of the i-th class is obtained by kernel density estimation as follows:
f_i(x) = (1 / L_i) · Σ_{j=1}^{L_i} φ_ij(x)
the class with the highest probability is obtained in the output layer, as shown in the following formula: y = argmax_j(f_j);
a weight is added between the hidden layer and the summation layer, obtained by using the time point of each training sample and the posterior probability P(t | class) related to the sample class as the weight between the mode layer and the summation layer of the PNN, as follows:
P(t_k | class_i) = n(t_k, class_i) / n(class_i)
wherein t_k is the k-th time point; class_i is the i-th class; n(t_k, class_i) is the number of class-i training samples recorded at time point t_k; n(class_i) is the total number of class-i training samples;
introducing the posterior probability P(t_k | class_i) as the weight w_ij of each probability output, the improved summation-layer output is calculated as follows:
f_i(x) = (1 / L_i) · Σ_{j=1}^{L_i} w_ij · φ_ij(x)
wherein w_ij is the weight coefficient of φ_ij(x); after the PNN algorithm is thus improved, the first test set is detected and the corresponding corrected first and second active power data deviation degrees are obtained.
4. A data correction device for an electric power Internet of things meter, comprising a memory, a processor and a computer program stored in the memory and operable on the processor, characterized in that: the processor, when executing the computer program, implements the steps of the method according to any one of claims 1-2.
5. A computer-readable storage medium storing a computer program, characterized in that: the computer program, when executed by a processor, implements the steps of the method according to any one of claims 1-2.
CN202011410515.0A 2020-12-04 2020-12-04 Data correction method, system and storage medium of electric power Internet of things meter Active CN112580700B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011410515.0A CN112580700B (en) 2020-12-04 2020-12-04 Data correction method, system and storage medium of electric power Internet of things meter


Publications (2)

Publication Number Publication Date
CN112580700A CN112580700A (en) 2021-03-30
CN112580700B true CN112580700B (en) 2021-07-30

Family

ID=75128276

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011410515.0A Active CN112580700B (en) 2020-12-04 2020-12-04 Data correction method, system and storage medium of electric power Internet of things meter

Country Status (1)

Country Link
CN (1) CN112580700B (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102570392A (en) * 2012-01-17 2012-07-11 上海电力学院 Method for identifying exciting inrush current of transformer based on improved probability neural network
CN105701543A (en) * 2016-01-13 2016-06-22 济南大学 Traditional transformer state monitoring method based on probabilistic neural network
CN107563069A (en) * 2017-09-06 2018-01-09 国电联合动力技术有限公司 A kind of wind power generating set intelligent fault diagnosis method

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9159137B2 (en) * 2013-10-14 2015-10-13 National Taipei University Of Technology Probabilistic neural network based moving object detection method and an apparatus using the same


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Data Correction of the Meter of Power IOT Based on PNN+Bagging and ELM; Kai Chen; International Core Journal of Engineering; 2020-07-01; Vol. 6, No. 7; pp. 185-194 *
Human health monitoring method based on an improved PNN model; Meng Liu, Zhou Jinzhi; Process Automation Instrumentation (自动化仪表); 2017-11-30; Vol. 38, No. 11; pp. 1-4, 8 *
Intelligent diagnosis of gearbox faults by improved fireworks algorithm and probabilistic neural network; Chen Ruqing et al.; Transactions of the Chinese Society of Agricultural Engineering (农业工程学报); 2018-09-30; Vol. 34, No. 17; pp. 192-197 *

Also Published As

Publication number Publication date
CN112580700A (en) 2021-03-30

Similar Documents

Publication Publication Date Title
CN107506868B (en) Method and device for predicting short-time power load
US20220128614A1 (en) Partial discharge determination apparatus and partial discharge determination method
CN109633448B (en) Method and device for identifying battery health state and terminal equipment
CN116167010B (en) Rapid identification method for abnormal events of power system with intelligent transfer learning capability
CN112990330A (en) User energy abnormal data detection method and device
CN114705990A (en) Battery cluster state of charge estimation method and system, electronic equipment and storage medium
CN110955862B (en) Evaluation method and device for equipment model trend similarity
CN112485652A (en) Analog circuit single fault diagnosis method based on improved sine and cosine algorithm
CN110324178B (en) Network intrusion detection method based on multi-experience nuclear learning
CN115186012A (en) Power consumption data detection method, device, equipment and storage medium
CN111522736A (en) Software defect prediction method and device, electronic equipment and computer storage medium
CN114841253A (en) Electricity stealing detection method and device, storage medium and electronic equipment
CN112417734B (en) Wind speed correction method and device based on geographic information of wind farm
CN112580700B (en) Data correction method, system and storage medium of electric power Internet of things meter
CN114202174A (en) Electricity price risk grade early warning method and device and storage medium
CN117310389A (en) CNN-based power distribution network fault positioning method and device and related equipment
CN117272145A (en) Health state evaluation method and device of switch machine and electronic equipment
CN117330963A (en) Energy storage power station fault detection method, system and equipment
CN111723010A (en) Software BUG classification method based on sparse cost matrix
CN114298265A (en) Radio frequency circuit diagnosis model training and diagnosis method, device, medium and terminal
CN111985524A (en) Improved low-voltage transformer area line loss calculation method
CN111597934A (en) System and method for processing training data for statistical applications
CN112836819B (en) Neural network model generation method and device
CN116956197B (en) Deep learning-based energy facility fault prediction method and device and electronic equipment
CN111814108B (en) Connection type intermittent fault diagnosis method based on self-organizing neural network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant