CN113804997B - Voltage sag source positioning method based on bidirectional WaveNet deep learning - Google Patents

Voltage sag source positioning method based on bidirectional WaveNet deep learning

Info

Publication number
CN113804997B
Authority
CN
China
Prior art keywords: data, matrix, bidirectional, wavenet, voltage
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110965856.2A
Other languages
Chinese (zh)
Other versions
CN113804997A (en)
Inventor
邓亚平
王璐
林邵杰
贾颢
同向前
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xian University of Technology
Original Assignee
Xian University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xian University of Technology filed Critical Xian University of Technology
Priority to CN202110965856.2A priority Critical patent/CN113804997B/en
Publication of CN113804997A publication Critical patent/CN113804997A/en
Application granted granted Critical
Publication of CN113804997B publication Critical patent/CN113804997B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01RMEASURING ELECTRIC VARIABLES; MEASURING MAGNETIC VARIABLES
    • G01R31/00Arrangements for testing electric properties; Arrangements for locating electric faults; Arrangements for electrical testing characterised by what is being tested not provided for elsewhere
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/048Activation functions
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/084Backpropagation, e.g. using gradient descent

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computational Linguistics (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Supply And Distribution Of Alternating Current (AREA)

Abstract

The invention discloses a voltage sag source positioning method based on bidirectional WaveNet deep learning, which comprises the following steps: first, the three-phase voltage amplitudes of observation nodes are sampled with power quality monitoring equipment, the data are preprocessed, and a training data set and a test data set are formed; next, an overall bidirectional WaveNet model structure is built and the model is trained; finally, the actually monitored voltage amplitude data are preprocessed into voltage amplitude matrix data, the matrix data are input into the model, and the output data are the required voltage sag source positioning result. By adopting the bidirectional WaveNet model, the analysis of time-dependent information is converted from implicit state transfer into the extraction of inter-data connection features within a time sequence, so that temporal relationships spanning longer stretches of data are extracted and used to analyze the voltage sag source positioning problem.

Description

Voltage sag source positioning method based on bidirectional WaveNet deep learning
Technical Field
The invention belongs to the technical field of electric energy quality analysis, and particularly relates to a voltage sag source positioning method based on bidirectional WaveNet deep learning.
Background
Statistics show that more than 70% of power quality problems are caused by voltage sags. Voltage sags not only cause huge economic losses to stakeholders but can also have a great social impact. Accurately tracing the source of a voltage sag is therefore important for assigning responsibility and accelerating the marketization of electric power. However, most existing voltage sag source positioning methods use feeder-unit alarm information as their data resource and locate the source with a matrix algorithm; such methods are simple and easy to implement, but their positioning accuracy is low and their fault tolerance is poor. On this basis, combining the matrix algorithm with an optimization algorithm alleviates the poor fault tolerance, but requires complex and cumbersome rules to be established manually. Faced with increasingly complex grid structures, in which the electrical characteristic quantities change markedly after a voltage sag occurs, the existing methods therefore struggle to achieve accurate voltage sag source positioning. In fact, most voltage sags are caused by faults, and voltage sag source positioning can be achieved by means of the power quality monitoring equipment already widely installed in the power grid, using the voltage information acquired by that equipment.
Disclosure of Invention
The invention aims to provide a voltage sag source positioning method based on bidirectional WaveNet deep learning, which solves the problem of low voltage sag source positioning accuracy in the prior art.
The technical scheme adopted by the invention is that the voltage sag source positioning method based on bidirectional WaveNet deep learning is implemented according to the following steps:
step 1: sampling the three-phase voltage amplitudes of the observation nodes with power quality monitoring equipment already installed in the power grid, preprocessing the collected data to obtain preprocessed monitoring voltage data, and forming a training data set and a test data set;
step 2: constructing an overall bidirectional WaveNet model structure;
step 3: training the model of step 2 with the training data set of step 1;
step 4: using the actually monitored voltage amplitude data, preprocessing it as in step 1 to obtain voltage amplitude matrix data, inputting the voltage matrix data into the model obtained in step 3, and outputting data which is the required voltage sag source positioning result.
The present invention is also characterized in that,
in step 1, specifically:
step 1.1, acquiring the three-phase voltage amplitude data of each monitoring node with the power quality monitoring equipment already present in the power grid; sampling the voltage amplitude of the selected nodes to obtain voltage amplitude sampling data, and normalizing the data according to formula (1) to obtain the preprocessed data:
x* = (x - x_min) / (x_max - x_min)    (1)
where x* is the normalized data output, x is the original data, x_max is the maximum value in the input sample data, and x_min is the minimum value in the input sample data;
step 1.2, converting the preprocessed data of step 1.1 into matrix form; the matrix is three-dimensional, of the form [number of samples, time steps, preprocessed voltage amplitudes];
step 1.3, manually labelling the preprocessed data of step 1.2 by entering the corresponding line number at each sampling point, thereby forming label data, and converting the label data into a label data matrix of the form [number of samples, time steps, output nodes, line number]; dividing the data matrix processed in step 1.2 and the label data matrix so that 80% is assigned to the training data set and 20% to the test data set.
In step 2, the overall bidirectional WaveNet model structure can be divided into three parts, namely an input structure, an implicit structure and an output structure: the first part is the input structure and comprises only one input layer; the second part is the implicit structure and comprises a plurality of bidirectional WaveNet structures; and the third part is the output structure, in which the forward and reverse output data are stacked to obtain bidirectional implicit output data in forward time-series order.
Each bidirectional WaveNet structure comprises a forward channel and a backward channel; the forward channel section extracts the history-dependence information of the current input time series using a causal convolutional layer and dilated causal convolutional layers; the backward channel section extracts the dependence information of the current input time series using the same causal convolutional layer and dilated causal convolutional layers as the forward channel, and adds a sequence-reversing operation before the causal convolutional layer.
In step 3, a back-propagation algorithm is used to update the parameters and an Adam optimizer is used for training; the loss function is the cross-entropy loss function; the input data is the processed voltage amplitude data matrix, and the output data is the line-number label data matrix.
The method has the advantage that, by adopting the bidirectional WaveNet model, the analysis of time-dependent information is converted from implicit state transfer into the extraction of inter-data connection features within a time sequence, so that temporal relationships spanning longer stretches of data are extracted and used to analyze the voltage sag source positioning problem; because the method is data-driven, it avoids analysis and processing based on manually established empirical rules.
Drawings
FIG. 1 is a flow chart of a voltage sag source positioning method based on bidirectional WaveNet deep learning;
FIG. 2 is a block diagram of a causal convolution layer (causal convolutional layer) in the voltage sag source localization method based on bi-directional WaveNet deep learning of the present invention;
FIG. 3 is a block diagram of the dilated causal convolution layer (dilated causal convolutional layer) in the voltage sag source positioning method based on bidirectional WaveNet deep learning of the present invention;
FIG. 4 is a block diagram of the bidirectional WaveNet in the voltage sag source positioning method based on the bidirectional WaveNet deep learning of the invention;
FIG. 5 is a schematic diagram of skip-connection in the voltage sag source positioning method based on bi-directional WaveNet deep learning of the present invention;
FIG. 6 is a chart of the accuracy over training iterations in the voltage sag source positioning method based on bidirectional WaveNet deep learning.
Detailed Description
The invention is described in detail below with reference to the drawings and specific embodiments.
The voltage sag source positioning method based on bidirectional WaveNet deep learning, as shown in figure 1, is implemented specifically according to the following steps:
step 1: sampling the three-phase voltage amplitudes of the observation nodes with the power quality monitoring equipment already installed in the power grid to obtain training and test data for the model, preprocessing the collected data to obtain preprocessed monitoring voltage data, and forming a training data set and a test data set; the specific process is as follows:
step 1.1, acquiring the three-phase voltage amplitude data of each monitoring node with the power quality monitoring equipment already present in the power grid. The voltage amplitude of the selected nodes is sampled to obtain voltage amplitude sampling data, and the data is normalized according to formula (1) to obtain the preprocessed data:
x* = (x - x_min) / (x_max - x_min)    (1)
where x* is the normalized data output, x is the original data, x_max is the maximum value in the input sample data, and x_min is the minimum value in the input sample data.
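As an illustration only, the min-max normalization of formula (1) can be sketched in a few lines of Python; the sample values below are placeholders, not measured data.

```python
import numpy as np

def min_max_normalize(x):
    """Formula (1): x* = (x - x_min) / (x_max - x_min)."""
    x = np.asarray(x, dtype=np.float32)
    x_min, x_max = x.min(), x.max()
    return (x - x_min) / (x_max - x_min)

# Placeholder per-unit voltage amplitude samples, purely illustrative
raw = np.array([1.00, 0.95, 0.41, 0.38, 0.97])
print(min_max_normalize(raw))  # values scaled into the range [0, 1]
```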
the monitoring nodes can be less than or equal to the node quantity of the required output of the model, so that the sparsification monitoring of the model is realized.
Step 1.2, converting the preprocessed data in the step 1.1 into a matrix form, wherein the matrix size is a three-dimensional matrix, and the matrix form is [ sample number, time step and preprocessed voltage amplitude ];
wherein, the sample number is the total sample number of the current data; the time steps correspond to sampling points, each time step corresponding to a sampling point position representing the number of sampling points contained in the current input data sequence. The preprocessed voltage amplitude is the data corresponding to the three-phase voltage amplitude of the selected monitoring node. For example, when the selected monitoring point is 4, the voltage value after preprocessing is 4×3=12-dimensional data.
Step 1.3, manually marking the preprocessed data in the step 1.2, inputting a corresponding line number under each sampling point, further forming tag data, and converting the tag data into a tag data matrix in the form of [ sample number, time step, output node and line number ]; dividing the data matrix processed in the step 1.2 and the label data matrix, wherein 80% of the data matrix is divided into training data sets, and 20% of the data matrix is divided into test data sets;
step 2, constructing the overall bidirectional WaveNet model structure, in which a causal convolutional layer and dilated causal convolutional layers are used to extract and describe the time-dependence relationships of the data. Because the model adopts a bidirectional network structure, the extraction of the time dependence on historical information and the extraction of the time dependence on future information can be realized simultaneously.
The overall bidirectional WaveNet model structure can be divided into three parts, namely an input structure, an implicit structure and an output structure. The first part is the input structure and comprises only one input layer. The second part is the implicit structure and comprises a plurality of bidirectional WaveNet structures; each bidirectional WaveNet structure comprises a forward channel and a backward channel, used respectively to extract the historical time-dependence information and the future time-dependence information of the current sequence data, so that when a given sampling point is analyzed both the historical and the future dependence are taken into account, improving the positioning accuracy for the voltage sag source.
The forward channel section extracts the history-dependence information of the current input time series using a causal convolutional layer and dilated causal convolutional layers, and deduces the output data at the current moment from the input data at the current moment and the input data preceding it.
The backward channel section extracts the dependence information of the current input time series using the same causal convolutional layer and dilated causal convolutional layers as the forward channel, and adds a sequence-reversing operation before the dilated causal convolutional layers to flip the time-series data along the time dimension, from {x_1, x_2, …, x_n} (where n is the time step) to {x_n, …, x_2, x_1}, so that the model can obtain the reverse time dependence of the data, i.e. the dependence of the data at the current moment on future moments. The output of the backward channel is an implicit output in backward time-series order, which describes the time dependence of the current data on future data, {h_bn, …, h_b2, h_b1}. Reversing this output again yields an implicit output in forward time-series order, in which the current implicit output contains the time dependence on future moments, {h_b1, h_b2, …, h_bn}.
The third part is the output structure: the forward and reversed-backward output data are stacked to obtain bidirectional implicit output data in forward time-series order, which comprehensively describes, for the time-series data, both the time dependence on historical data and the time dependence on future data. When the current data is analyzed, a bidirectional reference is therefore available and accurate positioning can be obtained; the comprehensive implicit output combines the forward and backward implicit outputs at each time step.
After the comprehensive implicit output data passes through a ReLU activation function, it is input into a 1D convolutional layer to extract the comprehensive implicit time dependence; the implicit output is then linearly transformed again by a 1D convolutional layer whose convolution kernel size is 1 × 1 and whose number of kernels is output × class. The output data size is [batch size, time steps, output × class], and this matrix is converted into a 4-dimensional matrix [batch size, time steps, outputs, class], which represents, at each time step, the probability activation values of the line numbers on which the voltage sag source may lie for all target output nodes; outputs is the number of output nodes and class indexes the probability activation values of the line numbers on which each node's voltage sag source may lie. A Softmax function is then used to obtain the line number on which the voltage sag source may lie.
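A sketch of this output stage with the Keras functional API is given below; the argument names are illustrative, and the default kernel counts follow the values given in the embodiment later in the description (512 for the first 1 × 1 convolution, 1 output node × 34 lines for the second).

```python
import tensorflow as tf

def output_head(h, time_steps=1000, n_outputs=1, n_classes=34, hidden_filters=512):
    """ReLU -> 1x1 Conv1D -> 1x1 Conv1D with n_outputs * n_classes kernels ->
    reshape to [batch, time steps, output nodes, classes] -> Softmax over line numbers."""
    h = tf.keras.layers.Activation('relu')(h)
    h = tf.keras.layers.Conv1D(hidden_filters, 1)(h)            # extract the combined time dependence
    h = tf.keras.layers.Conv1D(n_outputs * n_classes, 1)(h)     # linear transform to output x class channels
    h = tf.keras.layers.Reshape((time_steps, n_outputs, n_classes))(h)
    return tf.keras.layers.Softmax(axis=-1)(h)                  # probabilities over candidate line numbers
```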
Step 3: training the model in the step 2 by using the training data set in the step 1, updating parameters by using a back propagation algorithm, and training by using an Adam optimizer, wherein the loss function is a cross soil moisture loss function; the input data is a processed observation node voltage amplitude data matrix, and the output data is a line number label data matrix; different super parameters are used for adjusting the model and observing the output accuracy of the model in training, and when the cross soil moisture loss function and the model accuracy are not changed no matter how the model parameters are adjusted, the optimal model can be considered to be obtained at the moment;
the number of layers of the convolutional network, the number of neurons, the activation function type and the learning rate can be adjusted;
step 4, using the actually monitored voltage amplitude data and obtaining voltage amplitude matrix data after the preprocessing of step 1.2, in the format [number of samples, time steps, preprocessed voltage amplitudes]; inputting the voltage matrix data into the model obtained in step 3, the obtained output data is the required voltage sag source positioning result.
Examples
The invention discloses a voltage sag source positioning method based on a bidirectional WaveNet deep learning model, which specifically comprises the following steps:
step 1: based on the IEEE 39-node network model, Matlab/Simulink simulation software is used for modeling to generate simulation data; there are 34 lines on which a common voltage sag source may be located. The invention selects 4 observation nodes (bus 3, bus 8, bus 24 and bus 38), thereby realizing sparse monitoring of the model: the line on which the voltage sag source may lie is located by analyzing the voltage amplitudes of the 4 selected observation nodes. The voltage amplitude of the selected nodes is sampled; after the voltage amplitude sampling data is obtained, the data is normalized to obtain the preprocessed data, which is then converted into matrix form as a three-dimensional matrix. The total amount of sampling data used in the invention is 30000 samples; each sample contains 1000 time steps, and each time step contains 4 (number of observation nodes) × 3 (three phases) = 12-dimensional signal data. The voltage signal matrix data of the observation nodes is thus obtained, the matrix being of the form [30000, 1000, 12];
step 2: the overall bidirectional WaveNet model structure can be divided into three parts, namely an input structure, an implicit structure and an output structure.
The first part is an input structure part and comprises only one input layer.
The second part is an implicit structure part, and comprises a plurality of bidirectional WaveNet structures, as shown in fig. 4, each bidirectional WaveNet structure comprises a forward channel and a backward channel, and is used for respectively extracting historical time dependent information of current sequence data and extracting future time dependent information of the current sequence data. Therefore, when a certain sampling point is analyzed, the relation between the historical dependence and the future dependence is considered, and the positioning accuracy of the voltage sag source is improved.
The forward channel section mainly uses a causal convolutional layer and dilated causal convolutional layers, shown in FIG. 2 and FIG. 3 respectively, to extract the history-dependence information of the current input time series, and deduces the output data at the current moment from the input data at the current moment and the input data preceding it. For the dilated causal convolutional layers, dilation coefficients of 2^k are used and the layers are stacked, so that the receptive field of the model grows rapidly and the model can take the whole time sequence into account. The receptive field increases with the number of layers: with a dilation coefficient of 2^k, where k is the current layer number, the receptive field is n = 2^k, and n must be greater than or equal to the time step. In the present invention the time step is 1000, so k is chosen as 10, i.e. n = 1024. The output data of the dilated causal convolutional layers in the forward channel is passed through a sigmoid activation function and a tanh activation function respectively and the two results are multiplied, acting as a gating activation unit that outputs the screened data. This data then undergoes a 1 × 1 convolution operation to obtain the implicit output, which is sent into the final output structure through the skip-connection operation shown in FIG. 5 and serves as the forward implicit output. Meanwhile, the implicit output and the input data are added to obtain residual data, which serves as the output data of this layer and is input into the next forward residual layer. Both the residual channel and the skip connection improve the training efficiency of the model and its final training accuracy.
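The forward channel described above might be sketched as follows with Keras; the kernel size of 2, the initial causal convolution, and the summing of the skip outputs are assumptions, while the 128 filters, the 2^k dilation schedule and the gating/skip/residual wiring follow the text.

```python
import tensorflow as tf

def forward_wavenet_layer(x, filters=128, kernel_size=2, dilation_rate=1):
    """Dilated causal convolution -> gated activation (tanh x sigmoid) -> 1x1 convolution.
    Returns the residual output (input + implicit output) and the skip output."""
    conv = tf.keras.layers.Conv1D(filters, kernel_size, padding='causal',
                                  dilation_rate=dilation_rate)(x)
    gated = tf.keras.layers.Multiply()([
        tf.keras.layers.Activation('tanh')(conv),
        tf.keras.layers.Activation('sigmoid')(conv),
    ])
    skip = tf.keras.layers.Conv1D(filters, 1)(gated)   # implicit output, sent on via skip connection
    residual = tf.keras.layers.Add()([x, skip])        # implicit output + layer input -> next layer
    return residual, skip

def forward_channel(x, n_layers=10, filters=128):
    """Causal convolution followed by a stack of dilated layers with dilation 2^k,
    so the receptive field (2^10 = 1024) covers the 1000-step sequence."""
    x = tf.keras.layers.Conv1D(filters, 2, padding='causal')(x)
    skips = []
    for k in range(n_layers):
        x, skip = forward_wavenet_layer(x, filters=filters, dilation_rate=2 ** k)
        skips.append(skip)
    return tf.keras.layers.Add()(skips)                # merge the collected skip connections
```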
The backward channel section uses the same causal convolutional layer and dilated causal convolutional layer structure as the forward channel section, but adds a sequence-reversing operation before the causal convolutional layer to flip the time-series data along the time dimension, from {x_1, x_2, …, x_n} (where n is the time step) to {x_n, …, x_2, x_1}, so that the model can obtain the reverse time dependence of the data and capture the dependence of the data at the current moment on future moments. The output of the backward channel is an implicit output in backward time-series order, which describes the time dependence of the current data on future data, {h_bn, …, h_b2, h_b1}. Reversing this output again yields an implicit output in forward time-series order, in which the current implicit output contains the time dependence on future moments, {h_b1, h_b2, …, h_bn}.
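The backward channel can then be obtained by reusing the same structure on a time-reversed copy of the input and flipping its output back to forward order before stacking, roughly as follows (assuming the forward_channel helper sketched above; stacking is done here by concatenation along the feature axis, which is one possible reading of the text):

```python
import tensorflow as tf

def bidirectional_wavenet(x, n_layers=10, filters=128):
    """Forward channel on the original order plus a backward channel fed with the
    time-reversed sequence; the backward output is flipped back to forward time
    order and stacked with the forward output."""
    reverse = tf.keras.layers.Lambda(lambda t: tf.reverse(t, axis=[1]))   # flip the time dimension

    h_forward = forward_channel(x, n_layers, filters)                     # history dependence
    h_backward = reverse(forward_channel(reverse(x), n_layers, filters))  # future dependence, re-reversed

    return tf.keras.layers.Concatenate(axis=-1)([h_forward, h_backward])
```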
The third part is the output structure, which mainly stacks the forward and reversed-backward output data to obtain bidirectional implicit output data in forward time-series order; this data comprehensively describes, for the time-series data, both the time dependence on historical data and the time dependence on future data. When the current data is analyzed, a bidirectional reference is therefore available and accurate positioning can be obtained; the comprehensive implicit output combines the forward and backward implicit outputs at each time step.
After the comprehensive implicit output data passes through a ReLU activation function, it is input into a 1D convolutional layer to extract the comprehensive implicit time dependence. The implicit output is then linearly transformed again by a 1D convolutional layer with a convolution kernel size of 1 × 1 and a number of kernels equal to output × class. In the present invention the number of monitored output lines is 34, so the number of convolution kernels is output × class = 1 × 34 = 34. The output data size is [batch size, time steps, output × class], i.e. [30000, 1000, 34]. This matrix is converted into a 4-dimensional matrix, namely the probability activation values of the line numbers on which the voltage sag source may lie for all target output nodes at each time step, of size [batch size, time steps, outputs, class], i.e. [30000, 1000, 1, 34]. A Softmax function is then applied to the output data to accurately locate the voltage sag source.
Step 3: training the model in the step 2 by using the training data in the step 1, updating parameters by using a back propagation algorithm, and training by using an Adam optimizer, wherein the loss function is a cross soil moisture loss function. The input data is a processed observation node voltage data matrix, and the output data is a tag data matrix. In training, different super parameters are used for adjusting the model to obtain an optimal model. The number of layers of the convolution network, the number of neurons (convolution kernel number), the type of activation function and the learning rate can be adjusted. In the present invention, a 10-layer bidirectional WaveNet structure is used, the number of convolution kernels of each layer dilated casualconvolutional layer is 128, the number of convolution kernels of the first 1×1 convolution layer of the output portion is 512, and the number of convolution kernels of the 1×1 convolution layer of the second output portion is output×class=1×34=34. The output data structure is [ number of samples, time step, output node, line number ]. The learning rate was 0.0001 with no additional learning rate decay settings.
FIG. 6 shows the trend of the accuracy during model training; the model accuracy increases with the number of iterations, and after 500 iterations the accuracy exceeds 98.5%.
Step 4: and (3) preprocessing the real-time voltage data which are actually monitored to obtain voltage matrix data in the format of [ sample number, time step and preprocessed voltage amplitude ]. And (3) sending the voltage matrix data into the obtained optimal model in the step (3), and outputting data which is the required voltage sag source positioning result. The optimal model is the model obtained under the training parameters described in the step 3.

Claims (1)

1. The voltage sag source positioning method based on bidirectional WaveNet deep learning is characterized by comprising the following steps of:
step 1: sampling the three-phase voltage amplitudes of the observation nodes with power quality monitoring equipment already installed in the power grid, preprocessing the collected data to obtain preprocessed monitoring voltage data, and forming a training data set and a test data set; specifically:
step 1.1, acquiring the three-phase voltage amplitude data of each monitoring node with the power quality monitoring equipment already present in the power grid; sampling the voltage amplitude of the selected nodes to obtain voltage amplitude sampling data, and normalizing the data according to formula (1) to obtain the preprocessed data:
x* = (x - x_min) / (x_max - x_min)    (1)
where x* is the normalized data output, x is the original data, x_max is the maximum value in the input sample data, and x_min is the minimum value in the input sample data;
step 1.2, converting the preprocessed data of step 1.1 into matrix form; the matrix is three-dimensional, of the form [number of samples, time steps, preprocessed voltage amplitudes];
step 1.3, manually labelling the preprocessed data of step 1.2 by entering the corresponding line number at each sampling point, thereby forming label data, and converting the label data into a label data matrix of the form [number of samples, time steps, output nodes, line number]; dividing the data matrix processed in step 1.2 and the label data matrix so that 80% is assigned to the training data set and 20% to the test data set;
step 2: constructing an overall bidirectional WaveNet model structure;
the overall bidirectional WaveNet model structure is divided into three parts, namely an input structure, an implicit structure and an output structure: the first part is the input structure and comprises only one input layer; the second part is the implicit structure and comprises a plurality of bidirectional WaveNet structures; the third part is the output structure, in which the forward and reverse output data are stacked to obtain bidirectional implicit output data in forward time-series order;
each bidirectional WaveNet structure comprises a forward channel and a backward channel; the forward channel section extracts the history-dependence information of the current input time series using a causal convolutional layer and dilated causal convolutional layers; the backward channel section extracts the dependence information of the current input time series using the same causal convolutional layer and dilated causal convolutional layers as the forward channel, and adds a sequence-reversing operation before the causal convolutional layer;
step 3: training the model of the step 2 by using the training data set of the step 1;
updating the parameters with a back-propagation algorithm and training with an Adam optimizer, the loss function being the cross-entropy loss function; the input data is the processed voltage amplitude data matrix, and the output data is the line-number label data matrix;
step 4: using the actually monitored voltage amplitude data, preprocessing it as in step 1 to obtain voltage amplitude matrix data, inputting the voltage matrix data into the model obtained in step 3, and outputting data which is the required voltage sag source positioning result.
CN202110965856.2A 2021-08-23 2021-08-23 Voltage sag source positioning method based on bidirectional WaveNet deep learning Active CN113804997B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110965856.2A CN113804997B (en) 2021-08-23 2021-08-23 Voltage sag source positioning method based on bidirectional WaveNet deep learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110965856.2A CN113804997B (en) 2021-08-23 2021-08-23 Voltage sag source positioning method based on bidirectional WaveNet deep learning

Publications (2)

Publication Number Publication Date
CN113804997A CN113804997A (en) 2021-12-17
CN113804997B true CN113804997B (en) 2023-12-26

Family

ID=78893839

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110965856.2A Active CN113804997B (en) 2021-08-23 2021-08-23 Voltage sag source positioning method based on bidirectional WaveNet deep learning

Country Status (1)

Country Link
CN (1) CN113804997B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115469192B (en) * 2022-11-02 2023-04-25 国网信息通信产业集团有限公司 Voltage sag source positioning method and positioning system
CN116298482B (en) * 2023-05-25 2023-08-01 常州满旺半导体科技有限公司 Intelligent early warning system and method for voltage data monitoring

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104155580A (en) * 2014-08-19 2014-11-19 国家电网公司 Voltage sag source positioning method with association analysis and electric power calculation being combined
CN109635928A (en) * 2018-12-06 2019-04-16 华北电力大学 A kind of voltage sag reason recognition methods based on deep learning Model Fusion
CN110070172A (en) * 2019-03-13 2019-07-30 西安理工大学 The method for building up of sequential forecasting models based on two-way independent loops neural network
CN110610030A (en) * 2019-08-19 2019-12-24 南京航空航天大学 Power amplifier behavior modeling method based on WaveNet neural network structure
CN110672905A (en) * 2019-09-16 2020-01-10 东南大学 CNN-based self-supervision voltage sag source identification method
CN110808580A (en) * 2019-10-25 2020-02-18 国网天津市电力公司电力科学研究院 Quick identification method for voltage sag source based on wavelet transformation and extreme learning machine

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10540591B2 (en) * 2017-10-16 2020-01-21 Illumina, Inc. Deep learning-based techniques for pre-training deep convolutional neural networks

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104155580A (en) * 2014-08-19 2014-11-19 国家电网公司 Voltage sag source positioning method with association analysis and electric power calculation being combined
CN109635928A (en) * 2018-12-06 2019-04-16 华北电力大学 A kind of voltage sag reason recognition methods based on deep learning Model Fusion
CN110070172A (en) * 2019-03-13 2019-07-30 西安理工大学 The method for building up of sequential forecasting models based on two-way independent loops neural network
CN110610030A (en) * 2019-08-19 2019-12-24 南京航空航天大学 Power amplifier behavior modeling method based on WaveNet neural network structure
CN110672905A (en) * 2019-09-16 2020-01-10 东南大学 CNN-based self-supervision voltage sag source identification method
CN110808580A (en) * 2019-10-25 2020-02-18 国网天津市电力公司电力科学研究院 Quick identification method for voltage sag source based on wavelet transformation and extreme learning machine

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Voltage sag economic loss assessment model based on deep neural networks; Wang Lu et al.; Electric Power Automation Equipment; Vol. 40, No. 6; full text *

Also Published As

Publication number Publication date
CN113804997A (en) 2021-12-17

Similar Documents

Publication Publication Date Title
CN112149316B (en) Aero-engine residual life prediction method based on improved CNN model
CN109884459B (en) Intelligent online diagnosis and positioning method for winding deformation of power transformer
CN113804997B (en) Voltage sag source positioning method based on bidirectional WaveNet deep learning
CN109635928B (en) Voltage sag reason identification method based on deep learning model fusion
CN112051481B (en) Alternating current-direct current hybrid power grid fault area diagnosis method and system based on LSTM
CN112131783B (en) Power distribution station area big data-based household transformer topology relation identification method
CN100367620C (en) Power network topology error identification method based on mixed state estimation
CN110969194B (en) Cable early fault positioning method based on improved convolutional neural network
CN110726898B (en) Power distribution network fault type identification method
Gai et al. A parameter-optimized DBN using GOA and its application in fault diagnosis of gearbox
CN110070228B (en) BP neural network wind speed prediction method for neuron branch evolution
CN110672905A (en) CNN-based self-supervision voltage sag source identification method
CN113988449A (en) Wind power prediction method based on Transformer model
CN112990553A (en) Wind power ultra-short-term power prediction method using self-attention mechanism and bilinear fusion
CN110991737A (en) Ultra-short-term wind power prediction method based on deep belief network
CN110059737B (en) Distribution transformer connection relation identification method based on integrated deep neural network
CN111612242A (en) Motor state parameter prediction method based on LSTM deep learning model
CN111091141B (en) Photovoltaic backboard fault diagnosis method based on layered Softmax
CN116995670A (en) Photovoltaic power ultra-short-term prediction method based on multi-mode decomposition and multi-branch input
CN113722951B (en) Scatterer three-dimensional finite element grid optimization method based on neural network
CN112016684B (en) Electric power terminal fingerprint identification method of deep parallel flexible transmission network
CN114626957A (en) Voltage sag state evaluation method based on gate control cycle unit deep learning model
CN114545147A (en) Voltage sag source positioning method based on deep learning in consideration of time-varying topology
CN110829434B (en) Method for improving expansibility of deep neural network tidal current model
CN104037756A (en) Electric power system stability evaluation method including complex electric-power device model

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant