CN115496153A - Multi-head self-attention deep convolution embedded clustering wind-light-load combined scene method - Google Patents

Multi-head self-attention deep convolution embedded clustering wind-light-load combined scene method

Info

Publication number
CN115496153A
Authority
CN
China
Prior art keywords
formula
wind
clustering
data
self
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211176681.8A
Other languages
Chinese (zh)
Inventor
刘丽军
胡鑫
陈俊生
徐韩伟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Fuzhou University
Original Assignee
Fuzhou University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Fuzhou University filed Critical Fuzhou University
Priority to CN202211176681.8A priority Critical patent/CN115496153A/en
Publication of CN115496153A publication Critical patent/CN115496153A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F17/00Digital computing or data processing equipment or methods, specially adapted for specific functions
    • G06F17/10Complex mathematical operations
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F17/00Digital computing or data processing equipment or methods, specially adapted for specific functions
    • G06F17/10Complex mathematical operations
    • G06F17/16Matrix or vector computation, e.g. matrix-matrix or matrix-vector multiplication, matrix factorization
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/084Backpropagation, e.g. using gradient descent
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q50/06Energy or water supply

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Data Mining & Analysis (AREA)
  • Software Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Pure & Applied Mathematics (AREA)
  • Business, Economics & Management (AREA)
  • Mathematical Optimization (AREA)
  • Mathematical Analysis (AREA)
  • Computational Mathematics (AREA)
  • Computing Systems (AREA)
  • Algebra (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Economics (AREA)
  • Molecular Biology (AREA)
  • Evolutionary Computation (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Databases & Information Systems (AREA)
  • Biomedical Technology (AREA)
  • Artificial Intelligence (AREA)
  • Marketing (AREA)
  • Public Health (AREA)
  • Water Supply & Treatment (AREA)
  • Human Resources & Organizations (AREA)
  • General Business, Economics & Management (AREA)
  • Primary Health Care (AREA)
  • Strategic Management (AREA)
  • Tourism & Hospitality (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The invention provides a multi-head self-attention deep convolution embedded clustering method for generating wind-solar-load combined scenes, comprising the following steps: step one, optimizing the VMD model parameter combination with a multi-strategy fusion improved slime mould algorithm, and cleaning the wind-solar-load time-series data based on the optimal parameter combination; step two, establishing a convolutional autoencoder based on multi-head self-attention and reconstructing the original time-series signal with a convolutional decoder; step three, obtaining a suitable number of clusters with the elbow method, initializing the cluster centers by applying K-means to the extracted features, adjusting the network structure parameters and updating the clustering results, and taking the cluster center of each scene class, obtained by averaging, as the typical scene of that class, thereby providing a basis for the optimized operation and planning of the power system. The method accurately captures the coupling feature information among the wind, solar and load data and combines the feature extraction process with the clustering process to ensure the representativeness of the embedded-space features, so that wind-solar-load combined scenes can be generated.

Description

Multi-head self-attention deep convolution embedded clustering wind-solar-load combined scene method
Technical Field
The invention relates to the technical field of power grids, in particular to a multi-head self-attention deep convolution embedded clustering wind-solar-load combined scene method.
Background
With the construction of power systems containing a high proportion of renewable energy, the fluctuation and periodicity of wind power, photovoltaic output and load bring challenges to power grid planning, dispatching, operation and related work.
The scene method converts the uncertainty of wind, solar and load into a set of deterministic scenes, laying a sound foundation for optimizing power grid dispatching and planning.
Wind power, photovoltaic generation and electric load vary over time with fixed seasonal or daily periodicity, yet most current scene generation methods cannot fully exploit the information contained in the power data and are limited in capturing the complementarity between wind and photovoltaic generation and the energy coupling among wind, photovoltaic and load.
The existing methods for generating wind-solar-load coupled scenes mainly use clustering to extract and classify the latent feature information in time-series data. Traditional clustering models such as K-means clustering, spectral clustering, hierarchical clustering and Gaussian mixture clustering have been applied to optimizing power grid dispatching and planning, but they cannot accurately extract the latent coupling features among time-series data, and their accuracy degrades on large-scale, high-dimensional data. To improve the clustering accuracy on high-dimensional data, methods such as principal component analysis (PCA) and singular value decomposition are commonly used to reduce the data dimensionality and extract features, after which clustering is performed on the feature information of the low-dimensional space. Other combined scene generation methods based on deep embedded clustering do not consider the distortion of the low-dimensional embedding space caused by the later clustering training, which weakens the latent feature information among the data and affects the clustering accuracy; in addition, the feature information they capture is rather one-sided.
Disclosure of Invention
The invention provides a multi-head self-attention deep convolution embedded clustering wind-solar-load combined scene method which accurately captures the coupling feature information among wind, solar and load data and combines the feature extraction process with the clustering process to ensure the representativeness of the embedded-space features; a deep convolution embedded clustering model improved with multi-head self-attention (DCEC-MS) is established to generate wind-solar-load combined scenes and accurately capture the coupling feature information among the wind, solar and load data.
The invention adopts the following technical scheme.
The method is used for generating wind-solar-load coupled scenes; it accurately captures the coupling feature information among the wind, solar and load data and combines the feature extraction process with the clustering process to ensure the representativeness of the embedded-space features, and comprises the following steps:

Step one, optimize the VMD model parameter combination with a multi-strategy fusion improved slime mould algorithm (SMA), and clean the wind-solar-load time-series data based on the optimal parameter combination, weakening the influence of noise signals on the data feature extraction process;

Step two, establish a convolutional autoencoder based on multi-head self-attention, extract the deep feature information of the processed wind-solar-load data, and reconstruct the original time-series signal with a convolutional decoder;

Step three, obtain a suitable number of clusters with the elbow method and initialize the cluster centers by applying K-means to the extracted features; then, based on a joint loss function formed by the sum of the encoder reconstruction loss and the clustering loss, adjust the network structure parameters and update the clustering results, and take the cluster center of each scene class, obtained by averaging, as the typical scene of that class, thereby providing a basis for the optimized operation and planning of the power system.

In step one, abnormal data detection and cleaning are carried out on the historical wind-solar-load data, where the data are one year of wind-solar-load time-series data f(t); taking a day as the unit, each sample comprises the wind power, photovoltaic output and load data at 24 moments. The specific steps are as follows:
S1, the original wind-solar-load data f(t) are decomposed with the VMD model, a nonlinear time-frequency decomposition method, into K intrinsic mode function (IMF) components u_k(t), each with a center frequency, while the sum of the estimated bandwidths of the K components u_k(t) is minimized; the VMD model is expressed as (formula one):

$$\min_{\{u_k\},\{\omega_k\}}\left\{\sum_{k=1}^{K}\left\|\partial_t\left[\left(\delta(t)+\frac{j}{\pi t}\right)*u_k(t)\right]e^{-j\omega_k t}\right\|_2^2\right\}\quad\text{s.t.}\quad\sum_{k=1}^{K}u_k(t)=f(t)$$

where ω_k is the center frequency of the k-th component u_k(t), δ(t) is the unit impulse function, and ∂_t denotes the partial derivative operator. Introducing the Lagrange multiplier λ and the quadratic penalty factor α, formula one is converted into the unconstrained problem (formula two):

$$L\left(\{u_k\},\{\omega_k\},\lambda\right)=\alpha\sum_{k=1}^{K}\left\|\partial_t\left[\left(\delta(t)+\frac{j}{\pi t}\right)*u_k(t)\right]e^{-j\omega_k t}\right\|_2^2+\left\|f(t)-\sum_{k=1}^{K}u_k(t)\right\|_2^2+\left\langle\lambda(t),\,f(t)-\sum_{k=1}^{K}u_k(t)\right\rangle$$

Formula two is solved with the alternating direction method of multipliers by iteratively updating $\hat{u}_k$, ω_k and λ; the iterative expressions are (formulas three, four and five):

$$\hat{u}_k^{\,n+1}(\omega)=\frac{\hat{f}(\omega)-\sum_{i\neq k}\hat{u}_i(\omega)+\hat{\lambda}^{\,n}(\omega)/2}{1+2\alpha\left(\omega-\omega_k^{\,n}\right)^2}$$

$$\omega_k^{\,n+1}=\frac{\int_0^{\infty}\omega\left|\hat{u}_k^{\,n+1}(\omega)\right|^2\mathrm{d}\omega}{\int_0^{\infty}\left|\hat{u}_k^{\,n+1}(\omega)\right|^2\mathrm{d}\omega}$$

$$\hat{\lambda}^{\,n+1}(\omega)=\hat{\lambda}^{\,n}(\omega)+\tau\left(\hat{f}(\omega)-\sum_{k=1}^{K}\hat{u}_k^{\,n+1}(\omega)\right)$$

where n is the iteration number, τ is the update step, and $\hat{u}_k(\omega)$, $\hat{\lambda}(\omega)$ and $\hat{f}(\omega)$ are the Fourier transforms of u_k(t), λ(t) and f(t) respectively.
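To make the ADMM updates of formulas three to five concrete, a minimal NumPy sketch is given below; the parameter values (`alpha`, `tau`, `n_iter`), the initial center frequencies and the simplified handling of the frequency axis are illustrative assumptions, not the exact implementation used in the patent.

```python
import numpy as np

def vmd_sketch(f, K=4, alpha=2000.0, tau=0.1, n_iter=200):
    """Minimal VMD via the ADMM updates of formulas three to five (illustration only)."""
    T = len(f)
    freqs = np.fft.fftfreq(T)                  # normalized frequency axis
    f_hat = np.fft.fft(f)                      # Fourier transform of the signal
    u_hat = np.zeros((K, T), dtype=complex)    # mode spectra
    omega = np.linspace(0.05, 0.45, K)         # initial center frequencies (assumed)
    lam_hat = np.zeros(T, dtype=complex)       # Lagrange multiplier spectrum

    for _ in range(n_iter):
        for k in range(K):
            residual = f_hat - u_hat.sum(axis=0) + u_hat[k] + lam_hat / 2
            u_hat[k] = residual / (1 + 2 * alpha * (freqs - omega[k]) ** 2)      # formula three
            power = np.abs(u_hat[k][: T // 2]) ** 2
            omega[k] = np.sum(freqs[: T // 2] * power) / (np.sum(power) + 1e-12)  # formula four
        lam_hat = lam_hat + tau * (f_hat - u_hat.sum(axis=0))                     # formula five

    u = np.real(np.fft.ifft(u_hat, axis=1))    # modes back in the time domain
    return u, omega

# Example: decompose one synthetic daily series sampled every 15 minutes
u, omega = vmd_sketch(np.sin(np.linspace(0, 8 * np.pi, 96)) + 0.1 * np.random.randn(96), K=3)
```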
S2, the VMD model requires preset values of the parameters K and α: if K is too large, over-decomposition occurs and causes mode mixing; if α is too large, center frequencies are lost. The multi-strategy fusion improved slime mould algorithm therefore searches for the optimal VMD parameter combination based on the KL divergence, which measures the similarity between the intrinsic mode components u_k(t) and the original wind-solar-load data f(t); its mathematical expression is (formula six):
$$R=\sum_{t}p_f(t)\,\ln\frac{p_f(t)}{p_u(t)}$$

where p_f(t) and p_u(t) are the normalized distributions of the original data f(t) and of the eigenmode components u_k(t) respectively. The closer R is to 0, the higher the similarity between the eigenmode components u_k(t) and the original wind-solar-load data f(t); the parameters K and α that minimize R form the optimal parameter combination.
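A possible way to evaluate the index R of formula six for a candidate reconstruction is sketched below; treating the shifted, normalized signals as discrete probability distributions is an assumption made for illustration.

```python
import numpy as np

def kl_index(f, u):
    """KL-divergence-based index R between the original signal f and the
    reconstruction sum(u_k); smaller R means higher similarity (formula six)."""
    recon = u.sum(axis=0)
    # shift to positive values and normalize so both act as probability distributions
    p_f = (f - f.min() + 1e-9); p_f /= p_f.sum()
    p_u = (recon - recon.min() + 1e-9); p_u /= p_u.sum()
    return float(np.sum(p_f * np.log(p_f / p_u)))

# The improved SMA would search (K, alpha) for the smallest R, e.g.:
# R = kl_index(f, vmd_sketch(f, K=K, alpha=alpha)[0])
```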
S3, the slime mould algorithm models the dispersive foraging behaviour of slime mould: when initially approaching food, whether an individual moves towards it is decided by the food concentration. The position-update rule is (formula seven):

$$X(T+1)=\begin{cases}X_b(T)+v_b\left(W\cdot X_A(T)-X_B(T)\right), & r<p\\ v_c\cdot X(T), & r\geq p\end{cases}$$

where X(T) is the position of the slime mould at the T-th iteration; X_b(T) is the best position found so far; W is the weight coefficient; X_A(T) and X_B(T) are slime mould individuals selected at random in the T-th iteration; r is a random number in [0,1]; v_c is a feedback factor whose value decreases linearly from 1 to 0; v_b is a control parameter taking values in [-a, a]; and p is the position-update control parameter.

The mathematical models of p, a and W are (formulas eight, nine and ten):

$$p=\tanh\left|S(i)-DF\right|$$

$$a=\operatorname{arctanh}\left(1-\frac{T}{T_{\max}}\right)$$

$$W=\begin{cases}1+r\cdot\log\left(\dfrac{bF-S(i)}{bF-wF}+1\right), & \text{condition}\\ 1-r\cdot\log\left(\dfrac{bF-S(i)}{bF-wF}+1\right), & \text{others}\end{cases}$$

where S(i) is the fitness value of the i-th slime mould individual; DF is the best fitness value over all iterations; T_max is the maximum number of iterations; bF and wF are the best and worst fitness values in the T-th iteration respectively; and "condition" denotes the individuals whose fitness values rank in the first half.

In the slime mould algorithm, in order to search for high-quality food, part of the population is split off to explore the remaining region; the corresponding position update is (formula eleven):

$$X(T+1)=\operatorname{rand}\cdot(ub-lb)+lb,\qquad \operatorname{rand}<z$$

where rand is a random number; ub and lb are the upper and lower boundaries of the slime mould search region; and z is the proportion of slime mould individuals assigned to explore the remaining region.
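The basic position updates of formulas seven to eleven can be sketched as follows; the minimization convention, the bounds handling and the value z = 0.03 are illustrative assumptions (the adaptive v_c and reverse-learning improvements of step S4 are not included).

```python
import numpy as np

def sma_step(X, fitness, T, T_max, lb, ub, z=0.03):
    """One iteration of the basic slime mould position update (formulas seven to eleven)."""
    n, dim = X.shape
    S = np.array([fitness(x) for x in X])
    order = np.argsort(S)                        # ascending: best first (minimization)
    bF, wF = S[order[0]], S[order[-1]]
    DF, Xb = bF, X[order[0]].copy()
    a = np.arctanh(1 - T / (T_max + 1))          # formula nine
    vc = 1 - T / T_max                           # linearly decreasing feedback factor
    rank = np.argsort(order)                     # rank of each individual (0 = best)
    W = np.where(rank < n // 2,                  # formula ten: first half vs. others
                 1 + np.random.rand(n) * np.log10((bF - S) / (bF - wF + 1e-12) + 1),
                 1 - np.random.rand(n) * np.log10((bF - S) / (bF - wF + 1e-12) + 1))
    X_new = X.copy()
    for i in range(n):
        if np.random.rand() < z:                                  # formula eleven: explore
            X_new[i] = np.random.rand(dim) * (ub - lb) + lb
        else:
            p = np.tanh(abs(S[i] - DF))                           # formula eight
            A, B = X[np.random.randint(n)], X[np.random.randint(n)]
            vb = np.random.uniform(-a, a, dim)
            if np.random.rand() < p:                              # formula seven, first branch
                X_new[i] = Xb + vb * (W[i] * A - B)
            else:                                                 # formula seven, second branch
                X_new[i] = vc * X[i]
    return np.clip(X_new, lb, ub)
```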
S4, the feedback factor v_c of the original algorithm decreases linearly, so the concentration and quality of the food cannot be fed back accurately and in time, which slows down the convergence in the early stage; an adaptively adjustable v_c is therefore introduced, which accelerates the decrease of v_c in the early stage to improve the ability to search for the global optimum and keeps v_c stable in the later iterations to avoid falling into a local optimum. The adaptively adjustable v_c is given by formula twelve (rendered as an equation image in the original publication).

An adaptive opposition-based (reverse) learning mechanism introduces, within the slime mould search region, a vector $\bar{X}_i(T)$ opposite to the position X_i(T) of each slime mould individual, and the fitness values of the two are compared to avoid falling into a local optimum. The opposite position of the i-th slime mould individual at the T-th iteration is (formulas thirteen and fourteen):

$$\bar{X}_i(T)=ub+lb-X_i(T)$$

Based on the adaptive decision, the current fitness value $S\left(\bar{X}_i(T)\right)$ found when the i-th slime mould searches for food is compared with the previous best fitness value $S\left(X_i(T)\right)$ to judge whether the additional exploration of the reverse learning mechanism is adopted, and the position of the next iteration is updated accordingly (formula fifteen):

$$X_i(T+1)=\begin{cases}\bar{X}_i(T), & \text{if } S\left(\bar{X}_i(T)\right)\text{ is better than }S\left(X_i(T)\right)\\ X_i(T+1)\ \text{from formula seven}, & \text{otherwise}\end{cases}$$
S5, the mean absolute error (MAE) is selected as the criterion for measuring the quality of the data processing; it is calculated as (formula sixteen):

$$MAE=\frac{1}{N}\sum_{i=1}^{N}\left|x_i-\hat{x}_i\right|$$

where N is the total number of wind-solar-load data samples, x_i is the original datum and $\hat{x}_i$ is the corresponding processed wind-solar-load datum. The MAE measures the ability of the data-processing algorithm to retain the original characteristics of the data while effectively removing the abnormal values from the wind-solar-load data; the smaller the MAE, the better the data processing.
In step two, each processed data sample is normalized and zero-padded into a tensor of size (9, 9); the convolutional encoder improved with multi-head self-attention reduces the dimensionality of the wind-solar-load data and extracts deep features, while the convolutional decoder reconstructs the original time-series signal.
In the convolutional autoencoder model, the encoding process of a convolutional layer is (formula seventeen):

$$h_{conv}=\sigma\left(X_{conv}*\omega_{conv}+b_{conv}\right)$$

where h_conv is the feature information output by the convolutional layer; σ is the ReLU activation function; X_conv is the wind-solar-load time-series data after data processing; and ω_conv and b_conv are the convolution kernels and the bias of the convolutional layer respectively.

The feature information captured by the convolutional layers from the wind-solar-load time-series data is sent to the multi-head self-attention layer for further feature extraction, and its encoding process follows formula seventeen.

The decoder decodes the encoding result of formula seventeen; the decoding process is (formula eighteen):

$$h_{deconv}=\sigma\left(h_{conv}*\omega_{deconv}+b_{deconv}\right)$$

where h_deconv is the feature information output by the deconvolution layer, and ω_deconv and b_deconv are the convolution kernels and the bias of the deconvolution layer respectively.

The convolutional autoencoder model takes the mean square error (MSE) as the reconstruction loss function; by minimizing the MSE, the network parameters of the encoder and the decoder are continuously optimized (formula nineteen):

$$L_r=\frac{1}{N_d}\sum_{i=1}^{N_d}\left\|x_i-\hat{x}_i\right\|_2^2$$

where L_r is the reconstruction loss function, N_d is the number of days of wind-solar-load data, and x_i and $\hat{x}_i$ denote the i-th input sample and its reconstruction respectively.
In the convolutional encoder improved with multi-head self-attention, multi-head self-attention is an improved form of self-attention: a multi-query scheme is adopted to capture, in parallel, several groups of feature information from different subspaces of the data, and the feature information is then spliced according to the weights. The detailed calculation process is as follows:

Step A1, the input data Y of the multi-head self-attention layer are converted, by linear transformation, into the query matrix Q_a, the key matrix K_a and the value matrix V_a (formula twenty):

$$Q_a=YW^{Q},\qquad K_a=YW^{K},\qquad V_a=YW^{V}$$

where W^Q, W^K and W^V are transformation matrices. Q_a, K_a and V_a are then mapped into θ feature subspaces to obtain the query matrix Q_{aθ}, the key matrix K_{aθ} and the value matrix V_{aθ} of the θ-th subspace (formula twenty-one):

$$Q_{a\theta}=Q_aW_{Q\theta},\qquad K_{a\theta}=K_aW_{K\theta},\qquad V_{a\theta}=V_aW_{V\theta}$$

where W_{Qθ}, W_{Kθ} and W_{Vθ} are the transformation matrices of the θ-th subspace.

Step A2, the self-attention value in each of the θ feature subspaces is calculated with the scaled dot product and the Softmax function (formula twenty-two):

$$head_{\theta}=\operatorname{Softmax}\left(\frac{Q_{a\theta}K_{a\theta}^{T}}{\sqrt{d}}\right)V_{a\theta}$$

where d is the scaling factor and head_θ is the self-attention value in the θ-th feature subspace.

Finally, the self-attention values of the θ feature subspaces are fused (formula twenty-three):

$$M_{multi\text{-}head}=C_{Concat}\left(head_1,head_2,\ldots,head_{\theta}\right)W^{o}$$

where M_multi-head is the fused self-attention value; C_Concat denotes the matrix concatenation operation; and W^o is a parameter matrix.
Step three comprises the following steps:

Step B1, based on the low-dimensional feature information obtained from the encoder, the elbow values for different numbers of clusters are observed with the elbow method to determine the optimal cluster number, and the initial K-means cluster centers are set; then, based on the joint loss function, the Adam optimizer fine-tunes the overall network parameters while guaranteeing the representativeness of the embedded-space features, so as to obtain the optimal clustering result;
The joint loss function is formulated as (formula twenty-four):

$$L=L_r+\gamma L_c$$

where L is the joint loss function; γ is the coefficient controlling the degree of distortion of the embedding space, taken as 0.1; and L_c is the clustering loss function.
The cluster center μ_j is taken as the weight connecting it with the low-dimensional spatial feature Z_i, and each low-dimensional spatial feature Z_i is mapped onto a soft label; at the same time, to increase the matching accuracy between Z_i and μ_j, the Gaussian distribution is used as the ideal target. In this step, the clustering loss describes the KL divergence between the soft-label distribution and the target distribution and measures the similarity between the two. The specific process is expressed as (formulas twenty-five, twenty-six and twenty-seven):

$$q_{ij}=\frac{\left(1+\left\|Z_i-\mu_j\right\|^2\right)^{-1}}{\sum_{j'}\left(1+\left\|Z_i-\mu_{j'}\right\|^2\right)^{-1}}$$

$$b_{ij}=\frac{q_{ij}^{2}/\sum_{i}q_{ij}}{\sum_{j'}\left(q_{ij'}^{2}/\sum_{i}q_{ij'}\right)}$$

$$L_c=\mathrm{KL}\left(B\,\|\,Q\right)=\sum_{i}\sum_{j}b_{ij}\log\frac{b_{ij}}{q_{ij}}$$

where q_ij is the probability that the low-dimensional spatial feature Z_i belongs to the cluster center μ_j, and b_ij is the target (auxiliary) distribution function.
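A PyTorch sketch of the soft assignment, target distribution and clustering loss of formulas twenty-five to twenty-seven follows; the Student's-t style kernel is the form commonly used in deep embedded clustering and is assumed here.

```python
import torch

def soft_assignment(Z, mu):
    """q_ij: probability that feature Z_i belongs to cluster center mu_j (formula twenty-five)."""
    dist2 = torch.cdist(Z, mu) ** 2
    q = 1.0 / (1.0 + dist2)
    return q / q.sum(dim=1, keepdim=True)

def target_distribution(q):
    """b_ij: sharpened auxiliary target distribution (formula twenty-six)."""
    weight = q ** 2 / q.sum(dim=0, keepdim=True)
    return weight / weight.sum(dim=1, keepdim=True)

def clustering_loss(q, b):
    """L_c = KL(B || Q) (formula twenty-seven)."""
    return torch.sum(b * torch.log(b / (q + 1e-12) + 1e-12))

Z = torch.randn(16, 10)            # low-dimensional features from the encoder
mu = torch.randn(4, 10)            # cluster centers (e.g. from the K-means initialization)
q = soft_assignment(Z, mu)
L_c = clustering_loss(q, target_distribution(q).detach())
# joint loss of formula twenty-four: L = L_r + 0.1 * L_c
```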
Step B2, the stacked encoder is adjusted with the KL divergence as the auxiliary clustering objective function, and the parameters are tuned with the Adam optimizer to obtain an encoder structure suitable for scene clustering. Adam combines the advantages of the first-order momentum of stochastic gradient descent (SGD) and the second-order momentum of root mean square propagation (RMSprop); based on the moment estimates of the two, it fully exploits sparse gradients while keeping a per-parameter learning rate, which makes the algorithm more robust to non-stationary problems. The specific calculation process is (formulas twenty-eight to thirty-three):

$$g_{tt}=\nabla_{\phi}L\left(\phi_{tt-1}\right)$$

$$M_{tt}=\beta_1\cdot M_{tt-1}+(1-\beta_1)\cdot g_{tt}$$

$$\eta_{tt}=\beta_2\cdot\eta_{tt-1}+(1-\beta_2)\cdot g_{tt}^{2}$$

$$\hat{M}_{tt}=\frac{M_{tt}}{1-\beta_1^{tt}}$$

$$\hat{\eta}_{tt}=\frac{\eta_{tt}}{1-\beta_2^{tt}}$$

$$\phi_{tt}=\phi_{tt-1}-\frac{\psi\cdot\hat{M}_{tt}}{\sqrt{\hat{\eta}_{tt}}+\epsilon}$$

where tt is the time step; g_tt is the gradient; M_tt is the first-moment estimate of g_tt; φ denotes the model parameters; η_tt is the second-moment estimate of g_tt; $\hat{M}_{tt}$ and $\hat{\eta}_{tt}$ are the corresponding bias-corrected outputs; ψ is the network step (learning rate); β_1 is the exponential decay rate of M_tt, taken as 0.9; β_2 is the exponential decay rate of η_tt, taken as 0.999; and ε is a constant that guarantees the robustness of the algorithm.
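For reference, formulas twenty-eight to thirty-three correspond to the standard Adam update; a minimal NumPy version is sketched below, with the learning rate 0.001 as an illustrative value.

```python
import numpy as np

def adam_update(phi, grad, state, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam step on parameters phi given gradient grad (formulas twenty-eight to thirty-three)."""
    state["t"] += 1
    state["M"] = beta1 * state["M"] + (1 - beta1) * grad          # first moment (formula twenty-nine)
    state["eta"] = beta2 * state["eta"] + (1 - beta2) * grad**2   # second moment (formula thirty)
    M_hat = state["M"] / (1 - beta1 ** state["t"])                # bias corrections
    eta_hat = state["eta"] / (1 - beta2 ** state["t"])
    return phi - lr * M_hat / (np.sqrt(eta_hat) + eps)            # parameter update

state = {"t": 0, "M": np.zeros(3), "eta": np.zeros(3)}
phi = np.array([1.0, -2.0, 0.5])
phi = adam_update(phi, grad=np.array([0.1, -0.3, 0.2]), state=state)
```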
Step B3, the optimal within-year scene clustering result is obtained through the joint training of the encoder and the clustering layer; the clustering result is evaluated with the three evaluation indexes CHI, SC and DBI, and the cluster center of each scene class is obtained by averaging, giving the typical scene of each category.
In the above, DCEC denotes the deep convolutional embedded clustering algorithm, and VMD denotes variational mode decomposition with parameter optimization.
The method accurately captures the coupling feature information among the wind, solar and load data and combines the feature extraction process with the clustering process, ensuring the representativeness of the embedded-space features.
Drawings
The invention is described in further detail below with reference to the following figures and detailed description:
FIG. 1 is a schematic flow diagram of the process of the present invention;
FIG. 2 is a schematic flow chart of the algorithm of the present invention;
FIG. 3 is a schematic diagram of a DCEC-MS network structure based on a multi-head self-attention improved deep convolution embedded clustering model.
Detailed Description
As shown in the figures, the multi-head self-attention deep convolution embedded clustering wind-solar-load combined scene method is used for generating wind-solar-load coupled scenes; it accurately captures the coupling feature information among the wind, solar and load data and combines the feature extraction process with the clustering process to ensure the representativeness of the embedded-space features, and comprises the following steps:

Step one, optimize the VMD model parameter combination with a multi-strategy fusion improved slime mould algorithm (SMA), and clean the wind-solar-load time-series data based on the optimal parameter combination, weakening the influence of noise signals on the data feature extraction process;

Step two, establish a convolutional autoencoder based on multi-head self-attention, extract the deep feature information of the processed wind-solar-load data, and reconstruct the original time-series signal with a convolutional decoder;

Step three, obtain a suitable number of clusters with the elbow method and initialize the cluster centers by applying K-means to the extracted features; then, based on a joint loss function formed by the sum of the encoder reconstruction loss and the clustering loss, adjust the network structure parameters and update the clustering results, and take the cluster center of each scene class, obtained by averaging, as the typical scene of that class, thereby providing a basis for the optimized operation and planning of the power system.

In step one, abnormal data detection and cleaning are carried out on the historical wind-solar-load data, where the data are one year of wind-solar-load time-series data f(t); taking a day as the unit, each sample comprises the wind power, photovoltaic output and load data at 24 moments. The specific steps are as follows:
S1, the original wind-solar-load data f(t) are decomposed with the VMD model, a nonlinear time-frequency decomposition method, into K intrinsic mode function (IMF) components u_k(t), each with a center frequency, while the sum of the estimated bandwidths of the K components u_k(t) is minimized; the VMD model is expressed as (formula one):

$$\min_{\{u_k\},\{\omega_k\}}\left\{\sum_{k=1}^{K}\left\|\partial_t\left[\left(\delta(t)+\frac{j}{\pi t}\right)*u_k(t)\right]e^{-j\omega_k t}\right\|_2^2\right\}\quad\text{s.t.}\quad\sum_{k=1}^{K}u_k(t)=f(t)$$

where ω_k is the center frequency of the k-th component u_k(t), δ(t) is the unit impulse function, and ∂_t denotes the partial derivative operator. Introducing the Lagrange multiplier λ and the quadratic penalty factor α, formula one is converted into the unconstrained problem (formula two):

$$L\left(\{u_k\},\{\omega_k\},\lambda\right)=\alpha\sum_{k=1}^{K}\left\|\partial_t\left[\left(\delta(t)+\frac{j}{\pi t}\right)*u_k(t)\right]e^{-j\omega_k t}\right\|_2^2+\left\|f(t)-\sum_{k=1}^{K}u_k(t)\right\|_2^2+\left\langle\lambda(t),\,f(t)-\sum_{k=1}^{K}u_k(t)\right\rangle$$

Formula two is solved with the alternating direction method of multipliers by iteratively updating $\hat{u}_k$, ω_k and λ; the iterative expressions are (formulas three, four and five):

$$\hat{u}_k^{\,n+1}(\omega)=\frac{\hat{f}(\omega)-\sum_{i\neq k}\hat{u}_i(\omega)+\hat{\lambda}^{\,n}(\omega)/2}{1+2\alpha\left(\omega-\omega_k^{\,n}\right)^2}$$

$$\omega_k^{\,n+1}=\frac{\int_0^{\infty}\omega\left|\hat{u}_k^{\,n+1}(\omega)\right|^2\mathrm{d}\omega}{\int_0^{\infty}\left|\hat{u}_k^{\,n+1}(\omega)\right|^2\mathrm{d}\omega}$$

$$\hat{\lambda}^{\,n+1}(\omega)=\hat{\lambda}^{\,n}(\omega)+\tau\left(\hat{f}(\omega)-\sum_{k=1}^{K}\hat{u}_k^{\,n+1}(\omega)\right)$$

where n is the iteration number, τ is the update step, and $\hat{u}_k(\omega)$, $\hat{\lambda}(\omega)$ and $\hat{f}(\omega)$ are the Fourier transforms of u_k(t), λ(t) and f(t) respectively.
S2, the VMD model requires preset values of the parameters K and α: if K is too large, over-decomposition occurs and causes mode mixing; if α is too large, center frequencies are lost. The multi-strategy fusion improved slime mould algorithm therefore searches for the optimal VMD parameter combination based on the KL divergence, which measures the similarity between the intrinsic mode components u_k(t) and the original wind-solar-load data f(t); its mathematical expression is (formula six):
$$R=\sum_{t}p_f(t)\,\ln\frac{p_f(t)}{p_u(t)}$$

where p_f(t) and p_u(t) are the normalized distributions of the original data f(t) and of the eigenmode components u_k(t) respectively. The closer R is to 0, the higher the similarity between the eigenmode components u_k(t) and the original wind-solar-load data f(t); the parameters K and α that minimize R form the optimal parameter combination.
S3, the slime mould algorithm models the dispersive foraging behaviour of slime mould: when initially approaching food, whether an individual moves towards it is decided by the food concentration. The position-update rule is (formula seven):

$$X(T+1)=\begin{cases}X_b(T)+v_b\left(W\cdot X_A(T)-X_B(T)\right), & r<p\\ v_c\cdot X(T), & r\geq p\end{cases}$$

where X(T) is the position of the slime mould at the T-th iteration; X_b(T) is the best position found so far; W is the weight coefficient; X_A(T) and X_B(T) are slime mould individuals selected at random in the T-th iteration; r is a random number in [0,1]; v_c is a feedback factor whose value decreases linearly from 1 to 0; v_b is a control parameter taking values in [-a, a]; and p is the position-update control parameter.

The mathematical models of p, a and W are (formulas eight, nine and ten):

$$p=\tanh\left|S(i)-DF\right|$$

$$a=\operatorname{arctanh}\left(1-\frac{T}{T_{\max}}\right)$$

$$W=\begin{cases}1+r\cdot\log\left(\dfrac{bF-S(i)}{bF-wF}+1\right), & \text{condition}\\ 1-r\cdot\log\left(\dfrac{bF-S(i)}{bF-wF}+1\right), & \text{others}\end{cases}$$

where S(i) is the fitness value of the i-th slime mould individual; DF is the best fitness value over all iterations; T_max is the maximum number of iterations; bF and wF are the best and worst fitness values in the T-th iteration respectively; and "condition" denotes the individuals whose fitness values rank in the first half.

In the slime mould algorithm, in order to search for high-quality food, part of the population is split off to explore the remaining region; the corresponding position update is (formula eleven):

$$X(T+1)=\operatorname{rand}\cdot(ub-lb)+lb,\qquad \operatorname{rand}<z$$

where rand is a random number; ub and lb are the upper and lower boundaries of the slime mould search region; and z is the proportion of slime mould individuals assigned to explore the remaining region.
S4, the feedback factor v_c of the original algorithm decreases linearly, so the concentration and quality of the food cannot be fed back accurately and in time, which slows down the convergence in the early stage; an adaptively adjustable v_c is therefore introduced, which accelerates the decrease of v_c in the early stage to improve the ability to search for the global optimum and keeps v_c stable in the later iterations to avoid falling into a local optimum. The adaptively adjustable v_c is given by formula twelve (rendered as an equation image in the original publication).

An adaptive opposition-based (reverse) learning mechanism introduces, within the slime mould search region, a vector $\bar{X}_i(T)$ opposite to the position X_i(T) of each slime mould individual, and the fitness values of the two are compared to avoid falling into a local optimum. The opposite position of the i-th slime mould individual at the T-th iteration is (formulas thirteen and fourteen):

$$\bar{X}_i(T)=ub+lb-X_i(T)$$

Based on the adaptive decision, the current fitness value $S\left(\bar{X}_i(T)\right)$ found when the i-th slime mould searches for food is compared with the previous best fitness value $S\left(X_i(T)\right)$ to judge whether the additional exploration of the reverse learning mechanism is adopted, and the position of the next iteration is updated accordingly (formula fifteen):

$$X_i(T+1)=\begin{cases}\bar{X}_i(T), & \text{if } S\left(\bar{X}_i(T)\right)\text{ is better than }S\left(X_i(T)\right)\\ X_i(T+1)\ \text{from formula seven}, & \text{otherwise}\end{cases}$$
S5, the mean absolute error (MAE) is selected as the criterion for measuring the quality of the data processing; it is calculated as (formula sixteen):

$$MAE=\frac{1}{N}\sum_{i=1}^{N}\left|x_i-\hat{x}_i\right|$$

where N is the total number of wind-solar-load data samples, x_i is the original datum and $\hat{x}_i$ is the corresponding processed wind-solar-load datum. The MAE measures the ability of the data-processing algorithm to retain the original characteristics of the data while effectively removing the abnormal values from the wind-solar-load data; the smaller the MAE, the better the data processing.
In step two, each processed data sample is normalized and zero-padded into a tensor of size (9, 9); the convolutional encoder improved with multi-head self-attention reduces the dimensionality of the wind-solar-load data and extracts deep features, while the convolutional decoder reconstructs the original time-series signal.
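One way to carry out the normalization and zero-padding described above is sketched below: a daily sample of 3 × 24 = 72 values is min-max normalized and padded with zeros to a (9, 9) tensor; the trailing-zero padding position is an assumption for illustration.

```python
import numpy as np

def to_9x9(sample_72):
    """Normalize a 72-dimensional daily wind-solar-load sample and pad it to (9, 9)."""
    x = np.asarray(sample_72, dtype=float)
    x = (x - x.min()) / (x.max() - x.min() + 1e-12)   # min-max normalization
    padded = np.zeros(81)
    padded[: x.size] = x                              # 72 values plus 9 trailing zeros
    return padded.reshape(9, 9)

day = np.random.rand(72)     # [wind(24), pv(24), load(24)] for one day
tensor_9x9 = to_9x9(day)
```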
In the convolutional autoencoder model, the encoding process of a convolutional layer is (formula seventeen):

$$h_{conv}=\sigma\left(X_{conv}*\omega_{conv}+b_{conv}\right)$$

where h_conv is the feature information output by the convolutional layer; σ is the ReLU activation function; X_conv is the wind-solar-load time-series data after data processing; and ω_conv and b_conv are the convolution kernels and the bias of the convolutional layer respectively.

The feature information captured by the convolutional layers from the wind-solar-load time-series data is sent to the multi-head self-attention layer for further feature extraction, and its encoding process follows formula seventeen.

The decoder decodes the encoding result of formula seventeen; the decoding process is (formula eighteen):

$$h_{deconv}=\sigma\left(h_{conv}*\omega_{deconv}+b_{deconv}\right)$$

where h_deconv is the feature information output by the deconvolution layer, and ω_deconv and b_deconv are the convolution kernels and the bias of the deconvolution layer respectively.

The convolutional autoencoder model takes the mean square error (MSE) as the reconstruction loss function; by minimizing the MSE, the network parameters of the encoder and the decoder are continuously optimized (formula nineteen):

$$L_r=\frac{1}{N_d}\sum_{i=1}^{N_d}\left\|x_i-\hat{x}_i\right\|_2^2$$

where L_r is the reconstruction loss function, N_d is the number of days of wind-solar-load data, and x_i and $\hat{x}_i$ denote the i-th input sample and its reconstruction respectively.
In the convolutional encoder improved with multi-head self-attention, multi-head self-attention is an improved form of self-attention: a multi-query scheme is adopted to capture, in parallel, several groups of feature information from different subspaces of the data, and the feature information is then spliced according to the weights. The detailed calculation process is as follows:

Step A1, the input data Y of the multi-head self-attention layer are converted, by linear transformation, into the query matrix Q_a, the key matrix K_a and the value matrix V_a (formula twenty):

$$Q_a=YW^{Q},\qquad K_a=YW^{K},\qquad V_a=YW^{V}$$

where W^Q, W^K and W^V are transformation matrices. Q_a, K_a and V_a are then mapped into θ feature subspaces to obtain the query matrix Q_{aθ}, the key matrix K_{aθ} and the value matrix V_{aθ} of the θ-th subspace (formula twenty-one):

$$Q_{a\theta}=Q_aW_{Q\theta},\qquad K_{a\theta}=K_aW_{K\theta},\qquad V_{a\theta}=V_aW_{V\theta}$$

where W_{Qθ}, W_{Kθ} and W_{Vθ} are the transformation matrices of the θ-th subspace.

Step A2, the self-attention value in each of the θ feature subspaces is calculated with the scaled dot product and the Softmax function (formula twenty-two):

$$head_{\theta}=\operatorname{Softmax}\left(\frac{Q_{a\theta}K_{a\theta}^{T}}{\sqrt{d}}\right)V_{a\theta}$$

where d is the scaling factor and head_θ is the self-attention value in the θ-th feature subspace.

Finally, the self-attention values of the θ feature subspaces are fused (formula twenty-three):

$$M_{multi\text{-}head}=C_{Concat}\left(head_1,head_2,\ldots,head_{\theta}\right)W^{o}$$

where M_multi-head is the fused self-attention value; C_Concat denotes the matrix concatenation operation; and W^o is a parameter matrix.
Step three comprises the following steps:

Step B1, based on the low-dimensional feature information obtained from the encoder, the elbow values for different numbers of clusters are observed with the elbow method to determine the optimal cluster number, and the initial K-means cluster centers are set; then, based on the joint loss function, the Adam optimizer fine-tunes the overall network parameters while guaranteeing the representativeness of the embedded-space features, so as to obtain the optimal clustering result;
The joint loss function is formulated as (formula twenty-four):

$$L=L_r+\gamma L_c$$

where L is the joint loss function; γ is the coefficient controlling the degree of distortion of the embedding space, taken as 0.1; and L_c is the clustering loss function.
The cluster center μ_j is taken as the weight connecting it with the low-dimensional spatial feature Z_i, and each low-dimensional spatial feature Z_i is mapped onto a soft label; at the same time, to increase the matching accuracy between Z_i and μ_j, the Gaussian distribution is used as the ideal target. In this step, the clustering loss describes the KL divergence between the soft-label distribution and the target distribution and measures the similarity between the two. The specific process is expressed as (formulas twenty-five, twenty-six and twenty-seven):

$$q_{ij}=\frac{\left(1+\left\|Z_i-\mu_j\right\|^2\right)^{-1}}{\sum_{j'}\left(1+\left\|Z_i-\mu_{j'}\right\|^2\right)^{-1}}$$

$$b_{ij}=\frac{q_{ij}^{2}/\sum_{i}q_{ij}}{\sum_{j'}\left(q_{ij'}^{2}/\sum_{i}q_{ij'}\right)}$$

$$L_c=\mathrm{KL}\left(B\,\|\,Q\right)=\sum_{i}\sum_{j}b_{ij}\log\frac{b_{ij}}{q_{ij}}$$

where q_ij is the probability that the low-dimensional spatial feature Z_i belongs to the cluster center μ_j, and b_ij is the target (auxiliary) distribution function.
Step B2, the stacked encoder is adjusted with the KL divergence as the auxiliary clustering objective function, and the parameters are tuned with the Adam optimizer to obtain an encoder structure suitable for scene clustering. Adam combines the advantages of the first-order momentum of stochastic gradient descent (SGD) and the second-order momentum of root mean square propagation (RMSprop); based on the moment estimates of the two, it fully exploits sparse gradients while keeping a per-parameter learning rate, which makes the algorithm more robust to non-stationary problems. The specific calculation process is (formulas twenty-eight to thirty-three):

$$g_{tt}=\nabla_{\phi}L\left(\phi_{tt-1}\right)$$

$$M_{tt}=\beta_1\cdot M_{tt-1}+(1-\beta_1)\cdot g_{tt}$$

$$\eta_{tt}=\beta_2\cdot\eta_{tt-1}+(1-\beta_2)\cdot g_{tt}^{2}$$

$$\hat{M}_{tt}=\frac{M_{tt}}{1-\beta_1^{tt}}$$

$$\hat{\eta}_{tt}=\frac{\eta_{tt}}{1-\beta_2^{tt}}$$

$$\phi_{tt}=\phi_{tt-1}-\frac{\psi\cdot\hat{M}_{tt}}{\sqrt{\hat{\eta}_{tt}}+\epsilon}$$

where tt is the time step; g_tt is the gradient; M_tt is the first-moment estimate of g_tt; φ denotes the model parameters; η_tt is the second-moment estimate of g_tt; $\hat{M}_{tt}$ and $\hat{\eta}_{tt}$ are the corresponding bias-corrected outputs; ψ is the network step (learning rate); β_1 is the exponential decay rate of M_tt, taken as 0.9; β_2 is the exponential decay rate of η_tt, taken as 0.999; and ε is a constant that guarantees the robustness of the algorithm.
Step B3, the optimal within-year scene clustering result is obtained through the joint training of the encoder and the clustering layer; the clustering result is evaluated with the three evaluation indexes CHI, SC and DBI, and the cluster center of each scene class is obtained by averaging, giving the typical scene of each category.
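Extracting the typical scene of each class by averaging can be sketched as follows; the array shapes and the hypothetical `labels` assignments are assumptions for illustration.

```python
import numpy as np

def typical_scenes(X, labels):
    """Typical scene of each class = mean of the daily samples assigned to that class."""
    return {int(c): X[labels == c].mean(axis=0) for c in np.unique(labels)}

X = np.random.rand(365, 72)                 # one year of daily wind-solar-load samples
labels = np.random.randint(0, 4, size=365)  # final cluster assignments
scenes = typical_scenes(X, labels)          # e.g. scenes[0] is the typical scene of class 0
```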
In the above, DCEC denotes the deep convolutional embedded clustering algorithm, and VMD denotes variational mode decomposition with parameter optimization.
In the embodiment, actual wind-solar-load time-series data with a sampling interval of 1 h are taken as samples for clustering, and the wind-solar-load combined scenes are constructed so that the coupling characteristics of the three are fully considered in power grid dispatching and planning.
The clustering evaluation index results of the different methods are compared in Table 1.
Table 1: Comparison of clustering evaluation index results of different methods (the table is provided as an image in the original publication).
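The three evaluation indexes of Table 1 (CHI, SC and DBI) can be computed with scikit-learn as sketched below; the random features and labels are placeholders.

```python
import numpy as np
from sklearn.metrics import (calinski_harabasz_score,   # CHI: higher is better
                             silhouette_score,          # SC: higher is better
                             davies_bouldin_score)      # DBI: lower is better

Z = np.random.rand(365, 10)                 # low-dimensional embedded features
labels = np.random.randint(0, 4, size=365)  # cluster assignments
chi = calinski_harabasz_score(Z, labels)
sc = silhouette_score(Z, labels)
dbi = davies_bouldin_score(Z, labels)
```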

Claims (4)

1. A multi-head self-attention deep convolution embedded clustering wind-solar-load combined scene method, used for generating wind-solar-load coupled scenes, characterized in that: the method accurately captures the coupling feature information among the wind, solar and load data and combines the feature extraction process with the clustering process to ensure the representativeness of the embedded-space features, and comprises the following steps:

Step one, optimizing the VMD model parameter combination with a multi-strategy fusion improved slime mould algorithm, and cleaning the wind-solar-load time-series data based on the optimal parameter combination, weakening the influence of noise signals on the data feature extraction process;

Step two, establishing a convolutional autoencoder based on multi-head self-attention, extracting the deep feature information of the processed wind-solar-load data, and reconstructing the original time-series signal with a convolutional decoder;

Step three, obtaining a suitable number of clusters with the elbow method and initializing the cluster centers by applying K-means to the extracted features; then, based on a joint loss function formed by the sum of the encoder reconstruction loss and the clustering loss, adjusting the network structure parameters and updating the clustering results, and taking the cluster center of each scene class, obtained by averaging, as the typical scene of that class, thereby providing a basis for the optimized operation and planning of the power system.

2. The multi-head self-attention deep convolution embedded clustering wind-solar-load combined scene method according to claim 1, characterized in that: in step one, abnormal data detection and cleaning are carried out on the historical wind-solar-load data, where the data are one year of wind-solar-load time-series data f(t); taking a day as the unit, each sample comprises the wind power, photovoltaic output and load data at 24 moments; the specific steps are as follows:
S1, the original wind-solar-load data f(t) are decomposed with the VMD model, a nonlinear time-frequency decomposition method, into K intrinsic mode function (IMF) components u_k(t), each with a center frequency, while the sum of the estimated bandwidths of the K components u_k(t) is minimized; the VMD model is expressed as (formula one):

$$\min_{\{u_k\},\{\omega_k\}}\left\{\sum_{k=1}^{K}\left\|\partial_t\left[\left(\delta(t)+\frac{j}{\pi t}\right)*u_k(t)\right]e^{-j\omega_k t}\right\|_2^2\right\}\quad\text{s.t.}\quad\sum_{k=1}^{K}u_k(t)=f(t)$$

where ω_k is the center frequency of the k-th component u_k(t), δ(t) is the unit impulse function, and ∂_t denotes the partial derivative operator. Introducing the Lagrange multiplier λ and the quadratic penalty factor α, formula one is converted into the unconstrained problem (formula two):

$$L\left(\{u_k\},\{\omega_k\},\lambda\right)=\alpha\sum_{k=1}^{K}\left\|\partial_t\left[\left(\delta(t)+\frac{j}{\pi t}\right)*u_k(t)\right]e^{-j\omega_k t}\right\|_2^2+\left\|f(t)-\sum_{k=1}^{K}u_k(t)\right\|_2^2+\left\langle\lambda(t),\,f(t)-\sum_{k=1}^{K}u_k(t)\right\rangle$$

Formula two is solved with the alternating direction method of multipliers by iteratively updating $\hat{u}_k$, ω_k and λ; the iterative expressions are (formulas three, four and five):

$$\hat{u}_k^{\,n+1}(\omega)=\frac{\hat{f}(\omega)-\sum_{i\neq k}\hat{u}_i(\omega)+\hat{\lambda}^{\,n}(\omega)/2}{1+2\alpha\left(\omega-\omega_k^{\,n}\right)^2}$$

$$\omega_k^{\,n+1}=\frac{\int_0^{\infty}\omega\left|\hat{u}_k^{\,n+1}(\omega)\right|^2\mathrm{d}\omega}{\int_0^{\infty}\left|\hat{u}_k^{\,n+1}(\omega)\right|^2\mathrm{d}\omega}$$

$$\hat{\lambda}^{\,n+1}(\omega)=\hat{\lambda}^{\,n}(\omega)+\tau\left(\hat{f}(\omega)-\sum_{k=1}^{K}\hat{u}_k^{\,n+1}(\omega)\right)$$

where n is the iteration number, τ is the update step, and $\hat{u}_k(\omega)$, $\hat{\lambda}(\omega)$ and $\hat{f}(\omega)$ are the Fourier transforms of u_k(t), λ(t) and f(t) respectively;
S2, the VMD model requires preset values of the parameters K and α: if K is too large, over-decomposition occurs and causes mode mixing; if α is too large, center frequencies are lost. The multi-strategy fusion improved slime mould algorithm therefore searches for the optimal VMD parameter combination based on the KL divergence, which measures the similarity between the intrinsic mode components u_k(t) and the original wind-solar-load data f(t); its mathematical expression is (formula six):
$$R=\sum_{t}p_f(t)\,\ln\frac{p_f(t)}{p_u(t)}$$

where p_f(t) and p_u(t) are the normalized distributions of the original data f(t) and of the eigenmode components u_k(t) respectively. The closer R is to 0, the higher the similarity between the eigenmode components u_k(t) and the original wind-solar-load data f(t); the parameters K and α that minimize R form the optimal parameter combination;
S3, the slime mould algorithm models the dispersive foraging behaviour of slime mould: when initially approaching food, whether an individual moves towards it is decided by the food concentration. The position-update rule is (formula seven):

$$X(T+1)=\begin{cases}X_b(T)+v_b\left(W\cdot X_A(T)-X_B(T)\right), & r<p\\ v_c\cdot X(T), & r\geq p\end{cases}$$

where X(T) is the position of the slime mould at the T-th iteration; X_b(T) is the best position found so far; W is the weight coefficient; X_A(T) and X_B(T) are slime mould individuals selected at random in the T-th iteration; r is a random number in [0,1]; v_c is a feedback factor whose value decreases linearly from 1 to 0; v_b is a control parameter taking values in [-a, a]; and p is the position-update control parameter.

The mathematical models of p, a and W are (formulas eight, nine and ten):

$$p=\tanh\left|S(i)-DF\right|$$

$$a=\operatorname{arctanh}\left(1-\frac{T}{T_{\max}}\right)$$

$$W=\begin{cases}1+r\cdot\log\left(\dfrac{bF-S(i)}{bF-wF}+1\right), & \text{condition}\\ 1-r\cdot\log\left(\dfrac{bF-S(i)}{bF-wF}+1\right), & \text{others}\end{cases}$$

where S(i) is the fitness value of the i-th slime mould individual; DF is the best fitness value over all iterations; T_max is the maximum number of iterations; bF and wF are the best and worst fitness values in the T-th iteration respectively; and "condition" denotes the individuals whose fitness values rank in the first half.

In the slime mould algorithm, in order to search for high-quality food, part of the population is split off to explore the remaining region; the corresponding position update is (formula eleven):

$$X(T+1)=\operatorname{rand}\cdot(ub-lb)+lb,\qquad \operatorname{rand}<z$$

where rand is a random number; ub and lb are the upper and lower boundaries of the slime mould search region; and z is the proportion of slime mould individuals assigned to explore the remaining region;
S4, the feedback factor v_c of the original algorithm decreases linearly, so the concentration and quality of the food cannot be fed back accurately and in time, which slows down the convergence in the early stage; an adaptively adjustable v_c is therefore introduced, which accelerates the decrease of v_c in the early stage to improve the ability to search for the global optimum and keeps v_c stable in the later iterations to avoid falling into a local optimum. The adaptively adjustable v_c is given by formula twelve (rendered as an equation image in the original publication).

An adaptive opposition-based (reverse) learning mechanism introduces, within the slime mould search region, a vector $\bar{X}_i(T)$ opposite to the position X_i(T) of each slime mould individual, and the fitness values of the two are compared to avoid falling into a local optimum. The opposite position of the i-th slime mould individual at the T-th iteration is (formulas thirteen and fourteen):

$$\bar{X}_i(T)=ub+lb-X_i(T)$$

Based on the adaptive decision, the current fitness value $S\left(\bar{X}_i(T)\right)$ found when the i-th slime mould searches for food is compared with the previous best fitness value $S\left(X_i(T)\right)$ to judge whether the additional exploration of the reverse learning mechanism is adopted, and the position of the next iteration is updated accordingly (formula fifteen):

$$X_i(T+1)=\begin{cases}\bar{X}_i(T), & \text{if } S\left(\bar{X}_i(T)\right)\text{ is better than }S\left(X_i(T)\right)\\ X_i(T+1)\ \text{from formula seven}, & \text{otherwise}\end{cases}$$
S5, the mean absolute error (MAE) is selected as the criterion for measuring the quality of the data processing; it is calculated as (formula sixteen):

$$MAE=\frac{1}{N}\sum_{i=1}^{N}\left|x_i-\hat{x}_i\right|$$

where N is the total number of wind-solar-load data samples, x_i is the original datum and $\hat{x}_i$ is the corresponding processed wind-solar-load datum. The MAE measures the ability of the data-processing algorithm to retain the original characteristics of the data while effectively removing the abnormal values from the wind-solar-load data; the smaller the MAE, the better the data processing.
3. The multi-head self-attention depth convolution embedded clustering wind-solar-load combined scene method according to claim 2, characterized in that: in the second step, each processed data sample is expanded into (9, 9) size tensors through normalization and 0 complementing operations, dimension reduction is carried out on the wind-solar-load data based on a multi-head self-attention improved convolution encoder, deep features are extracted, and meanwhile, an original time sequence signal is reconstructed by a convolution decoder;
In the convolutional autoencoder model, the encoding process of a convolutional layer is (formula seventeen):

$$h_{conv}=\sigma\left(X_{conv}*\omega_{conv}+b_{conv}\right)$$

where h_conv is the feature information output by the convolutional layer; σ is the ReLU activation function; X_conv is the wind-solar-load time-series data after data processing; and ω_conv and b_conv are the convolution kernels and the bias of the convolutional layer respectively.

The feature information captured by the convolutional layers from the wind-solar-load time-series data is sent to the multi-head self-attention layer for further feature extraction, and its encoding process follows formula seventeen.

The decoder decodes the encoding result of formula seventeen; the decoding process is (formula eighteen):

$$h_{deconv}=\sigma\left(h_{conv}*\omega_{deconv}+b_{deconv}\right)$$

where h_deconv is the feature information output by the deconvolution layer, and ω_deconv and b_deconv are the convolution kernels and the bias of the deconvolution layer respectively.

The convolutional autoencoder model takes the mean square error (MSE) as the reconstruction loss function; by minimizing the MSE, the network parameters of the encoder and the decoder are continuously optimized (formula nineteen):

$$L_r=\frac{1}{N_d}\sum_{i=1}^{N_d}\left\|x_i-\hat{x}_i\right\|_2^2$$

where L_r is the reconstruction loss function, N_d is the number of days of wind-solar-load data, and x_i and $\hat{x}_i$ denote the i-th input sample and its reconstruction respectively.
In the convolutional encoder improved with multi-head self-attention, multi-head self-attention is an improved form of self-attention: a multi-query scheme is adopted to capture, in parallel, several groups of feature information from different subspaces of the data, and the feature information is then spliced according to the weights. The detailed calculation process is as follows:

Step A1, the input data Y of the multi-head self-attention layer are converted, by linear transformation, into the query matrix Q_a, the key matrix K_a and the value matrix V_a (formula twenty):

$$Q_a=YW^{Q},\qquad K_a=YW^{K},\qquad V_a=YW^{V}$$

where W^Q, W^K and W^V are transformation matrices. Q_a, K_a and V_a are then mapped into θ feature subspaces to obtain the query matrix Q_{aθ}, the key matrix K_{aθ} and the value matrix V_{aθ} of the θ-th subspace (formula twenty-one):

$$Q_{a\theta}=Q_aW_{Q\theta},\qquad K_{a\theta}=K_aW_{K\theta},\qquad V_{a\theta}=V_aW_{V\theta}$$

where W_{Qθ}, W_{Kθ} and W_{Vθ} are the transformation matrices of the θ-th subspace.

Step A2, the self-attention value in each of the θ feature subspaces is calculated with the scaled dot product and the Softmax function (formula twenty-two):

$$head_{\theta}=\operatorname{Softmax}\left(\frac{Q_{a\theta}K_{a\theta}^{T}}{\sqrt{d}}\right)V_{a\theta}$$

where d is the scaling factor and head_θ is the self-attention value in the θ-th feature subspace.

Finally, the self-attention values of the θ feature subspaces are fused (formula twenty-three):

$$M_{multi\text{-}head}=C_{Concat}\left(head_1,head_2,\ldots,head_{\theta}\right)W^{o}$$

where M_multi-head is the fused self-attention value; C_Concat denotes the matrix concatenation operation; and W^o is a parameter matrix.
4. The multi-head self-attention depth convolution embedded clustering wind-solar-load combined scene method according to claim 3, characterized in that: the third step comprises the following steps
B1, observing elbow values of different cluster numbers by an elbow method, namely an elbow method, based on low-dimensional characteristic information obtained from an encoder, so as to determine the optimal cluster number, initializing and setting a Kmeans initial clustering center, and finely adjusting the whole network parameters and guaranteeing the representativeness of embedded spatial characteristics by using an Adam optimizer based on a joint loss function, so as to obtain the optimal clustering result;
The joint loss function is formulated as (formula twenty-four):

$$L=L_r+\gamma L_c$$

where L is the joint loss function; γ is the coefficient controlling the degree of distortion of the embedding space, taken as 0.1; and L_c is the clustering loss function.
The cluster center μ_j is taken as the weight connecting it with the low-dimensional spatial feature Z_i, and each low-dimensional spatial feature Z_i is mapped onto a soft label; at the same time, to increase the matching accuracy between Z_i and μ_j, the Gaussian distribution is used as the ideal target. In this step, the clustering loss describes the KL divergence between the soft-label distribution and the target distribution and measures the similarity between the two. The specific process is expressed as (formulas twenty-five, twenty-six and twenty-seven):

$$q_{ij}=\frac{\left(1+\left\|Z_i-\mu_j\right\|^2\right)^{-1}}{\sum_{j'}\left(1+\left\|Z_i-\mu_{j'}\right\|^2\right)^{-1}}$$

$$b_{ij}=\frac{q_{ij}^{2}/\sum_{i}q_{ij}}{\sum_{j'}\left(q_{ij'}^{2}/\sum_{i}q_{ij'}\right)}$$

$$L_c=\mathrm{KL}\left(B\,\|\,Q\right)=\sum_{i}\sum_{j}b_{ij}\log\frac{b_{ij}}{q_{ij}}$$

where q_ij is the probability that the low-dimensional spatial feature Z_i belongs to the cluster center μ_j, and b_ij is the target (auxiliary) distribution function.
Step B2, the stacked encoder is adjusted with the KL divergence as the auxiliary clustering objective function, and the parameters are tuned with the Adam optimizer to obtain an encoder structure suitable for scene clustering. Adam combines the advantages of the first-order momentum of stochastic gradient descent and the second-order momentum of root mean square propagation (RMSprop); based on the moment estimates of the two, it fully exploits sparse gradients while keeping a per-parameter learning rate, which makes the algorithm more robust to non-stationary problems. The specific calculation process is (formulas twenty-eight to thirty-three):

$$g_{tt}=\nabla_{\phi}L\left(\phi_{tt-1}\right)$$

$$M_{tt}=\beta_1\cdot M_{tt-1}+(1-\beta_1)\cdot g_{tt}$$

$$\eta_{tt}=\beta_2\cdot\eta_{tt-1}+(1-\beta_2)\cdot g_{tt}^{2}$$

$$\hat{M}_{tt}=\frac{M_{tt}}{1-\beta_1^{tt}}$$

$$\hat{\eta}_{tt}=\frac{\eta_{tt}}{1-\beta_2^{tt}}$$

$$\phi_{tt}=\phi_{tt-1}-\frac{\psi\cdot\hat{M}_{tt}}{\sqrt{\hat{\eta}_{tt}}+\epsilon}$$

where tt is the time step; g_tt is the gradient; M_tt is the first-moment estimate of g_tt; φ denotes the model parameters; η_tt is the second-moment estimate of g_tt; $\hat{M}_{tt}$ and $\hat{\eta}_{tt}$ are the corresponding bias-corrected outputs; ψ is the network step (learning rate); β_1 is the exponential decay rate of M_tt, taken as 0.9; β_2 is the exponential decay rate of η_tt, taken as 0.999; and ε is a constant that guarantees the robustness of the algorithm;
Step B3, the optimal within-year scene clustering result is obtained through the joint training of the encoder and the clustering layer; the clustering result is evaluated with the three evaluation indexes CHI, SC and DBI, and the cluster center of each scene class is obtained by averaging, giving the typical scene of each category.
CN202211176681.8A 2022-09-26 2022-09-26 Multi-head self-attention deep convolution embedded clustering wind-light-load combined scene method Pending CN115496153A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211176681.8A CN115496153A (en) 2022-09-26 2022-09-26 Multi-head self-attention deep convolution embedded clustering wind-light-load combined scene method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211176681.8A CN115496153A (en) 2022-09-26 2022-09-26 Multi-head self-attention deep convolution embedded clustering wind-light-load combined scene method

Publications (1)

Publication Number Publication Date
CN115496153A true CN115496153A (en) 2022-12-20

Family

ID=84473147

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211176681.8A Pending CN115496153A (en) 2022-09-26 2022-09-26 Multi-head self-attention deep convolution embedded clustering wind-light-load combined scene method

Country Status (1)

Country Link
CN (1) CN115496153A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117951633A (en) * 2024-03-27 2024-04-30 中节能甘肃武威太阳能发电有限公司 Photovoltaic power generation equipment fault diagnosis method and system
CN117951633B (en) * 2024-03-27 2024-06-11 中节能甘肃武威太阳能发电有限公司 Photovoltaic power generation equipment fault diagnosis method and system

Similar Documents

Publication Publication Date Title
CN112365040B (en) Short-term wind power prediction method based on multi-channel convolution neural network and time convolution network
CN110059878B (en) Photovoltaic power generation power prediction model based on CNN LSTM and construction method thereof
CN109165774A (en) A kind of short-term photovoltaic power prediction technique
CN112149879B (en) New energy medium-and-long-term electric quantity prediction method considering macroscopic volatility classification
CN103942749B (en) A kind of based on revising cluster hypothesis and the EO-1 hyperion terrain classification method of semi-supervised very fast learning machine
CN112508244B (en) Multi-element load prediction method for user-level comprehensive energy system
CN114792156A (en) Photovoltaic output power prediction method and system based on curve characteristic index clustering
CN114282646B (en) Optical power prediction method and system based on two-stage feature extraction and BiLSTM improvement
CN113468817A (en) Ultra-short-term wind power prediction method based on IGOA (optimized El-electric field model)
CN112149883A (en) Photovoltaic power prediction method based on FWA-BP neural network
CN114897129A (en) Photovoltaic power station short-term power prediction method based on similar daily clustering and Kmeans-GRA-LSTM
CN111242355A (en) Photovoltaic probability prediction method and system based on Bayesian neural network
CN115099461A (en) Solar radiation prediction method and system based on double-branch feature extraction
CN115659254A (en) Power quality disturbance analysis method for power distribution network with bimodal feature fusion
CN112508246A (en) Photovoltaic power generation power prediction method based on similar days
CN115496153A (en) Multi-head self-attention deep convolution embedded clustering wind-light-load combined scene method
CN117154690A (en) Photovoltaic power generation power prediction method and system based on neural network
CN116345555A (en) CNN-ISCA-LSTM model-based short-term photovoltaic power generation power prediction method
CN115759389A (en) Day-ahead photovoltaic power prediction method based on weather type similar day combination strategy
CN114898136A (en) Small sample image classification method based on feature self-adaption
CN114488069A (en) Radar high-resolution range profile identification method based on graph neural network
CN117458480A (en) Photovoltaic power generation power short-term prediction method and system based on improved LOF
CN111815051B (en) GRNN photovoltaic power generation prediction method considering weather influence factors
CN117439045A (en) Multi-element load prediction method for comprehensive energy system
CN116187540B (en) Wind power station ultra-short-term power prediction method based on space-time deviation correction

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination