CN115496153A - Multi-head self-attention deep convolution embedded clustering wind-light-load combined scene method - Google Patents
Multi-head self-attention deep convolution embedded clustering wind-light-load combined scene method
- Publication number
- CN115496153A (application CN202211176681.8A)
- Authority
- CN
- China
- Prior art keywords
- formula
- wind
- clustering
- data
- self
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G06F17/10 — Digital computing or data processing equipment or methods, specially adapted for specific functions; complex mathematical operations
- G06F17/16 — Matrix or vector computation, e.g. matrix-matrix or matrix-vector multiplication, matrix factorization
- G06N3/02 — Neural networks
- G06N3/08 — Learning methods
- G06N3/084 — Backpropagation, e.g. using gradient descent
- G06Q50/06 — Energy or water supply
Abstract
The invention provides a multi-head self-attention deep convolution embedded clustering method for wind-solar-load combined scenes, comprising the following steps. Step one: optimize the VMD model parameter combination with a multi-strategy-fusion improved slime mould algorithm, and clean the wind-solar-load time-series data with the optimal parameter combination. Step two: establish a multi-head self-attention based convolutional autoencoder and reconstruct the original time-series signal with a convolutional decoder. Step three: determine a suitable cluster number with the elbow method and initialize the cluster centers by K-means on the extracted features; then adjust the network structure parameters and update the clustering results, and take the cluster center of each scene class, obtained by averaging, as the typical scene of that class, providing a basis for optimized operation and planning of the power system. The method accurately captures the coupling feature information among wind, solar and load data and combines the feature extraction process with the clustering process, ensuring the representativeness of the embedded-space features and enabling generation of wind-solar-load combined scenes.
Description
Technical Field
The invention relates to the technical field of power grids, in particular to a multi-head self-attention deep convolution embedded clustering wind-solar-load combined scene method.
Background
With the construction of power systems with a high proportion of renewable energy, the fluctuation and periodicity of wind power, photovoltaic output and load pose challenges to power grid planning, dispatching and operation.
Scenario methods convert the uncertain wind, solar and load conditions into a set of deterministic scenarios, laying a sound foundation for optimizing grid dispatching and planning.
Wind power, photovoltaic generation and electric load vary over time with pronounced seasonal and daily periodicity. Most current scene generation methods cannot fully mine the information value of power data and are limited in capturing the complementary relation between wind and photovoltaic generation and their energy coupling with load.
Existing methods for generating wind-solar-load coupled scenes mainly use clustering to extract and classify latent feature information in time-series data. Traditional clustering models such as K-means clustering, spectral clustering, hierarchical clustering and Gaussian mixture clustering have been applied to optimizing grid dispatching and planning, but they cannot accurately extract the latent coupling features among time-series data, and their accuracy degrades on large-scale high-dimensional data. To improve clustering precision on high-dimensional data, methods such as principal component analysis (PCA) and singular value decomposition are commonly used for dimensionality reduction and feature extraction, followed by clustering in the low-dimensional feature space. Other combined scene generation methods based on deep embedded clustering do not account for the distortion of the low-dimensional embedding space caused by later clustering training, which weakens the latent feature information among the data and harms clustering precision; in addition, the captured feature information tends to be one-sided.
Disclosure of Invention
The invention provides a multi-head self-attention deep convolution embedded clustering method for wind-solar-load combined scenes. It accurately captures the coupling feature information among wind, solar and load data and combines the feature extraction process with the clustering process to ensure the representativeness of the embedded-space features. A deep convolutional embedded clustering model improved with multi-head self-attention (DCEC-MS) is established to generate wind-solar-load combined scenes.
The invention adopts the following technical scheme.
The method is used for generating wind-solar-load coupled scenes. By accurately capturing the coupling feature information among wind, solar and load data, it combines the feature extraction process with the clustering process to ensure the representativeness of the embedded-space features, and comprises the following steps.
Step one: optimize the VMD model parameter combination with the multi-strategy-fusion improved slime mould algorithm (SMA), clean the wind-solar-load time-series data with the optimal parameter combination, and weaken the influence of noise signals on the data feature extraction process.
Step two: establish a multi-head self-attention based convolutional autoencoder, extract deep feature information from the processed wind-solar-load data, and reconstruct the original time-series signal with a convolutional decoder.
Step three: determine a suitable cluster number with the elbow method and initialize the cluster centers by K-means on the extracted features; then, using a joint loss function formed by the sum of the encoder reconstruction loss and the clustering loss, adjust the network structure parameters and update the clustering results, and take the cluster center of each scene class, obtained by the averaging method, as the typical scene of that class, providing a basis for optimized operation and planning of the power system.
In step one, abnormal-data detection and cleaning are performed on historical wind, solar and load data. The data are one year of wind-solar-load time series f(t); each daily sample contains wind power, photovoltaic output and load values at 24 hourly points. The specific steps are as follows:
S1. Decompose the original wind-solar-load data f(t) with the VMD model, a nonlinear time-domain decomposition method, into K intrinsic mode function (IMF) components u_k(t), each with a center frequency, while minimizing the sum of the estimated bandwidths of the K components. The constrained VMD model is:

\min_{\{u_k\},\{\omega_k\}} \Big\{ \sum_{k=1}^{K} \big\| \partial_t \big[ (\delta(t) + \tfrac{j}{\pi t}) * u_k(t) \big] e^{-j\omega_k t} \big\|_2^2 \Big\}, \quad \text{s.t. } \sum_{k=1}^{K} u_k(t) = f(t) \quad (formula one)

In the formula: \omega_k is the center frequency of the k-th component u_k(t); \delta(t) is the unit impulse function; \partial_t is the partial derivative operator. Introducing the Lagrange multiplier \lambda and the quadratic penalty factor \alpha turns formula one into the unconstrained augmented Lagrangian:

L(\{u_k\},\{\omega_k\},\lambda) = \alpha \sum_{k=1}^{K} \big\| \partial_t \big[ (\delta(t) + \tfrac{j}{\pi t}) * u_k(t) \big] e^{-j\omega_k t} \big\|_2^2 + \big\| f(t) - \sum_{k=1}^{K} u_k(t) \big\|_2^2 + \big\langle \lambda(t),\ f(t) - \sum_{k=1}^{K} u_k(t) \big\rangle \quad (formula two)

Formula two is solved with the alternating direction method of multipliers (ADMM), iteratively optimizing \hat{u}_k, \omega_k and \hat{\lambda}:

\hat{u}_k^{n+1}(\omega) = \dfrac{ \hat{f}(\omega) - \sum_{i \neq k} \hat{u}_i(\omega) + \hat{\lambda}(\omega)/2 }{ 1 + 2\alpha (\omega - \omega_k)^2 } \quad (formula three)

\omega_k^{n+1} = \dfrac{ \int_0^{\infty} \omega\, |\hat{u}_k^{n+1}(\omega)|^2\, d\omega }{ \int_0^{\infty} |\hat{u}_k^{n+1}(\omega)|^2\, d\omega } \quad (formula four)

\hat{\lambda}^{n+1}(\omega) = \hat{\lambda}^{n}(\omega) + \tau \big[ \hat{f}(\omega) - \sum_{k=1}^{K} \hat{u}_k^{n+1}(\omega) \big] \quad (formula five)

In the formulas: n is the iteration count; \hat{u}_k(\omega), \hat{\lambda}(\omega) and \hat{f}(\omega) are the Fourier transforms of u_k(t), \lambda(t) and f(t); \tau is the update step of the multiplier.
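The ADMM updates above can be sketched in a few lines of NumPy. This is a minimal illustration under simplifying assumptions (no mirror extension of the signal, simple linear initialization of the center frequencies, one-sided center-frequency centroid), not the patent's implementation; function and parameter names are illustrative.

```python
import numpy as np

def vmd(f, K=3, alpha=2000.0, tau=0.1, n_iter=100, tol=1e-7):
    """Minimal VMD sketch: decompose signal f into K band-limited modes
    u_k(t) by ADMM updates carried out in the frequency domain."""
    T = len(f)
    freqs = np.fft.fftfreq(T)                  # normalized frequency axis
    f_hat = np.fft.fft(f)
    u_hat = np.zeros((K, T), dtype=complex)    # mode spectra \hat{u}_k
    omega = np.linspace(0.0, 0.5, K + 1)[1:]   # initial center frequencies
    lam_hat = np.zeros(T, dtype=complex)       # multiplier \hat{lambda}
    for _ in range(n_iter):
        u_prev = u_hat.copy()
        for k in range(K):
            others = u_hat.sum(axis=0) - u_hat[k]
            # Wiener-filter-like mode update
            u_hat[k] = (f_hat - others + lam_hat / 2) / (1 + 2 * alpha * (freqs - omega[k]) ** 2)
            # center frequency = spectral centroid over the positive half
            half = slice(0, T // 2)
            power = np.abs(u_hat[k, half]) ** 2
            omega[k] = np.sum(freqs[half] * power) / (np.sum(power) + 1e-12)
        # dual ascent on the reconstruction constraint
        lam_hat = lam_hat + tau * (f_hat - u_hat.sum(axis=0))
        if np.sum(np.abs(u_hat - u_prev) ** 2) < tol:
            break
    u = np.real(np.fft.ifft(u_hat, axis=1))    # time-domain modes u_k(t)
    return u, omega

# Toy usage: a signal with two well-separated tones, decomposed into two modes.
t = np.arange(512) / 512
sig = np.cos(2 * np.pi * 4 * t) + 0.5 * np.cos(2 * np.pi * 48 * t)
modes, centers = vmd(sig, K=2, alpha=2000.0)
```

In practice, libraries such as `vmdpy` implement the full variant with mirror extension and two-sided spectra; the sketch above only shows how the three iterative updates interlock.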
S2. The VMD model requires preset values of the parameters K and \alpha. If K is too large, over-decomposition occurs and causes mode mixing; if \alpha is too large, center-frequency information is lost. The multi-strategy-fusion improved slime mould algorithm therefore searches for the optimal VMD parameter combination using the KL divergence, which measures the similarity between the intrinsic mode components u_k(t) and the original wind-solar-load data f(t):

R = \mathrm{KL}(P \,\|\, Q) = \sum_{t} P(t) \ln \dfrac{P(t)}{Q(t)} \quad (formula six)

In the formula: P and Q are the probability distributions of the original data f(t) and of the reconstruction \sum_k u_k(t). The closer R is to 0, the higher the similarity between the intrinsic mode components u_k(t) and the original wind-solar-load data f(t); the parameters K and \alpha that minimize R form the optimal parameter combination.
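The KL-based fitness can be sketched as follows. The histogram binning and epsilon smoothing are illustrative assumptions (the patent does not specify how the empirical distributions are estimated); the function name is hypothetical.

```python
import numpy as np

def kl_fitness(f, u_sum, bins=50):
    """Sketch of the KL-divergence fitness R: compare the empirical
    distributions of the original series f(t) and a VMD reconstruction
    sum_k u_k(t). Smaller R means higher similarity, so the slime mould
    algorithm minimizes R over the parameter pair (K, alpha)."""
    lo = min(f.min(), u_sum.min())
    hi = max(f.max(), u_sum.max())
    p, _ = np.histogram(f, bins=bins, range=(lo, hi))
    q, _ = np.histogram(u_sum, bins=bins, range=(lo, hi))
    eps = 1e-12                      # smoothing so log(p/q) stays finite
    p = p / p.sum() + eps
    q = q / q.sum() + eps
    return float(np.sum(p * np.log(p / q)))

rng = np.random.default_rng(0)
f = rng.normal(size=2000)
r_same = kl_fitness(f, f)            # identical series: R ~ 0
r_far = kl_fitness(f, f + 1.5)       # shifted reconstruction: larger R
```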
S3. The slime mould algorithm models the dispersed foraging behavior of slime mould: when initially approaching food, an individual decides whether to move closer according to the food concentration. The position update is:

X(T+1) = \begin{cases} X_b(T) + v_b \cdot \big( W \cdot X_A(T) - X_B(T) \big), & r < p \\ v_c \cdot X(T), & r \ge p \end{cases} \quad (formula seven)

In the formula: X(T) is the position of the slime mould individual at the T-th iteration; X_b(T) is the best position found so far; W is the weight coefficient; X_A(T) and X_B(T) are slime mould individuals randomly selected at the T-th iteration; r is a random number in [0, 1]; v_c is a feedback factor whose value decreases linearly from 1 to 0; v_b is a control parameter with value range [-a, a]; p is the position update control parameter.
The models of p, a and W are:

p = \tanh \lvert S(i) - DF \rvert \quad (formula eight)

a = \mathrm{arctanh}\big( 1 - \tfrac{T}{T_{max}} \big) \quad (formula nine)

W = \begin{cases} 1 + r \cdot \ln\big( \tfrac{bF - S(i)}{bF - wF} + 1 \big), & \text{condition} \\ 1 - r \cdot \ln\big( \tfrac{bF - S(i)}{bF - wF} + 1 \big), & \text{others} \end{cases} \quad (formula ten)

In the formulas: S(i) is the fitness value of the i-th slime mould individual; DF is the best fitness value over all iterations; T_{max} is the maximum iteration number; bF and wF are the best and worst fitness values at the T-th iteration; "condition" denotes the individuals whose fitness values rank in the better half.
In the slime mould algorithm, in order to search for high-quality food, part of the individuals are split off to explore the remaining area; their position update is:

X(T+1) = rand \cdot (ub - lb) + lb, \quad rand < z \quad (formula eleven)

In the formula: rand is a random number; ub and lb are the upper and lower boundaries of the slime mould exploration area; z is the proportion parameter of slime mould individuals that explore the remaining area.
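One iteration of the baseline slime mould update (position rule, p from formula eight, the weight W, and the random re-seeding) can be sketched as below for a minimization problem. This is a generic SMA sketch, not the patent's multi-strategy improved variant; z = 0.03 and the sphere test function are illustrative assumptions.

```python
import numpy as np

def sma_step(X, S, X_b, DF, T, T_max, ub, lb, z=0.03, rng=None):
    """One slime-mould-algorithm position update (minimization).
    X: (n, dim) positions, S: (n,) fitness values, X_b: best position so
    far, DF: best fitness over all iterations."""
    rng = rng or np.random.default_rng()
    n, dim = X.shape
    a = np.arctanh(1.0 - T / T_max)          # oscillation range for v_b
    v_c = 1.0 - T / T_max                    # baseline linear feedback factor
    order = np.argsort(S)                    # ascending: best first
    rank = np.empty(n, dtype=int)
    rank[order] = np.arange(n)
    bF, wF = S.min(), S.max()
    X_new = np.empty_like(X)
    for i in range(n):
        if rng.random() < z:                 # re-seed inside the search area
            X_new[i] = rng.random(dim) * (ub - lb) + lb
            continue
        p = np.tanh(abs(S[i] - DF))          # formula eight
        log_term = np.log((S[i] - bF) / (wF - bF + 1e-12) + 1)
        # weight W: '+' for the better half ("condition"), '-' otherwise
        W = 1 + rng.random(dim) * log_term if rank[i] < n // 2 else 1 - rng.random(dim) * log_term
        if rng.random() < p:                 # move around the best position
            v_b = rng.uniform(-a, a, dim)
            A, B = rng.integers(0, n, size=2)
            X_new[i] = X_b + v_b * (W * X[A] - X[B])
        else:                                # contract toward the origin
            X_new[i] = v_c * X[i]
    return np.clip(X_new, lb, ub)

# Toy usage: minimize the sphere function sum(x^2) in 3-D.
rng = np.random.default_rng(1)
lb, ub, n, dim, T_max = -5.0, 5.0, 20, 3, 50
X = rng.uniform(lb, ub, (n, dim))
best_f, best_x = np.inf, None
for T in range(1, T_max + 1):
    S = np.sum(X ** 2, axis=1)
    i = int(np.argmin(S))
    if S[i] < best_f:
        best_f, best_x = float(S[i]), X[i].copy()
    X = sma_step(X, S, best_x, best_f, T, T_max, ub, lb, rng=rng)
```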
S4. The algorithm factor v_c of the slime mould algorithm decreases linearly, so it cannot accurately and timely feed back the concentration and quality of food, which slows down early convergence. An adaptive adjustable v_c is therefore introduced: it speeds up the decrease of v_c in the early stage to improve the ability to search for the global optimum, and keeps v_c stable in the later iterations to avoid falling into local optima. The adaptive adjustable v_c is given by formula twelve.

An adaptive reverse (opposition-based) learning mechanism introduces, in the slime mould exploration area, a vector \tilde{X}_i(T) opposite to the position X_i(T) of each slime mould individual, and compares the fitness values of the two to avoid falling into local optima. The opposite position of the i-th individual at the T-th iteration is:

\tilde{X}_i(T) = ub + lb - X_i(T) \quad (formula thirteen)

Based on the adaptive decision, when the i-th slime mould individual searches for food, its current fitness value S(\tilde{X}_i(T)) is compared with the previous best fitness value S(X_i(T)) to judge whether additional exploration with the reverse learning mechanism is carried out, and the next iteration position is updated:

X_i(T+1) = \begin{cases} \tilde{X}_i(T), & S(\tilde{X}_i(T)) < S(X_i(T)) \\ X_i(T), & \text{otherwise} \end{cases} \quad (formula fourteen)
S5. The mean absolute error (MAE) is selected as the criterion for measuring data-processing quality:

MAE = \dfrac{1}{N} \sum_{t=1}^{N} \big| f(t) - \hat{f}(t) \big| \quad (formula fifteen)

In the formula: N is the total number of wind-solar-load data samples; \hat{f}(t) is the wind-solar-load data after processing.

The MAE measures the ability of the data-processing algorithm to retain the original characteristics of the data while effectively removing abnormal values from the wind-solar-load data; the smaller the MAE, the better the data processing.
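The MAE criterion is a one-liner; a small sketch with illustrative names:

```python
import numpy as np

def mae(f_raw, f_clean):
    """Mean absolute error between the raw series f(t) and the cleaned
    series f_hat(t) -- the data-quality criterion of step S5. Smaller MAE
    means the cleaning kept more of the original signal shape while
    removing outliers. Function and argument names are illustrative."""
    f_raw = np.asarray(f_raw, dtype=float)
    f_clean = np.asarray(f_clean, dtype=float)
    return float(np.mean(np.abs(f_raw - f_clean)))

# e.g. changing one of three points by 1 gives MAE = 1/3
score = mae([1.0, 2.0, 3.0], [1.0, 2.0, 4.0])
```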
In step two, each processed daily sample is expanded into a 9×9 tensor through normalization and zero-padding. A multi-head self-attention improved convolutional encoder reduces the dimensionality of the wind-solar-load data and extracts deep features, while a convolutional decoder reconstructs the original time-series signal.
In the convolutional autoencoder model, the encoding process of the convolutional layer is:

h_{conv} = \sigma( X_{conv} * \omega_{conv} + b_{conv} ) \quad (formula seventeen)

In the formula: h_{conv} is the feature information output by the convolutional layer; \sigma is the ReLU activation function; X_{conv} is the wind-solar-load time-series data after data processing; \omega_{conv} and b_{conv} are the convolution kernels and the bias of the convolutional layer.
The feature information captured by the convolutional layer from the wind-solar-load time series is fed to the multi-head self-attention layer for further feature extraction; this encoding process follows formula seventeen.

The decoder decodes the encoding result of formula seventeen as:

h_{deconv} = \sigma( h_{conv} * \omega_{deconv} + b_{deconv} ) \quad (formula eighteen)

In the formula: h_{deconv} is the feature information output by the deconvolution layer; \omega_{deconv} and b_{deconv} are the convolution kernels and the bias of the deconvolution layer.
The convolutional autoencoder uses the mean square error (MSE) as the reconstruction loss function; minimizing the MSE continuously optimizes the network parameters of the encoder and decoder:

L_r = \dfrac{1}{N_d} \sum_{i=1}^{N_d} \big\| X_i - \hat{X}_i \big\|_2^2 \quad (formula nineteen)

In the formula: L_r is the reconstruction loss function; N_d is the number of days of wind-solar-load data; X_i and \hat{X}_i are the i-th daily sample and its reconstruction.
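The encode-ReLU-decode-MSE pipeline of formulas seventeen and eighteen can be sketched with a toy single-channel, single-kernel convolution; real models stack many kernels and layers, so this is only a structural illustration with assumed names and shapes (9×9 daily tensors as in the text).

```python
import numpy as np

def conv2d(x, w, b):
    """'Same'-padded single-channel 2-D convolution followed by ReLU,
    i.e. h = sigma(x * w + b), a toy stand-in for the encoder and
    decoder layers of the convolutional autoencoder."""
    kh, kw = w.shape
    ph, pw = kh // 2, kw // 2
    xp = np.pad(x, ((ph, ph), (pw, pw)))
    H, W = x.shape
    out = np.zeros_like(x, dtype=float)
    for i in range(H):
        for j in range(W):
            out[i, j] = np.sum(xp[i:i + kh, j:j + kw] * w) + b
    return np.maximum(out, 0.0)              # ReLU activation

def reconstruction_loss(samples, w_enc, b_enc, w_dec, b_dec):
    """Mean squared reconstruction error L_r over N_d daily samples,
    with decode(encode(x)) as the reconstruction."""
    errs = []
    for x in samples:
        h = conv2d(x, w_enc, b_enc)          # encoder feature map h_conv
        x_hat = conv2d(h, w_dec, b_dec)      # decoder output h_deconv
        errs.append(np.mean((x - x_hat) ** 2))
    return float(np.mean(errs))

rng = np.random.default_rng(0)
days = [rng.random((9, 9)) for _ in range(4)]     # 9x9 padded daily tensors
identity_k = np.zeros((3, 3)); identity_k[1, 1] = 1.0
loss_id = reconstruction_loss(days, identity_k, 0.0, identity_k, 0.0)
w_r = rng.normal(size=(3, 3))
loss_rand = reconstruction_loss(days, w_r, 0.1, w_r, 0.1)
```

With identity kernels and non-negative inputs the pipeline is lossless (loss 0), while random kernels leave a positive reconstruction error; training would drive the kernels from the latter toward the former regime.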
In the multi-head self-attention improved convolutional encoder model, multi-head self-attention is an improvement of self-attention: a multi-query scheme captures several groups of feature information from different subspaces of the data in parallel, then splices them by weight. The detailed calculation is as follows.

Step A1. The input data Y of the multi-head self-attention layer is converted by linear transformation into a query matrix Q_a, a key matrix K_a and a value matrix V_a:

Q_a = Y W_Q, \quad K_a = Y W_K, \quad V_a = Y W_V \quad (formula twenty)

In the formula: W_Q, W_K and W_V are transformation matrices.

Q_a, K_a and V_a are then mapped into \theta feature subspaces, giving the query matrix Q_{a\theta}, key matrix K_{a\theta} and value matrix V_{a\theta} of the \theta-th subspace:

Q_{a\theta} = Q_a W_{Q\theta}, \quad K_{a\theta} = K_a W_{K\theta}, \quad V_{a\theta} = V_a W_{V\theta} \quad (formula twenty-one)

In the formula: W_{Q\theta}, W_{K\theta} and W_{V\theta} are the transformation matrices of the \theta-th subspace.
Step A2. The self-attention value in each of the \theta feature subspaces is calculated with the scaled dot product and the Softmax function:

head_\theta = \mathrm{Softmax}\Big( \dfrac{ Q_{a\theta} K_{a\theta}^{T} }{ \sqrt{d} } \Big) V_{a\theta} \quad (formula twenty-two)

In the formula: d is the scaling factor; head_\theta is the self-attention value in the \theta-th feature subspace.

Finally, the self-attention values of the \theta feature subspaces are fused:

M_{multi\text{-}head} = \mathrm{Concat}( head_1, head_2, \ldots, head_\theta ) W_o \quad (formula twenty-three)

In the formula: M_{multi\text{-}head} is the fused self-attention value; \mathrm{Concat} is the matrix splicing (concatenation) operation; W_o is a parameter matrix.
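The project-split-attend-concatenate pipeline ending in formula twenty-three can be sketched as follows. Splitting one projection into head slices is a common equivalent of the per-subspace matrices; the 24-step, 8-feature shapes are illustrative assumptions.

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)   # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def multi_head_self_attention(Y, W_Q, W_K, W_V, W_o, n_heads):
    """Project Y to Q/K/V, split into n_heads subspaces, compute
    scaled-dot-product attention per head, then concatenate the heads
    and mix them with the output matrix W_o."""
    L, D = Y.shape
    d = D // n_heads                           # per-head width; sqrt(d) is the scaling factor
    Q, K, V = Y @ W_Q, Y @ W_K, Y @ W_V        # query, key and value matrices
    heads = []
    for h in range(n_heads):                   # per-subspace slices
        q, k, v = (M[:, h * d:(h + 1) * d] for M in (Q, K, V))
        attn = softmax(q @ k.T / np.sqrt(d))   # scaled dot product + Softmax
        heads.append(attn @ v)                 # head_theta
    return np.concatenate(heads, axis=1) @ W_o # fused self-attention value

# Toy usage: 24 time steps (one day) with 8 features per step.
rng = np.random.default_rng(0)
L, D = 24, 8
Y = rng.normal(size=(L, D))
W_Q, W_K, W_V, W_o = (rng.normal(size=(D, D)) * 0.1 for _ in range(4))
M = multi_head_self_attention(Y, W_Q, W_K, W_V, W_o, n_heads=2)
```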
Step three comprises the following steps.

Step B1. Using the low-dimensional feature information obtained from the encoder, the elbow method is used to observe the elbow values for different cluster numbers and determine the optimal number of clusters, and the initial K-means cluster centers are set. Then, based on the joint loss function, an Adam optimizer fine-tunes the whole network's parameters while guaranteeing the representativeness of the embedded-space features, yielding the optimal clustering result.
The joint loss function is:

L = L_r + \gamma L_c \quad (formula twenty-four)

In the formula: L is the joint loss function; \gamma is the coefficient controlling the degree of distortion of the embedding space, taken as 0.1; L_c is the clustering loss function.
The cluster center \mu_j serves as the connection weight between itself and the low-dimensional spatial feature Z_i, and each low-dimensional feature Z_i is mapped onto a soft label. To increase the confidence of the assignment between Z_i and \mu_j, the clustering loss is the KL divergence between the soft-label distribution and an auxiliary target distribution, measuring the similarity between the two. The specific process is:

q_{ij} = \dfrac{ (1 + \| Z_i - \mu_j \|^2)^{-1} }{ \sum_{j'} (1 + \| Z_i - \mu_{j'} \|^2)^{-1} } \quad (formula twenty-five)

b_{ij} = \dfrac{ q_{ij}^2 / \sum_i q_{ij} }{ \sum_{j'} \big( q_{ij'}^2 / \sum_i q_{ij'} \big) } \quad (formula twenty-six)

L_c = \mathrm{KL}(B \,\|\, Q) = \sum_i \sum_j b_{ij} \ln \dfrac{ b_{ij} }{ q_{ij} } \quad (formula twenty-seven)

In the formulas: q_{ij} is the probability that the low-dimensional feature Z_i belongs to the cluster center \mu_j (a Student's-t soft assignment); b_{ij} is the target distribution auxiliary function.
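The soft assignment, target distribution and clustering loss are the standard deep-embedded-clustering triple and can be sketched directly; the two-cluster toy data below are an illustrative assumption.

```python
import numpy as np

def soft_assign(Z, mu):
    """Student's-t soft assignment q_ij of embedded points Z (n, d)
    to cluster centres mu (k, d)."""
    d2 = np.sum((Z[:, None, :] - mu[None, :, :]) ** 2, axis=2)
    q = 1.0 / (1.0 + d2)
    return q / q.sum(axis=1, keepdims=True)

def target_distribution(q):
    """Sharpened auxiliary target b_ij that boosts high-confidence
    assignments while normalizing per-cluster frequencies."""
    w = q ** 2 / q.sum(axis=0, keepdims=True)
    return w / w.sum(axis=1, keepdims=True)

def clustering_loss(q, b):
    """Clustering loss L_c = KL(B || Q)."""
    return float(np.sum(b * np.log(b / q)))

# Toy usage: two tight blobs around the two cluster centres.
rng = np.random.default_rng(0)
Z = np.vstack([rng.normal(0.0, 0.1, (10, 2)), rng.normal(3.0, 0.1, (10, 2))])
mu = np.array([[0.0, 0.0], [3.0, 3.0]])
q = soft_assign(Z, mu)
b = target_distribution(q)
L_c = clustering_loss(q, b)
```

During joint training, gradients of L_c (weighted by \gamma) flow back through Z into the encoder, which is exactly how the clustering step can distort the embedding space the text warns about.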
Step B2. With the KL divergence as the auxiliary clustering objective, the stacked encoder is adjusted and its parameters tuned with an Adam optimizer, yielding an encoder structure suited to scene clustering. Adam integrates the advantages of the first-order momentum of stochastic gradient descent (SGD) and the second-order momentum of root mean square propagation (RMSProp); based on the moment estimates of both, it exploits sparse gradients while keeping a per-parameter learning rate, making the algorithm more robust on non-stationary problems. The specific calculation is:

M_{tt} = \beta_1 \cdot M_{tt-1} + (1 - \beta_1) \cdot g_{tt} \quad (formula twenty-nine)

\eta_{tt} = \beta_2 \cdot \eta_{tt-1} + (1 - \beta_2) \cdot g_{tt}^2 \quad (formula thirty)

\hat{M}_{tt} = \dfrac{ M_{tt} }{ 1 - \beta_1^{tt} }, \quad \hat{\eta}_{tt} = \dfrac{ \eta_{tt} }{ 1 - \beta_2^{tt} } \quad (formula thirty-one)

\hat{\theta}_{tt} = \hat{\theta}_{tt-1} - \psi \cdot \dfrac{ \hat{M}_{tt} }{ \sqrt{ \hat{\eta}_{tt} } + \epsilon } \quad (formula thirty-two)

In the formulas: tt is the time step; g_{tt} is the gradient; M_{tt} is the first-moment estimate of g_{tt}; \eta_{tt} is the second-moment estimate of g_{tt}; \hat{M}_{tt} and \hat{\eta}_{tt} are the bias-corrected estimates; \hat{\theta}_{tt} is the model parameter; \psi is the network step value (learning rate); \beta_1 is the exponential decay rate of M_{tt}, taken as 0.9; \beta_2 is the exponential decay rate of \eta_{tt}, taken as 0.999; \epsilon is a constant ensuring the robustness of the algorithm.
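A single Adam update starting from formula twenty-nine can be sketched as follows; the quadratic smoke test and the step size psi = 0.05 are illustrative assumptions.

```python
import numpy as np

def adam_step(theta, g, M, eta, tt, psi=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update: first-moment M and second-moment eta estimates
    with bias correction, then a parameter step of size psi. Symbol names
    follow the text (psi = step value, eps = robustness constant)."""
    M = beta1 * M + (1 - beta1) * g                 # first-moment estimate
    eta = beta2 * eta + (1 - beta2) * g ** 2        # second-moment estimate
    M_hat = M / (1 - beta1 ** tt)                   # bias correction
    eta_hat = eta / (1 - beta2 ** tt)
    theta = theta - psi * M_hat / (np.sqrt(eta_hat) + eps)
    return theta, M, eta

# Smoke test: minimize f(theta) = ||theta||^2, whose gradient is 2*theta.
theta = np.array([1.0, -2.0])
M = np.zeros_like(theta)
eta = np.zeros_like(theta)
for tt in range(1, 501):
    theta, M, eta = adam_step(theta, 2 * theta, M, eta, tt, psi=0.05)
```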
Step B3. Joint training of the encoder and the clustering layer yields the optimal intra-year scene clustering result. The clustering result is evaluated with three indices: the Calinski-Harabasz index (CHI), the silhouette coefficient (SC) and the Davies-Bouldin index (DBI). The cluster center of each scene class, obtained by the averaging method, gives the typical scene of that class.
In the above, DCEC stands for the deep convolutional embedded clustering algorithm; VMD stands for variational mode decomposition with parameter optimization.
The method accurately captures the coupling feature information among wind, solar and load data, combines the feature extraction process with the clustering process, and ensures the representativeness of the embedded-space features.
Drawings
The invention is described in further detail below with reference to the following figures and detailed description:
FIG. 1 is a schematic flow diagram of the process of the present invention;
FIG. 2 is a schematic flow chart of the algorithm of the present invention;
FIG. 3 is a schematic diagram of a DCEC-MS network structure based on a multi-head self-attention improved deep convolution embedded clustering model.
Detailed Description
As shown in the figures, the multi-head self-attention deep convolution embedded clustering method for wind-solar-load combined scenes is used to generate wind-solar-load coupled scenes. By accurately capturing the coupling feature information among wind, solar and load data, it combines the feature extraction process with the clustering process to ensure the representativeness of the embedded-space features, following steps one to three as set out above.
optimizing VMD model parameter combination by a multi-strategy fusion improved slime algorithm (SMA), cleaning wind-solar-charge time sequence data based on the optimal parameter combination, and weakening the influence of noise signals on a data feature extraction process;
establishing a multi-head self-attention-based convolution self-encoder, extracting deep characteristic information of the processed wind-solar-charged data, and reconstructing an original time sequence signal by using a convolution decoder;
thirdly, obtaining a proper clustering number based on an elbow method, and utilizing a Kmeans initial clustering center according to characteristics; and then, based on a joint loss function formed by the sum of the reconstruction loss and the clustering loss of the encoder, adjusting network structure parameters and updating clustering results, and solving the clustering center of various scenes based on an averaging method to serve as a typical scene of the class, so as to provide a basis for the optimized operation and planning of the power system.
In the first step, abnormal data detection and cleaning are carried out on historical wind and light load data, wherein the data are wind and light load time sequence data f (t) in one year, each sample comprises wind power, photovoltaic output and load data at 24 moments by taking days as units; the method comprises the following steps:
s1, decomposing original wind-solar-load data f (t) into K IMF (intrinsic mode function) components u with central frequency by adopting a nonlinear time domain decomposition method-VMD (model-vector magnitude) model k (t) simultaneously obtaining K u k (t) and minimizing the sum of the limited bandwidths to obtain the VMD model expression as:
in the formula: omega k Is the k-th u k (t) a center frequency; δ (t) is a unit pulse function;is the partial derivative operator;
introducing Lagrange operator lambda and quadratic penalty factor alpha, and solving by using a simplified formula I, wherein the model expression after operation is as follows:
based on alternative direction multiplier method, solving equation (5.2), and continuously optimizing iterationAnd λ, the iterative expression is:
in the formula: n isThe number of iterations;andare respectively u (t),Fourier transform of λ (t) and f (t);
s2, when the VMD model determines preset values of parameters K and alpha, if the value of the parameter K is too large, over-decomposition occurs, and mode overlapping is caused; if the value of the parameter alpha is too large, the central frequency is lost, the optimal parameter combination of the VMD is searched by the multi-strategy fusion improved slime mold algorithm based on the KL divergence, and the KL divergence is used for measuring the intrinsic mode component u k (t) similarity with the original wind-solar-load data f (t), and the mathematical expression is as follows:
in the formula: the closer R is to 0, the more the eigenmode component u is illustrated k (t) the higher the similarity with the original wind-solar-charge data f (t); when R is the minimum, the parameters K and alpha are the optimal parameter combination;
s3, simulating the dispersive foraging behavior of the slime through establishing a model by the slime algorithm, namely selecting whether the slime is close to food according to the concentration of the food when the slime is initially close to the food, wherein the mathematical expression of position updating is as follows:
in the formula: x (T) is the position of the slime in the Tth iteration; x b (T) is the best location currently found; w is a weight coefficient; x A (T) and X B (T) the slime mold individuals randomly selected in the Tth iteration are respectively; r is [0,1 ]]The random number of (1); v. of c Is a feedback factor, the value of which decreases linearly from 1 to 0; v. of b For controlling the parameters, the value range is [ -a, a [ -a](ii) a p is a location update control parameter;
wherein: the mathematical models of p, a and W are:
p = tanh | S (i) -DF | formula eight;
in the formula: s (i) is an adaptive value of the ith slime individual; DF is the optimal adaptive value in all iterations; t is max Is the maximum iteration number; bF and wF are respectively an optimal adaptive value and a worst adaptive value in the T iteration; condition is the individuals with the adaptation values sorted in the first half;
in the slime algorithm, based on the purpose of searching high-quality food, slime can segment out partial individuals for exploring the remaining area, and the mathematical formula of location updating is as follows:
in the formula: rand is a random number; ub and lb are the upper and lower boundaries of the myxoma exploration area respectively; z is a slime individual proportion parameter for exploring the residual area;
s4, because the algorithm factor v_c of the slime mould algorithm, when decreased in a linear fashion, cannot feed back the food concentration and quality accurately and in time, the early-stage convergence speed is slowed. An adaptively adjustable v_c is therefore introduced: it accelerates the decrease of v_c in the early stage, improving the ability to search for the global optimum, and keeps the late-stage iterative calculation of v_c stable, avoiding local optima. The mathematical expression of the adaptive v_c is:
an adaptive reverse-learning mechanism introduces into the slime mould search region a vector opposite to the position of each slime mould individual; the fitness values of the two are compared to avoid falling into a local optimum. The expression of the opposite position of the i-th slime mould individual at the T-th iteration is:
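The opposite-position vector is not reproduced in the extracted text; in conventional opposition-based learning, which the description matches, it would take the form:

```latex
\bar{X}_i(T) = ub + lb - X_i(T)
```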
based on adaptive judgement, when searching for food the i-th slime mould compares the currently found fitness value with the previous best fitness value S(X_i(T)), judges whether to perform additional exploration using the reverse-learning mechanism, and updates the position for the next iteration; the expression is:
s5, the mean absolute error (MAE) is selected as the criterion for measuring the quality of the data processing; it is calculated as follows:
in the formula: n is the total number of wind-solar-load data samples, and the second series is the wind-solar-load data after processing;
the MAE measures the ability of the data-processing algorithm to retain the original characteristics of the data while effectively removing abnormal values from the wind-solar-load data; the smaller the MAE, the better the data-processing effect.
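A minimal sketch of the MAE criterion described above, using a hypothetical four-point series (not the patent's data):

```python
def mean_absolute_error(original, processed):
    """MAE between the original and the cleaned wind-solar-load series."""
    if len(original) != len(processed):
        raise ValueError("series must have the same length")
    return sum(abs(a - b) for a, b in zip(original, processed)) / len(original)

# Hypothetical series: cleaning replaces the spike at the last point.
raw     = [0.0, 1.0, 2.0, 10.0]
cleaned = [0.0, 1.0, 2.0, 2.5]
print(mean_absolute_error(raw, cleaned))  # → 1.875
```

A smaller value means the cleaning step removed the outlier while disturbing the remaining samples less.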
In the second step, each processed data sample is expanded by normalization and zero-padding into a tensor of size (9, 9); a convolutional encoder improved with multi-head self-attention reduces the dimensionality of the wind-solar-load data and extracts deep features, while a convolutional decoder reconstructs the original time-series signal;
in the model of the convolutional auto-encoder, the encoding process of the convolutional layer is as follows:
h_conv = σ(X_conv * ω_conv + b_conv), formula seventeen;
in the formula: h_conv is the feature information output by the convolutional layer; σ is the ReLU activation function; X_conv is the wind-solar-load time-series data after data processing; ω_conv and b_conv are the convolution kernel and the bias of the convolutional layer, respectively;
the feature information captured by the convolutional layer from the wind-solar-load time-series data is fed into the multi-head self-attention layer for further feature extraction; the encoding process is expressed as formula seventeen;
the decoding process of the decoder, which decodes the encoding result of formula seventeen, is expressed by the formula
h_deconv = σ(h_conv * ω_deconv + b_deconv), formula eighteen;
in the formula: h_deconv is the feature information output by the deconvolution layer; ω_deconv and b_deconv are the convolution kernel and the bias of the deconvolution layer, respectively;
the convolutional auto-encoder model takes the mean square error (MSE) as the reconstruction loss function; minimizing the MSE continuously optimizes the network parameters of the encoder and the decoder. The expression is:
in the formula: L_r is the reconstruction loss function; N_d is the number of days of wind-solar-load data;
in the model of the convolutional encoder improved with multi-head self-attention, multi-head self-attention is an improved form of self-attention: it adopts a multi-query scheme to capture several groups of feature information from different subspaces of the data in parallel and concatenates the feature information by weight. The detailed calculation process is as follows:
a1, the input data Y of the multi-head self-attention layer is converted by linear transformation into a query matrix Q_a, a key matrix K_a and a value matrix V_a; the mathematical expression is:
in the formula: W_Q, W_K and W_V are transformation matrices;
Q_a, K_a and V_a are mapped into θ feature subspaces, giving the query matrix Q_aθ, the key matrix K_aθ and the value matrix V_aθ of the θ-th subspace; the calculation is as follows:
in the formula: W_Qθ, W_Kθ and W_Vθ are the transformation matrices of the θ-th subspace;
step A2, the self-attention values in the θ feature subspaces are calculated from the scaled dot product and the Softmax function, as follows:
in the formula: d is a scaling factor; head_θ is the self-attention value in the θ-th feature subspace;
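The per-subspace attention equation was an image; the scaled dot-product form matching the symbols d, Q_aθ, K_aθ, V_aθ and head_θ defined above is:

```latex
head_\theta = \operatorname{Softmax}\!\left(\frac{Q_{a\theta}\, K_{a\theta}^{\top}}{\sqrt{d}}\right) V_{a\theta}
```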
finally, self-attention in θ feature subspaces is fused:
M_multi-head = C_Concat(head_1, head_2, ..., head_θ) W_o, formula twenty-three;
in the formula: M_multi-head is the fused self-attention value; C_Concat is the matrix concatenation operation; W_o is a parameter matrix.
The third step comprises the following steps
B1, observing elbow values of different cluster numbers by an elbow method, namely an elbow method, based on low-dimensional characteristic information obtained from an encoder, so as to determine the optimal cluster number, initializing and setting a Kmeans initial clustering center, and finely adjusting the whole network parameters and guaranteeing the representativeness of embedded spatial characteristics by using an Adam optimizer based on a joint loss function, so as to obtain the optimal clustering result;
the joint loss function is formulated as
L = L_r + γL_c, formula twenty-four;
in the formula: L is the joint loss function; γ is a coefficient controlling the distortion degree of the embedding space, taken as 0.1; L_c is the clustering loss function;
the clustering center μ_j serves as the connection weight between itself and the low-dimensional space features Z_i, mapping each low-dimensional space feature Z_i to a soft label; meanwhile, to increase the matching accuracy between Z_i and μ_j, a Gaussian distribution is used as the ideal target: in this step, the clustering loss describes the KL divergence between the distribution over the soft labels and the Gaussian distribution, measuring the similarity between the two. The specific process is expressed by the following formulas;
in the formula: q_ij is the probability that the low-dimensional space feature Z_i belongs to the clustering center μ_j; b_ij is the target-distribution auxiliary function;
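The equations for q_ij, b_ij and the clustering loss were images. The standard deep embedded clustering (DEC/DCEC) formulation matching these symbols is shown below as a hedged reconstruction (note: DEC's usual soft assignment uses a Student's t kernel, whereas the patent describes the target as Gaussian):

```latex
q_{ij} = \frac{\left(1 + \lVert Z_i - \mu_j \rVert^2\right)^{-1}}{\sum_{j'} \left(1 + \lVert Z_i - \mu_{j'} \rVert^2\right)^{-1}},
\qquad
b_{ij} = \frac{q_{ij}^{2} / \sum_i q_{ij}}{\sum_{j'} \left( q_{ij'}^{2} / \sum_i q_{ij'} \right)},
\qquad
L_c = \mathrm{KL}(B \,\Vert\, Q) = \sum_i \sum_j b_{ij} \log \frac{b_{ij}}{q_{ij}}
```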
b2, the stacked encoder is adjusted with the KL divergence as an auxiliary clustering objective function, and an Adam optimizer tunes the parameters to obtain an encoder structure suitable for scene clustering; Adam integrates the advantages of the first-order momentum of stochastic gradient descent (SGD) and the second-order momentum of root-mean-square propagation (RMSprop); based on the moment means of the two, it fully exploits sparse gradients while keeping a per-parameter learning rate, giving the algorithm stronger robustness on non-stationary problems. The specific calculation process is as follows:
M_tt = β_1 · M_{tt-1} + (1 - β_1) · g_tt, formula twenty-nine;
in the formula: tt is the time step; g_tt is the gradient; M_tt is the first-moment estimate of g_tt; η_tt is the second-moment estimate of g_tt; the remaining symbols denote the model parameter and the corresponding network outputs; ψ is the network step value; β_1 is the exponential decay rate of M_tt, taken as 0.9; β_2 is the exponential decay rate of η_tt, taken as 0.999; ε is a constant used to guarantee the robustness of the algorithm;
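Formula twenty-nine and its companion second-moment update can be sketched as follows. Only the update structure (first/second moment estimates with bias correction) follows the description; the step size ψ = 0.1 and the toy objective (θ − 3)² are assumptions for illustration:

```python
import math

def adam_step(theta, grad, m, eta, tt, psi=0.1, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update on a scalar parameter theta."""
    m = beta1 * m + (1 - beta1) * grad          # first moment (formula twenty-nine)
    eta = beta2 * eta + (1 - beta2) * grad ** 2 # second moment
    m_hat = m / (1 - beta1 ** tt)               # bias-corrected first moment
    eta_hat = eta / (1 - beta2 ** tt)           # bias-corrected second moment
    theta -= psi * m_hat / (math.sqrt(eta_hat) + eps)
    return theta, m, eta

# Minimise (theta - 3)^2; the gradient is 2 * (theta - 3).
theta, m, eta = 0.0, 0.0, 0.0
for tt in range(1, 2001):
    theta, m, eta = adam_step(theta, 2 * (theta - 3), m, eta, tt)
print(round(theta, 2))
```

The per-parameter scaling by the second moment is what makes the optimizer robust to the sparse and non-stationary gradients mentioned above.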
and B3, the optimal intra-year scene clustering result is obtained through joint training of the encoder and the clustering layer; the clustering result is evaluated with the three evaluation indices CHI, SC and DBI, and the clustering center of each class of scenes is obtained by averaging, giving the typical scene of each class.
In the above, DCEC denotes the deep convolutional embedded clustering algorithm; VMD is the abbreviation of parameter-optimized variational mode decomposition.
In this embodiment, actual wind-solar-load time-series data with a sampling interval of 1 h are taken as samples for clustering, and a combined wind-solar-load scenario is constructed, so that the coupling characteristics of the three are fully considered in the dispatching and planning of the power grid.
The clustering evaluation index results of the different methods are compared in the following table,
Table.1 Comparison of clustering evaluation index results of different methods
Claims (4)
1. A multi-head self-attention deep convolutional embedded clustering wind-solar-load combined scene method, used for generating wind-solar-load coupling scenes, characterized in that: the method accurately captures the coupling feature information between the wind, solar and load data and combines the feature extraction process with the clustering process to guarantee the representativeness of the embedded-space features, and comprises the following steps;
firstly, optimizing the VMD model parameter combination by a multi-strategy fusion improved slime mould algorithm, cleaning the wind-solar-load time-series data based on the optimal parameter combination, and weakening the influence of noise signals on the data feature-extraction process;
secondly, establishing a multi-head self-attention-based convolutional auto-encoder, extracting the deep feature information of the processed wind-solar-load data, and reconstructing the original time-series signal with a convolutional decoder;
thirdly, obtaining a suitable number of clusters based on the elbow method, and initializing the Kmeans clustering centers according to the features; then, based on a joint loss function formed by the sum of the reconstruction loss and the clustering loss of the encoder, adjusting the network structure parameters and updating the clustering results, and solving the clustering center of each class of scenes by averaging to serve as the typical scene of that class, thereby providing a basis for the optimized operation and planning of the power system.
2. The multi-head self-attention deep convolutional embedded clustering wind-solar-load combined scene method according to claim 1, characterized in that: in the first step, abnormal-data detection and cleaning are performed on the historical wind-solar-load data, the data being one year of wind-solar-load time-series data f(t), where each daily sample contains the wind power, photovoltaic output and load data at 24 moments; the specific steps are as follows:
s1, the original wind-solar-load data f(t) are decomposed, using the VMD model as a nonlinear time-domain decomposition method, into K intrinsic mode function (IMF) components u_k(t) each having a center frequency; at the same time the sum of the limited bandwidths of the K components u_k(t) is obtained and minimized, giving the following expression of the VMD model:
in the formula: ω_k is the center frequency of the k-th u_k(t); δ(t) is the unit impulse function; ∂_t is the partial-derivative operator. A Lagrange multiplier λ and a quadratic penalty factor α are introduced to simplify the solving process of formula one; the model expression after this operation is:
based on the alternating direction method of multipliers, formula (5.2) is solved, continuously optimizing the iterates and λ; the iterative expressions are:
in the formula: n is the number of iterations; the hatted quantities are the Fourier transforms of u(t), λ(t) and f(t), respectively;
s2, the VMD model requires preset values of the parameters K and α: if the value of K is too large, over-decomposition occurs, causing mode mixing; if the value of α is too large, the center frequency is lost. The multi-strategy fusion improved slime mould algorithm therefore searches for the optimal parameter combination of the VMD based on the KL divergence, which measures the similarity between the intrinsic mode components u_k(t) and the original wind-solar-load data f(t); the mathematical expression is:
in the formula: the closer R is to 0, the higher the similarity between the intrinsic mode component u_k(t) and the original wind-solar-load data f(t); when R is minimal, the parameters K and α form the optimal parameter combination;
s3, the slime mould algorithm models the dispersive foraging behaviour of slime mould: when initially approaching food, an individual decides whether to move toward it according to the food concentration. The mathematical expression of the position update is:
in the formula: X(T) is the position of the slime mould at the T-th iteration; X_b(T) is the best position found so far; W is a weight coefficient; X_A(T) and X_B(T) are slime mould individuals randomly selected at the T-th iteration; r is a random number in [0,1]; v_c is a feedback factor whose value decreases linearly from 1 to 0; v_b is a control parameter with value range [-a, a]; p is a position-update control parameter;
wherein: the mathematical models of p, a and W are:
p = tanh|S(i) - DF|, formula eight;
in the formula: S(i) is the fitness value of the i-th slime mould individual; DF is the best fitness value obtained over all iterations; T_max is the maximum number of iterations; bF and wF are the best and worst fitness values at the T-th iteration, respectively; condition denotes the individuals whose fitness values rank in the top half;
in the slime mould algorithm, in order to search for high-quality food, a fraction of the individuals is split off to explore the remaining area; the mathematical formula of the position update is:
in the formula: rand is a random number; ub and lb are the upper and lower bounds of the slime mould search region, respectively; z is the proportion of slime mould individuals assigned to explore the remaining area;
s4, because the algorithm factor v_c of the slime mould algorithm, when decreased in a linear fashion, cannot feed back the food concentration and quality accurately and in time, the early-stage convergence speed is slowed. An adaptively adjustable v_c is therefore introduced: it accelerates the decrease of v_c in the early stage, improving the ability to search for the global optimum, and keeps the late-stage iterative calculation of v_c stable, avoiding local optima. The mathematical expression of the adaptive v_c is:
an adaptive reverse-learning mechanism introduces into the slime mould search region a vector opposite to the position of each slime mould individual; the fitness values of the two are compared to avoid falling into a local optimum. The expression of the opposite position of the i-th slime mould individual at the T-th iteration is:
based on adaptive judgement, when searching for food the i-th slime mould compares the currently found fitness value with the previous best fitness value S(X_i(T)), judges whether to perform additional exploration using the reverse-learning mechanism, and updates the position for the next iteration; the expression is:
s5, the mean absolute error (MAE) is selected as the criterion for measuring the quality of the data processing; it is calculated as follows:
in the formula: n is the total number of wind-solar-load data samples, and the second series is the wind-solar-load data after processing;
the MAE measures the ability of the data-processing algorithm to retain the original characteristics of the data while effectively removing abnormal values from the wind-solar-load data; the smaller the MAE, the better the data-processing effect.
3. The multi-head self-attention deep convolutional embedded clustering wind-solar-load combined scene method according to claim 2, characterized in that: in the second step, each processed data sample is expanded by normalization and zero-padding into a tensor of size (9, 9); a convolutional encoder improved with multi-head self-attention reduces the dimensionality of the wind-solar-load data and extracts deep features, while a convolutional decoder reconstructs the original time-series signal;
in the model of the convolutional self-encoder, the encoding process of the convolutional layer is as follows:
h_conv = σ(X_conv * ω_conv + b_conv), formula seventeen;
in the formula: h_conv is the feature information output by the convolutional layer; σ is the ReLU activation function; X_conv is the wind-solar-load time-series data after data processing; ω_conv and b_conv are the convolution kernel and the bias of the convolutional layer, respectively;
the feature information captured by the convolutional layer from the wind-solar-load time-series data is fed into the multi-head self-attention layer for further feature extraction; the encoding process is expressed as formula seventeen;
the decoding process of the decoder, which decodes the encoding result of formula seventeen, is expressed by the formula
h_deconv = σ(h_conv * ω_deconv + b_deconv), formula eighteen;
in the formula: h_deconv is the feature information output by the deconvolution layer; ω_deconv and b_deconv are the convolution kernel and the bias of the deconvolution layer, respectively;
the convolutional auto-encoder model takes the mean square error (MSE) as the reconstruction loss function; minimizing the MSE continuously optimizes the network parameters of the encoder and the decoder. The expression is:
in the formula: l is r Is a reconstruction loss function; n is a radical of d Days for wind, solar and solar load data;
in the model of the convolutional encoder improved with multi-head self-attention, multi-head self-attention is an improved form of self-attention: it adopts a multi-query scheme to capture several groups of feature information from different subspaces of the data in parallel and concatenates the feature information by weight. The detailed calculation process is as follows:
a1, the input data Y of the multi-head self-attention layer is converted by linear transformation into a query matrix Q_a, a key matrix K_a and a value matrix V_a; the mathematical expression is:
in the formula: W_Q, W_K and W_V are transformation matrices;
Q_a, K_a and V_a are mapped into θ feature subspaces, giving the query matrix Q_aθ, the key matrix K_aθ and the value matrix V_aθ of the θ-th subspace; the calculation is as follows:
in the formula: W_Qθ, W_Kθ and W_Vθ are the transformation matrices of the θ-th subspace;
step A2, the self-attention values in the θ feature subspaces are calculated from the scaled dot product and the Softmax function, as follows:
in the formula: d is a scaling factor; head_θ is the self-attention value in the θ-th feature subspace;
finally, self-attention in θ feature subspaces is fused:
M_multi-head = C_Concat(head_1, head_2, ..., head_θ) W_o, formula twenty-three;
in the formula: M_multi-head is the fused self-attention value; C_Concat is the matrix concatenation operation; W_o is a parameter matrix.
4. The multi-head self-attention depth convolution embedded clustering wind-solar-load combined scene method according to claim 3, characterized in that: the third step comprises the following steps
B1, observing elbow values of different cluster numbers by an elbow method, namely an elbow method, based on low-dimensional characteristic information obtained from an encoder, so as to determine the optimal cluster number, initializing and setting a Kmeans initial clustering center, and finely adjusting the whole network parameters and guaranteeing the representativeness of embedded spatial characteristics by using an Adam optimizer based on a joint loss function, so as to obtain the optimal clustering result;
the joint loss function is formulated as
L = L_r + γL_c, formula twenty-four;
in the formula: L is the joint loss function; γ is a coefficient controlling the distortion degree of the embedding space, taken as 0.1; L_c is the clustering loss function;
the clustering center μ_j serves as the connection weight between itself and the low-dimensional space features Z_i, mapping each low-dimensional space feature Z_i to a soft label; meanwhile, to increase the matching accuracy between Z_i and μ_j, a Gaussian distribution is used as the ideal target: in this step, the clustering loss describes the KL divergence between the distribution over the soft labels and the Gaussian distribution, measuring the similarity between the two. The specific process is expressed by the following formulas;
in the formula: q_ij is the probability that the low-dimensional space feature Z_i belongs to the clustering center μ_j; b_ij is the target-distribution auxiliary function;
b2, the stacked encoder is adjusted with the KL divergence as an auxiliary clustering objective function, and an Adam optimizer tunes the parameters to obtain an encoder structure suitable for scene clustering; Adam integrates the advantages of the first-order momentum of stochastic gradient descent (SGD) and the second-order momentum of root-mean-square propagation (RMSprop); based on the moment means of the two, it fully exploits sparse gradients while keeping a per-parameter learning rate, giving the algorithm stronger robustness on non-stationary problems. The specific calculation process is as follows:
M_tt = β_1 · M_{tt-1} + (1 - β_1) · g_tt, formula twenty-nine;
in the formula: tt is the time step; g_tt is the gradient; M_tt is the first-moment estimate of g_tt; η_tt is the second-moment estimate of g_tt; the remaining symbols denote the model parameter and the corresponding network outputs; ψ is the network step value; β_1 is the exponential decay rate of M_tt, taken as 0.9; β_2 is the exponential decay rate of η_tt, taken as 0.999; ε is a constant used to guarantee the robustness of the algorithm;
and step B3, the optimal intra-year scene clustering result is obtained through joint training of the encoder and the clustering layer; the clustering result is evaluated with the three evaluation indices CHI, SC and DBI, and the clustering center of each class of scenes is obtained by averaging, giving the typical scene of each class.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211176681.8A CN115496153A (en) | 2022-09-26 | 2022-09-26 | Multi-head self-attention deep convolution embedded clustering wind-light-load combined scene method |
Publications (1)
Publication Number | Publication Date |
---|---|
CN115496153A true CN115496153A (en) | 2022-12-20 |
Family
ID=84473147
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202211176681.8A Pending CN115496153A (en) | 2022-09-26 | 2022-09-26 | Multi-head self-attention deep convolution embedded clustering wind-light-load combined scene method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115496153A (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117951633A (en) * | 2024-03-27 | 2024-04-30 | 中节能甘肃武威太阳能发电有限公司 | Photovoltaic power generation equipment fault diagnosis method and system |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117951633A (en) * | 2024-03-27 | 2024-04-30 | 中节能甘肃武威太阳能发电有限公司 | Photovoltaic power generation equipment fault diagnosis method and system |
CN117951633B (en) * | 2024-03-27 | 2024-06-11 | 中节能甘肃武威太阳能发电有限公司 | Photovoltaic power generation equipment fault diagnosis method and system |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN112365040B (en) | Short-term wind power prediction method based on multi-channel convolution neural network and time convolution network | |
CN110059878B (en) | Photovoltaic power generation power prediction model based on CNN LSTM and construction method thereof | |
CN109165774A (en) | A kind of short-term photovoltaic power prediction technique | |
CN112149879B (en) | New energy medium-and-long-term electric quantity prediction method considering macroscopic volatility classification | |
CN103942749B (en) | A kind of based on revising cluster hypothesis and the EO-1 hyperion terrain classification method of semi-supervised very fast learning machine | |
CN112508244B (en) | Multi-element load prediction method for user-level comprehensive energy system | |
CN114792156A (en) | Photovoltaic output power prediction method and system based on curve characteristic index clustering | |
CN114282646B (en) | Optical power prediction method and system based on two-stage feature extraction and BiLSTM improvement | |
CN113468817A (en) | Ultra-short-term wind power prediction method based on IGOA (optimized El-electric field model) | |
CN112149883A (en) | Photovoltaic power prediction method based on FWA-BP neural network | |
CN114897129A (en) | Photovoltaic power station short-term power prediction method based on similar daily clustering and Kmeans-GRA-LSTM | |
CN111242355A (en) | Photovoltaic probability prediction method and system based on Bayesian neural network | |
CN115099461A (en) | Solar radiation prediction method and system based on double-branch feature extraction | |
CN115659254A (en) | Power quality disturbance analysis method for power distribution network with bimodal feature fusion | |
CN112508246A (en) | Photovoltaic power generation power prediction method based on similar days | |
CN115496153A (en) | Multi-head self-attention deep convolution embedded clustering wind-light-load combined scene method | |
CN117154690A (en) | Photovoltaic power generation power prediction method and system based on neural network | |
CN116345555A (en) | CNN-ISCA-LSTM model-based short-term photovoltaic power generation power prediction method | |
CN115759389A (en) | Day-ahead photovoltaic power prediction method based on weather type similar day combination strategy | |
CN114898136A (en) | Small sample image classification method based on feature self-adaption | |
CN114488069A (en) | Radar high-resolution range profile identification method based on graph neural network | |
CN117458480A (en) | Photovoltaic power generation power short-term prediction method and system based on improved LOF | |
CN111815051B (en) | GRNN photovoltaic power generation prediction method considering weather influence factors | |
CN117439045A (en) | Multi-element load prediction method for comprehensive energy system | |
CN116187540B (en) | Wind power station ultra-short-term power prediction method based on space-time deviation correction |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination |