CN116883199A - Multi-element load data complement method based on artificial mask convolution self-encoder - Google Patents
Multi-element load data complement method based on artificial mask convolution self-encoder
- Publication number
- CN116883199A (application CN202310880370.8A)
- Authority
- CN
- China
- Prior art keywords
- data
- aml
- cae
- missing
- matrix
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Links
- 238000000034 method Methods 0.000 title claims abstract description 32
- 230000000295 complement effect Effects 0.000 title claims abstract description 7
- 239000011159 matrix material Substances 0.000 claims abstract description 30
- 238000010606 normalization Methods 0.000 claims abstract description 15
- 238000012545 processing Methods 0.000 claims abstract description 11
- 238000012217 deletion Methods 0.000 claims description 22
- 230000037430 deletion Effects 0.000 claims description 22
- 238000012549 training Methods 0.000 claims description 11
- 230000006870 function Effects 0.000 claims description 10
- 238000004364 calculation method Methods 0.000 claims description 6
- 238000005070 sampling Methods 0.000 claims description 4
- 238000000605 extraction Methods 0.000 claims description 3
- 230000008569 process Effects 0.000 claims description 3
- 230000017105 transposition Effects 0.000 claims 1
- 238000013135 deep learning Methods 0.000 abstract description 4
- 238000011423 initialization method Methods 0.000 abstract description 3
- 230000004913 activation Effects 0.000 description 4
- 238000004458 analytical method Methods 0.000 description 3
- 230000016273 neuron death Effects 0.000 description 3
- 238000004422 calculation algorithm Methods 0.000 description 2
- 238000011161 development Methods 0.000 description 2
- 238000005457 optimization Methods 0.000 description 2
- 238000011160 research Methods 0.000 description 2
- 230000001502 supplementing effect Effects 0.000 description 2
- 230000000007 visual effect Effects 0.000 description 2
- 230000004075 alteration Effects 0.000 description 1
- 238000013473 artificial intelligence Methods 0.000 description 1
- 230000009286 beneficial effect Effects 0.000 description 1
- 230000005540 biological transmission Effects 0.000 description 1
- 230000008859 change Effects 0.000 description 1
- 230000007547 defect Effects 0.000 description 1
- 238000010586 diagram Methods 0.000 description 1
- 230000008034 disappearance Effects 0.000 description 1
- 238000005516 engineering process Methods 0.000 description 1
- 230000003203 everyday effect Effects 0.000 description 1
- 230000004060 metabolic process Effects 0.000 description 1
- 238000012986 modification Methods 0.000 description 1
- 230000004048 modification Effects 0.000 description 1
- 230000004044 response Effects 0.000 description 1
- 238000004088 simulation Methods 0.000 description 1
- 238000006467 substitution reaction Methods 0.000 description 1
- 238000012360 testing method Methods 0.000 description 1
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q50/00—Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
- G06Q50/06—Energy or water supply
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F17/00—Digital computing or data processing equipment or methods, specially adapted for specific functions
- G06F17/10—Complex mathematical operations
- G06F17/16—Matrix or vector computation, e.g. matrix-matrix or matrix-vector multiplication, matrix factorization
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/0464—Convolutional networks [CNN, ConvNet]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
- G06N3/0895—Weakly supervised learning, e.g. semi-supervised or self-supervised learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q10/00—Administration; Management
- G06Q10/04—Forecasting or optimisation specially adapted for administrative or management purposes, e.g. linear programming or "cutting stock problem"
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Mathematical Physics (AREA)
- Business, Economics & Management (AREA)
- Data Mining & Analysis (AREA)
- Health & Medical Sciences (AREA)
- Economics (AREA)
- Computing Systems (AREA)
- General Engineering & Computer Science (AREA)
- General Health & Medical Sciences (AREA)
- Software Systems (AREA)
- Strategic Management (AREA)
- Biomedical Technology (AREA)
- Molecular Biology (AREA)
- Evolutionary Computation (AREA)
- Computational Linguistics (AREA)
- Biophysics (AREA)
- Artificial Intelligence (AREA)
- Human Resources & Organizations (AREA)
- Life Sciences & Earth Sciences (AREA)
- Computational Mathematics (AREA)
- Marketing (AREA)
- Mathematical Analysis (AREA)
- Mathematical Optimization (AREA)
- General Business, Economics & Management (AREA)
- Pure & Applied Mathematics (AREA)
- Tourism & Hospitality (AREA)
- Primary Health Care (AREA)
- Quality & Reliability (AREA)
- Water Supply & Treatment (AREA)
- Public Health (AREA)
- Operations Research (AREA)
- Entrepreneurship & Innovation (AREA)
- Game Theory and Decision Science (AREA)
- Development Economics (AREA)
- Algebra (AREA)
- Databases & Information Systems (AREA)
- Complex Calculations (AREA)
Abstract
The invention relates to a multi-element load data completion method based on an artificial-mask convolutional autoencoder, comprising the following steps. Step one: acquire the historical electric, cold and heat load data of an IES. Step two: reshape the multi-element load data and obtain the missing-position matrix. Step three: normalize the multi-element load data. Step four: build the AML-CAE model to realize missing-data interpolation for the multiple loads. Step five: train the AML-CAE. Step six: by reducing the loss L_aml, the AML-CAE learns the distribution of the non-missing values, and a trained AML-CAE model is obtained once the iterations are completed. The invention adopts a convolutional autoencoder as the deep learning network structure, combined with an update strategy for the artificial missing-position mask and a linear-interpolation initialization method, to interpolate missing values in IES multi-element load data. Because the method adopts a self-supervised learning strategy and does not depend on a complete data set, it has high practicability and application value.
Description
Technical Field
The invention relates to a multi-element load data interpolation method based on an artificial-mask convolutional autoencoder (AML-CAE), and in particular to a multi-element load missing-data interpolation method based on an artificial-mask, linear-filling convolutional autoencoder, belonging to the technical field of multi-element load missing-data interpolation.
Background
The development of integrated energy systems (IES) is an important response to the challenges of climate change, energy security and economic sustainability. With the development of artificial intelligence technology, more and more data-driven IES operation optimization and control methods are being applied. However, these methods rely on effective analysis of large amounts of system state data. In practice, due to sensor or transmission faults, missing values usually exist in the historical electric, cold and heat load data recorded by the system, so the problem of completing the multi-energy load data of an IES has attracted wide attention.
Most current deep learning methods for completing missing data adopt a supervised training strategy and require complete data for training. In reality, however, the distribution of missing values is relatively random, and selecting a continuous, complete data set for training becomes challenging.
Disclosure of Invention
To overcome the shortcomings of prior research, the invention provides a multi-element load data completion method based on an artificial-mask convolutional autoencoder (AML-CAE).
The method for completing the multi-element load data based on the artificial mask convolution self-encoder comprises the following specific steps:
step one, acquiring electric, cold and heat load historical data of an IES.
Step two: reshape the multi-element load data and obtain the missing-position matrix. To meet the format requirements of convolution and account for the correlation between the multiple attributes, the time series of the multi-energy load is converted into a feature matrix. The number of rows is 3, corresponding to the electric, cold and heat load data. The number of columns is T, the number of data sampling points per day in the acquired historical load. The feature matrix takes the original data with missing values, X, as input, and the missing-position matrix M_real is obtained from the positions of the missing data in X. The missing matrix M_real can be expressed as:

M_real(i, j) = 0 if X(i, j) is missing, and M_real(i, j) = 1 otherwise.
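The reshaping and mask-construction step above can be sketched in NumPy as follows (a minimal illustration; the NaN encoding of missing values, the simulated gap, and the 0/1 mask convention are assumptions, not part of the original disclosure):

```python
import numpy as np

# Build one day of multi-energy load data as a 3 x T feature matrix
# (rows: electric, cold, heat; columns: T daily sampling points) and
# derive the missing-position matrix M_real (1 = observed, 0 = missing).
T = 96  # e.g. 15-minute resolution -> 96 samples per day
rng = np.random.default_rng(0)
X = rng.random((3, T))
X[1, 10:20] = np.nan                      # simulate a continuous gap in the cold load
M_real = (~np.isnan(X)).astype(float)     # indicator matrix of observed positions
```

The same mask construction applies unchanged to random (discrete) missingness; only the positions of the NaNs differ.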
and thirdly, normalizing the multi-element load data. There is a strong correlation between the multipotent loads of IES, and taking them as parallel inputs helps to improve the accuracy of missing data computation. However, since there is a large difference in values between the electric load, the cold load and the heat load, it is necessary to normalize the multi-energy load data, respectively, in order to allow the model to converge more quickly. The invention combines maximum and minimum normalization (Min-Max Normalization) with the deletion position matrix M real And respectively carrying out normalization treatment on the electric load, the cold load and the heat load. The calculation formula is as follows:
where E, C, H represent the acquired electric, cold and heat load historical data; M_re, M_rc, M_rh represent the missing-position matrices corresponding to the electric, cold and heat loads respectively; min(·) and max(·) are functions extracting the minimum and maximum elements of a matrix; E_norm, C_norm, H_norm represent the normalized electric, cold and heat loads respectively; and X_norm represents the normalized result of X, with values ranging from 0 to 1.
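The masked min-max normalization described above can be sketched as follows (an illustrative reading in which the min/max statistics are taken over observed entries only and missing positions are zeroed by the mask; the function name and example values are assumptions):

```python
import numpy as np

def masked_min_max(load, mask):
    """Min-max normalize one load series; mask entries of 0 mark missing values."""
    obs = load[mask == 1]                  # statistics over observed entries only
    lo, hi = obs.min(), obs.max()
    norm = np.where(mask == 1, (load - lo) / (hi - lo), 0.0)
    return norm, lo, hi

# Example electric load with one missing value (NaN), normalized to [0, 1].
E = np.array([100.0, 250.0, np.nan, 400.0])
M_re = (~np.isnan(E)).astype(float)
E_norm, e_min, e_max = masked_min_max(E, M_re)
```

The returned `e_min` and `e_max` must be kept, since the same constants are needed later for the inverse normalization of step six.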
Step four: build the AML-CAE model to realize missing-data interpolation for the multiple loads. The AML-CAE network architecture consists of an encoder and a decoder. The encoder consists of four convolutional layers and the decoder of five transposed convolutional layers. The LeakyReLU activation function is used to alleviate the dying-neuron problem, enhance gradient flow and increase the expressive power of the model.
Step five: train the AML-CAE. First, the normalized data X_norm is filled by linear interpolation to obtain the initialized data X_init. Then, in the k-th iteration, the artificial missing-position matrix M_art^(k) is updated, and the Hadamard product of X_init and M_art^(k) gives the artificially masked data X_art^(k). Finally, X_art^(k) is likewise filled by linear interpolation to obtain the AML-CAE input data X_in^(k), which serves as the input of the AML-CAE training process and yields the reconstructed data X_rec^(k). These quantities can be expressed as:

X_init = Lin(X_norm)
X_art^(k) = X_init ⊙ M_art^(k)
X_in^(k) = Lin(X_art^(k))
X_rec^(k) = AML-CAE(X_in^(k))

where Lin(·) denotes a linear interpolation function that fills the missing values in its argument, ⊙ denotes the Hadamard product, and AML-CAE(·) denotes the forward propagation of the AML-CAE.
For the n-th group of samples X_in^(k,n), forward propagation yields the reconstructed result X_rec^(k,n). The error L_aml between input and reconstruction can be expressed as:

L_aml = ‖ M_real^(n) ⊙ ( X_in^(k,n) − X_rec^(k,n) ) ‖_F²
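One training iteration of step five can be sketched as follows (a simplified illustration: NaNs stand in for missing entries, `np.interp` plays the role of Lin(·), an identity mapping stands in for the AML-CAE forward pass, and the 30% artificial missing rate is an assumed value):

```python
import numpy as np

def lin_fill(x):
    """Fill NaNs in each row by 1-D linear interpolation (the Lin(.) operator).
    np.interp clamps to the nearest observed value at the row edges."""
    out = x.copy()
    for row in out:
        idx = np.arange(row.size)
        miss = np.isnan(row)
        row[miss] = np.interp(idx[miss], idx[~miss], row[~miss])
    return out

rng = np.random.default_rng(1)
X_norm = rng.random((3, 96))
X_norm[0, 5:9] = np.nan                             # real missing values
M_real = (~np.isnan(X_norm)).astype(float)

X_init = lin_fill(X_norm)                           # initialized data
M_art = (rng.random((3, 96)) > 0.3).astype(float)   # ~30% artificial missingness
X_art = np.where(M_art == 1, X_init, np.nan)        # Hadamard-style masking
X_in = lin_fill(X_art)                              # AML-CAE input data
X_rec = X_in.copy()                                 # stand-in for AML-CAE(X_in)

# Reconstruction error restricted to observed positions (masked MSE).
L_aml = np.mean((M_real * (X_in - X_rec)) ** 2)
```

Because the stand-in "network" is the identity, the loss here is trivially zero; in the real method, minimizing this masked error drives the CAE to recover the distribution of the observed values.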
the present invention uses Adam optimization algorithm to update network parameters. If the data sample is not trained in the current iteration, transferring to the next group of samples; otherwise, go to the next iteration.
Step six: by reducing L_aml, the AML-CAE learns the distribution of the non-missing values. A trained AML-CAE model is obtained after the AML-CAE iterations are completed, and the interpolated data X_n_imp can be expressed as:

X_n_imp = M_real ⊙ X_norm + (1 − M_real) ⊙ X_rec

where X_rec is the reconstruction output of the trained AML-CAE.
X_n_imp is then inverse-normalized to obtain the final complete data X_imp. The calculation formulas are as follows:

E_imp = E_n_imp × (max(E) − min(E)) + min(E)
C_imp = C_n_imp × (max(C) − min(C)) + min(C)
H_imp = H_n_imp × (max(H) − min(H)) + min(H)

where E_n_imp, C_n_imp, H_n_imp are the interpolated normalized electric, cold and heat loads, and X_imp is formed from E_imp, C_imp and H_imp.
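The inverse normalization of step six can be sketched as follows (the function name and the example minimum and maximum values are assumptions):

```python
import numpy as np

def inverse_min_max(norm, lo, hi):
    """Map a normalized load row back to physical units using the min/max
    recorded during normalization."""
    return norm * (hi - lo) + lo

# Interpolated, normalized electric load mapped back with example bounds.
E_n_imp = np.array([0.0, 0.5, 0.25, 1.0])
E_imp = inverse_min_max(E_n_imp, lo=100.0, hi=400.0)
print(E_imp)  # -> [100. 250. 175. 400.]
```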
the multi-element load missing data interpolation based on AML-CAE is realized. The method can lay a foundation for the subsequent research on the aspects of multi-element load prediction, multi-element load energy efficiency analysis, multi-element load energy efficiency characteristic distribution, IES operation regulation and control, energy planning and the like.
Compared with the prior art, the invention has the beneficial effects that:
the invention realizes the missing data interpolation of the multi-element load data of the IES system in a self-supervision learning mode. According to the method, the load data in the period does not need to be screened, and the training sample can be directly constructed by utilizing incomplete historical data in a self-supervision learning mode. Firstly, the method adopts linear interpolation filling for the true missing position and the artificial mask position of the data. And then the artificial missing position mask is randomly updated once in each round of model training, so that the missing part of the linearization value can effectively restore the real distribution, and the interpolation precision is improved.
The invention realizes the interpolation of missing multi-element load data by building the AML-CAE model. The method adopts a convolutional autoencoder as the deep learning network structure, combined with an update strategy for the artificial missing-position mask and a linear-interpolation initialization method, to interpolate missing values in IES multi-element load data. Because it adopts a self-supervised learning strategy and does not depend on a complete data set, the method has high practicability and application value, and lays a foundation for subsequent IES operation optimization methods.
Drawings
In order to more clearly illustrate the embodiments of the invention or the technical solutions in the prior art, the drawings that are required in the embodiments or the description of the prior art will be briefly described, it being obvious that the drawings in the following description are only some embodiments of the invention, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a schematic diagram of the simulated artificial missing scenarios of the multi-element load data completion method based on an artificial-mask convolutional autoencoder of the present invention: (a) random missingness; (b) continuous missingness;
FIG. 2 is a diagram of the structure and parameters of AML-CAE of the present invention;
FIG. 3 is a visualization of each feature of a sample with a 30% missing rate under random missingness according to the present invention: (a) electric load, (b) cold load, (c) heat load;
FIG. 4 is a visualization of each feature of a sample with a 30% missing rate under continuous missingness according to the present invention: (a) electric load, (b) cold load, (c) heat load;
fig. 5 is a flow chart of an embodiment of the present invention.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
As shown in fig. 1, the invention is further illustrated by simulating and analyzing missing-data interpolation on the complete real data set of the IES of the Tempe campus of Arizona State University. Since the selected IES multi-energy load data are complete, missing scenarios must be simulated artificially in order to test the proposed missing-data interpolation scheme. There are two typical missing scenarios: one is random (discrete) missingness, and the other is continuous missingness. The complete real data set of the Tempe campus IES of Arizona State University comes from Campus Metabolism.
The method for completing the multi-element load data based on the artificial mask convolution self-encoder comprises the following specific steps:
step one, acquiring historical data of electric, cold and heat loads with a time resolution of 15 minutes in the Tanpei school district of the state university of Aristolochia in 2020.
Step two: reshape the multi-element load data under the simulated missing scenario and obtain the missing-position matrix. To meet the format requirements of convolution and account for the correlation between the multiple attributes, the time series of the multi-energy load is converted into a feature matrix. The number of rows is 3, corresponding to the electric, cold and heat load data. The number of columns is 96, the number of data sampling points per day in the acquired historical load. The feature matrix takes the original data with missing values, X, as input, and the missing-position matrix M_real is obtained from the positions of the missing data in X. The missing matrix M_real can be expressed as:

M_real(i, j) = 0 if X(i, j) is missing, and M_real(i, j) = 1 otherwise.
and thirdly, normalizing the multi-element load data. There is a strong correlation between the multipotent loads of IES, and taking them as parallel inputs helps to improve the accuracy of missing data computation. However, since there is a large difference in values between the electric load, the cold load and the heat load, it is necessary to normalize the multi-energy load data, respectively, in order to allow the model to converge more quickly. The invention combines maximum and minimum normalization (Min-Max Normalization) with the deletion position matrix M real And respectively carrying out normalization treatment on the electric load, the cold load and the heat load. The calculation formula is as follows:
where E, C, H represent the acquired electric, cold and heat load historical data; M_re, M_rc, M_rh represent the missing-position matrices corresponding to the electric, cold and heat loads respectively; min(·) and max(·) are functions extracting the minimum and maximum elements of a matrix; E_norm, C_norm, H_norm represent the normalized electric, cold and heat loads respectively; and X_norm represents the normalized result of X, with values ranging from 0 to 1.
Step four: build the AML-CAE model to realize missing-data interpolation for the multiple loads. The structure and parameters of the AML-CAE used in the invention are shown in FIG. 2. Because the time resolution of the selected data set is 15 minutes, each feature contains 96 sampling points per day, i.e., it has size 1×96. The input data consists of three features, namely the electric, cold and heat loads, forming a matrix of size 3×96. Since the convolution operation processes data along a channel dimension, a channel dimension is added to the input matrix to form a tensor of size 3×96×1. The encoder consists of four convolutional layers and the decoder of five transposed convolutional layers. The LeakyReLU activation function is used to alleviate the dying-neuron problem, enhance gradient flow and increase the expressive power of the model. The number of iterations is 500, and Adam is selected as the optimizer.
Conv2D in FIG. 2 is a convolutional layer used for the convolution operation on the features; this operation can capture the spatial structure and characteristics of the multiple loads. Conv2DTran is a transposed convolution operation that upsamples the input data so that the output feature map is larger than the input feature map. LeakyReLU is an activation function that introduces a small negative slope for negative inputs to avoid dying neurons (outputs stuck at zero); compared with the traditional ReLU activation function, LeakyReLU retains some response to negative inputs and better handles the vanishing-gradient problem. filters is the number of filters in a given convolutional layer; the filters are used in the convolution operation to extract multiple aspects of the multi-element load, and filters = n means the convolutional layer outputs n different feature maps. kernel defines the receptive field of the convolution operation; kernel = (i, j) means an i×j region of the input data is considered in each convolution operation. stride controls the step size of the sliding window over the input data; stride = (i, j) means the filter moves i elements at a time horizontally and j elements at a time vertically. padding adds extra elements around the edges of the input data; padding = (i, j) means i elements are padded vertically (top and bottom) and j elements horizontally (left and right), with the padded elements set to 0.
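Given the kernel, stride and padding parameters described above, the feature-map sizes through the encoder and decoder follow the standard output-size rules for convolution and transposed convolution, which can be sketched per spatial dimension as follows (the kernel = 4, stride = 2, padding = 1 example is illustrative, not taken from FIG. 2):

```python
# Standard output-size rules (per spatial dimension) for a convolution and a
# transposed convolution; integer division models the floor in Conv2D.
def conv_out(n, kernel, stride, padding):
    return (n + 2 * padding - kernel) // stride + 1

def conv_transpose_out(n, kernel, stride, padding):
    return (n - 1) * stride - 2 * padding + kernel

# With kernel=4, stride=2, padding=1 a length-96 row is halved by an
# encoder layer and restored by the matching decoder layer.
down = conv_out(96, kernel=4, stride=2, padding=1)            # 48
up = conv_transpose_out(down, kernel=4, stride=2, padding=1)  # 96
```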
Step five: train the AML-CAE. First, the normalized data X_norm is filled by linear interpolation to obtain the initialized data X_init. Then, in the k-th iteration, the artificial missing-position matrix M_art^(k) is updated (its missing rate and missing type are identical to those of M_real), and the Hadamard product of X_init and M_art^(k) gives the artificially masked data X_art^(k). Finally, X_art^(k) is likewise filled by linear interpolation to obtain the AML-CAE input data X_in^(k), which serves as the input of the AML-CAE training process and yields the reconstructed data X_rec^(k). These quantities can be expressed as:

X_init = Lin(X_norm)
X_art^(k) = X_init ⊙ M_art^(k)
X_in^(k) = Lin(X_art^(k))
X_rec^(k) = AML-CAE(X_in^(k))

where Lin(·) denotes a linear interpolation function that fills the missing values in its argument, ⊙ denotes the Hadamard product, and AML-CAE(·) denotes the forward propagation of the AML-CAE.
For the n-th group of samples X_in^(k,n), forward propagation yields the reconstructed result X_rec^(k,n). The error L_aml between input and reconstruction can be expressed as:

L_aml = ‖ M_real^(n) ⊙ ( X_in^(k,n) − X_rec^(k,n) ) ‖_F²
if the data sample is not trained in the current iteration, transferring to the next group of samples; otherwise, go to the next iteration.
Step six: by reducing L_aml, the AML-CAE learns the distribution of the non-missing values. A trained AML-CAE model is obtained after the AML-CAE iterations are completed, and the interpolated data X_n_imp can be expressed as:

X_n_imp = M_real ⊙ X_norm + (1 − M_real) ⊙ X_rec

where X_rec is the reconstruction output of the trained AML-CAE.
X_n_imp is then inverse-normalized to obtain the final complete data X_imp. The calculation formulas are as follows:

E_imp = E_n_imp × (max(E) − min(E)) + min(E)
C_imp = C_n_imp × (max(C) − min(C)) + min(C)
H_imp = H_n_imp × (max(H) − min(H)) + min(H)

where E_n_imp, C_n_imp, H_n_imp are the interpolated normalized electric, cold and heat loads, and X_imp is formed from E_imp, C_imp and H_imp.
the multi-element load missing data interpolation based on AML-CAE is realized. In order to visually observe the performance of AML-CAE on random deletion of multiple loads and interpolation under continuous deletion, a sample with a deletion rate of 30% is visualized as shown in fig. 3 and 4. A flow chart of an implementation of the present invention is shown in fig. 5.
The invention realizes the interpolation of the multi-element load missing data by building the AML-CAE model. The method adopts a convolution self-encoder as a deep learning network structure, combines an updating strategy of a manual missing position mask and an initialization method of linear interpolation, and is used for interpolating missing values in IES multi-element load data. The method adopts a self-supervision learning strategy and does not depend on a complete data set, so that the method has higher practicability and application meaning, and lays a foundation for the subsequent method for optimizing the IES operation.
The embodiments of the present invention have been described in detail above with reference to the accompanying drawings, but the present invention is not limited to the described embodiments. It will be apparent to those skilled in the art that various changes, modifications, substitutions and alterations can be made to these embodiments without departing from the principles and spirit of the invention, and yet fall within the scope of the invention.
Claims (3)
1. A multi-element load data complement method based on an artificial mask convolution self-encoder, characterized in that the method comprises the following steps:
step one: acquiring electrical, cold and heat load historical data of the IES;
step two: reshaping the multi-element load data and acquiring the missing positions, wherein the number of rows of the matrix is 3, corresponding respectively to the electric, cold and heat load data, and the number of columns of the matrix is T, representing the number of data sampling points per day in the acquired historical load; the feature matrix takes the original data with missing values, X, as input, and the missing-position matrix M_real is obtained from the positions of the missing data in X; the missing matrix M_real can be expressed as:

M_real(i, j) = 0 if X(i, j) is missing, and M_real(i, j) = 1 otherwise;
step three: normalizing the multi-element load data;
step four: building an AML-CAE model to realize missing-data interpolation of the multiple loads, wherein the AML-CAE network architecture consists of an encoder and a decoder, the encoder consisting of four convolutional layers and the decoder of five transposed convolutional layers;
step five: training AML-CAE;
step six: by reducing L_aml, the AML-CAE learns the distribution of the non-missing values; a trained AML-CAE model is obtained after the AML-CAE iterations are completed, and the interpolated data X_n_imp can be expressed as:

X_n_imp = M_real ⊙ X_norm + (1 − M_real) ⊙ X_rec

where X_rec is the reconstruction output of the trained AML-CAE;
performing inverse normalization on X_n_imp to obtain the final complete data X_imp, with calculation formulas:

E_imp = E_n_imp × (max(E) − min(E)) + min(E), and likewise for the cold and heat loads;
thereby realizing the AML-CAE-based interpolation of the missing multi-element load data.
2. The method for multi-element load data complement based on artificial mask convolution self-encoder of claim 1, wherein: the third step specifically comprises the following steps:
combining min-max normalization with the missing-position matrix M_real to normalize the electric, cold and heat loads respectively, with the following calculation formulas:

E_norm = M_re ⊙ (E − min(E)) / (max(E) − min(E))
C_norm = M_rc ⊙ (C − min(C)) / (max(C) − min(C))
H_norm = M_rh ⊙ (H − min(H)) / (max(H) − min(H))
wherein E, C, H represent the acquired electric, cold and heat load historical data; M_re, M_rc, M_rh represent the missing-position matrices corresponding to the electric, cold and heat loads respectively; min(·) and max(·) are functions extracting the minimum and maximum elements of a matrix; E_norm, C_norm, H_norm represent the normalized electric, cold and heat loads respectively; and X_norm represents the normalized result of X, with values ranging from 0 to 1.
3. The method for multi-element load data complement based on artificial mask convolution self-encoder of claim 1, wherein: the fifth step specifically comprises the following steps:
first, the normalized data X_norm is filled by linear interpolation to obtain the initialized data X_init; then, in the k-th iteration, the artificial missing-position matrix M_art^(k) is updated, and the Hadamard product of X_init and M_art^(k) gives the artificially masked data X_art^(k); X_art^(k) is then likewise filled by linear interpolation to obtain the AML-CAE input data X_in^(k), which serves as the input of the AML-CAE training process and yields the reconstructed data X_rec^(k); these quantities can be expressed as:

X_init = Lin(X_norm)
X_art^(k) = X_init ⊙ M_art^(k)
X_in^(k) = Lin(X_art^(k))
X_rec^(k) = AML-CAE(X_in^(k))

wherein Lin(·) denotes a linear interpolation function that fills the missing values in its argument, ⊙ denotes the Hadamard product, and AML-CAE(·) denotes the forward propagation of the AML-CAE;
for the n-th group of samples X_in^(k,n), forward propagation yields the reconstructed result X_rec^(k,n), and the error L_aml between input and reconstruction can be expressed as:

L_aml = ‖ M_real^(n) ⊙ ( X_in^(k,n) − X_rec^(k,n) ) ‖_F²;
if any data samples have not yet been trained in the current iteration, the next group of samples is processed; otherwise, the next iteration begins.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310880370.8A CN116883199A (en) | 2023-07-17 | 2023-07-17 | Multi-element load data complement method based on artificial mask convolution self-encoder |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310880370.8A CN116883199A (en) | 2023-07-17 | 2023-07-17 | Multi-element load data complement method based on artificial mask convolution self-encoder |
Publications (1)
Publication Number | Publication Date |
---|---|
CN116883199A true CN116883199A (en) | 2023-10-13 |
Family
ID=88271148
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202310880370.8A Pending CN116883199A (en) | 2023-07-17 | 2023-07-17 | Multi-element load data complement method based on artificial mask convolution self-encoder |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN116883199A (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117437086A (en) * | 2023-12-20 | 2024-01-23 | 中国电建集团贵阳勘测设计研究院有限公司 | Deep learning-based solar resource missing measurement data interpolation method and system |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117437086A (en) * | 2023-12-20 | 2024-01-23 | 中国电建集团贵阳勘测设计研究院有限公司 | Deep learning-based solar resource missing measurement data interpolation method and system |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111292525B (en) | Traffic flow prediction method based on neural network | |
CN107633486A (en) | Structure Magnetic Resonance Image Denoising based on three-dimensional full convolutional neural networks | |
CN107886510A (en) | A kind of prostate MRI dividing methods based on three-dimensional full convolutional neural networks | |
CN109636721B (en) | Video super-resolution method based on countermeasure learning and attention mechanism | |
CN109635763B (en) | Crowd density estimation method | |
CN109785279B (en) | Image fusion reconstruction method based on deep learning | |
CN113591954B (en) | Filling method of missing time sequence data in industrial system | |
CN116883199A (en) | Multi-element load data complement method based on artificial mask convolution self-encoder | |
CN114693064B (en) | Building group scheme generation performance evaluation method | |
CN116702627B (en) | Urban storm waterlogging rapid simulation method based on deep convolutional neural network | |
CN113935249B (en) | Upper-layer ocean thermal structure inversion method based on compression and excitation network | |
CN104408697B (en) | Image Super-resolution Reconstruction method based on genetic algorithm and canonical prior model | |
CN111861886A (en) | Image super-resolution reconstruction method based on multi-scale feedback network | |
CN111768326B (en) | High-capacity data protection method based on GAN (gas-insulated gate bipolar transistor) amplified image foreground object | |
CN115860215A (en) | Photovoltaic and wind power generation power prediction method and system | |
CN115439849B (en) | Instrument digital identification method and system based on dynamic multi-strategy GAN network | |
CN117039983A (en) | Photovoltaic output prediction method and terminal combined with channel attention mechanism | |
CN115907000A (en) | Small sample learning method for optimal power flow prediction of power system | |
CN116226654A (en) | Machine learning data forgetting method based on mask gradient | |
CN116152263A (en) | CM-MLP network-based medical image segmentation method | |
CN116070401A (en) | High-dimensional magnetic resonance image reconstruction method based on transform domain tensor low-rank priori depth expansion network | |
CN115100599A (en) | Mask transform-based semi-supervised crowd scene abnormality detection method | |
CN114140317A (en) | Image animation method based on cascade generation confrontation network | |
CN113537573A (en) | Wind power operation trend prediction method based on dual space-time feature extraction | |
CN116662766B (en) | Wind speed prediction method and device based on data two-dimensional reconstruction and electronic equipment |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||