CN116883199A - Multi-element load data complement method based on artificial mask convolution self-encoder


Info

Publication number
CN116883199A
CN116883199A (application CN202310880370.8A)
Authority
CN
China
Prior art keywords: data, aml, cae, missing, matrix
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310880370.8A
Other languages
Chinese (zh)
Inventor
蔡毅
董伟
张帆
赵晓东
方晓伦
周华锋
孙超
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
State Grid Zhejiang Electric Power Co Ltd Hangzhou Qiantang District Power Supply Co
Hangzhou Dianzi University
Original Assignee
State Grid Zhejiang Electric Power Co Ltd Hangzhou Qiantang District Power Supply Co
Hangzhou Dianzi University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by State Grid Zhejiang Electric Power Co Ltd Hangzhou Qiantang District Power Supply Co, Hangzhou Dianzi University filed Critical State Grid Zhejiang Electric Power Co Ltd Hangzhou Qiantang District Power Supply Co
Priority to CN202310880370.8A priority Critical patent/CN116883199A/en
Publication of CN116883199A publication Critical patent/CN116883199A/en
Pending legal-status Critical Current


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q50/06Energy or water supply
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F17/00Digital computing or data processing equipment or methods, specially adapted for specific functions
    • G06F17/10Complex mathematical operations
    • G06F17/16Matrix or vector computation, e.g. matrix-matrix or matrix-vector multiplication, matrix factorization
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/0464Convolutional networks [CNN, ConvNet]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/0895Weakly supervised learning, e.g. semi-supervised or self-supervised learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00Administration; Management
    • G06Q10/04Forecasting or optimisation specially adapted for administrative or management purposes, e.g. linear programming or "cutting stock problem"


Abstract

The invention relates to a multi-element load data complement method based on an artificial-mask convolutional self-encoder (AML-CAE), comprising the following steps. Step one: acquire historical data of the electric, cooling and heating loads of an IES. Step two: reshape the multi-element load data and obtain the missing position matrix. Step three: normalize the multi-element load data. Step four: build the AML-CAE model to realize missing data interpolation for the multiple loads. Step five: train the AML-CAE. Step six: by reducing L_aml, the AML-CAE learns the distribution of the non-missing values, and the trained AML-CAE model is obtained once iteration is completed. The invention adopts a convolutional self-encoder as the deep learning network structure and combines an update strategy for the artificial missing position mask with a linear interpolation initialization method to interpolate missing values in IES multi-element load data. Because the method adopts a self-supervised learning strategy and does not depend on a complete data set, it has high practicability and application value.

Description

Multi-element load data complement method based on artificial mask convolution self-encoder
Technical Field
The invention relates to a multi-element load data interpolation method based on an artificial mask convolution self-encoder (AML-CAE), in particular to a multi-element load missing data interpolation method based on an artificial mask linear filling convolution self-encoder (AML-CAE), and belongs to the technical field of multi-element load missing data interpolation.
Background
The development of Integrated Energy Systems (IES) is an important response to the challenges of climate change, energy security and economic sustainability. With the development of artificial intelligence technology, more and more data-driven IES control and operation optimization methods are being applied. However, these methods rely on efficient analysis of large amounts of system state data. In practice, due to sensor or transmission faults, missing values usually exist in the historical electric, cooling and heating load data recorded by the system, so the problem of completing IES multi-energy load data has attracted wide attention.
Most current deep learning methods for completing missing data adopt a supervised training strategy and require complete data for training. In reality, however, the distribution of missing values is relatively random, and selecting a continuous, complete data set for training becomes challenging.
Disclosure of Invention
In order to overcome the defects of the prior researches, the invention provides a multi-element load data complement method based on an artificial mask convolution self-encoder (AML-CAE).
The method for completing the multi-element load data based on the artificial mask convolution self-encoder comprises the following specific steps:
step one, acquiring electric, cold and heat load historical data of an IES.
And step two, reshaping the multi-element load data and obtaining a missing position matrix. To meet the format requirements of convolution and account for the correlation between the multiple attributes, the time series of the multi-energy loads are converted into a feature matrix. The number of rows is 3, corresponding to the electric, cooling and heating load data. The number of columns is T, the number of data sampling points of the acquired historical load per day. The feature matrix with its original missing entries is taken as the input X, and the missing position matrix M_real is obtained from the positions of the missing data in X. The missing position matrix M_real can be expressed as:

M_real(i, j) = 1 if X(i, j) is observed, and M_real(i, j) = 0 if X(i, j) is missing.
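The reshaping and mask construction of step two can be sketched in a few lines of numpy. This is an illustrative sketch, not code from the patent: it assumes np.nan marks missing sampling points and the 1-for-observed / 0-for-missing mask convention, and the function name is invented here.

```python
import numpy as np

def reshape_and_mask(elec, cold, heat):
    """Stack one day's electric, cooling and heating series (length T,
    with np.nan at missing points) into the 3 x T feature matrix X and
    build the missing-position matrix M_real (1 = observed, 0 = missing)."""
    X = np.stack([elec, cold, heat])           # shape (3, T)
    M_real = (~np.isnan(X)).astype(float)      # indicator of observed entries
    return X, M_real
```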
and thirdly, normalizing the multi-element load data. There is a strong correlation between the multipotent loads of IES, and taking them as parallel inputs helps to improve the accuracy of missing data computation. However, since there is a large difference in values between the electric load, the cold load and the heat load, it is necessary to normalize the multi-energy load data, respectively, in order to allow the model to converge more quickly. The invention combines maximum and minimum normalization (Min-Max Normalization) with the deletion position matrix M real And respectively carrying out normalization treatment on the electric load, the cold load and the heat load. The calculation formula is as follows:
where E, C and H represent the acquired electric, cooling and heating load history data; M_re, M_rc and M_rh represent the missing position matrices corresponding to the electric, cooling and heating loads respectively; min(·) and max(·) represent functions extracting the minimum and maximum element of a matrix respectively; E_norm, C_norm and H_norm represent the normalized electric, cooling and heating loads respectively; and X_norm represents the normalized result of X, with values ranging from 0 to 1.
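A minimal sketch of the per-load min-max normalization of step three, under the assumption (not stated explicitly in the text) that min(·) and max(·) are evaluated only over the observed entries of each load:

```python
import numpy as np

def masked_min_max(row, mask):
    """Min-max normalize one load series using only the observed entries
    (mask == 1); missing entries (np.nan) stay nan. Returns the scaled
    row plus (min, max) for the later inverse normalization."""
    obs = row[mask == 1]
    lo, hi = obs.min(), obs.max()
    return (row - lo) / (hi - lo), lo, hi
```

Keeping lo and hi per load is what makes the inverse normalization of step six possible.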
And step four, building the AML-CAE model to realize missing data interpolation for the multiple loads. The AML-CAE network architecture consists of an encoder and a decoder. The encoder consists of four convolutional layers and the decoder of five transposed convolutional layers. The LeakyReLU activation function is used to mitigate the dying-neuron problem, enhance gradient flow and increase the expressive power of the model.
Step five, training the AML-CAE. First, the normalized data X_norm is processed by linear interpolation to obtain the initialized data X_init. Then, in the k-th iteration, the artificial missing position matrix M_art^k is updated, and the Hadamard product of X_init and M_art^k is computed to obtain the artificially masked data X_mask^k. Finally, X_mask^k is likewise processed by linear interpolation to obtain the AML-CAE input data X_in^k, which is taken as the input of the AML-CAE training process to obtain the model-reconstructed data X_rec^k. These quantities can be expressed as:

X_mask^k = X_init ⊙ M_art^k, X_in^k = Lin(X_mask^k), X_rec^k = AML-CAE(X_in^k)
where Lin(·) represents a linear interpolation function used to fill the missing values in its argument, ⊙ represents the Hadamard product, and AML-CAE(·) represents the forward propagation of the AML-CAE.
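The Lin(·) filling and the artificial-mask Hadamard step can be sketched as follows. This is an illustrative numpy sketch: it interpolates each load row independently and marks masked entries with np.nan; the function names are invented here.

```python
import numpy as np

def lin_fill(row):
    """Lin(.): fill nan gaps in a 1-D series by linear interpolation
    between the surrounding observed points."""
    idx = np.arange(row.size)
    obs = ~np.isnan(row)
    out = row.copy()
    out[~obs] = np.interp(idx[~obs], idx[obs], row[obs])
    return out

def artificial_mask_step(X_init, M_art):
    """One masking step of an iteration: Hadamard-style masking with the
    artificial missing-position matrix (masked entries become nan),
    then linear re-filling row by row to form the network input."""
    X_mask = np.where(M_art == 1, X_init, np.nan)
    return np.vstack([lin_fill(r) for r in X_mask])
```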
For the n-th group of samples X_in,n^k, the reconstructed result X_rec,n^k is obtained by forward propagation. The error L_aml between the input and the reconstruction can be expressed as:

L_aml = || M_real ⊙ (X_in,n^k − X_rec,n^k) ||_F^2
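A sketch of such a masked reconstruction error. The exact form of L_aml is given in the patent only as an image, so the restriction to observed positions and the averaging over the observed count are assumptions of this sketch:

```python
import numpy as np

def l_aml(M_real, X_in, X_rec):
    """Reconstruction error restricted to the originally observed
    positions: a masked mean squared error (the exact normalization
    is an assumption, not taken from the patent)."""
    diff = M_real * (X_in - X_rec)       # zero out unobserved positions
    return float((diff ** 2).sum() / M_real.sum())
```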
the present invention uses Adam optimization algorithm to update network parameters. If the data sample is not trained in the current iteration, transferring to the next group of samples; otherwise, go to the next iteration.
Step six, by reducing L_aml the AML-CAE learns the distribution of the non-missing values, and the trained AML-CAE model is obtained after AML-CAE iteration is completed. The interpolated data X_n_imp can be expressed as:

X_n_imp = M_real ⊙ X_norm + (1 − M_real) ⊙ X_rec

where X_rec is the reconstruction produced by the trained AML-CAE from the linearly filled X_norm.
X_n_imp then undergoes inverse normalization processing to obtain the final complete data X_imp. The calculation formula is as follows:

E_imp = E_n_imp · (max(E) − min(E)) + min(E), and analogously for the cooling and heating rows of X_imp.
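The combination of observed values with reconstructed values and the per-load inverse normalization of step six can be sketched as follows (illustrative numpy sketch; lows/highs are the per-load minima and maxima saved during normalization):

```python
import numpy as np

def impute_and_denorm(X_norm, M_real, X_rec, lows, highs):
    """Keep observed values, take the network reconstruction at missing
    positions (X_n_imp), then invert the per-load min-max scaling."""
    X_filled = np.where(M_real == 1, X_norm, X_rec)   # X_n_imp
    scale = (highs - lows)[:, None]                   # one (min, max) per load row
    return X_filled * scale + lows[:, None]           # X_imp
```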
Multi-element load missing data interpolation based on AML-CAE is thereby realized. The method lays a foundation for subsequent research on multi-element load prediction, multi-element load energy efficiency analysis, characteristic distribution of multi-element load energy efficiency, IES operation regulation, energy planning and the like.
Compared with the prior art, the invention has the beneficial effects that:
the invention realizes the missing data interpolation of the multi-element load data of the IES system in a self-supervision learning mode. According to the method, the load data in the period does not need to be screened, and the training sample can be directly constructed by utilizing incomplete historical data in a self-supervision learning mode. Firstly, the method adopts linear interpolation filling for the true missing position and the artificial mask position of the data. And then the artificial missing position mask is randomly updated once in each round of model training, so that the missing part of the linearization value can effectively restore the real distribution, and the interpolation precision is improved.
The invention realizes interpolation of multi-element load missing data by building the AML-CAE model. The method adopts a convolutional self-encoder as the deep learning network structure and combines an update strategy for the artificial missing position mask with a linear interpolation initialization method to interpolate missing values in IES multi-element load data. Because the method adopts a self-supervised learning strategy and does not depend on a complete data set, it has high practicability and application value, and lays a foundation for subsequent IES operation optimization methods.
Drawings
In order to more clearly illustrate the embodiments of the invention or the technical solutions in the prior art, the drawings that are required in the embodiments or the description of the prior art will be briefly described, it being obvious that the drawings in the following description are only some embodiments of the invention, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a schematic representation of a simulated artificial deletion scene of a method for supplementing multiple load data based on an artificial mask convolution self-encoder of the present invention, (a) is random deletion; (b) is a continuous deletion;
FIG. 2 is a diagram of the structure and parameters of AML-CAE of the present invention;
FIG. 3 is a visual representation of each feature of a sample with a 30% loss rate for random loss according to the present invention, (a) electrical load, (b) cold load, and (c) thermal load;
FIG. 4 is a visual representation of each feature of a sample with a 30% loss rate for the continuous loss of the present invention, (a) electrical load, (b) cold load, and (c) thermal load;
fig. 5 is a flow chart of an embodiment of the present invention.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
As shown in fig. 1, the invention is further illustrated by simulation and analysis of missing data interpolation on the complete real data set of the Tempe campus IES at Arizona State University. Since the selected IES multi-energy load data is complete, the missing scenes need to be simulated artificially in order to test the proposed missing data interpolation scheme. There are two typical missing scenarios: one is random (discrete) missing, and the other is continuous missing. The complete real data set of the Tempe campus IES at Arizona State University comes from Campus Metabolism.
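The two simulated missing scenarios can be sketched as mask generators. This is an illustrative numpy sketch of one plausible construction (one contiguous gap per row for the continuous case); the patent does not specify the exact simulation procedure:

```python
import numpy as np

def random_missing_mask(shape, rate, rng):
    """Discrete random missing: each entry is dropped independently
    with probability `rate` (0 = missing, 1 = observed)."""
    return (rng.random(shape) >= rate).astype(float)

def continuous_missing_mask(shape, rate, rng):
    """Continuous missing: one contiguous gap of ceil(rate * T)
    points at a random position in each load row."""
    rows, T = shape
    gap = int(np.ceil(rate * T))
    M = np.ones(shape)
    for r in range(rows):
        start = rng.integers(0, T - gap + 1)
        M[r, start:start + gap] = 0.0
    return M
```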
The method for completing the multi-element load data based on the artificial mask convolution self-encoder comprises the following specific steps:
step one, acquiring historical data of electric, cold and heat loads with a time resolution of 15 minutes in the Tanpei school district of the state university of Aristolochia in 2020.
And step two, reshaping the multi-element load data of the simulated missing scene and obtaining a missing position matrix. To meet the format requirements of convolution and account for the correlation between the multiple attributes, the time series of the multi-energy loads are converted into a feature matrix. The number of rows is 3, corresponding to the electric, cooling and heating load data. The number of columns is 96, the number of data sampling points of the acquired historical load per day. The feature matrix with its original missing entries is taken as the input X, and the missing position matrix M_real is obtained from the positions of the missing data in X. The missing position matrix M_real can be expressed as:

M_real(i, j) = 1 if X(i, j) is observed, and M_real(i, j) = 0 if X(i, j) is missing.
and thirdly, normalizing the multi-element load data. There is a strong correlation between the multipotent loads of IES, and taking them as parallel inputs helps to improve the accuracy of missing data computation. However, since there is a large difference in values between the electric load, the cold load and the heat load, it is necessary to normalize the multi-energy load data, respectively, in order to allow the model to converge more quickly. The invention combines maximum and minimum normalization (Min-Max Normalization) with the deletion position matrix M real And respectively carrying out normalization treatment on the electric load, the cold load and the heat load. The calculation formula is as follows:
where E, C and H represent the acquired electric, cooling and heating load history data; M_re, M_rc and M_rh represent the missing position matrices corresponding to the electric, cooling and heating loads respectively; min(·) and max(·) represent functions extracting the minimum and maximum element of a matrix respectively; E_norm, C_norm and H_norm represent the normalized electric, cooling and heating loads respectively; and X_norm represents the normalized result of X, with values ranging from 0 to 1.
And fourthly, building an AML-CAE model to realize missing data interpolation of multiple loads. The structure and parameters of the AML-CAE used in the present invention are shown in FIG. 2. Because the selected data set time resolution is 15 minutes, each feature contains 96 sampling points per day, i.e., is 1×96 in size. The input data consists of three characteristics, namely an electrical load, a cold load and a hot load, forming a matrix of size 3 x 96. Since convolution operations require processing in the channel dimension, it is necessary to add a channel dimension to the input matrix to form a matrix of size 3 x 96 x 1. The encoder consists of four convolutional layers and the decoder consists of five transposed convolutional layers. The use of the LeakyReLU activation function solves the neuronal death problem, enhances gradient flow, and increases the expressive power of the model. The iteration number is 500, and the optimizer selects Adam algorithm.
Conv2D in FIG. 2 is a convolutional layer used for the convolution operation on the features. This operation can capture the spatial structure and characteristics of the multiple loads. Conv2Dtran is a transposed (de-)convolution operation that upsamples the input data so that the output feature map is larger than the input feature map. LeakyReLU is an activation function that introduces a small negative slope for negative inputs to avoid neuron death (constantly zero outputs). Compared with the traditional ReLU activation function, LeakyReLU retains some response to negative inputs and better handles the vanishing-gradient problem. filters is the number of filters in a given convolutional layer; the filters are used in the convolution operation to extract multiple aspects of the multi-element load, and filters=n means the layer outputs n different feature maps. kernel defines the receptive field of the convolution operation; kernel=(i, j) means an i×j region of the input data is considered in each convolution step. stride controls the step size of the sliding window over the input data; stride=(i, j) means the filter moves i elements at a time in the horizontal direction and j elements at a time in the vertical direction. padding represents a padding operation that adds extra elements around the edges of the input data; padding=(i, j) means i elements are padded in the vertical direction (top and bottom) and j elements in the horizontal direction (left and right), with the padded elements set to 0.
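The interaction of kernel, stride and padding fixes the spatial size of each layer's output. The standard size formulas (not specific to this patent; they assume output_padding = 0 for the transposed case) can be checked with a short sketch:

```python
def conv2d_out(size, kernel, stride, padding):
    """Output size of one spatial dimension of Conv2D:
    floor((n + 2p - k) / s) + 1."""
    return (size + 2 * padding - kernel) // stride + 1

def conv2d_transpose_out(size, kernel, stride, padding):
    """Output size of one spatial dimension of a transposed convolution
    (output_padding = 0): (n - 1) * s - 2p + k."""
    return (size - 1) * stride - 2 * padding + kernel
```

For example, a stride-2 layer with kernel 4 and padding 1 halves a 96-point axis to 48, and the matching transposed layer restores 48 back to 96.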
Step five, training the AML-CAE. First, the normalized data X_norm is processed by linear interpolation to obtain the initialized data X_init. Then, in the k-th iteration, the artificial missing position matrix M_art^k (whose missing rate and missing type are identical to those of M_real) is updated, and the Hadamard product of X_init and M_art^k is computed to obtain the artificially masked data X_mask^k. Finally, X_mask^k is likewise processed by linear interpolation to obtain the AML-CAE input data X_in^k, which is taken as the input of the AML-CAE training process to obtain the model-reconstructed data X_rec^k. These quantities can be expressed as:

X_mask^k = X_init ⊙ M_art^k, X_in^k = Lin(X_mask^k), X_rec^k = AML-CAE(X_in^k)
where Lin(·) represents a linear interpolation function used to fill the missing values in its argument, ⊙ represents the Hadamard product, and AML-CAE(·) represents the forward propagation of the AML-CAE.
For the n-th group of samples X_in,n^k, the reconstructed result X_rec,n^k is obtained by forward propagation. The error L_aml between the input and the reconstruction can be expressed as:

L_aml = || M_real ⊙ (X_in,n^k − X_rec,n^k) ||_F^2
If data samples remain untrained in the current iteration, move to the next group of samples; otherwise, proceed to the next iteration.
Step six, by reducing L_aml the AML-CAE learns the distribution of the non-missing values, and the trained AML-CAE model is obtained after AML-CAE iteration is completed. The interpolated data X_n_imp can be expressed as:

X_n_imp = M_real ⊙ X_norm + (1 − M_real) ⊙ X_rec

where X_rec is the reconstruction produced by the trained AML-CAE from the linearly filled X_norm.
X_n_imp then undergoes inverse normalization processing to obtain the final complete data X_imp. The calculation formula is as follows:

E_imp = E_n_imp · (max(E) − min(E)) + min(E), and analogously for the cooling and heating rows of X_imp.
Multi-element load missing data interpolation based on AML-CAE is thereby realized. To visually observe the interpolation performance of AML-CAE under random and continuous missing of the multiple loads, samples with a 30% missing rate are visualized as shown in figs. 3 and 4. A flow chart of an implementation of the invention is shown in fig. 5.
The invention realizes interpolation of multi-element load missing data by building the AML-CAE model. The method adopts a convolutional self-encoder as the deep learning network structure and combines an update strategy for the artificial missing position mask with a linear interpolation initialization method to interpolate missing values in IES multi-element load data. Because the method adopts a self-supervised learning strategy and does not depend on a complete data set, it has high practicability and application value, and lays a foundation for subsequent IES operation optimization methods.
The embodiments of the present invention have been described in detail above with reference to the accompanying drawings, but the present invention is not limited to the described embodiments. It will be apparent to those skilled in the art that various changes, modifications, substitutions and alterations can be made to these embodiments without departing from the principles and spirit of the invention, and yet fall within the scope of the invention.

Claims (3)

1. A multi-element load data complement method based on an artificial mask convolution self-encoder is characterized in that: the method comprises the following steps:
step one: acquiring electrical, cold and heat load historical data of the IES;
step two: reshaping the multi-element load data and obtaining the missing positions, wherein the number of rows of the matrix is 3, corresponding to the electric, cooling and heating load data respectively; the number of columns is T, the number of data sampling points of the acquired historical load per day; the feature matrix with the original missing data X is taken as input, and the missing position matrix M_real is obtained from the positions of the missing data in X; the missing position matrix M_real can be expressed as:

M_real(i, j) = 1 if X(i, j) is observed, and M_real(i, j) = 0 if X(i, j) is missing;
step three: normalizing the multi-element load data;
step four: building the AML-CAE model to realize missing data interpolation for the multiple loads, wherein the AML-CAE network architecture consists of an encoder and a decoder, the encoder consists of four convolutional layers, and the decoder consists of five transposed convolutional layers;
step five: training AML-CAE;
step six: by reducing L_aml the AML-CAE learns the distribution of the non-missing values, and the trained AML-CAE model is obtained after AML-CAE iteration is completed; the interpolated data X_n_imp can be expressed as:

X_n_imp = M_real ⊙ X_norm + (1 − M_real) ⊙ X_rec
performing inverse normalization processing on X_n_imp to obtain the final complete data X_imp; the calculation formula is as follows:

E_imp = E_n_imp · (max(E) − min(E)) + min(E), and analogously for the cooling and heating rows of X_imp;
thereby realizing AML-CAE-based interpolation of the multi-element load missing data.
2. The method for multi-element load data complement based on artificial mask convolution self-encoder of claim 1, wherein: the third step specifically comprises the following steps:
combining maximum-minimum normalization with the missing position matrix M_real and normalizing the electric, cooling and heating loads respectively, wherein the calculation formula is as follows:

E_norm = (E − min(E)) / (max(E) − min(E)), and analogously for C_norm and H_norm, with min(·) and max(·) taken over the entries marked as observed by M_re, M_rc and M_rh respectively;
wherein E, C and H represent the acquired electric, cooling and heating load history data; M_re, M_rc and M_rh represent the missing position matrices corresponding to the electric, cooling and heating loads respectively; min(·) and max(·) represent functions extracting the minimum and maximum element of a matrix respectively; E_norm, C_norm and H_norm represent the normalized electric, cooling and heating loads respectively; and X_norm represents the normalized result of X, with values ranging from 0 to 1.
3. The method for multi-element load data complement based on artificial mask convolution self-encoder of claim 1, wherein: the fifth step specifically comprises the following steps:
first, the normalized data X_norm is processed by linear interpolation to obtain the initialized data X_init; then, in the k-th iteration, the artificial missing position matrix M_art^k is updated, and the Hadamard product of X_init and M_art^k is computed to obtain the artificially masked data X_mask^k; X_mask^k is then likewise processed by linear interpolation to obtain the AML-CAE input data X_in^k, which is taken as the input of the AML-CAE training process to obtain the model-reconstructed data X_rec^k; these quantities can be expressed as:

X_mask^k = X_init ⊙ M_art^k, X_in^k = Lin(X_mask^k), X_rec^k = AML-CAE(X_in^k)
wherein Lin(·) represents a linear interpolation function used to fill the missing values in its argument, ⊙ represents the Hadamard product, and AML-CAE(·) represents the forward propagation of AML-CAE;
for the n-th group of samples X_in,n^k, the reconstructed result X_rec,n^k is obtained by forward propagation; the error L_aml between the input and the reconstruction can be expressed as:

L_aml = || M_real ⊙ (X_in,n^k − X_rec,n^k) ||_F^2
if data samples remain untrained in the current iteration, move to the next group of samples; otherwise, proceed to the next iteration.
CN202310880370.8A 2023-07-17 2023-07-17 Multi-element load data complement method based on artificial mask convolution self-encoder Pending CN116883199A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310880370.8A CN116883199A (en) 2023-07-17 2023-07-17 Multi-element load data complement method based on artificial mask convolution self-encoder

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310880370.8A CN116883199A (en) 2023-07-17 2023-07-17 Multi-element load data complement method based on artificial mask convolution self-encoder

Publications (1)

Publication Number Publication Date
CN116883199A true CN116883199A (en) 2023-10-13

Family

ID=88271148

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310880370.8A Pending CN116883199A (en) 2023-07-17 2023-07-17 Multi-element load data complement method based on artificial mask convolution self-encoder

Country Status (1)

Country Link
CN (1) CN116883199A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117437086A (en) * 2023-12-20 2024-01-23 中国电建集团贵阳勘测设计研究院有限公司 Deep learning-based solar resource missing measurement data interpolation method and system


Similar Documents

Publication Publication Date Title
CN111292525B (en) Traffic flow prediction method based on neural network
CN107633486A (en) Structure Magnetic Resonance Image Denoising based on three-dimensional full convolutional neural networks
CN107886510A (en) A kind of prostate MRI dividing methods based on three-dimensional full convolutional neural networks
CN109636721B (en) Video super-resolution method based on countermeasure learning and attention mechanism
CN109635763B (en) Crowd density estimation method
CN109785279B (en) Image fusion reconstruction method based on deep learning
CN113591954B (en) Filling method of missing time sequence data in industrial system
CN116883199A (en) Multi-element load data complement method based on artificial mask convolution self-encoder
CN114693064B (en) Building group scheme generation performance evaluation method
CN116702627B (en) Urban storm waterlogging rapid simulation method based on deep convolutional neural network
CN113935249B (en) Upper-layer ocean thermal structure inversion method based on compression and excitation network
CN104408697B (en) Image Super-resolution Reconstruction method based on genetic algorithm and canonical prior model
CN111861886A (en) Image super-resolution reconstruction method based on multi-scale feedback network
CN111768326B (en) High-capacity data protection method based on GAN (gas-insulated gate bipolar transistor) amplified image foreground object
CN115860215A (en) Photovoltaic and wind power generation power prediction method and system
CN115439849B (en) Instrument digital identification method and system based on dynamic multi-strategy GAN network
CN117039983A (en) Photovoltaic output prediction method and terminal combined with channel attention mechanism
CN115907000A (en) Small sample learning method for optimal power flow prediction of power system
CN116226654A (en) Machine learning data forgetting method based on mask gradient
CN116152263A (en) CM-MLP network-based medical image segmentation method
CN116070401A (en) High-dimensional magnetic resonance image reconstruction method based on transform domain tensor low-rank priori depth expansion network
CN115100599A (en) Mask transform-based semi-supervised crowd scene abnormality detection method
CN114140317A (en) Image animation method based on cascade generation confrontation network
CN113537573A (en) Wind power operation trend prediction method based on dual space-time feature extraction
CN116662766B (en) Wind speed prediction method and device based on data two-dimensional reconstruction and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination