WO2022267960A1 - Federated attention DBN collaborative detection system based on client selection - Google Patents

Federated attention DBN collaborative detection system based on client selection

Info

Publication number
WO2022267960A1
Authority
WO
WIPO (PCT)
Prior art keywords
dbn
training
concentrator
data
federated
Prior art date
Application number
PCT/CN2022/098981
Other languages
English (en)
French (fr)
Inventor
夏卓群
陈亚玲
尹波
廖曙光
邢利
文琴
Original Assignee
长沙理工大学
长沙麦融高科股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 长沙理工大学, 长沙麦融高科股份有限公司
Publication of WO2022267960A1

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/21 - Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 - Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00 - Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/60 - Protecting data
    • G06F21/62 - Protecting access to data via a platform, e.g. using keys or access control rules
    • G06F21/6218 - Protecting access to data via a platform, e.g. using keys or access control rules, to a system of files or objects, e.g. local or distributed file system or database
    • G06F21/6245 - Protecting personal data, e.g. for financial or medical purposes
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/08 - Learning methods
    • G06N3/084 - Backpropagation, e.g. using gradient descent
    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y04 - INFORMATION OR COMMUNICATION TECHNOLOGIES HAVING AN IMPACT ON OTHER TECHNOLOGY AREAS
    • Y04S - SYSTEMS INTEGRATING TECHNOLOGIES RELATED TO POWER NETWORK OPERATION, COMMUNICATION OR INFORMATION TECHNOLOGIES FOR IMPROVING THE ELECTRICAL POWER GENERATION, TRANSMISSION, DISTRIBUTION, MANAGEMENT OR USAGE, i.e. SMART GRIDS
    • Y04S10/00 - Systems supporting electrical power generation, transmission or distribution
    • Y04S10/50 - Systems or methods supporting the power network operation or management, involving a certain degree of interaction with the load-side end user applications

Definitions

  • the invention relates to the technical field of data processing, in particular to a federated attention DBN collaborative detection system based on client selection.
  • the information security of the smart grid is becoming more and more important.
  • the advanced measurement system AMI is an important part of the smart grid.
  • the security of the advanced metering system is an urgent problem that must be solved for smart grid security. Since AMI is a key system of the smart grid, it is vulnerable to network attacks, and the data collected by the smart meters is sensitive, which may cause privacy leakage.
  • detection models in the related art typically suffer from technical problems such as "low learning efficiency", "low model training accuracy" and "privacy leakage".
  • the present invention aims to solve at least the technical problems existing in the prior art.
  • the present invention proposes a federated attention DBN collaborative detection system based on client selection, which can effectively improve the efficiency of federated learning, reduce the number of concentrators that need to be trained, reduce the communication and computation overhead between the concentrators and the data center, improve the accuracy of model training, and enhance the security of data privacy.
  • the invention also proposes a DBN detection method with an attention mechanism.
  • the present invention also proposes a federated attention DBN cooperative detection device based on client selection and having the DBN detection method of the above-mentioned attention mechanism.
  • the invention also proposes a computer-readable storage medium.
  • this embodiment provides a federated attention DBN collaborative detection system based on client selection, including:
  • a plurality of smart meters for collecting power data;
  • a plurality of concentrators, each communicatively connected to a plurality of the smart meters under its jurisdiction, used to obtain the power data from the corresponding smart meters and to train on the power data with a DBN training model with an attention mechanism to obtain training parameters;
  • a data center, communicatively connected to all the concentrators, which allocates the DBN training model to the concentrators according to the concentrators' resources, obtains the training parameters from each concentrator, performs federated average aggregation on the training parameters, and sends the result of the federated average aggregation to the concentrators selected in the next round, so that the concentrators train the DBN training model to convergence.
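The federated average aggregation step above can be sketched as follows. This is a minimal illustration of the FedAvg-style weighted average, assuming each concentrator reports its parameters as lists of floats together with its local sample count; the function and variable names are hypothetical, not from the patent:

```python
def federated_average(updates):
    """FedAvg-style aggregation: sample-count-weighted average of each
    parameter. `updates` is a list of (params, n_samples) pairs, where
    params maps parameter names to lists of floats."""
    total = sum(n for _, n in updates)
    keys = updates[0][0].keys()
    agg = {}
    for k in keys:
        weighted = [[v * n / total for v in params[k]] for params, n in updates]
        agg[k] = [sum(col) for col in zip(*weighted)]
    return agg

# Two concentrators report weights for one layer; the second holds twice the data.
u1 = ({"w": [0.0, 3.0]}, 100)   # concentrator 1, 100 local samples
u2 = ({"w": [3.0, 0.0]}, 200)   # concentrator 2, 200 local samples
agg = federated_average([u1, u2])
print(agg["w"])  # [2.0, 1.0] -- pulled toward the data-rich concentrator
```

The aggregate is then the model sent to the concentrators selected for the next round.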
  • the federated attention DBN collaborative detection system based on client selection includes multiple smart meters, concentrators, and data centers.
  • a large amount of power data is collected by the smart meters and then stored on the concentrators. Because the smart grid's Advanced Metering Infrastructure (AMI) is frequently subject to network attacks, and power data has high privacy and security requirements, a DBN training model with an attention mechanism is used on the concentrators to train on the power data to obtain training parameters; the concentrators do not communicate with one another directly, which better improves the security of the power data.
  • the data center is connected to the concentrator, obtains the training parameters, and performs federated average aggregation on the training parameters.
  • the concentrator resources include communication quality and idle CPU/GPU capacity; the data center sends the result of the federated average aggregation to the concentrators selected in the next round, so that the concentrators continue training on the power data until the DBN training model converges.
  • the federated attention DBN collaborative detection system based on client selection provided by this embodiment can greatly shorten the training time while achieving accuracy close to that of the centralized method; compared with a distributed detection model, it outperforms distributed, unaggregated on-device training models while protecting data privacy well.
  • the concentrator includes a data collection module, a data processing module, and an attack detection module; the data collection module is communicatively connected with the data processing module, and the data collection module is communicatively connected with the attack detection module.
  • this embodiment provides a DBN detection method with an attention mechanism, which is applied to a data center and includes the following steps:
  • the training parameters are obtained by the concentrator using the DBN training model with attention mechanism to train the power data;
  • the concentrators train on the power data to obtain training parameters; the data center then performs federated average aggregation on the training parameters and sends the result of the federated average aggregation to the concentrators selected in the next round, so that the concentrators continue training on the power data until the DBN training model converges.
  • the DBN detection method with an attention mechanism provided by this embodiment can effectively shorten the training time and achieves accuracy close to that of the centralized method; compared with a distributed detection model, it outperforms distributed, unaggregated on-device training models while protecting data privacy well.
  • the input matrices of the attention mechanism include a key matrix, a value matrix, and a query matrix;
  • the output matrix of the attention mechanism includes a context matrix.
  • this embodiment provides a DBN detection method with an attention mechanism, which is applied to a concentrator and includes the following steps:
  • the result of the federated average aggregation from the data center is received, and according to the result of the federated average aggregation, the power data is continuously trained until the DBN training model converges.
  • the concentrator receives the DBN training model with an attention mechanism from the data center, obtains the power data collected by the smart meters and keeps it in the concentrator, trains on the power data with the DBN training model to obtain training parameters, sends the training parameters to the data center, receives the result of the federated average aggregation from the data center, and, according to that result, continues training on the power data in the concentrator until the DBN training model converges.
  • the DBN detection method with an attention mechanism provided by this embodiment can effectively shorten the training time and achieves accuracy close to that of the centralized method; compared with a distributed detection model, it outperforms distributed, unaggregated on-device training models while protecting data privacy well.
  • said using said DBN training model to train said power data to obtain training parameters comprises the following steps:
  • Step S1: inputting the power data into the first-layer RBM for pre-training to obtain a training result;
  • Step S2: inputting the training result into the second-layer RBM for training;
  • Step S3: repeating step S1 and step S2 until the maximum number of iterations is reached;
  • Step S4: performing backpropagation with the softmax layer to fine-tune the weights of the whole DBN network.
  • the RBM includes a visible layer and a hidden layer, and further includes a step of: performing layer-by-layer training on the RBM, and calculating activation probabilities of the visible layer and the hidden layer using an activation function.
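The layer-by-layer RBM training with sigmoid activation probabilities can be illustrated by the sketch below, which uses one contrastive-divergence (CD-1) step, a common way to pre-train an RBM. The array sizes, learning rate, and function names are illustrative assumptions, not taken from the patent:

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def hidden_probs(v, W, c):
    """Activation probability of each hidden unit given visible vector v:
    p(h_j = 1 | v) = sigmoid(c_j + sum_i v_i * W_ij)."""
    return [sigmoid(c[j] + sum(v[i] * W[i][j] for i in range(len(v))))
            for j in range(len(c))]

def visible_probs(h, W, b):
    """Activation probability of each visible unit given hidden vector h:
    p(v_i = 1 | h) = sigmoid(b_i + sum_j h_j * W_ij)."""
    return [sigmoid(b[i] + sum(h[j] * W[i][j] for j in range(len(h))))
            for i in range(len(b))]

def cd1_update(v0, W, b, c, lr=0.1):
    """One CD-1 step: positive phase, reconstruction, negative phase,
    then a gradient-style update of the weights W."""
    ph0 = hidden_probs(v0, W, c)
    v1 = visible_probs(ph0, W, b)      # reconstruction (probabilities)
    ph1 = hidden_probs(v1, W, c)
    for i in range(len(v0)):
        for j in range(len(c)):
            W[i][j] += lr * (v0[i] * ph0[j] - v1[i] * ph1[j])
    return ph0

random.seed(0)
W = [[random.uniform(-0.1, 0.1) for _ in range(3)] for _ in range(4)]
b, c = [0.0] * 4, [0.0] * 3
h = cd1_update([1.0, 0.0, 1.0, 0.0], W, b, c)
print(h)  # hidden activation probabilities, each in (0, 1)
```

Stacking RBMs, the hidden probabilities of one layer become the visible input of the next, matching steps S1-S3 above.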
  • this embodiment provides a client-selection-based federated attention DBN collaborative detection device, including a memory, a processor, and a computer program stored in the memory and executable on the processor; when the processor executes the computer program, the DBN detection method with an attention mechanism described in the second and third aspects is implemented.
  • this embodiment provides a computer-readable storage medium storing computer-executable instructions for causing a computer to execute the DBN detection method with an attention mechanism described in the second and third aspects.
  • Fig. 1 is a flowchart of the DBN detection method with an attention mechanism provided by an embodiment of the present invention;
  • Fig. 2 is the flowchart of the DBN detection method of the attention mechanism that another embodiment of the present invention provides;
  • Fig. 3 is a DBN model diagram based on the attention mechanism of the DBN detection method of the attention mechanism provided by another embodiment of the present invention.
  • AMI: Advanced Metering Infrastructure (advanced measurement system)
  • Deep learning is widely used to solve the security problems of smart grids due to its powerful feature extraction capabilities.
  • however, such models are typically built through centralized training on the data, which may leak data privacy.
  • federated learning has therefore been proposed: users train their own models locally, the users' training parameters are integrated under the premise of protecting user privacy, and the local user models are then updated with parameters returned by the cloud model.
  • the performance of the detection model is related to the amount of data: the larger the amount of data, the better the performance of the trained model; meanwhile, privacy leakage in AMI would have a serious impact.
  • Most traditional AMI machine learning detection algorithms focus on local data for training. During this process, a large amount of data is transmitted to the data center, and the private content contained in the data of all parties is more or less exposed.
  • the invention proposes a federated learning framework based on concentrator selection to improve federated learning efficiency, and then deploys a DBN algorithm based on an attention mechanism to train a detection model, focusing on more important features and improving the detection accuracy of the detection model.
  • the original data is not exchanged or transmitted, but the data of all parties is integrated, which improves the performance of the detection model compared with a single partial data set and reduces the risk of user data privacy leakage.
  • Federated Learning is an emerging foundational technology of artificial intelligence, first proposed by Google in 2016 and originally used to update models locally for end users of Android phones. Its design goal is to carry out efficient machine learning among multiple parties or computing nodes while ensuring information security during big-data exchange, protecting terminal data and personal data privacy, and complying with laws and regulations; federated learning thereby improves the privacy protection of local data.
  • a Deep Belief Network is a probability generation model formed by stacking multiple Restricted Boltzmann Machines (RBM); as the number of RBM hidden-layer nodes increases, it can, under certain conditions, fit any data distribution.
  • an RBM includes a visible layer and a hidden layer; units within a layer are not connected to each other, while units in different layers are fully connected. Except for the first and last layers, each RBM layer plays two roles: it serves as the hidden layer of the previous RBM and as the input (visible) layer of the next.
  • DBN training includes two steps, pre-training and weight fine-tuning.
  • the original data is input into the RBM of the first layer for training, and the training result is used as the input of the next layer of RBM for training, and the training is repeated until the maximum number of iterations is reached.
  • the BP algorithm is then used for backpropagation to adjust the DBN network and avoid falling into a local optimum.
  • the invention provides a federated attention DBN collaborative detection system and method based on client selection, which can greatly shorten the training time while achieving accuracy close to that of the centralized method; compared with a distributed detection model, it outperforms distributed, unaggregated on-device training models while protecting data privacy well.
  • the federated attention DBN collaborative detection system based on client selection includes multiple smart meters, concentrators and a data center; a large amount of power data is collected by the smart meters and then stored on the concentrators. Since the smart grid's Advanced Metering Infrastructure (AMI) is frequently subject to network attacks, and power data has high privacy and security requirements, a DBN training model with an attention mechanism is used on the concentrators to train on the power data to obtain the training parameters; the security of the power data is better improved without direct communication between the concentrators.
  • AMI: Advanced Metering Infrastructure
  • the data center is connected to the concentrators, obtains the training parameters, performs federated average aggregation on the training parameters, allocates the DBN training model to the concentrators according to the concentrators' resources, and sends the result of the federated average aggregation to the concentrators selected in the next round, so that the concentrators continue training on the power data until the DBN training model converges.
  • the federated attention DBN collaborative detection system based on client selection provided by this embodiment can greatly shorten the training time while achieving accuracy close to that of the centralized method; compared with a distributed detection model, it outperforms distributed, unaggregated on-device training models while protecting data privacy well.
  • the concentrator includes a data collection module, a data processing module, and an attack detection module, the data collection module is connected in communication with the data processing module, and the data collection module is connected in communication with the attack detection module.
  • FIG. 1 is a flowchart of a DBN detection method of an attention mechanism provided by an embodiment of the present invention.
  • the DBN detection method of an attention mechanism includes but is not limited to steps S110 to S140.
  • Step S110: initializing the DBN training model with an attention mechanism;
  • Step S120: selecting concentrators to participate in training according to concentrator resources, and sending the DBN training model to the selected concentrators;
  • Step S130: receiving the training parameters obtained from the concentrators' training, the training parameters being obtained by the concentrators training on the power data with the DBN training model with an attention mechanism;
  • Step S140: performing federated average aggregation on the training parameters, and sending the result of the federated average aggregation to the concentrators selected in the next round, so that the concentrators continue training on the power data until the DBN training model converges.
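One round of steps S110-S140 as seen from the data center can be sketched as an orchestration skeleton. The selection, local-training, and aggregation callables below are toy stand-ins labeled as such; only the control flow (select by resources, dispatch, collect, aggregate) mirrors the description:

```python
def run_round(global_params, concentrators, select, local_train, aggregate):
    """One federated round: select concentrators by resources, dispatch the
    current model, collect training parameters, and aggregate them into the
    next global model. All callables are placeholders for the patent's
    components."""
    chosen = select(concentrators)
    updates = [local_train(global_params, c) for c in chosen]
    return aggregate(updates)

# Toy stand-ins (illustrative only): a resource score gates selection, local
# training adds a client-specific delta, aggregation is a plain average.
concentrators = [{"id": 0, "score": 0.9, "delta": 1.0},
                 {"id": 1, "score": 0.2, "delta": 5.0},
                 {"id": 2, "score": 0.8, "delta": 3.0}]
select = lambda cs: [c for c in cs if c["score"] >= 0.5]
local_train = lambda params, c: params + c["delta"]
aggregate = lambda updates: sum(updates) / len(updates)

new_global = run_round(0.0, concentrators, select, local_train, aggregate)
print(new_global)  # average of 1.0 and 3.0 -> 2.0
```

In practice the round would repeat until the DBN training model converges, with selection re-run each round.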
  • the DBN detection method with an attention mechanism provided by this embodiment can effectively shorten the training time and achieves accuracy close to that of the centralized method; compared with a distributed detection model, it outperforms distributed, unaggregated on-device training models while protecting data privacy well.
  • the Attention Model is widely used in various deep learning tasks such as natural language processing, image recognition, and speech recognition, and is one of the core technologies in deep learning most worthy of attention and in-depth understanding. Its goal is to select the information relevant to the current task from cluttered information and to reduce the influence of noise on the results.
  • the attention mechanism in deep learning is essentially similar to the selective visual attention mechanism of human beings. The core goal is to select information that is more critical to the current task goal from a large number of information.
  • the input matrices of the attention mechanism include a key matrix, a value matrix, and a query matrix;
  • the output matrix of the attention mechanism includes a context matrix.
  • the attention module has three input matrices: the key matrix K (T×h), the value matrix V (T×h) and the query matrix Q (T×h); the output is the context matrix C (T×h), which is calculated as follows:
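The formula itself is not reproduced in the extracted text. Since the description later names a dot-product attention module, the sketch below assumes the standard scaled dot-product form C = softmax(QKᵀ/√h)V; the shapes T and h are chosen arbitrarily:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Context matrix C (T x h) from query/key/value matrices (each T x h):
    C = softmax(Q K^T / sqrt(h)) V, with softmax applied row-wise."""
    h = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(h)
    scores -= scores.max(axis=-1, keepdims=True)   # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V

T, h = 4, 8
rng = np.random.default_rng(0)
Q, K, V = (rng.standard_normal((T, h)) for _ in range(3))
C = scaled_dot_product_attention(Q, K, V)
print(C.shape)  # (4, 8): one context vector per time step
```

Each row of C is a convex combination of the rows of V, weighted by how strongly the corresponding query attends to each key.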
  • FIG. 2 is a flow chart of a DBN detection method with an attention mechanism provided by another embodiment of the present invention.
  • the DBN detection method with an attention mechanism includes but is not limited to steps S210 to S240.
  • Step S210: receiving the DBN training model with an attention mechanism from the data center, and obtaining the power data collected by the smart meters;
  • Step S220: training on the power data with the DBN training model to obtain training parameters;
  • Step S230: sending the training parameters to the data center, so that the data center performs federated average aggregation on the training parameters;
  • Step S240: receiving the result of the federated average aggregation from the data center, and continuing to train on the power data according to that result until the DBN training model converges.
  • the concentrator receives a DBN training model with an attention mechanism from the data center, obtains the power data collected by the smart meters and keeps it in the concentrator, trains on the power data with the DBN training model to obtain training parameters, sends the training parameters to the data center, receives the result of the federated average aggregation from the data center, and continues training on the power data in the concentrator according to that result until the DBN training model converges.
  • the DBN detection method with an attention mechanism provided by this embodiment can effectively shorten the training time and achieves accuracy close to that of the centralized method; compared with a distributed detection model, it outperforms distributed, unaggregated on-device training models while protecting data privacy well.
  • using the DBN training model to train the electric power data to obtain training parameters includes the following steps:
  • Step S1: inputting the power data into the first-layer RBM for pre-training to obtain a training result;
  • Step S2: inputting the training result into the second-layer RBM for training;
  • Step S3: repeating step S1 and step S2 until the maximum number of iterations is reached;
  • Step S4: performing backpropagation with the softmax layer to fine-tune the weights of the whole DBN network.
  • the RBM includes a visible layer and a hidden layer, and further includes a step of: performing layer-by-layer training on the RBM, and calculating activation probabilities of the visible layer and the hidden layer using an activation function.
  • FIG. 3 is a DBN model diagram based on the attention mechanism of the DBN detection method of the attention mechanism provided by another embodiment of the present invention.
  • using a larger proportion of concentrators can improve the performance of the model and enhance its detection accuracy. A greedy scheme minimizes the objective function of each round, but concentrators that rank poorly on security risk, computing power, or communication quality then have fewer chances to be selected for training, meaning that these concentrators' local data contributes little to the global model. As concentrator selection becomes biased, the generalization ability of the global model decreases, so fairness is also a factor to be considered in concentrator selection. The goal is to select as many concentrators as possible for model training while comprehensively considering security risk, computing power, communication quality and fairness.
  • cyber attack risk refers to the possibility of a cyber attack and the consequences of that attack, computed as Risk = P × C, where P is the probability of a successful cyber attack and C is the consequence of the cyber attack.
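The risk relation can be expressed directly in code. The defense-effect formula is not reproduced in the extracted text, so the exponential decay in success probability used below is only a common modeling assumption, not the patent's formula, and all names are illustrative:

```python
import math

def attack_risk(p_success, consequence):
    """Cyber attack risk = probability of a successful attack x its consequence."""
    return p_success * consequence

def success_probability(defense_resources, decay=1.0):
    """Illustrative assumption only: the more defense resources a meter has
    (firewalls, personnel security, encryption, ...), the lower the success
    probability, modeled here as exponential decay."""
    return math.exp(-decay * defense_resources)

p = success_probability(2.0)           # a well-defended smart meter
r = attack_risk(p, consequence=10.0)
print(round(p, 4), round(r, 4))
```

The same Risk = P × C form applies to both the device risk and the communication-link risk mentioned below.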
  • the concentrator is responsible for model training and attack detection, and attacks may occur anywhere along the path from data generation to the concentrator: the smart meter is vulnerable to network attacks, and the data transmission from the smart meter to the concentrator is also vulnerable, so both the attack risk of the device and the attack risk of the communication link must be considered.
  • the attacker selects targets at random, and the probability of a successful attack on a device is related to its degree of defense: the weaker the defense, the easier the attack. Assuming that concentrator k is in charge of M k smart meters and that all smart meters on a concentrator have the same degree of defense, let the defense resources of a smart meter (referring to protective measures such as firewalls, personnel security, and encryption) be given; the defense effect of the smart meter is then expressed as follows:
  • the attack probability of the smart meter on the concentrator k is:
  • the attack probability of the link on concentrator k is:
  • the weight of concentrator k participating in training in round t is as follows:
  • represents the expected rate of selecting a concentrator.
  • the performance of the generated model is better than that of the same number of concentrators and the decrease of the number of concentrators in each round.
  • the purpose is to ensure that, while keeping the long-term average selection rate of each concentrator above the expected rate, the average weight of the concentrators selected in each round is as large as possible, with at least m concentrators selected per round.
  • a DBN neural network is used, together with a dot-product attention module and a DBN module; this embodiment does not encode the position of the original data. These modules are the same as the transformer's modules, but they are combined differently.
  • the input data is converted by input encoding and then fed into the attention (SDA) and DBN modules, and finally output; each output of the model is a predicted value of one time slot.
  • the DBN model of the attention mechanism first encodes the input data.
  • each row of the input data is a feature vector of one time point, and input encoding applies a linear transformation to the original data:
  • W_in is the linear transformation matrix;
  • b_in is the bias;
  • W_in is randomly initialized and updated together with the other parameters during training.
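The transformation formula is not reproduced in the extracted text, but from the description of W_in and b_in it takes the usual affine form X·W_in + b_in; the sketch below uses arbitrary shapes as an assumption:

```python
import numpy as np

rng = np.random.default_rng(42)
T, d_in, h = 5, 10, 8            # time steps, raw features, encoded width
X = rng.standard_normal((T, d_in))

W_in = rng.standard_normal((d_in, h)) * 0.1   # randomly initialized, learned
b_in = np.zeros(h)                            # bias

X_enc = X @ W_in + b_in   # each row: encoded feature vector of one time point
print(X_enc.shape)        # (5, 8)
```

During training, W_in and b_in would be updated alongside the other model parameters, as the text states.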
  • this embodiment does not perform position encoding on the input data, but position encoding deserves explanation: in machine translation, the context of a sentence has an important influence on the translation, whereas the attributes of the NSL data are not so closely interrelated and the order of attributes has little effect on a record's category, so position encoding of the input data is not performed.
  • the data after input encoding is fed into the RBM of the DBN; here only the RBM of the DBN is used to train on the input data and output the hidden state.
  • a traditional RBM can only accept binary input, which easily causes data loss.
  • the input data in this embodiment includes continuous data, so in order to process these continuously distributed real-valued data, the binary input nodes of the traditional RBM are extended to real-valued nodes for continuous input, while the hidden nodes remain binary.
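An RBM with real-valued visible units and binary hidden units is commonly parameterized as a Gaussian-Bernoulli RBM; the unit-variance Gaussian form below is a standard assumption, since the patent text does not give the exact parameterization:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def hidden_given_visible(v, W, c):
    """Binary hidden units: p(h_j = 1 | v) = sigmoid(c_j + sum_i v_i W_ij),
    with v now real-valued rather than binary."""
    return [sigmoid(c[j] + sum(v[i] * W[i][j] for i in range(len(v))))
            for j in range(len(c))]

def visible_given_hidden(h, W, b):
    """Real-valued visible units (unit-variance Gaussian assumption): the
    conditional mean is b_i + sum_j h_j W_ij, with no sigmoid squashing."""
    return [b[i] + sum(h[j] * W[i][j] for j in range(len(h)))
            for i in range(len(b))]

W = [[0.5, -0.2], [0.1, 0.3], [-0.4, 0.6]]   # 3 visible x 2 hidden
b, c = [0.0, 0.0, 0.0], [0.0, 0.0]
v = [1.7, -0.3, 2.4]                          # continuous meter readings
h = hidden_given_visible(v, W, c)
v_mean = visible_given_hidden(h, W, b)
print(h, v_mean)
```

The visible conditional is linear in the hidden state, which is what lets the model accept continuously distributed power data without binarizing it.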
  • the attention module has three input matrices: the key matrix K (T×h), the value matrix V (T×h) and the query matrix Q (T×h); the output is the context matrix C (T×h), which is calculated as follows:
  • the data output by the DBN module passes through a fully connected layer and softmax activation function, and this layer outputs the classification result.
  • the output layer outputs five probability values; the category corresponding to the maximum probability value is the predicted category.
  • the loss function used here is the mean square error function.
  • the global model uses the Adam optimizer to optimize the network structure.
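The classification head described above, a fully connected layer followed by softmax over five categories with a mean-square-error loss, can be sketched as follows; the logit values and one-hot label are illustrative:

```python
import math

def softmax(z):
    """Row softmax with max-subtraction for numerical stability."""
    m = max(z)
    e = [math.exp(x - m) for x in z]
    s = sum(e)
    return [x / s for x in e]

def mse_loss(probs, one_hot):
    """Mean square error between the softmax output and the one-hot label,
    matching the loss function named in the description."""
    return sum((p - y) ** 2 for p, y in zip(probs, one_hot)) / len(probs)

logits = [2.0, 0.1, -1.0, 0.5, 0.0]       # output of the fully connected layer
probs = softmax(logits)                   # five probability values
pred = max(range(5), key=lambda i: probs[i])   # category with max probability
loss = mse_loss(probs, [1.0, 0.0, 0.0, 0.0, 0.0])
print(pred, round(loss, 4))
```

In the full model, this loss would be minimized with the Adam optimizer as the text states; the sketch only shows the forward computation.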
  • the present invention provides a client-selection-based federated attention DBN collaborative detection device, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor; when the processor executes the computer program, the DBN detection method with the above attention mechanism is implemented.
  • an embodiment of the present invention also provides a computer-readable storage medium, the computer-readable storage medium stores computer-executable instructions, and the computer-executable instructions are executed by one or more control processors, for example, control The processor can execute the method steps S110 to S140 in FIG. 1 and the method steps S210 to S240 in FIG. 2 .
  • computer storage media include, but are not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disk (DVD) or other optical disk storage, magnetic cartridges, tape, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to store the desired information and that can be accessed by a computer.
  • communication media typically embody computer-readable instructions, data structures, program modules, or other data in a modulated data signal such as a carrier wave or other transport mechanism, and may include any information delivery media.


Abstract

The present invention discloses a federated attention DBN collaborative detection system based on client selection, comprising: a plurality of smart meters for collecting power data; a plurality of concentrators, each communicatively connected to the smart meters under its jurisdiction, for obtaining power data from the corresponding smart meters and training on the power data with a DBN training model with an attention mechanism to obtain training parameters; and a data center, communicatively connected to all the concentrators, for obtaining the training parameters from each concentrator, performing federated average aggregation on the training parameters, allocating the DBN training model to the concentrators according to the concentrators' resources, and sending the result of the federated average aggregation to the concentrators of the next round, so that the concentrators train the DBN training model to convergence. The system can effectively improve the efficiency of federated learning, improve model training accuracy, and enhance the security of data privacy.

Description

Federated attention DBN collaborative detection system based on client selection

Technical Field
The present invention relates to the technical field of data processing, and in particular to a federated attention DBN collaborative detection system based on client selection.
Background
Information security of the smart grid is becoming increasingly important. The Advanced Metering Infrastructure (AMI) is an important part of the smart grid, and the security of the advanced metering system is an urgent problem for smart grid security. Since AMI is a key system of the smart grid, it is vulnerable to network attacks, and the electricity consumption data collected by its smart meters is sensitive and may cause privacy leakage. Detection models in the related art typically suffer from technical problems such as "low learning efficiency", "low model training accuracy" and "privacy leakage".
发明内容
本发明旨在至少解决现有技术中存在的技术问题。为此,本发明提出一种基于客户端选择的联邦注意力DBN协同检测***,能够有效提升联邦学习的效率,减少需要训练的集中器的数量,减少集中器与数据中心的通信开销以及计算开销,提高模型训练精度,提升数据隐私的安全性。
本发明还提出一种注意力机制的DBN检测方法。
本发明还提出一种具有上述注意力机制的DBN检测方法的基于客户端选择的联邦注意力DBN协同检测设备。
本发明还提出一种计算机可读存储介质。
In a first aspect, this embodiment provides a client-selection-based federated attention DBN collaborative detection system, comprising:
a plurality of smart meters for collecting power data;
a plurality of concentrators, each communicatively connected to the smart meters under its jurisdiction, for obtaining the power data from the corresponding smart meters and training an attention-based DBN model on the power data to obtain training parameters;
a data center, communicatively connected to all concentrators, which distributes the DBN training model to the concentrators according to their resources, obtains the training parameters from each concentrator, performs federated averaging on the training parameters, and sends the federated-averaging result to the concentrators selected for the next round, so that the concentrators train the DBN model to convergence.
The client-selection-based federated attention DBN collaborative detection system according to embodiments of the invention has at least the following beneficial effects:
The system comprises a plurality of smart meters, concentrators, and a data center. Large volumes of power data are collected at the smart meters and then stored on the concentrators. Because the smart grid's Advanced Metering Infrastructure (AMI) is frequently targeted by cyber attacks and power data has high privacy and security requirements, each concentrator trains an attention-based DBN model locally on its power data to obtain training parameters; since the concentrators do not communicate with one another directly, the security of the power data is improved. The data center, connected to the concentrators, obtains the training parameters, performs federated averaging on them, and distributes the DBN training model to the concentrators according to their resources, which include communication quality and idle CPU and GPU; it then sends the federated-averaging result to the concentrators selected for the next round, which continue training on the power data until the DBN model converges.
The system provided by this embodiment greatly shortens training time while achieving accuracy close to the centralized approach; compared with a distributed detection model, it outperforms training models on distributed, unaggregated devices while protecting data privacy well.
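The federated averaging step described above can be sketched as follows. This is a minimal illustration following the standard FedAvg formulation (sample-count weighting), not the patent's implementation; the names `fed_avg`, `client_weights`, and `n_samples` are assumptions made for the example.

```python
import numpy as np

def fed_avg(client_weights, n_samples):
    """Federated averaging: weight each client's parameter tensors by its sample count."""
    total = sum(n_samples)
    # Each concentrator contributes proportionally to the size of its local data.
    return [
        sum(w[i] * n / total for w, n in zip(client_weights, n_samples))
        for i in range(len(client_weights[0]))
    ]

# Two concentrators, each holding one parameter tensor of the same shape.
w1 = [np.array([1.0, 3.0])]
w2 = [np.array([3.0, 5.0])]
agg = fed_avg([w1, w2], n_samples=[100, 300])  # second client weighted 3x
```

In a real round, `agg` would be sent back to the concentrators selected for the next round as the new starting model.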
According to some embodiments of the invention, the concentrator comprises a data acquisition module, a data processing module, and an attack detection module; the data acquisition module is communicatively connected to the data processing module and to the attack detection module.
In a second aspect, this embodiment provides an attention-based DBN detection method applied to a data center, comprising the following steps:
initializing an attention-based DBN training model;
selecting the concentrators participating in training according to concentrator resources, and distributing the DBN training model to the concentrators;
receiving the training parameters obtained from the concentrators' training, the training parameters being obtained by each concentrator training the attention-based DBN model on its power data;
performing federated averaging on the training parameters, and sending the federated-averaging result to the concentrators selected for the next round, so that the concentrators continue training on the power data until the DBN model converges.
The attention-based DBN detection method according to embodiments of the invention has at least the following beneficial effects:
The data center first initializes an attention-based DBN training model, selects the participating concentrators according to their resources, and distributes the DBN model to them; each concentrator trains the model on its power data to obtain training parameters; the data center then performs federated averaging on the training parameters and sends the aggregated result to the concentrators selected for the next round, so that the concentrators continue training on the power data until the DBN model converges.
Compared with a centralized detection model, the method provided by this embodiment effectively shortens training time with accuracy close to the centralized approach; compared with a distributed detection model, it outperforms training models on distributed, unaggregated devices while protecting data privacy well.
According to some embodiments of the invention, the input matrices of the attention mechanism comprise a key matrix, a value matrix, and a query matrix, and the output matrix of the attention mechanism comprises a context matrix.
In a third aspect, this embodiment provides an attention-based DBN detection method applied to a concentrator, comprising the following steps:
receiving the attention-based DBN training model from the data center, and obtaining the power data collected by the smart meters;
training the DBN model on the power data to obtain training parameters;
sending the training parameters to the data center, so that the data center performs federated averaging on them;
receiving the federated-averaging result from the data center and, based on the result, continuing to train on the power data until the DBN model converges.
The attention-based DBN detection method according to embodiments of the invention has at least the following beneficial effects:
The concentrator receives the attention-based DBN training model from the data center, obtains the power data collected by the smart meters and keeps it on the concentrator, trains the model on the power data to obtain training parameters, sends the parameters to the data center, receives the federated-averaging result from the data center, and continues training locally on the power data until the DBN model converges.
Compared with a centralized detection model, the method provided by this embodiment effectively shortens training time with accuracy close to the centralized approach; compared with a distributed detection model, it outperforms training models on distributed, unaggregated devices while protecting data privacy well.
According to some embodiments of the invention, training the DBN model on the power data to obtain training parameters comprises the following steps:
Step S1: inputting the power data into the first RBM layer for pre-training to obtain a training result;
Step S2: inputting the training result into the second RBM layer for training;
Step S3: repeating step S1 and step S2 until the maximum number of iterations is reached;
Step S4: back-propagating through a softmax layer to fine-tune the weights of the entire DBN network.
According to some embodiments of the invention, the RBM comprises a visible layer and a hidden layer, and the method further comprises: training the RBMs layer by layer, using an activation function to compute the activation probabilities of the visible layer and the hidden layer.
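Steps S1 to S3 above can be sketched as greedy layer-wise RBM pre-training. This is a minimal illustration that assumes one-step contrastive divergence (CD-1) and sigmoid activation probabilities, details the text does not fully specify; all names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

def train_rbm(v_data, n_hidden, lr=0.1, epochs=5):
    """One RBM layer trained with one-step contrastive divergence (CD-1)."""
    n_visible = v_data.shape[1]
    W = rng.normal(0, 0.01, (n_visible, n_hidden))
    for _ in range(epochs):
        # Positive phase: hidden activation probabilities given the data.
        h_prob = sigmoid(v_data @ W)
        # Negative phase: reconstruct the visible layer, then the hidden layer.
        v_recon = sigmoid(h_prob @ W.T)
        h_recon = sigmoid(v_recon @ W)
        # Move weights toward the data statistics, away from the model's.
        W += lr * (v_data.T @ h_prob - v_recon.T @ h_recon) / len(v_data)
    return W, sigmoid(v_data @ W)

# Greedy layer-wise pre-training: each layer's output feeds the next (S1-S3).
X = rng.random((32, 16))            # 32 samples, 16 features
W1, H1 = train_rbm(X, n_hidden=8)   # first RBM layer
W2, H2 = train_rbm(H1, n_hidden=4)  # second RBM layer trained on the first's output
```

Step S4 (softmax back-propagation over the whole stack) would then fine-tune `W1` and `W2` jointly.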
In a fourth aspect, this embodiment provides a client-selection-based federated attention DBN collaborative detection device, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, the processor implementing the attention-based DBN detection method of the second or third aspect when executing the computer program.
In a fifth aspect, this embodiment provides a computer-readable storage medium storing computer-executable instructions for causing a computer to execute the attention-based DBN detection method of the second or third aspect.
Additional aspects and advantages of the invention will be set forth in part in the following description, and in part will become apparent from the description or be learned through practice of the invention.
Brief Description of the Drawings
The above and/or additional aspects and advantages of the invention will become apparent and readily understood from the following description of embodiments in conjunction with the drawings, in which the abstract drawing is identical to one of the drawings of the specification:
Fig. 1 is a flowchart of an attention-based DBN detection method provided by an embodiment of the invention;
Fig. 2 is a flowchart of an attention-based DBN detection method provided by another embodiment of the invention;
Fig. 3 is a diagram of the attention-based DBN model of an attention-based DBN detection method provided by another embodiment of the invention.
Detailed Description
To make the objectives, technical solutions, and advantages of the invention clearer, the invention is described in further detail below with reference to the drawings and embodiments. It should be understood that the specific embodiments described here only explain the invention and do not limit it.
It should be noted that although functional modules are divided in the system diagrams and a logical order is shown in the flowcharts, in some cases the steps shown or described may be performed with a module division different from that of the system, or in an order different from that of the flowchart. The terms "first", "second", and the like in the specification, claims, and drawings are used to distinguish similar objects and do not necessarily describe a particular order or sequence.
Smart grid security is increasingly important. The Advanced Metering Infrastructure (AMI) is a key component of the smart grid; because AMI is a critical system of the smart grid, it is vulnerable to cyber attacks, and the electricity-usage data its smart meters collect from users is sensitive and may lead to privacy leakage.
Deep learning, with its strong feature-extraction capability, is widely used to address smart grid security problems; however, models are built by training on centrally pooled data, which may leak data privacy. The related art proposed federated learning, which lets users train their own models locally and integrates the users' training parameters while protecting their privacy, with the cloud model then passing parameters back to update the local user models. The performance of a detection model depends on the amount of data: the more data, the better the trained model. AMI privacy leakage has severe consequences, yet most traditional AMI machine-learning detection algorithms pool local data for training; in that process, large amounts of data are transmitted to the data center, and the private content in each party's data is exposed to varying degrees. There is thus a tension between detection-model performance and local data privacy, and prior work has shown that combining a federated learning framework with LSTM can achieve attack detection while protecting data privacy. The present invention proposes a federated learning framework based on concentrator selection to improve federated learning efficiency, and then deploys an attention-based DBN algorithm to train the detection model, attending to the more important features to improve detection accuracy. In the collaborative detection by federated learning and the attention-based DBN, raw data is neither exchanged nor transmitted, yet all parties' data is integrated, so the detection model performs better than on any single local dataset and the risk of user privacy leakage is reduced.
Federated learning is an emerging foundational artificial intelligence technology, first proposed by Google in 2016 to solve the problem of Android users updating models locally on their phones. Its design goal is to carry out efficient machine learning among multiple participants or compute nodes while safeguarding information security during big-data exchange, protecting terminal data and personal data privacy, and ensuring legal compliance; federated learning can greatly improve the privacy protection of local data.
A deep belief network (DBN) is a probabilistic generative model formed by stacking multiple restricted Boltzmann machines (RBMs); as the number of hidden RBM units increases, it can fit an arbitrary data distribution under certain conditions. Each RBM comprises a visible layer and a hidden layer; units within a layer are not connected to one another, while different layers are fully connected. Except for the first and last layers, each RBM layer serves two roles: as the hidden layer of the preceding layer and as the input (visible layer) of the following layer. DBN training comprises two steps, pre-training and weight fine-tuning: the raw data is input into the first RBM for training, the trained result serves as the input to the next RBM, and training repeats until the maximum number of iterations is reached; finally, the BP algorithm is used for back-propagation to adjust the DBN network and avoid falling into a local optimum.
The present invention provides a client-selection-based federated attention DBN collaborative detection system and method that greatly shorten training time while achieving accuracy close to the centralized approach; compared with a distributed detection model, they outperform training models on distributed, unaggregated devices while protecting data privacy well.
Embodiments of the invention are further described below with reference to the drawings.
In one embodiment, the client-selection-based federated attention DBN collaborative detection system comprises a plurality of smart meters, concentrators, and a data center. Large volumes of power data are collected at the smart meters and then stored on the concentrators. Because the smart grid's Advanced Metering Infrastructure (AMI) is frequently targeted by cyber attacks and power data has high privacy and security requirements, each concentrator trains an attention-based DBN model locally on its power data to obtain training parameters; since the concentrators do not communicate with one another directly, the security of the power data is improved. The data center, connected to the concentrators, obtains the training parameters, performs federated averaging on them, distributes the DBN training model to the concentrators according to their resources, and sends the federated-averaging result to the concentrators selected for the next round, so that the concentrators continue training on the power data until the DBN model converges. The system provided by this embodiment greatly shortens training time while achieving accuracy close to the centralized approach; compared with a distributed detection model, it outperforms training models on distributed, unaggregated devices while protecting data privacy well.
In one embodiment, the concentrator comprises a data acquisition module, a data processing module, and an attack detection module; the data acquisition module is communicatively connected to the data processing module and to the attack detection module.
Referring to Fig. 1, a flowchart of an attention-based DBN detection method provided by an embodiment of the invention, the method includes, but is not limited to, steps S110 to S140.
Step S110: initialize an attention-based DBN training model;
Step S120: select the participating concentrators according to concentrator resources, and distribute the DBN training model to the concentrators;
Step S130: receive the training parameters obtained from the concentrators' training, the parameters being obtained by each concentrator training the attention-based DBN model on its power data;
Step S140: perform federated averaging on the training parameters, and send the federated-averaging result to the concentrators selected for the next round, so that the concentrators continue training on the power data until the DBN model converges.
In one embodiment, the attention-based DBN training model is first initialized at the data center, and the concentrators selected according to client selection train the DBN model on their power data to obtain training parameters; the data center then performs federated averaging on the training parameters and sends the aggregated result to each corresponding concentrator, so that the concentrators continue training on the power data until the DBN model converges. Compared with a centralized detection model, the method provided by this embodiment effectively shortens training time with accuracy close to the centralized approach; compared with a distributed detection model, it outperforms training models on distributed, unaggregated devices while protecting data privacy well.
Attention models are widely used in many kinds of deep learning tasks, such as natural language processing, image recognition, and speech recognition, and are among the core techniques of deep learning most worth attention and study. Their goal is to pick out, from cluttered information, the information relevant to the current task and to reduce the influence of noise on the result. The attention mechanism in deep learning is essentially similar to human selective visual attention: its core objective is also to select, from a large amount of information, the information more critical to the current task.
In one embodiment, the input matrices of the attention mechanism comprise a key matrix, a value matrix, and a query matrix, and the output matrix of the attention mechanism comprises a context matrix.
The attention module has three input matrices, a key matrix K_{T×h}, a value matrix V_{T×h}, and a query matrix Q_{T×h}; the output is a context matrix C_{T×h}, computed as:
C = Attention(K, V, Q) = softmax(Q·K^T / √h)·V
The similarity between the keys and the queries is computed first; the factor 1/√h normalizes the scores obtained from Q·K^T, and the softmax function then converts the scores into a probability distribution over [0, 1]. The probability distribution obtained for each feature vector is multiplied by the corresponding values to give the attention result, which represents how similar each vector attribute is to the query matrix Q. Here K and V come from the input X_RBM (K = V = X_RBM), while Q is a trainable parameter that is randomly initialized and learned during the training stage; moreover, a residual connection is applied over the attention module, followed by normalization:
X_Atten = Norm(Q + Attention(X_RBM, X_RBM, Q))
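The attention computation and the residual-plus-normalization step above can be sketched as follows. This is a minimal illustration; layer normalization is assumed for Norm, which the text does not pin down, and all names are illustrative.

```python
import numpy as np

def attention(K, V, Q):
    """Scaled dot-product attention: C = softmax(Q K^T / sqrt(h)) V."""
    h = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(h)                    # key-query similarity
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over each row
    return weights @ V, weights

def layer_norm(x, eps=1e-6):
    # Assumed form of Norm: per-row standardization.
    return (x - x.mean(-1, keepdims=True)) / (x.std(-1, keepdims=True) + eps)

rng = np.random.default_rng(0)
T, hdim = 6, 8
X_rbm = rng.random((T, hdim))   # K = V = X_RBM, the RBM output
Q = rng.random((T, hdim))       # trainable query, randomly initialized
ctx, w = attention(X_rbm, X_rbm, Q)
X_atten = layer_norm(Q + ctx)   # residual connection, then normalization
```

Each row of `w` is the [0, 1] probability distribution the text describes.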
Referring to Fig. 2, a flowchart of an attention-based DBN detection method provided by another embodiment of the invention, the method includes, but is not limited to, steps S210 to S240.
Step S210: receive the attention-based DBN training model from the data center, and obtain the power data collected by the smart meters;
Step S220: train the DBN model on the power data to obtain training parameters;
Step S230: send the training parameters to the data center, so that the data center performs federated averaging on them;
Step S240: receive the federated-averaging result from the data center, and based on the result continue training on the power data until the DBN model converges.
In one embodiment, the concentrator receives the attention-based DBN training model from the data center, obtains the power data collected by the smart meters and keeps it on the concentrator, trains the model on the power data to obtain training parameters, sends the parameters to the data center, receives the federated-averaging result from the data center, and based on the result continues training locally on the power data until the DBN model converges. Compared with a centralized detection model, the method provided by this embodiment effectively shortens training time with accuracy close to the centralized approach; compared with a distributed detection model, it outperforms training models on distributed, unaggregated devices while protecting data privacy well.
In one embodiment, training the DBN model on the power data to obtain training parameters comprises the following steps:
Step S1: input the power data into the first RBM layer for pre-training to obtain a training result;
Step S2: input the training result into the second RBM layer for training;
Step S3: repeat step S1 and step S2 until the maximum number of iterations is reached;
Step S4: back-propagate through a softmax layer to fine-tune the weights of the entire DBN network.
In one embodiment, the RBM comprises a visible layer and a hidden layer, and the method further comprises: training the RBMs layer by layer, using an activation function to compute the activation probabilities of the visible layer and the hidden layer.
Referring to Fig. 3, a diagram of the attention-based DBN model of an attention-based DBN detection method provided by another embodiment of the invention.
In one embodiment, using a larger proportion of the concentrators improves model performance and detection accuracy. With a greedy scheme, the objective function can be minimized in every round, but concentrators with high security risk, low computing power, or poor communication quality have few chances of being selected for training, which means their local data contributes little to the global model. As concentrator selection becomes biased, the generalization ability of the global model degrades, so the fairness of concentrator selection is also a factor to consider. The goal is to select as many concentrators as possible for model training while the concentrators' security risk, computing power, communication quality, and fairness are jointly taken into account.
1. Security Risk Assessment
Cyber attack risk refers to the probability that a cyber attack occurs together with the consequence the attack causes, in the form:
R = P * C
where P is the probability that a cyber attack succeeds and C is the consequence the attack causes.
The concentrator is responsible for model training and attack detection, so attacks can occur anywhere along the path from data generation to the concentrator: the smart meters are vulnerable to cyber attacks, and so is the data transmission from smart meter to concentrator; both the device attack risk and the communication-link attack risk must therefore be considered.
For the device attack risk, assume the attacker picks targets at random; the probability of successfully attacking a device depends on its level of defense, and the weaker the defense, the more easily the attacker can attack it. Assume concentrator k is responsible for M_k smart meters and that all smart meters under one concentrator have the same level of defense. Let the defense resources of the smart meters be
Figure PCTCN2022098981-appb-000003
(referring to protective measures such as firewalls, personnel security, and encryption); the defense effect of a smart meter is expressed as:
Figure PCTCN2022098981-appb-000004
The defense effect of the concentrator is:
Figure PCTCN2022098981-appb-000005
From the defense-effect function values and the evaluation model, the probability that a smart meter under concentrator k is attacked is:
Figure PCTCN2022098981-appb-000006
For the attack probability of the communication links, assume each smart meter communicates with only one concentrator and that every communication link between a smart meter and its concentrator has the same attack probability; the probability that a link of concentrator k is attacked is then:
Figure PCTCN2022098981-appb-000007
Since the number of concentrators K is large and each concentrator connects almost the same number of smart meters, the probability that a concentrator is attacked is defined as:
Figure PCTCN2022098981-appb-000008
For the impact of a cyber attack, assume that when an attack occurs the consequence for every concentrator is the same, C_k = 1. The cyber attack risk of concentrator k is then
R_k = P_k * C_k = p_k
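A sketch of the risk computation R = P * C follows. The exact defense-effect and attack-probability formulas appear only as formula images in this text, so simple stand-in forms are assumed here purely for illustration; none of the function names or formulas below are the patent's.

```python
def device_attack_prob(defense_resource):
    # Assumed monotone stand-in: more defense resources, lower attack probability.
    return 1.0 / (1.0 + defense_resource)

def concentrator_risk(meter_defense, link_attack_prob, consequence=1.0):
    """R = P * C, combining device and communication-link attack probabilities."""
    p_device = device_attack_prob(meter_defense)
    # Attack can succeed through either the device or the link path.
    p = 1.0 - (1.0 - p_device) * (1.0 - link_attack_prob)
    return p * consequence  # with C_k = 1, the risk equals the attack probability
```

The only property the sketch preserves from the text is the structure of the argument: weaker defense raises P, and with C_k = 1 the risk reduces to the attack probability.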
2. Concentrator Selection Model
Local training on the concentrators of the smart grid AMI system must take the concentrators' resources into account, such as the wireless communication quality Comm_k between the concentrator, the smart meters, and the data center, the computing power Comp_k of the concentrator, and the resources available on the concentrator, as well as the fairness of concentrator selection and the attack risk R_k of the concentrator.
The weight of concentrator k participating in training in round t is given by:
Figure PCTCN2022098981-appb-000009
To account for the fairness of concentrator selection, a long-term fairness constraint on the concentrators is introduced:
Figure PCTCN2022098981-appb-000010
where β denotes the expected selection rate of a concentrator.
If the number of participating concentrators increases from round to round, the resulting model performs better than when the number of concentrators per round is constant or decreasing. The aim is to make the average weight of the concentrators selected in each round as large as possible while ensuring that each concentrator's long-term average selection rate exceeds β, with at least m concentrators selected per round.
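The selection policy described above (maximize the per-round weight subject to a long-term fairness rate β and at least m concentrators per round) can be sketched as follows. The greedy fill and the forced inclusion of under-selected concentrators are illustrative assumptions, not the patent's exact algorithm, and the weight values stand in for the formula that appears only as an image in the text.

```python
import numpy as np

def select_concentrators(weights, selected_counts, t, beta, m):
    """Pick this round's concentrators by weight, forcing in any concentrator
    whose long-term selection rate has fallen below the expected rate beta."""
    K = len(weights)
    rate = np.asarray(selected_counts) / max(t, 1)
    # Fairness first: concentrators below the expected rate must be chosen.
    forced = [k for k in range(K) if rate[k] < beta]
    # Fill the remaining slots with the highest-weight concentrators.
    rest = sorted((k for k in range(K) if k not in forced),
                  key=lambda k: weights[k], reverse=True)
    chosen = (forced + rest)[:max(m, len(forced))]
    return sorted(chosen)

# Concentrator 1 has never been selected in 5 rounds, so fairness forces it in;
# the remaining slot goes to the highest-weight concentrator.
chosen = select_concentrators(weights=[0.9, 0.1, 0.5],
                              selected_counts=[5, 0, 3],
                              t=5, beta=0.2, m=2)
```

The trade-off the text describes is visible here: fairness occasionally admits a low-weight concentrator so that its local data still contributes to the global model.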
In one embodiment, a DBN neural network is used with a dot-product attention module and a DBN module; this embodiment does not apply positional encoding to the raw data. These modules are the same as those of the Transformer, but they are combined differently: the input data is transformed by input encoding, then fed into the SDA (attention) and DBN modules, and finally output; each output of the model is the prediction value for one time slot.
1. Input Encoding
The attention-based DBN model first applies input encoding to the input data; each row of the input data is the feature vector of one time point, and input encoding applies a linear transformation to the raw data:
X_input = W_in·X + b_in
where W_in is the linear transformation matrix and b_in is the bias; W_in is randomly initialized and updated together with the other parameters during training.
This embodiment does not apply positional encoding to the input data, for the following reason: in machine translation, the order of words in a sentence strongly affects the translation, but the attributes of the NSL data are not so tightly interrelated, and the order of attributes has little effect on the class of a sample, so no positional encoding is applied to the input data.
2. DBN Module
The data after input encoding is fed into the RBMs of the DBN; here, only the RBMs of the DBN are used to train on the input data and output hidden states. A traditional RBM accepts only binary input, which easily causes data loss; since the input data of this embodiment includes continuous data, to handle these continuously distributed real values the input nodes are extended, on the basis of the traditional RBM, from binary variable nodes to real-valued variable nodes with continuous input. The hidden nodes remain binary.
3. Attention Module
The attention module has three input matrices, a key matrix K_{T×h}, a value matrix V_{T×h}, and a query matrix Q_{T×h}; the output is a context matrix C_{T×h}, computed as:
C = Attention(K, V, Q) = softmax(Q·K^T / √h)·V
The similarity between the keys and the queries is computed first; the factor 1/√h normalizes the scores obtained from Q·K^T, and the softmax function then converts the scores into a probability distribution over [0, 1]. The probability distribution obtained for each feature vector is multiplied by the corresponding values to give the attention result, which represents how similar each vector attribute is to the query matrix Q. Here K and V come from the input X_RBM (K = V = X_RBM), while Q is a trainable parameter that is randomly initialized and learned during the training stage; moreover, a residual connection is applied over the attention module, followed by normalization:
X_Atten = Norm(Q + Attention(X_RBM, X_RBM, Q))
4. Fully Connected Layer and Output Layer
The data output by the DBN module passes through a fully connected layer with a softmax activation function, and this layer outputs the classification result:
y = softmax(W_ou·X_atten + b_ou)
Assume the first output represents normal traffic, the second represents DoS attacks, the third represents probing attacks, the fourth represents R2L, and the fifth represents U2R; the output layer outputs five probability values, and the largest probability corresponds to the predicted class.
The loss function used here is the mean squared error, and the global model is optimized with the Adam optimizer.
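The output layer above can be sketched as follows. The class ordering (normal, DoS, probing, R2L, U2R) follows the description; the weight shapes and variable names are illustrative assumptions.

```python
import numpy as np

CLASSES = ["normal", "dos", "probing", "r2l", "u2r"]

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def classify(x_atten, W_ou, b_ou):
    """Fully connected layer + softmax; the largest probability gives the class."""
    y = softmax(W_ou @ x_atten + b_ou)
    return CLASSES[int(np.argmax(y))], y

rng = np.random.default_rng(0)
x = rng.random(8)                       # feature vector from the DBN module
W = rng.random((5, 8))                  # five output units, one per class
b = np.zeros(5)
label, probs = classify(x, W, b)        # probs holds the five probability values
```

The five entries of `probs` sum to one, and the index of the largest entry names the predicted attack category.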
The invention provides a client-selection-based federated attention DBN collaborative detection device, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, the processor implementing the attention-based DBN detection method described above when executing the computer program.
In addition, an embodiment of the invention further provides a computer-readable storage medium storing computer-executable instructions that are executed by one or more control processors; for example, a control processor can execute method steps S110 to S140 of Fig. 1 and method steps S210 to S240 of Fig. 2.
Those of ordinary skill in the art will understand that all or some of the steps and systems of the methods disclosed above may be implemented as software, firmware, hardware, and suitable combinations thereof. Some or all physical components may be implemented as software executed by a processor, such as a central processing unit, digital signal processor, or microprocessor, or as hardware, or as an integrated circuit such as an application-specific integrated circuit. Such software may be distributed on computer-readable media, which may include computer storage media (or non-transitory media) and communication media (or transitory media). As is well known to those of ordinary skill in the art, the term computer storage medium includes volatile and nonvolatile, removable and non-removable media implemented in any method or technology for the storage of information such as computer-readable instructions, data structures, program modules, or other data. Computer storage media include, but are not limited to, RAM, ROM, EEPROM, flash memory or other memory technologies, CD-ROM, digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium that can store the desired information and be accessed by a computer. In addition, as is well known to those of ordinary skill in the art, communication media typically embody computer-readable instructions, data structures, program modules, or other data in a modulated data signal such as a carrier wave or other transport mechanism, and may include any information delivery media.
The preferred embodiments of the invention have been described above, but the invention is not limited to those embodiments; those skilled in the art can make various equivalent modifications or substitutions without departing from the spirit of the invention, and such equivalent modifications or substitutions fall within the scope defined by the claims of the invention.

Claims (9)

  1. A client-selection-based federated attention DBN collaborative detection system, characterized by comprising:
    a plurality of smart meters for collecting power data;
    a plurality of concentrators, each communicatively connected to the smart meters under its jurisdiction, for obtaining the power data from the corresponding smart meters and training an attention-based DBN model on the power data to obtain training parameters;
    a data center, communicatively connected to all the concentrators, which distributes the DBN training model to the concentrators according to their resources, obtains the training parameters from each concentrator, performs federated averaging on the training parameters, and sends the federated-averaging result to the concentrators selected for the next round, so that the concentrators train the DBN model to convergence.
  2. The client-selection-based federated attention DBN collaborative detection system of claim 1, characterized in that the concentrator comprises a data acquisition module, a data processing module, and an attack detection module, the data acquisition module being communicatively connected to the data processing module and to the attack detection module.
  3. An attention-based DBN detection method applied to a data center, characterized by comprising the following steps:
    initializing an attention-based DBN training model;
    selecting the concentrators participating in training according to concentrator resources, and distributing the DBN training model to the concentrators;
    receiving the training parameters obtained from the concentrators' training, the training parameters being obtained by the concentrators training the attention-based DBN model on power data;
    performing federated averaging on the training parameters, and sending the federated-averaging result to the concentrators selected for the next round, so that the concentrators continue training on the power data until the DBN model converges.
  4. The attention-based DBN detection method of claim 3, characterized in that the input matrices of the attention mechanism comprise a key matrix, a value matrix, and a query matrix, and the output matrix of the attention mechanism comprises a context matrix.
  5. An attention-based DBN detection method applied to a concentrator, characterized by comprising the following steps:
    receiving an attention-based DBN training model from a data center, and obtaining power data collected by smart meters;
    training the DBN model on the power data to obtain training parameters;
    sending the training parameters to the data center, so that the data center performs federated averaging on the training parameters;
    receiving the federated-averaging result from the data center and, based on the result, continuing to train on the power data until the DBN model converges.
  6. The attention-based DBN detection method of claim 5, characterized in that training the DBN model on the power data to obtain training parameters comprises the following steps:
    Step S1: inputting the power data into the first RBM layer for pre-training to obtain a training result;
    Step S2: inputting the training result into the second RBM layer for training;
    Step S3: repeating step S1 and step S2 until the maximum number of iterations is reached;
    Step S4: back-propagating through a softmax layer to fine-tune the weights of the entire DBN network.
  7. The attention-based DBN detection method of claim 6, characterized in that the RBM comprises a visible layer and a hidden layer, and the method further comprises:
    training the RBMs layer by layer, using an activation function to compute the activation probabilities of the visible layer and the hidden layer.
  8. A client-selection-based federated attention DBN collaborative detection device, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, characterized in that the processor implements the attention-based DBN detection method of any one of claims 3 to 7 when executing the computer program.
  9. A computer-readable storage medium, characterized in that the computer-readable storage medium stores computer-executable instructions for causing a computer to execute the attention-based DBN detection method of any one of claims 3 to 7.
PCT/CN2022/098981 2021-06-24 2022-06-15 Client-selection-based federated attention DBN collaborative detection system WO2022267960A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110703207.5 2021-06-24
CN202110703207.5A CN113392919B (zh) 2021-06-24 2021-06-24 Attention-mechanism deep belief network (DBN) detection method

Publications (1)

Publication Number Publication Date
WO2022267960A1 true WO2022267960A1 (zh) 2022-12-29

Family

ID=77623687

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/098981 WO2022267960A1 (zh) 2021-06-24 2022-06-15 Client-selection-based federated attention DBN collaborative detection system

Country Status (2)

Country Link
CN (1) CN113392919B (zh)
WO (1) WO2022267960A1 (zh)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116074844A (zh) * 2023-04-06 2023-05-05 广东电力交易中心有限责任公司 5G slice escape attack detection method based on full-traffic adaptive detection
CN116561696A (zh) * 2023-01-11 2023-08-08 上海合煌能源科技有限公司 Multi-dimensional fast aggregation method and system for user-adjustable loads
CN116977272A (zh) * 2023-05-05 2023-10-31 深圳市第二人民医院(深圳市转化医学研究院) Structural magnetic resonance image processing method based on federated graph attention learning

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113392919B (zh) 2021-06-24 2023-04-28 长沙理工大学 Attention-mechanism deep belief network (DBN) detection method
CN115208604B (zh) * 2022-02-22 2024-03-15 长沙理工大学 Method, apparatus, and medium for AMI network intrusion detection

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109192199A (zh) * 2018-06-30 2019-01-11 中国人民解放军战略支援部队信息工程大学 Data processing method combining a bottleneck-feature acoustic model
CN110211574A (zh) * 2019-06-03 2019-09-06 哈尔滨工业大学 Method for building a speech recognition model based on bottleneck features and a multi-scale multi-head attention mechanism
CN112800461A (zh) * 2021-01-28 2021-05-14 深圳供电局有限公司 Network intrusion detection method for a power metering system based on a federated learning framework
CN113392919A (zh) * 2021-06-24 2021-09-14 长沙理工大学 Client-selection-based federated attention DBN collaborative detection system

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
FR3032852A1 (fr) * 2015-02-13 2016-08-19 Orange Method for selecting network connection concentrators
CN106295323A (zh) * 2016-07-27 2017-01-04 苏盛 Cloud-security-based malware detection method for an advanced metering infrastructure
WO2020185973A1 (en) * 2019-03-11 2020-09-17 doc.ai incorporated System and method with federated learning model for medical research applications
CN111537945B (zh) * 2020-06-28 2021-05-11 南方电网科学研究院有限责任公司 Smart meter fault diagnosis method and device based on federated learning
CN111723942B (zh) * 2020-06-29 2024-02-02 南方电网科学研究院有限责任公司 Enterprise power load forecasting method, grid service subsystem, and forecasting system
CN112181666B (zh) * 2020-10-26 2023-09-01 华侨大学 Edge-intelligence-based device evaluation and federated learning importance aggregation method

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109192199A (zh) * 2018-06-30 2019-01-11 中国人民解放军战略支援部队信息工程大学 Data processing method combining a bottleneck-feature acoustic model
CN110211574A (zh) * 2019-06-03 2019-09-06 哈尔滨工业大学 Method for building a speech recognition model based on bottleneck features and a multi-scale multi-head attention mechanism
CN112800461A (zh) * 2021-01-28 2021-05-14 深圳供电局有限公司 Network intrusion detection method for a power metering system based on a federated learning framework
CN113392919A (zh) * 2021-06-24 2021-09-14 长沙理工大学 Client-selection-based federated attention DBN collaborative detection system

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
NI GAO, GAO LING;HE YIYUE;GAO QUANLI;REN JIE: "Intrusion detection model based on deep belief nets", JOURNAL OF SOUTHEAST UNIVERSITY (ENGLISH EDITION), SOUTHEAST UNIVERSITY; DONGNAN DAXUE (CHINESE ELECTRONIC PERIODICAL SERVICES), CHINA, vol. 31, no. 3, 15 September 2015 (2015-09-15), China , pages 339 - 346, XP093016816, ISSN: 1003-7985 *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116561696A (zh) * 2023-01-11 2023-08-08 上海合煌能源科技有限公司 Multi-dimensional fast aggregation method and system for user-adjustable loads
CN116561696B (zh) * 2023-01-11 2024-04-16 上海合煌能源科技有限公司 Multi-dimensional fast aggregation method and system for user-adjustable loads
CN116074844A (zh) * 2023-04-06 2023-05-05 广东电力交易中心有限责任公司 5G slice escape attack detection method based on full-traffic adaptive detection
CN116977272A (zh) * 2023-05-05 2023-10-31 深圳市第二人民医院(深圳市转化医学研究院) Structural magnetic resonance image processing method based on federated graph attention learning

Also Published As

Publication number Publication date
CN113392919A (zh) 2021-09-14
CN113392919B (zh) 2023-04-28

Similar Documents

Publication Publication Date Title
WO2022267960A1 (zh) Client-selection-based federated attention DBN collaborative detection system
Li et al. Wind power forecasting considering data privacy protection: A federated deep reinforcement learning approach
Li et al. Privacy-preserving spatiotemporal scenario generation of renewable energies: A federated deep generative learning approach
Wang et al. Improving fairness in graph neural networks via mitigating sensitive attribute leakage
CN113408743B (zh) 联邦模型的生成方法、装置、电子设备和存储介质
WO2021128805A1 (zh) 一种基于生成对抗强化学习的无线网络资源分配方法
WO2022205833A1 (zh) 无线网络协议知识图谱构建分析方法、***、设备及介质
US20210374617A1 (en) Methods and systems for horizontal federated learning using non-iid data
CN112668044B (zh) 面向联邦学习的隐私保护方法及装置
Liu et al. Keep your data locally: Federated-learning-based data privacy preservation in edge computing
Zhang et al. Energy theft detection in an edge data center using threshold-based abnormality detector
CN114818011A (zh) 一种适用碳信用评价的联邦学习方法、***及电子设备
CN112926747B (zh) 优化业务模型的方法及装置
Xiao et al. Network security situation prediction method based on MEA-BP
Zhou et al. Network traffic prediction method based on echo state network with adaptive reservoir
CN115208604B (zh) 一种ami网络入侵检测的方法、装置及介质
CN114003957A (zh) 一种基于联邦学习社交媒体用户隐私信息保护方法和***
CN115879542A (zh) 一种面向非独立同分布异构数据的联邦学习方法
Hao et al. Producing more with less: a GAN-based network attack detection approach for imbalanced data
WO2022095246A1 (zh) 一种基于差分隐私机制的边缘智能电网协作决策方法
CN113850399A (zh) 一种基于预测置信度序列的联邦学习成员推断方法
Qu et al. Personalized federated learning for heterogeneous residential load forecasting
Cheng et al. GFL: Federated learning on non-IID data via privacy-preserving synthetic data
CN112910865B (zh) 一种基于因子图的推断攻击阶段最大似然估计方法及***
Sun et al. Communication-efficient vertical federated learning with limited overlapping samples

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22827455

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 22827455

Country of ref document: EP

Kind code of ref document: A1