CN111625820A - Federated defense method for AIoT-oriented security - Google Patents

Federated defense method for AIoT-oriented security

Info

Publication number
CN111625820A
CN111625820A CN202010474349.4A
Authority
CN
China
Prior art keywords
aiot
terminal
federal
training
samples
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010474349.4A
Other languages
Chinese (zh)
Inventor
陈铭松 (Chen Mingsong)
宋云飞 (Song Yunfei)
夏珺 (Xia Jun)
马言悦 (Ma Yanyue)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
East China Normal University
Original Assignee
East China Normal University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by East China Normal University
Priority to CN202010474349.4A priority Critical patent/CN111625820A/en
Publication of CN111625820A publication Critical patent/CN111625820A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 21/00 Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F 21/50 Monitoring users, programs or devices to maintain the integrity of platforms, e.g. of processors, firmware or operating systems
    • G06F 21/55 Detecting local intrusion or implementing counter-measures
    • G06F 21/552 Detecting local intrusion or implementing counter-measures involving long-term monitoring or reporting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 21/00 Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F 21/60 Protecting data
    • G06F 21/62 Protecting access to data via a platform, e.g. using keys or access control rules
    • G06F 21/6218 Protecting access to data via a platform, e.g. using keys or access control rules to a system of files or objects, e.g. local or distributed file system or database
    • G06F 21/6245 Protecting personal data, e.g. for financial or medical purposes
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 20/00 Machine learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • Physics & Mathematics (AREA)
  • Computer Security & Cryptography (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Medical Informatics (AREA)
  • Health & Medical Sciences (AREA)
  • Bioethics (AREA)
  • General Health & Medical Sciences (AREA)
  • Computer Hardware Design (AREA)
  • Computing Systems (AREA)
  • Evolutionary Computation (AREA)
  • Data Mining & Analysis (AREA)
  • Mathematical Physics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Databases & Information Systems (AREA)
  • Artificial Intelligence (AREA)
  • Computer And Data Communications (AREA)

Abstract

The invention discloses a federated defense method for AIoT security. Each edge node collects the adversarial attack samples it encounters locally and performs adversarial training independently using both normal samples and adversarial samples; it sends the gradient information updated in each round of adversarial training to the cloud, which performs one parameter update with the aggregated gradient, thereby completing model-parameter synchronization across the whole distributed defense architecture. Compared with the prior art, the method offers general-purpose defense: when the neural network on any AIoT device is attacked by adversarial samples, the attacked AIoT device can be defensively repaired while the privacy of the AIoT terminal is preserved. The method effectively solves problems specific to AIoT environments, such as unbalanced training data, diverse potential adversarial attack types, and easy privacy leakage when local data is uploaded, and it is simple, convenient, and effective.

Description

Federated defense method for AIoT-oriented security
Technical Field
The invention relates to the technical fields of federated learning and adversarial training, and in particular to a federated defense method, oriented to AIoT security, for resisting attacks by various types of adversarial samples.
Background
Neural networks are models widely used in artificial-intelligence scenarios such as computer vision, natural language processing, and semantic segmentation. Owing to their good classification performance, they are applied in many deep-learning tasks. AIoT refers to the integration of the Internet of Things (IoT) with Artificial Intelligence (AI) that has emerged during the rapid development of AI: typically, IoT devices provide data support, while AI supplies the computing power. Adversarial examples are a research hotspot of recent years: by adding tiny perturbations, barely perceptible to the human eye, to an original picture, an attacker tries to "fool" a neural network into making a wrong judgment, thereby accomplishing the attack. Federated learning is a distributed deep-learning training framework proposed by Google in 2016; it addresses core problems of distributed scenarios such as uneven data distribution and privacy leakage during local data upload.
At present, classical adversarial-sample attacks include the Fast Gradient Sign Method (FGSM), the Basic Iterative Method (BIM), the Jacobian-based Saliency Map Attack (JSMA), DeepFool, the Carlini-Wagner attack (C&W), the Simple Black-box Attack (SimBA), and others. Adversarial training is currently one of the more common defense methods: during neural-network training, the corresponding adversarial samples are added alongside the normal samples, and both participate in training together.
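To make the FGSM attack named above concrete, here is a minimal NumPy sketch on a toy logistic model. The model, the epsilon value, and all function names are illustrative assumptions for exposition, not details from the patent:

```python
import numpy as np

def fgsm_perturb(x, grad_wrt_x, epsilon=0.1):
    """FGSM: move the input a small step epsilon in the direction of the
    sign of the loss gradient with respect to the input."""
    return x + epsilon * np.sign(grad_wrt_x)

def logistic_input_gradient(w, x, y):
    """Gradient of the logistic loss -log(sigmoid(y * w.x)) w.r.t. x,
    for a fixed linear model w and a label y in {-1, +1}."""
    sigma = 1.0 / (1.0 + np.exp(-y * np.dot(w, x)))
    return -(1.0 - sigma) * y * w

# Craft an adversarial version of a clean input (toy values).
w = np.array([1.0, -2.0])
x_clean = np.array([0.5, 0.5])
x_adv = fgsm_perturb(x_clean, logistic_input_gradient(w, x_clean, y=1.0))
```

The same one-step recipe applies to a deep network once the input gradient is obtained by backpropagation; the iterative attacks (BIM) simply repeat this step with a smaller epsilon.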
In the prior art, adversarial training usually uses only FGSM-type adversarial-sample attacks, so it lacks general-purpose defense capability; moreover, the problems specific to the AIoT environment remain: unbalanced training data, diverse potential adversarial attack types, and easy privacy leakage when local data is uploaded.
Disclosure of Invention
The aim of the invention is to provide a federated defense method for AIoT security that overcomes the defects of the prior art. Federated learning and adversarial training are combined so that, when the neural network on any AIoT device is attacked by adversarial samples, the attacked device is defensively repaired while the privacy of the AIoT terminal is preserved. The method exploits the fact that AIoT terminals are distributed across different environments and therefore collect diverse raw data as well as adversarial samples generated by different adversarial attack methods; federated learning then aggregates the gradient information that the different AIoT terminals learn, through adversarial training, against different types of adversarial samples. This strongly protects the privacy of AIoT devices, reduces the communication overhead between terminal and cloud, improves transmission efficiency, and effectively remedies the lack of generality of traditional adversarial training.
The purpose of the invention is realized as follows. A federated defense method for AIoT security combines federated learning and adversarial training: when the neural network on any AIoT device is attacked by adversarial samples, the attacked device is defensively repaired while terminal privacy is preserved. The distributed AIoT terminal nodes each collect their own training data locally, and each terminal model performs adversarial training independently on the collected data. After each training round, each AIoT terminal obtains the gradient information with which its model should be updated in the current round, and all terminals send their gradients to the cloud. The cloud aggregates the model gradients with a federated aggregation algorithm to obtain a final gradient and performs one parameter update on the cloud model. Finally, the cloud sends the updated model parameters back to every AIoT terminal device to complete model-parameter synchronization. The federated defense of the AIoT terminal devices specifically comprises the following steps:
a) Sample collection phase
Each AIoT terminal node locally collects its own raw data and the adversarial attack samples it encounters. The samples collected by the terminals contain different types of adversarial samples: the original samples differ because the terminals are in different locations, and the adversarial-sample types differ because the attackers differ.
b) Adversarial training phase
Each AIoT terminal node performs adversarial training independently, using its own normal samples and adversarial samples.
c) Federated learning phase
After each round of adversarial training, every AIoT terminal node sends its updated gradient information to the cloud; the cloud aggregates the collected gradients of all terminals with a federated aggregation algorithm and performs one parameter update with the aggregated gradient. To protect terminal privacy, what each AIoT terminal node sends to the cloud after each round of adversarial training is neither training data nor concrete model information, but only the gradient information used to update the model. At the same time, this greatly reduces the communication bandwidth between terminal and cloud and improves transmission efficiency.
d) Model synchronization phase
The cloud sends the updated model-parameter information to all AIoT terminal nodes, completing model-parameter synchronization across the whole distributed defense framework.
Compared with the prior art, the method possesses general-purpose defense capability and further improves the defense effect. By combining federated learning and adversarial training into a federated defense method for AIoT security, it defensively repairs any AIoT device whose neural network is attacked by adversarial samples, while preserving the privacy of the AIoT terminal. It effectively solves the problems specific to AIoT environments of unbalanced training data, diverse potential adversarial attack types, and easy privacy leakage during local data upload; the method is simple, convenient, and effective.
Drawings
FIG. 1 is a flow chart of the present invention;
FIG. 2 is a schematic diagram of the framework of the present invention.
Detailed Description
Referring to FIG. 1, the federated defense of the present invention comprises the following four stages:
a) Sample collection phase
The raw data seen by AIoT terminal nodes deployed in different environments differs, and the types of potential adversarial attack they face may also differ because the attackers differ. Thus, in the sample collection phase, each AIoT terminal node first collects, in its own environment, raw data and the adversarial samples generated by the particular types of adversarial attack it encounters. The diversity of raw data and adversarial samples in the AIoT scenario means that the terminals' local original samples differ because of their locations, and the types of adversarial samples differ because of the attackers.
b) Adversarial training phase
Once an AIoT terminal has obtained enough training data and adversarial samples, it starts an independent adversarial training process locally. The essence of adversarial training is to add the corresponding adversarial samples to the dataset of conventional neural-network training so that they participate in training together. Consequently, when computing the loss function during backpropagation, the computation splits into two parts: the loss produced by the normal training samples and the loss produced by the adversarial samples; the final adversarial-training loss is a weighted sum of the two.
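The two-part loss described above can be sketched in a few lines. The weight alpha is an illustrative assumption; the patent does not fix its value:

```python
def adversarial_training_loss(loss_clean, loss_adv, alpha=0.5):
    """Final adversarial-training loss: a weighted sum of the loss on
    normal samples and the loss on adversarial samples. alpha balances
    clean accuracy against robustness (illustrative default 0.5)."""
    return alpha * loss_clean + (1.0 - alpha) * loss_adv
```

With alpha = 1.0 this degenerates to ordinary training; with alpha = 0.0 the model trains on adversarial samples only.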
c) Federated learning phase
After each AIoT terminal independently completes a round of adversarial training, it sends the gradient information updated in the current round to the cloud server. After receiving the gradients from all terminal devices, the cloud aggregates them with a federated aggregation algorithm to obtain a final gradient, with which the cloud server updates the cloud model. Because after each round of adversarial training the AIoT terminal sends gradient information, rather than training data or concrete model information, to the cloud server, the private data on each terminal device is protected from leakage, and the communication bandwidth between terminal and cloud is greatly reduced.
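A minimal sketch of the cloud-side aggregation and update described above, using federated averaging over gradient vectors. The optional per-terminal weights (e.g. by local sample count) and the learning rate are illustrative assumptions:

```python
import numpy as np

def federated_aggregate(gradients, weights=None):
    """Federated averaging: combine the per-terminal gradient vectors
    into one final gradient. Uniform weights by default."""
    grads = np.stack(gradients)
    if weights is None:
        w = np.full(len(gradients), 1.0 / len(gradients))
    else:
        w = np.asarray(weights, dtype=float)
        w = w / w.sum()                  # normalize to a convex combination
    return np.tensordot(w, grads, axes=1)

def cloud_update(params, final_gradient, lr=0.01):
    """One cloud-side parameter update with the aggregated gradient."""
    return params - lr * final_gradient
```

The same rule extends to per-layer parameter tensors by applying it layer by layer.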
d) Model synchronization phase
After the cloud model has been updated, the cloud sends the model parameters updated through federated learning to all AIoT terminals, keeping the cloud model and all terminal models synchronized and completing the parameter synchronization of every model in the system.
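The synchronization step is a simple broadcast. Representing model parameters as a dict of values is an illustrative assumption:

```python
def synchronize_terminals(cloud_params, terminal_params_list):
    """Model synchronization phase: the cloud broadcasts its updated
    parameters to every AIoT terminal, so that after this step all
    models in the system hold identical parameters."""
    # Each terminal receives an independent copy, not a shared reference.
    return [dict(cloud_params) for _ in terminal_params_list]
```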
The present invention is further illustrated by the following specific examples.
Example 1
Referring to FIG. 2, the present invention performs normal training and federated adversarial training as follows:
a) Sample collection phase
Each edge device collects the adversarial-sample dataset generated by the adversarial attacks it encounters; the concrete implementation uses the dataset partitioning algorithm of Algorithm 1 below:
[Algorithm 1: dataset partitioning algorithm (presented as an image in the original document)]
The inputs of the algorithm are the total dataset, the number of AIoT terminals, and the serial number of each AIoT terminal node; the output is the raw dataset (Bucket) obtained at each terminal node. The algorithm first determines the Bucket size from the number of AIoT terminals; it then determines, from the terminal node's serial number, which range of the total dataset that node should receive; next, it shuffles the data within that range with a random function, realizing data augmentation; finally, it fetches the corresponding dataset, i.e. the Bucket, by random subscript access.
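Since Algorithm 1 itself appears only as an image, here is a sketch of the steps as described in the text. The seed parameter is an illustrative addition for reproducibility, not part of the original algorithm:

```python
import random

def partition_dataset(dataset, num_terminals, terminal_id, seed=0):
    """Sketch of Algorithm 1: derive the Bucket size from the terminal
    count, take the contiguous slice belonging to this terminal's serial
    number, shuffle ("scatter") it, and return it as the Bucket."""
    bucket_size = len(dataset) // num_terminals
    start = terminal_id * bucket_size
    bucket = list(dataset[start:start + bucket_size])
    random.Random(seed).shuffle(bucket)   # scatter within the range
    return bucket
```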
b) Adversarial training phase
Each edge device (AIoT terminal node) performs independent adversarial training, using the collected adversarial samples and the normal training samples;
c) Federated learning phase
The federated learning phase is implemented with the federated adversarial training algorithm of Algorithm 2 below:
[Algorithm 2: federated adversarial training algorithm (presented as an image in the original document)]
the federal confrontation training algorithm describes in detail the specific implementation of the present invention during the sample collection phase, the confrontation training phase, the federal learning phase, and the model synchronization phase. The inputs to the algorithm include a set of known types of counterattack, cloud servers and their model parameters, AIoT terminal nodes and their model parameters, and a data set. The algorithm comprises the steps that firstly, each AIoT device sends a model initialization request to a cloud server, a data set partitioning algorithm is used for completing a sample collection stage locally, and respective original data set buckets are obtained; then, different counterattack algorithms are used for counterattacking the model on the AIoT terminal and the original data set Bucket to generate different types of countersamples; then, each AIoT device performs the confrontation training independently locally using the respective original data set Bucket and the confrontation sample, parameters required by the confrontation training are the normal sample, the confrontation sample and the correct data label, and a return value is the gradient information calculated by the model in the backward propagation through the loss function after the round of the confrontation training.
d) Model synchronization phase
Each edge device (AIoT terminal node) sends the gradient produced by adversarial training to the cloud server. After receiving the gradients from all terminals, the cloud server aggregates them once with a federated aggregation algorithm (the experiments use the most common choice, federated averaging) to obtain a final gradient, with which the cloud model updates its parameters. Finally, the cloud model sends the trained, i.e. latest, model parameters to all AIoT terminal devices; every edge-device model is updated, and the terminal models and the cloud model complete parameter synchronization, guaranteeing the consistency of all model parameters in the system.
Experimental results show that the method can effectively resist attacks by various types of adversarial samples while preserving the privacy of the AIoT terminal, improving the defense performance of the model by about 20% on average; the method is simple and convenient and has a good defense effect.
The above embodiments serve only to further illustrate the present invention and are not intended to limit it; all equivalent implementations of the present invention fall within the scope of its claims.

Claims (2)

1. A federated defense method for AIoT security, characterized in that federated learning and adversarial training are combined to construct a federated defense method for AIoT security, in which, when the neural network on any AIoT device is attacked by adversarial samples, the attacked AIoT device is defensively repaired on the premise of preserving the privacy of the AIoT terminal, the federated defense specifically comprising the following steps:
a) sample collection phase
Each AIoT terminal node locally collects its own raw data and the adversarial-sample dataset generated by adversarial attacks;
b) adversarial training phase
Each AIoT terminal node performs adversarial training independently, using its own normal samples and adversarial samples;
c) federated learning phase
After each round of adversarial training, each AIoT terminal node sends its updated gradient information to the cloud; the cloud aggregates the collected gradient information of all terminals with a federated aggregation algorithm and performs one parameter update with the aggregated gradient;
d) model synchronization phase
The cloud sends the updated model-parameter information to all AIoT terminal nodes, completing model-parameter synchronization across the whole distributed defense framework.
2. The federated defense method for AIoT security of claim 1, characterized in that the samples collected by the AIoT terminals contain different types of adversarial samples: the original samples differ because the terminals are in different locations, and the adversarial-sample types differ because the attackers differ.
CN202010474349.4A 2020-05-29 2020-05-29 Federated defense method for AIoT-oriented security Pending CN111625820A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010474349.4A CN111625820A (en) 2020-05-29 2020-05-29 Federated defense method for AIoT-oriented security

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010474349.4A CN111625820A (en) 2020-05-29 2020-05-29 Federated defense method for AIoT-oriented security

Publications (1)

Publication Number Publication Date
CN111625820A true CN111625820A (en) 2020-09-04

Family

ID=72260763

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010474349.4A Pending CN111625820A (en) Federated defense method for AIoT-oriented security

Country Status (1)

Country Link
CN (1) CN111625820A (en)

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112257063A (en) * 2020-10-19 2021-01-22 上海交通大学 Cooperative game theory-based detection method for backdoor attacks in federal learning
CN112560059A (en) * 2020-12-17 2021-03-26 浙江工业大学 Vertical federal model stealing defense method based on neural pathway feature extraction
CN112668044A (en) * 2020-12-21 2021-04-16 中国科学院信息工程研究所 Privacy protection method and device for federal learning
CN112738035A (en) * 2020-12-17 2021-04-30 杭州趣链科技有限公司 Block chain technology-based vertical federal model stealing defense method
CN114283341A (en) * 2022-03-04 2022-04-05 西南石油大学 High-transferability confrontation sample generation method, system and terminal
CN114978899A (en) * 2022-05-11 2022-08-30 业成科技(成都)有限公司 AIoT equipment updating method and device
WO2023038220A1 (en) * 2021-09-07 2023-03-16 Samsung Electronics Co., Ltd. Method and apparatus for performing horizontal federated learning
CN116644802A (en) * 2023-07-19 2023-08-25 支付宝(杭州)信息技术有限公司 Model training method and device

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109460814A (en) * 2018-09-28 2019-03-12 浙江工业大学 A kind of deep learning classification method for attacking resisting sample function with defence
CN109617706A (en) * 2018-10-18 2019-04-12 北京鼎力信安技术有限公司 Industrial control system means of defence and industrial control system protective device
CN109948658A (en) * 2019-02-25 2019-06-28 浙江工业大学 The confrontation attack defense method of Feature Oriented figure attention mechanism and application
CN110084002A (en) * 2019-04-23 2019-08-02 清华大学 Deep neural network attack method, device, medium and calculating equipment
US20190244103A1 (en) * 2018-02-07 2019-08-08 Royal Bank Of Canada Robust pruned neural networks via adversarial training
CN110278249A (en) * 2019-05-30 2019-09-24 天津神兔未来科技有限公司 A kind of distribution group intelligence system
CN110334808A (en) * 2019-06-12 2019-10-15 武汉大学 A kind of confrontation attack defense method based on confrontation sample training
CN110572253A (en) * 2019-09-16 2019-12-13 济南大学 Method and system for enhancing privacy of federated learning training data
CN110633805A (en) * 2019-09-26 2019-12-31 深圳前海微众银行股份有限公司 Longitudinal federated learning system optimization method, device, equipment and readable storage medium
CN110674938A (en) * 2019-08-21 2020-01-10 浙江工业大学 Anti-attack defense method based on cooperative multi-task training
CN110995737A (en) * 2019-12-13 2020-04-10 支付宝(杭州)信息技术有限公司 Gradient fusion method and device for federal learning and electronic equipment

Patent Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190244103A1 (en) * 2018-02-07 2019-08-08 Royal Bank Of Canada Robust pruned neural networks via adversarial training
CN109460814A (en) * 2018-09-28 2019-03-12 浙江工业大学 A kind of deep learning classification method for attacking resisting sample function with defence
CN109617706A (en) * 2018-10-18 2019-04-12 北京鼎力信安技术有限公司 Industrial control system means of defence and industrial control system protective device
CN109948658A (en) * 2019-02-25 2019-06-28 浙江工业大学 The confrontation attack defense method of Feature Oriented figure attention mechanism and application
CN110084002A (en) * 2019-04-23 2019-08-02 清华大学 Deep neural network attack method, device, medium and calculating equipment
CN110278249A (en) * 2019-05-30 2019-09-24 天津神兔未来科技有限公司 A kind of distribution group intelligence system
CN110334808A (en) * 2019-06-12 2019-10-15 武汉大学 A kind of confrontation attack defense method based on confrontation sample training
CN110674938A (en) * 2019-08-21 2020-01-10 浙江工业大学 Anti-attack defense method based on cooperative multi-task training
CN110572253A (en) * 2019-09-16 2019-12-13 济南大学 Method and system for enhancing privacy of federated learning training data
CN110633805A (en) * 2019-09-26 2019-12-31 深圳前海微众银行股份有限公司 Longitudinal federated learning system optimization method, device, equipment and readable storage medium
CN110995737A (en) * 2019-12-13 2020-04-10 支付宝(杭州)信息技术有限公司 Gradient fusion method and device for federal learning and electronic equipment

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112257063A (en) * 2020-10-19 2021-01-22 上海交通大学 Cooperative game theory-based detection method for backdoor attacks in federal learning
CN112560059A (en) * 2020-12-17 2021-03-26 浙江工业大学 Vertical federal model stealing defense method based on neural pathway feature extraction
CN112738035A (en) * 2020-12-17 2021-04-30 杭州趣链科技有限公司 Block chain technology-based vertical federal model stealing defense method
CN112560059B (en) * 2020-12-17 2022-04-29 浙江工业大学 Vertical federal model stealing defense method based on neural pathway feature extraction
CN112668044A (en) * 2020-12-21 2021-04-16 中国科学院信息工程研究所 Privacy protection method and device for federal learning
WO2023038220A1 (en) * 2021-09-07 2023-03-16 Samsung Electronics Co., Ltd. Method and apparatus for performing horizontal federated learning
CN114283341A (en) * 2022-03-04 2022-04-05 西南石油大学 High-transferability confrontation sample generation method, system and terminal
CN114283341B (en) * 2022-03-04 2022-05-17 西南石油大学 High-transferability confrontation sample generation method, system and terminal
CN114978899A (en) * 2022-05-11 2022-08-30 业成科技(成都)有限公司 AIoT equipment updating method and device
CN114978899B (en) * 2022-05-11 2024-04-16 业成光电(深圳)有限公司 AIoT equipment updating method and device
CN116644802A (en) * 2023-07-19 2023-08-25 支付宝(杭州)信息技术有限公司 Model training method and device

Similar Documents

Publication Publication Date Title
CN111625820A (en) Federated defense method for AIoT-oriented security
CN110008696A (en) A kind of user data Rebuilding Attack method towards the study of depth federation
CN112668044B (en) Privacy protection method and device for federal learning
CN112463056B (en) Multi-node distributed training method, device, equipment and readable medium
CN112560059B (en) Vertical federal model stealing defense method based on neural pathway feature extraction
CN111709022A (en) Hybrid alarm association method based on AP clustering and causal relationship
CN113033966A (en) Risk target identification method and device, electronic equipment and storage medium
CN114326403A (en) Multi-agent system security convergence control method based on node information privacy protection
CN116708009A (en) Network intrusion detection method based on federal learning
CN114863226A (en) Network physical system intrusion detection method
Qiu et al. Born this way: A self-organizing evolution scheme with motif for internet of things robustness
WO2022151579A1 (en) Backdoor attack active defense method and device in edge computing scene
CN114205816A (en) Information security architecture of power mobile Internet of things and use method thereof
CN116805082A (en) Splitting learning method for protecting private data of client
CN110855654B (en) Vulnerability risk quantitative management method and system based on flow mutual access relation
CN115793717B (en) Group collaborative decision-making method, device, electronic equipment and storage medium
CN113837398A (en) Graph classification task poisoning attack method based on federal learning
CN110889467A (en) Company name matching method and device, terminal equipment and storage medium
CN116050546A (en) Federal learning method of Bayesian robustness under data dependent identical distribution
CN115879108A (en) Federal learning model attack defense method based on neural network feature extraction
Li et al. Image restoration using improved particle swarm optimization
Shen et al. Coordinated attacks against federated learning: A multi-agent reinforcement learning approach
CN118015287B (en) Domain correction adaptive device-based cross-domain small sample segmentation method
CN115913749B (en) Block chain DDoS detection method based on decentralization federation learning
CN116778544B (en) Face recognition privacy protection-oriented antagonism feature generation method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20200904