CN117077192A - Method and device for defending against free-rider attacks in privacy-preserving federated learning - Google Patents

Method and device for defending against free-rider attacks in privacy-preserving federated learning

Info

Publication number
CN117077192A
CN117077192A (application CN202310938055.6A)
Authority
CN
China
Prior art keywords
gradient
server
client
model
ciphertext
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202310938055.6A
Other languages
Chinese (zh)
Other versions
CN117077192B (en)
Inventor
张秉晟
孙嘉葳
任奎
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang University ZJU
Original Assignee
Zhejiang University ZJU
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang University ZJU filed Critical Zhejiang University ZJU
Priority to CN202310938055.6A priority Critical patent/CN117077192B/en
Publication of CN117077192A publication Critical patent/CN117077192A/en
Application granted granted Critical
Publication of CN117077192B publication Critical patent/CN117077192B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00 Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/60 Protecting data
    • G06F21/62 Protecting access to data via a platform, e.g. using keys or access control rules
    • G06F21/6218 Protecting access to data via a platform, e.g. using keys or access control rules to a system of files or objects, e.g. local or distributed file system or database
    • G06F21/6245 Protecting personal data, e.g. for financial or medical purposes
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00 Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/60 Protecting data
    • G06F21/602 Providing cryptographic facilities or services
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00 Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/60 Protecting data
    • G06F21/64 Protecting data integrity, e.g. using checksums, certificates or signatures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00 Machine learning

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • Bioethics (AREA)
  • Computer Security & Cryptography (AREA)
  • Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Hardware Design (AREA)
  • Medical Informatics (AREA)
  • Computing Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Mathematical Physics (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • Storage Device Security (AREA)

Abstract

The invention discloses a method and a device for defending against free-rider attacks in privacy-preserving federated learning. A key center generates a public-private key pair and splits the private key between a first server and a second server, and between the first server and each client. The first server encrypts the first initial model with the public key and partially decrypts it with its own private-key share to obtain a second initial model; each client downloads the second initial model, decrypts it with its own private-key share, and performs local model training to obtain its gradient. The first server and the second server compute, from each client's gradient ciphertext, the cosine similarity between the client's gradient and the average gradient, reward or penalize each client accordingly, and aggregate the gradient ciphertexts to obtain an aggregated gradient ciphertext; each client updates according to the aggregated gradient ciphertext, and in the trap round a trap gradient is constructed to induce free-riders to update in the wrong direction.

Description

Method and device for defending against free-rider attacks in privacy-preserving federated learning
Technical Field
The invention belongs to the technical field of privacy-preserving federated learning, and in particular relates to a method and a device for defending against free-rider attacks in privacy-preserving federated learning.
Background
With the rapid development of artificial intelligence, the quality of training datasets plays an increasingly critical role in model training. Data barriers caused by data monopolies or differences in business between departments are difficult to break, so data cannot flow between enterprises or departments. To break these data silos, Google proposed federated learning, a distributed machine learning framework, in 2016. In federated learning, all participants share a global model in each round of training; the model is maintained by a single trusted parameter server, which iteratively generates it and distributes it to the participants round by round. Participants do not need to share their local datasets and only transmit the locally trained model updates, thereby protecting the privacy of the participants' data.
As federated learning is applied in more and more fields, many studies have found weaknesses in it and have shown that it faces both fairness and confidentiality threats. This patent targets fairness attacks and confidentiality in federated learning, summarized as follows:
1. Fairness attack
In a federated learning environment, guaranteeing fairness is a major problem: the final aggregated model has high commercial value, which increases the chance that malicious clients will try to obtain the aggregated model for free (commonly referred to as freeloading). Such malicious clients are free-riders; their motivation is that they have no local data or want to reduce their local training cost and overhead. Free-riders allow low-contribution participants to obtain the same high-quality aggregated model as high-contribution participants, which greatly harms the interests of the other, honest participants and destroys the fairness of federated learning.
Current common free-rider attacks mainly include the following:
(1) The delta weight attack (DWA). (2) The free-rider attack based on random Gaussian perturbation (stochastic perturbations attack, SPA), which assumes the attacker has prior knowledge of the training model and can estimate in advance the approximate variance between the honest participants' locally updated models and the global model in each round of SGD. (3) The free-rider attack based on an auxiliary dataset (auxiliary dataset attack, ADA), which assumes the free-rider holds a small portion of the dataset and replaces the random Gaussian perturbation with an Adam optimizer.
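For illustration only, a minimal Python sketch of what such a fabricated update could look like is given below; the function name, arguments, and noise scale are hypothetical and are not taken from the patent or the cited attacks.

```python
import numpy as np

def fake_update(prev_global, curr_global, sigma=1e-3):
    """Sketch of a stochastic-perturbations-style free-rider update: mimic the
    observed drift of the global model and add Gaussian noise so that the
    fabricated update resembles a locally trained one (no real data is used)."""
    delta = curr_global - prev_global            # drift between two distributed global models
    noise = np.random.normal(0.0, sigma, size=curr_global.shape)
    return delta + noise                         # uploaded in place of a genuine gradient
```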
While developing this patent, it was found that current methods for defending against free-rider attacks mainly compute the similarity between participants' models, for example the Euclidean distance between models or the frequency of model-weight changes. However, such methods have the following problems:
(1) Participants holding non-independent and identically distributed (non-IID) datasets are easily misidentified as free-riders, because a benign participant with a non-IID dataset typically trains a model with low similarity to the aggregated model. (2) When a free-rider uses an auxiliary dataset, its disguise is very strong and it may be hard to distinguish by model similarity alone. This patent solves these problems by introducing a trap and a reward-and-penalty mechanism in federated learning.
2. Confidentiality attack
Confidentiality in federated learning means that sensitive information such as local data and the global model is not revealed to unauthorized users. In federated learning, an attacker cannot directly obtain the private information in a participant's dataset, because the dataset is stored locally. However, if a participant's raw model update is uploaded directly to the central server, an attacker can recover the participant's local private information by analyzing the uploaded update, so a risk of privacy leakage remains.
Current methods for addressing confidentiality attacks mainly include: (1) privacy protection based on homomorphic encryption (HE), where the result of operating on homomorphically encrypted data and then decrypting is consistent with operating directly on the plaintext; (2) privacy protection based on secure multi-party computation (SMPC), in which multiple parties securely compute a model or function without a trusted third party; (3) privacy enhancement based on differential privacy (DP); (4) blockchain techniques.
Existing defenses against free-rider attacks in privacy-preserving federated learning may compromise privacy protection. For example, patent CN114266361A computes the Euclidean distance between clients and marks clients whose Euclidean distance or average frequency of weight change is abnormal; a client marked abnormal three times is considered a free-rider and is kicked out of the federated training. However, that method does not protect the information uploaded by clients, and it assumes a weak free-rider: if the attacker uses a free-rider attack based on random Gaussian perturbation or on an auxiliary dataset, it cannot be distinguished by Euclidean distance and weight-change frequency. Patent CN112714106A defends against free-riders with smart contracts, established between task-issuing institutions and clients in a blockchain according to proofs of computation; blockchain technology requires every party to verify and record all transactions, so its performance and scalability are challenging, and establishing smart contracts and maintaining and transmitting blockchain data all demand substantial storage and computation. Moreover, distinguishing attackers by the distribution of WGAN-GP model loss values again assumes a weak free-rider and is insufficient against the stronger attacks described above.
In summary, conventional methods for defending against free-rider attacks have the following problems:
(1) They judge only by the similarity between the model parameters submitted by a free-rider and those of other participants, and may fail to identify free-riders with strong disguise capability;
(2) Participants with non-IID datasets are easily misidentified as free-riders;
(3) Privacy in federated learning is not protected.
Disclosure of Invention
In view of the problems in the prior art, embodiments of the present application aim to provide a method and a device for defending against free-rider attacks in privacy-preserving federated learning.
According to a first aspect of the embodiments of the present application, a method for defending against free-rider attacks in privacy-preserving federated learning is provided, comprising:
the first server performs model initialization; a key center generates a public-private key pair and broadcasts the public key, and the private key is split, via a key splitting algorithm, between the first server and the second server, and between the first server and each client;
the first server encrypts the first initial model with the public key and partially decrypts it with its own private-key share to obtain a second initial model; each client downloads the second initial model and runs the partial decryption algorithm and the full decryption algorithm with its own private-key share to obtain a third initial model;
according to the third initial model, each client performs local model training to obtain its gradient, encrypts the gradient with the public key, and sends the gradient ciphertext to the first server;
the first server and the second server compute, from each client's gradient ciphertext, the cosine similarity between the client's gradient and the average gradient, and reward or penalize each client accordingly; the first server performs secure, free-rider-resistant parameter aggregation on the gradient ciphertexts to obtain an aggregated gradient ciphertext, and each client updates according to the aggregated gradient ciphertext; in the trap round, a trap gradient is constructed to induce free-riders to update in the wrong direction.
Further, after each client performs local model training and obtains its gradient, the gradient of the current round is approximated as a large integer.
Further, the first server and the second server compute the cosine similarity between each client's gradient and the average gradient from the clients' gradient ciphertexts as follows:
the first server S1 computes, from the current gradient ciphertexts [[g_1]], ..., [[g_n]], the ciphertext of the gradient sum [[sum]];
S1 adds random perturbation to the gradient ciphertexts using random numbers, as an additional layer of protection;
S1 runs the partial decryption algorithm, with its own key share sk_1, on the randomly perturbed gradient ciphertexts of all clients, obtaining partially decrypted gradients that are still protected by the random perturbation;
S1 sends the partially decrypted, perturbed gradient ciphertexts and the perturbed gradient-sum ciphertext to the second server S2;
S2 runs the partial decryption algorithm on them, then uses the two decryption shares in the full decryption algorithm to obtain the noisy g_i in plaintext form; the same operations on the gradient-sum ciphertext yield the noisy gradient sum;
S2 computes the cosine similarity between the i-th client's randomly perturbed gradient and the perturbed gradient sum;
S2 encrypts the perturbed cosine similarity with the encryption algorithm;
S2 sends the encrypted, perturbed cosine similarity to S1, which computes the cosine-similarity ciphertext [[cos_i]] with the random perturbation removed;
S1 and S2 run the partial decryption algorithm and the full decryption algorithm to obtain the cosine similarity cos_i between the client's gradient and the average gradient.
Further, each client is rewarded or penalized as follows:
among all clients, for a client whose cosine similarity cos_i > λ a reward algorithm is executed, and otherwise a penalty algorithm is executed, where λ is a threshold, α_i is the credit of client C_i, and the penalty value and the reward cost σ are preset; the reward increases the client's credit and the penalty decreases it.
Further, the first server performs secure, free-rider-resistant parameter aggregation to obtain the aggregated gradient ciphertext, and each client updates according to it; in the trap round, free-riders are induced to update in the wrong direction by constructing a trap gradient, as follows:
initializing [[g]] to [[0]];
among all clients, aggregating the set of clients with credit α_i > 0 to obtain the aggregated gradient ciphertext [[g]], and updating the credits of the clients participating in aggregation;
in the trap round, the first server S1 executes the trap algorithm, specifically: it keeps the current [[g]] and records it as [[g_reserve]], re-initializes [[g]] as a random matrix, normalizes [[g]] to obtain the trap gradient [[g_trap]], and sends the trap gradient [[g_trap]] to the clients with credit α_i > 0, inducing free-riders to update in the wrong direction;
in the round after the trap round, the first server S1 sends the previously stored [[g_reserve]] to the clients with credit α_i > 0 for their update, and in the remaining normal rounds S1 sends the aggregated gradient ciphertext [[g]] to the clients with credit α_i > 0 for their update.
According to a second aspect of the embodiments of the present application, a device for defending against free-rider attacks in privacy-preserving federated learning is provided, comprising:
a model and key initialization module, used for the first server to initialize the model, for the key center to generate a public-private key pair and broadcast the public key, and for splitting the private key, via a key splitting algorithm, between the first server and the second server and between the first server and each client;
a model distribution module, used for the first server to encrypt the first initial model with the public key and partially decrypt it with its own private-key share to obtain a second initial model, and for each client to download the second initial model and run the partial decryption algorithm and the full decryption algorithm with its own private-key share to obtain a third initial model;
a local model training module, used for each client to perform local model training according to the third initial model to obtain its gradient, encrypt the gradient with the public key, and send the gradient ciphertext to the first server;
a model parameter aggregation module, used for the first server and the second server to compute, from each client's gradient ciphertext, the cosine similarity between the client's gradient and the average gradient and to reward or penalize each client accordingly, and for the first server to perform secure, free-rider-resistant parameter aggregation on the gradient ciphertexts to obtain the aggregated gradient ciphertext, each client updating according to it, wherein in the trap round a trap gradient is constructed to induce free-riders to update in the wrong direction.
According to a third aspect of an embodiment of the present application, there is provided an electronic apparatus including:
One or more processors;
a memory for storing one or more programs;
the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method of the first aspect.
According to a fourth aspect of embodiments of the present application there is provided a computer readable storage medium having stored thereon computer instructions which when executed by a processor perform the steps of the method according to the first aspect.
The technical solutions provided by the embodiments of the present application may have the following beneficial effects:
As can be seen from the above embodiments, the present application designs a scheme for resisting malicious free-rider attacks in federated learning under privacy protection. Specifically, the application provides a secure ciphertext multiplication method that lets participants encrypt their model gradients and upload them to the server for aggregation, protecting participant privacy. On this basis, the server runs a free-rider defense based on a trap and a reward-and-penalty mechanism: it computes, under ciphertext, the similarity between each uploaded model and the sum of the participants' models, penalizes participants with large deviations but does not kick them out immediately until a participant has repeatedly submitted updates that differ greatly from the overall model, which protects the rights of clients with non-IID data and achieves fair aggregation. Meanwhile, free-riders with strong disguise capability (those that disguise a small amount of data as the same data volume as the other participants) are further identified by the trap mechanism and kicked out immediately once identified, avoiding the traditional methods' mistake of classifying such free-riders as normal participants. Under these two mechanisms, free-riders are resisted in federated learning, and attackers who have no training data, or only a small amount of it, are prevented from obtaining the high-quality aggregated model.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the application as claimed.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the application and together with the description, serve to explain the principles of the application.
FIG. 1 is a flow chart of a method for defending against free-rider attacks in privacy-preserving federated learning according to an exemplary embodiment.
FIG. 2 is a block diagram of a device for defending against free-rider attacks in privacy-preserving federated learning according to an exemplary embodiment.
Fig. 3 is a schematic diagram of an electronic device shown according to an example embodiment.
Detailed Description
Reference will now be made in detail to exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, the same numbers in different drawings refer to the same or similar elements, unless otherwise indicated. The implementations described in the following exemplary examples do not represent all implementations consistent with the application.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. As used in this specification and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any or all possible combinations of one or more of the associated listed items.
It should be understood that although the terms first, second, third, etc. may be used herein to describe various information, the information should not be limited by these terms. These terms are only used to distinguish information of the same type from one another. For example, first information may also be referred to as second information, and similarly second information may be referred to as first information, without departing from the scope of the application. Depending on the context, the word "if" as used herein may be interpreted as "when", "upon", or "in response to determining".
Noun interpretation:
(1) Federated learning
Federated learning is a distributed machine learning framework. It allows two or more participants to collaboratively build a common machine learning model, with each participant's training data kept local and never leaving the participant during model training. After local training, each participant uploads its model parameters to a central server, which aggregates the parameters of all participants and distributes the aggregated parameters back to them; the participants then continue with a new round of training, and this process iterates until the model converges. Model-related information can be exchanged and transmitted between participants in encrypted form, ensuring that no participant can reverse-engineer another's raw data. The performance of the final federated model can be sufficiently close to that of an ideal model (the model trained with all training data pooled together).
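As a plain illustration of this workflow, a minimal plaintext federated-averaging round (without any of the privacy protection added later in this document) might look like the sketch below; the function names are hypothetical.

```python
import numpy as np

def federated_round(global_model, client_datasets, local_train):
    """One plaintext federated round: every client trains locally on its own data,
    and the server averages the resulting updates into the next global model."""
    updates = [local_train(global_model, data) for data in client_datasets]  # data stays local
    return np.mean(updates, axis=0)                                          # server-side aggregation
```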
(2) Free-rider attack
The application of federated learning is becoming more and more widespread; its two important goals are privacy protection and joint modeling, and both are targets for attackers. In federated learning, each participant trains a local model with its own local data and uploads the model to the server; because it carries the participants' data information, the aggregated model has extremely high commercial value, which is exactly why free-riders appear in federated learning. A free-rider wants to obtain the same high-quality model as the other participants while having no local training data, or while reducing its local training cost and overhead; this lets low-contribution clients obtain the same model as high-contribution clients and undermines fairness in federated learning.
(3) Privacy-preserving federated learning
Although federated learning protects the participants' local data by having participants and the server exchange only model parameters, researchers have found that the exchanged model gradients can also leak private information about the training data. The training mechanism of federated learning introduces new privacy risks: an attacker can obtain private information about participants' local data through membership inference, attribute inference, eavesdropping, and similar methods. To prevent leakage of the participants' private information, privacy protection is required; current privacy-preserving federated learning algorithms may use secure multi-party computation, differential privacy, encryption, and so on.
FIG. 1 is a flow chart of a method for defending against free-rider attacks in privacy-preserving federated learning according to an exemplary embodiment. As shown in FIG. 1, the method, applied to a terminal, may include the following steps:
(1) The first server performs model initialization; a key center generates a public-private key pair and broadcasts the public key, and the private key is split, via a key splitting algorithm, between the first server and the second server, and between the first server and each client;
(2) The first server encrypts the first initial model with the public key and partially decrypts it with its own private-key share to obtain a second initial model; each client downloads the second initial model and runs the partial decryption algorithm and the full decryption algorithm with its own private-key share to obtain a third initial model;
(3) According to the third initial model, each client performs local model training to obtain its gradient, encrypts the gradient with the public key, and sends the gradient ciphertext to the first server;
(4) The first server and the second server compute, from each client's gradient ciphertext, the cosine similarity between the client's gradient and the average gradient, and reward or penalize each client accordingly; the first server performs secure, free-rider-resistant parameter aggregation on the gradient ciphertexts to obtain an aggregated gradient ciphertext, and each client updates according to it; in the trap round, a trap gradient is constructed to induce free-riders to update in the wrong direction.
This scheme combines a trap and reward-and-penalty mechanism with a homomorphic encryption algorithm to provide a federated learning method that resists free-rider attacks under privacy protection. The scheme can be used on the server side of federated learning in various scenarios; for example, in a medical scenario several hospitals and medical institutions jointly model a cancer prediction model, and using this scheme at the central server resists free-riders (attackers who have no medical dataset but want to obtain the high-value prediction model) while protecting patient data privacy. The scheme involves two central servers S1 and S2 that follow the prescribed protocol but may attempt to violate privacy without colluding with each other, a trusted key center KC, and clients C = {C_1, ..., C_n}. The scheme is as follows:
(1) Model and key initialization: the first server performs model initialization; the key center generates a public-private key pair and broadcasts the public key, and the private key is split, via a key splitting algorithm, between the first server and the second server, and between the first server and each client;
First, S1 initializes the model parameters randomly to obtain W_0. The key center KC is responsible for generating and splitting the key: after generating a public-private key pair (pk, sk), it broadcasts the public key and runs the key splitting algorithm between S1 and S2 and between S1 and each C_i (i ∈ [1, n]). In earlier encryption schemes with only a single server, once a malicious client leaks the private key, the server can decrypt the uploaded gradients of all clients.
It should be noted that the privacy protection technique in this patent is based on Paillier's double-trapdoor cryptosystem, comprising a key splitting algorithm, a partial decryption algorithm, and a full decryption algorithm. Two large primes k and l are chosen at random, λ = lcm(k−1, l−1) is computed, and N = kl.
The Paillier cryptosystem is additively homomorphic: over the plaintext space Z_N, given two ciphertexts [[x_1]], [[x_2]] and a constant k, it satisfies:
[[x_1 + x_2]] = [[x_1]] · [[x_2]],
[[x_1]]^(N−1) = [[−x_1]],
[[x_1 − x_2]] = [[x_1]] · [[x_2]]^(N−1),
[[k · x_1]] = [[x_1]]^k.
the key segmentation algorithm is as follows:
(sk)→(sk 1 ,sk 2 ) Given a private key sk=λ, randomly splitting sk into two private keys sk 1 And sk 2 And satisfies the following conditions:
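The following Python sketch illustrates the additive homomorphism and the key split on toy parameters. It uses the common (1+N)^m · r^N form of Paillier and a split exponent d = sk_1 + sk_2 with d ≡ 0 (mod λ) and d ≡ 1 (mod N); these are standard choices for split-key Paillier variants and are an assumption here, since the patent's exact double-trapdoor parameters are not reproduced in this text.

```python
import math
import secrets

def crt(a1, n1, a2, n2):
    """Smallest x with x = a1 (mod n1) and x = a2 (mod n2), for coprime n1, n2."""
    return (a1 + n1 * (((a2 - a1) * pow(n1, -1, n2)) % n2)) % (n1 * n2)

def keygen(p=1_000_003, q=1_000_033):            # toy primes; real use needs >= 1024-bit primes
    N = p * q
    lam = math.lcm(p - 1, q - 1)
    # assumed split-key trick: d = 0 (mod lam) cancels the random mask, d = 1 (mod N) keeps the message
    d = crt(0, lam, 1, N)
    sk1 = secrets.randbelow(d)                   # key share for the first server
    sk2 = d - sk1                                # complementary share for the second server / client
    return N, sk1, sk2

def enc(m, N):
    """Paillier encryption of an integer m (mod N)."""
    r = secrets.randbelow(N - 1) + 1
    return (pow(1 + N, m % N, N * N) * pow(r, N, N * N)) % (N * N)

def add(c1, c2, N):
    """[[x1 + x2]] = [[x1]] * [[x2]] (mod N^2)."""
    return (c1 * c2) % (N * N)

def neg(c, N):
    """[[-x]] = [[x]]^(N-1) (mod N^2), so subtraction is add(c1, neg(c2, N), N)."""
    return pow(c, N - 1, N * N)
```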
(2) Model distribution: the first server encrypts the first initial model with the public key and partially decrypts it with its own private-key share to obtain a second initial model; each client downloads the second initial model and runs the partial decryption algorithm and the full decryption algorithm with its own private-key share to obtain a third initial model;
Specifically, S1 encrypts the initial model with pk to obtain the first initial model [[W_0]]; in the medical scenario, [[W_0]] is the encrypted initial disease-diagnosis model. S1 then uses its own private-key share sk_1 to run the partial decryption algorithm, obtaining the second initial model [W_0]_1, and distributes the first initial model [[W_0]] and the second initial model [W_0]_1 to the clients. At this point, a party holding the ciphertext and only a single key share still cannot learn the concrete model parameters, because a partial private key only supports the partial decryption algorithm; this prevents disclosure of information such as patient privacy contained in the data. Each client, i.e., a medical institution such as a hospital, then downloads [[W_0]] and [W_0]_1, locally uses its own private-key share sk_2 to run the partial decryption algorithm to obtain [W_0]_2, and then uses [W_0]_1 and [W_0]_2 in the full decryption algorithm to obtain W_0, the plaintext initial model, as the third initial model.
Specifically, the partial decryption algorithm maps ([[x]], sk_i) → [x]_i: given [[x]] and one party's private-key share sk_i, the ciphertext is partially decrypted into the corresponding share [x]_i. In implementation, the ciphertext is raised to the power sk_i and reduced modulo N^2, yielding a partial ciphertext.
The full decryption algorithm maps ([x]_1, [x]_2) → x: given the tuple of decryption shares ([x]_1, [x]_2), the plaintext x is recovered.
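Continuing the sketch above, the partial and full decryption algorithms can be illustrated as follows; the same caveat applies, since this follows a generic split-key Paillier construction rather than necessarily the patent's exact double-trapdoor scheme.

```python
def part_dec(c, sk_i, N):
    """Partial decryption with one key share: raise the ciphertext to sk_i mod N^2."""
    return pow(c, sk_i, N * N)

def full_dec(share1, share2, N):
    """Full decryption: combine two partial decryptions.
    share1 * share2 = c^(sk1 + sk2) = 1 + m*N (mod N^2) under the assumed key split."""
    t = (share1 * share2) % (N * N)
    m = ((t - 1) // N) % N
    return m if m <= N // 2 else m - N           # decode signed plaintexts

# usage sketch: encrypt, add homomorphically, then decrypt with the two shares
N, sk1, sk2 = keygen()
c = add(enc(42, N), enc(-5, N), N)               # [[42 + (-5)]]
assert full_dec(part_dec(c, sk1, N), part_dec(c, sk2, N), N) == 37
```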
In particular, since the key is split between S1 and S2 and between S1 and each C_i (i ∈ [1, n]), the plaintext of the model can only be obtained when the two central servers decrypt together, or when a medical institution and the server decrypt together. Under the security assumptions, servers S1 and S2 do not collude with each other, and collusion between a medical institution and the server would compromise that institution's own patient privacy, so the medical institution and the server can likewise be regarded as non-colluding; the security of the scheme follows.
(3) Client local model training: according to the third initial model, each client performs local model training to obtain its gradient, encrypts the gradient with the public key, and sends the gradient ciphertext to the first server;
In the current round (say round t), each client trains the aggregated model W_t on its local dataset and obtains the gradient of this round. Since the encryption and decryption algorithms operate on integers, the gradient must be approximated as a large integer; concretely, each gradient component is multiplied by deg and rounded, where deg is the expansion multiple, determined by the required model precision, and can generally be set to 10^6.
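A minimal sketch of this integer approximation, assuming the usual multiply-by-deg-and-round encoding with deg = 10^6 (the rounding detail is an assumption, since the exact formula is not reproduced in this text):

```python
import numpy as np

DEG = 10**6                                      # expansion multiple deg

def encode_gradient(g):
    """Scale a floating-point gradient by DEG and round, so the integer-only
    encryption and decryption algorithms can operate on it."""
    return np.rint(np.asarray(g) * DEG).astype(np.int64)

def decode_gradient(g_int):
    """Inverse of encode_gradient, up to rounding error."""
    return np.asarray(g_int, dtype=np.float64) / DEG
```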
(4) Secure, free-rider-resistant model parameter aggregation: the first server and the second server compute, from each client's gradient ciphertext, the cosine similarity between the client's gradient and the average gradient, and reward or penalize each client; the first server performs secure, free-rider-resistant parameter aggregation on the gradient ciphertexts to obtain an aggregated gradient ciphertext, and each client updates according to it; in the trap round, a trap gradient is constructed to induce free-riders to update in the wrong direction;
The secure, free-rider-resistant model parameter aggregation is based on a trap and a reward-and-penalty mechanism. Each client starts with the same initial credit, which acts as its weight for participating in training; in trap rounds the server distributes random parameters to the clients, misleading free-riders that adaptively infer their updates from the parameters distributed with the global model. Meanwhile, S1 can only obtain encrypted gradient ciphertexts, and S2 can only obtain decrypted gradients with noise, so neither can obtain any client's true gradient.
Specifically, the clients' credits are initialized to a_1, ..., a_n, all equal. Given the gradient ciphertexts [[g_1]], ..., [[g_n]] of the n clients in this round and the clients' current credits a_1, ..., a_n, S1 and S2 compute the cosine similarity between each client's gradient and the average gradient, reward or penalize each client, and perform secure, free-rider-resistant parameter aggregation accordingly, outputting the aggregated [[g]] and each client's credit α_1, ..., α_n for the next round.
First, the reward-and-penalty mechanism requires initializing the penalty value, the reward cost σ, and the threshold λ. For federated learning on an electrocardiogram anomaly classification dataset with n = 10 participants, this patent sets the hyperparameters to σ = 0.005 and λ = 0. In practice the parameter ranges should be adapted to the dataset. If the federated modeling task is complex, for example predicting pneumonia severity by jointly modeling multi-dimensional data from multiple regions (number of pneumonia patients, prevalence and transmission rates, virus mutation status, and so on), the penalty value and the cost σ should be reduced; if the task is simpler, such as classifying heart-rate anomalies from electrocardiograms, the participants can usually model the data quickly and the global model converges correspondingly fast, so the penalty value and the cost σ can be increased.
The aggregation process specifically comprises the following steps:
(4.1) S1 and S2 jointly compute, under ciphertext, the cosine similarity cos_i between the i-th client's gradient [[g_i]] and the average gradient [[g_avg]]. The scheme ensures that S1 and S2 compute without decrypting the gradient ciphertexts, protecting the privacy of the data uploaded by the participants.
Specifically, S1 and S2 compute the cosine similarity between each gradient and the gradient sum with the participants' gradients doubly blinded. The inputs of this step are the gradient ciphertexts [[g_1]], ..., [[g_n]] of the n clients, where [[g_i]] = ([[x_i1]], ..., [[x_im]]), and the ciphertext of the clients' average gradient [[g_avg]]; the outputs are the cosine similarities cos_1, ..., cos_n between each client's gradient and the gradient sum, each client's updated credit a_1, ..., a_n, and the securely aggregated model ciphertext [[g]] of this round. It comprises the following substeps:
(4.1.1) S1 computes, from the current gradient ciphertexts [[g_1]], ..., [[g_n]], the ciphertext of the gradient sum, from which the ciphertext of the average gradient can be obtained; the gradient-sum ciphertext is subsequently used to identify malicious parties. Specifically:
[[sum]] = [[g_1 + ... + g_n]] = [[g_1]] · ... · [[g_n]]
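Using the Paillier sketch given earlier (enc, add), the gradient-sum ciphertext can be computed dimension by dimension as a product of ciphertexts; a short illustrative sketch:

```python
def ciphertext_gradient_sum(grad_ciphertexts, N):
    """[[sum]] = [[g_1]] * ... * [[g_n]], computed per dimension.
    grad_ciphertexts: list of clients' gradients, each a list of per-dimension ciphertexts."""
    m = len(grad_ciphertexts[0])
    total = [enc(0, N) for _ in range(m)]        # start from [[0]] in every dimension
    for g in grad_ciphertexts:
        total = [add(t, c, N) for t, c in zip(total, g)]
    return total
```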
(4.1.2)the gradient ciphertext is protected again by the random number, so that the gradient obtained after the subsequent server decrypts the gradient ciphertext is still provided with random disturbance, the data privacy of the participants cannot be revealed because the gradient similarity needs to be calculated, and the method specifically comprises the following steps:
Gradient ciphertext [ [ g ] for the ith client i ]]Let it have m dimensions in total, for each dimension k e 1, n]Performing:
for all client gradients sum [ [ sum ] ], there is also m dimensions, for each dimension k ε [1, m ] performs:
wherein the method comprises the steps ofRepresenting a random integer for blinding the gradient cipher text.
(4.1.3)With its own key sk 1 Gradient ciphertext subjected to random number disturbance on all clientsExecuting partial decryption algorithm to obtain partial decryption and containing random disturbance protection>
(4.1.4)Handle->And->Issue->
(4.1.5)Executing the decryption algorithm twice, firstly aiming at +.>Performing partial decryption algorithm to obtain Then use->And->Performing the full decryption algorithm to get ∈>I.e. g with noise i In plaintext form; for a pair ofThe same operation is performed to obtain +.>I.e. the sum of the gradient with noise. To this end (I)>The gradient with random disturbance and the plaintext form of the gradient sum can be taken, and the subsequent operation of calculating the gradient similarity is performed on the basis.
(4.1.6)Calculating the gradient of the ith client in form of random perturbation +.>Sum of gradients->Cosine similarity of (c). I.e. < ->Here->Also with random perturbation, +.>The true cosine similarity cannot be obtained.
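The quantity S2 computes here is ordinary vector cosine similarity, evaluated on the perturbed plaintexts; a minimal sketch:

```python
import numpy as np

def cosine_similarity(u, v):
    """cos(u, v) = <u, v> / (||u|| * ||v||); S2 evaluates this on the noisy
    gradient and the noisy gradient sum, never on the true values."""
    u = np.asarray(u, dtype=np.float64)
    v = np.asarray(v, dtype=np.float64)
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))
```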
(4.1.7) S2 encrypts the perturbed cosine similarity with the encryption algorithm, obtaining its ciphertext.
(4.1.8) S2 sends the encrypted, perturbed cosine similarity to S1, which computes the ciphertext of the cosine similarity with the random perturbation removed, [[cos_i]]. Because the patent relies on Paillier's additive homomorphism, plaintext subtraction can be realized by the ciphertext-on-ciphertext operation
[[x_a − x_b]] = [[x_a]] · [[x_b]]^(N−1),
and with this operation the cosine value in perturbed form is converted into the true cosine value under ciphertext. Throughout the process the gradients exist only as ciphertext or in perturbed form, so the participants' data privacy is never exposed.
(4.1.9) S1 and S2 jointly run the partial decryption algorithm and the full decryption algorithm to obtain the cosine similarity cos_i.
(4.2) The reward-and-penalty mechanism is executed:
among all clients, for a client with cos_i > λ the reward algorithm is executed; otherwise the penalty algorithm is executed.
The reward-and-penalty mechanism uses the cosine similarity between each client's gradient and the average gradient, jointly computed by S1 and S2, as the criterion for penalizing malicious clients and rewarding normal ones; if a client's credit is exhausted by repeated penalties, it can no longer participate in the federated training. This mechanism ensures that a client holding non-IID data is penalized when its current training-data distribution differs greatly from the other clients', but is not immediately kicked out of training, while malicious clients are still punished.
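A sketch of the credit bookkeeping under the stated hyperparameters (σ = 0.005, λ = 0). The exact reward and penalty formulas are not reproduced in this text, so the simple additive update and the penalty value BETA below are assumptions made for illustration only.

```python
SIGMA = 0.005       # reward cost sigma, as set in the electrocardiogram experiment
LAMBDA = 0.0        # cosine-similarity threshold lambda
BETA = 0.05         # assumed penalty value (not specified in this text)

def update_credit(credit, cos_sim):
    """Reward a client whose gradient points roughly the same way as the average
    gradient, penalize it otherwise; a client whose credit drops below 0 is
    excluded from aggregation and no longer receives model updates."""
    if cos_sim > LAMBDA:
        return credit + SIGMA    # assumed reward rule
    return credit - BETA         # assumed penalty rule
```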
(4.3) S1 executes an aggregation algorithm that resists free-rider attacks. If a client launches a free-rider attack during aggregation, its credit is affected; when its credit α_i falls below 0 it is kicked out of the federated training, i.e., it no longer receives the model updates sent by S1, which prevents free-riders from obtaining the high-quality global model. S1 obtains the aggregated gradient ciphertext [[g]], and each client updates according to it. Because identifying free-riders by model similarity alone catches weak attackers but cannot fully identify strong ones, the scheme adds a trap round in which S1 induces free-riders to update in the wrong direction by constructing a trap gradient:
(4.3.1) Initialize [[g]] to [[0]];
(4.3.2) Among all clients, the set of clients with credit α_i > 0 is aggregated to obtain the aggregated gradient ciphertext [[g]], and the credits of the clients participating in aggregation are updated. The credits of all participants are normalized to obtain the weight proportion each participant should occupy in the final aggregated model: the larger α_i is, the closer the client is to the global model and the larger its contribution to the aggregated model should be. Clients with α_i below 0 are regarded as having launched a free-rider attack and therefore do not participate in aggregation; deg is the expansion factor used when encoding the weights. Since this patent is based on Paillier's additive homomorphism, the plaintext addition can be computed by the ciphertext-on-ciphertext operation [[x_1 + x_2]] = [[x_1]] · [[x_2]].
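A sketch of the credit-weighted aggregation over clients with positive credit, reusing enc and add from the Paillier sketch. Weighting an encrypted gradient by an integer uses Paillier scalar multiplication ([[k·x]] = [[x]]^k); encoding the normalized credit as an integer with the factor deg is an interpretation of the garbled formula above and should be read as an assumption.

```python
def aggregate(grad_ciphertexts, credits, N, deg=10**6):
    """Aggregate only clients with credit > 0, weighting each encrypted gradient
    by its normalized credit (scaled to an integer with deg)."""
    active = [(g, a) for g, a in zip(grad_ciphertexts, credits) if a > 0]
    total_credit = sum(a for _, a in active)
    m = len(active[0][0])
    agg = [enc(0, N) for _ in range(m)]                               # [[g]] initialized to [[0]]
    for g, a in active:
        w = round(deg * a / total_credit)                             # integer weight for this client
        agg = [add(t, pow(c, w, N * N), N) for t, c in zip(agg, g)]   # [[w * g_i]] = [[g_i]]^w
    return agg
```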
(4.3.3) In the trap round, S1 executes the trap algorithm as follows:
it keeps the current [[g]] and records it as [[g_reserve]], re-initializes [[g]] as a random matrix, normalizes [[g]] to obtain the trap gradient [[g_trap]], and sends the trap gradient [[g_trap]] to induce free-riders to update in the wrong direction. Note that clients whose credit has dropped below 0 no longer take part in the federated training, so the trap gradient is only sent to clients whose credit is greater than 0; the purpose of this step is to check whether stronger free-riders remain in the federated learning. If free-riders remain, the gradients they send to the server are constructed to mimic the updates of the server or of the other participants. Because privacy protection is in place, a free-rider cannot obtain other participants' model gradients by eavesdropping on the communication and can only mimic the gradient distributed by the server; since the trap gradient constructed by the server is random, a free-rider is easily identified in the trap round. Generally, the trap round should be placed in the first few rounds of model training so that free-riders are identified as early as possible in the federated training;
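A sketch of the server-side trap-round logic; the random-matrix construction and normalization follow the description above, while flattening and encrypting the trap gradient with the earlier enc function (after integer encoding) is an assumed implementation detail.

```python
import numpy as np

def make_trap_gradient(shape, N, deg=10**6):
    """Build a random, normalized trap gradient and encrypt it per dimension.
    Honest clients are unaffected in the long run (the reserved aggregate is sent
    in the next round), while a free-rider that mimics the distributed parameters
    reveals itself by following this random direction."""
    g = np.random.randn(*shape)
    g = g / np.linalg.norm(g)                                 # normalize the random matrix
    return [enc(int(round(x * deg)), N) for x in g.ravel()]   # [[g_trap]], dimension by dimension
```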
(4.3.4) In the round immediately after the trap round, the first server S1 sends the previously stored [[g_reserve]] to the clients with credit α_i > 0 for their update; in the remaining normal rounds, S1 sends the aggregated gradient ciphertext [[g]] to the clients with credit α_i > 0 for their update.
This scheme resists free-riders in the federated-learning scenario; in the medical scenario, the training data contains a large amount of sensitive information such as patients' disease histories and diagnoses. Resisting free-rider attacks prevents malicious nodes from obtaining other nodes' sensitive information and safeguards patient privacy. It also ensures that all participating nodes in federated learning contribute their own data, so the model can learn each node's characteristics more fully and diagnostic accuracy and performance improve. At the same time, resisting free-rider attacks ensures that every participating node makes a real contribution to federated learning, prevents nodes from losing the motivation to participate because of unfairness, and maintains the fairness and sustainability of federated learning.
Corresponding to the foregoing embodiments of the method for defending against free-rider attacks in privacy-preserving federated learning, the present application also provides embodiments of a device for defending against free-rider attacks in privacy-preserving federated learning.
FIG. 2 is a block diagram of a device for defending against free-rider attacks in privacy-preserving federated learning according to an exemplary embodiment. Referring to FIG. 2, the device may include:
the model and key initialization module 21, used for the first server to initialize the model, for the key center to generate a public-private key pair and broadcast the public key, and for splitting the private key, via a key splitting algorithm, between the first server and the second server and between the first server and each client;
the model distribution module 22, used for the first server to encrypt the first initial model with the public key and partially decrypt it with its own private-key share to obtain a second initial model, and for each client to download the second initial model and run the partial decryption algorithm and the full decryption algorithm with its own private-key share to obtain a third initial model;
the local model training module 23, used for each client to perform local model training according to the third initial model to obtain its gradient, encrypt the gradient with the public key, and send the gradient ciphertext to the first server;
the model parameter aggregation module 24, used for the first server and the second server to compute, from each client's gradient ciphertext, the cosine similarity between the client's gradient and the average gradient and to reward or penalize each client accordingly, and for the first server to perform secure, free-rider-resistant parameter aggregation to obtain the aggregated gradient ciphertext, each client updating according to it, wherein in the trap round a trap gradient is constructed to induce free-riders to update in the wrong direction.
The specific manner in which the various modules perform the operations in the apparatus of the above embodiments have been described in detail in connection with the embodiments of the method, and will not be described in detail herein.
For the device embodiments, reference is made to the description of the method embodiments for the relevant points, since they essentially correspond to the method embodiments. The apparatus embodiments described above are merely illustrative, wherein the elements illustrated as separate elements may or may not be physically separate, and the elements shown as elements may or may not be physical elements, may be located in one place, or may be distributed over a plurality of network elements. Some or all of the modules may be selected according to actual needs to achieve the purposes of the present application. Those of ordinary skill in the art will understand and implement the present application without undue burden.
Correspondingly, the present application also provides an electronic device, comprising: one or more processors; a memory for storing one or more programs; when the one or more programs are executed by the one or more processors, the one or more processors implement the above method for defending against free-rider attacks in privacy-preserving federated learning. FIG. 3 is a hardware structure diagram of a device with data processing capability on which the present application runs; in addition to the processor, memory, and network interface shown in FIG. 3, such a device may also include other hardware according to its actual function, which is not described in detail here.
Correspondingly, the present application also provides a computer-readable storage medium on which computer instructions are stored; when executed by a processor, the instructions implement the above method for defending against free-rider attacks in privacy-preserving federated learning. The computer-readable storage medium may be an internal storage unit of any device with data processing capability described in the foregoing embodiments, such as a hard disk or a memory. It may also be an external storage device, such as a plug-in hard disk, a Smart Media Card (SMC), an SD card, or a flash card provided on the device. Further, the computer-readable storage medium may include both an internal storage unit and an external storage device of the device. It is used to store the computer program and other programs and data required by the device, and may also be used to temporarily store data that has been or will be output.
Other embodiments of the application will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure herein. This application is intended to cover any variations, uses, or adaptations of the application following, in general, the principles of the application and including such departures from the present disclosure as come within known or customary practice within the art to which the application pertains.
It is to be understood that the application is not limited to the precise arrangements and instrumentalities shown in the drawings, which have been described above, and that various modifications and changes may be effected without departing from the scope thereof.

Claims (8)

1. A method for defending against free-rider attacks in privacy-preserving federated learning, comprising:
the first server performs model initialization; a key center generates a public-private key pair and broadcasts the public key, and the private key is split, via a key splitting algorithm, between the first server and the second server, and between the first server and each client;
the first server encrypts the first initial model with the public key and partially decrypts it with its own private-key share to obtain a second initial model; each client downloads the second initial model and runs the partial decryption algorithm and the full decryption algorithm with its own private-key share to obtain a third initial model;
according to the third initial model, each client performs local model training to obtain its gradient, encrypts the gradient with the public key, and sends the gradient ciphertext to the first server;
the first server and the second server compute, from each client's gradient ciphertext, the cosine similarity between the client's gradient and the average gradient, and reward or penalize each client accordingly; the first server performs secure, free-rider-resistant parameter aggregation on the gradient ciphertexts to obtain an aggregated gradient ciphertext, and each client updates according to the aggregated gradient ciphertext; in the trap round, a trap gradient is constructed to induce free-riders to update in the wrong direction.
2. The method of claim 1, wherein after each client performs local model training and obtains its gradient, the gradient of the current round is approximated as a large integer.
3. The method of claim 1, wherein the first server and the second server calculate the cosine similarity between each client's gradient and the average gradient from the clients' gradient ciphertexts, comprising:
the first server, given the current gradient ciphertexts [[g_1]], ..., [[g_n]], computes the ciphertext of the gradient sum;
the first server adds random perturbations, generated from random numbers, to the gradient ciphertexts as an additional layer of protection;
using its own private-key share sk_1, the first server executes the partial decryption algorithm on the randomly perturbed gradient ciphertexts of all clients, obtaining partially decrypted, perturbation-protected gradient ciphertexts;
the first server sends the partially decrypted client ciphertexts and the partially decrypted gradient-sum ciphertext to the second server;
the second server executes the partial decryption algorithm with its own private-key share and then the full decryption algorithm on the received ciphertexts, obtaining each client's noisy gradient g_i in plaintext form; performing the same operations on the gradient-sum ciphertext yields the noisy gradient sum;
the second server calculates, in randomly perturbed form, the cosine similarity between the i-th client's gradient and the gradient sum;
the second server encrypts the perturbed cosine similarity with the encryption algorithm;
the second server sends the resulting ciphertext back to the first server, which removes the random perturbation to obtain the cosine-similarity ciphertext [[cos_i]];
the first server and the second server execute the partial decryption algorithm and the full decryption algorithm to obtain the cosine similarity cos_i between the client gradient and the average gradient.
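For illustration only: the claim's blind-compute-unblind division of roles can be sketched in plaintext as follows. Here the "random disturbance" is modelled as multiplicative scalar blinding, the two servers are ordinary functions, and no threshold decryption is performed; these are assumptions for the sketch, not the patent's actual perturbation or ciphertext machinery.

```python
import numpy as np

rng = np.random.default_rng(1)

def server1_blind(gradients):
    """Server 1: add multiplicative random 'disturbance' before handing data to server 2."""
    g_sum = np.sum(gradients, axis=0)
    r = rng.uniform(1.0, 10.0, size=len(gradients))   # per-client blinding factors
    r0 = rng.uniform(1.0, 10.0)                        # blinding factor for the gradient sum
    blinded = [ri * g for ri, g in zip(r, gradients)]
    return blinded, r0 * g_sum, r, r0

def server2_compute(blinded, blinded_sum):
    """Server 2: compute inner products and norms on blinded data only."""
    return [(float(b @ blinded_sum),
             float(np.linalg.norm(b)),
             float(np.linalg.norm(blinded_sum))) for b in blinded]

def server1_unblind(stats, r, r0):
    """Server 1: remove its own disturbance and recover the true cosine similarities."""
    return [(dot / (ri * r0)) / ((nb / ri) * (ns / r0))
            for (dot, nb, ns), ri in zip(stats, r)]

grads = [rng.normal(size=8) for _ in range(3)]
blinded, blinded_sum, r, r0 = server1_blind(grads)
cos = server1_unblind(server2_compute(blinded, blinded_sum), r, r0)

g_sum = np.sum(grads, axis=0)
expected = [float(g @ g_sum / (np.linalg.norm(g) * np.linalg.norm(g_sum))) for g in grads]
assert np.allclose(cos, expected)
```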
4. The method according to claim 1, wherein rewarding or penalizing each client accordingly is specifically:
among all clients, for each client whose cosine similarity cos_i > λ, executing a reward algorithm that increases its credit α_i by the reward amount; otherwise, executing a penalty algorithm that decreases its credit α_i by the penalty amount; where λ is a threshold, α_i is the credit of client C_i, and σ is the cost of the reward.
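For illustration only: one plausible credit-update rule matching the structure of claim 4 is sketched below. The threshold, reward, and penalty values are illustrative; the exact update formulas in the patent were not legible in this text and are not reproduced.

```python
def update_credit(credit: float, cos_i: float,
                  lam: float = 0.5, reward: float = 1.0, penalty: float = 1.0) -> float:
    """Reward clients whose gradient aligns with the average (cos_i > lam); penalize the rest."""
    return credit + reward if cos_i > lam else credit - penalty

# A client that keeps uploading poorly aligned gradients eventually loses all credit
# and is excluded from aggregation (claim 5 aggregates only clients with credit > 0).
credit = 2.0
for cos_i in (0.1, -0.2, 0.05):
    credit = update_credit(credit, cos_i)
print(credit)  # -1.0
```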
5. The method of claim 1, wherein the first server performs secure, free-rider-resistant parameter aggregation to obtain the aggregated gradient ciphertext, each client updates according to the aggregated gradient ciphertext, and in a trap round a trap gradient is constructed to induce free-rider attackers to update in the wrong direction, comprising:
initializing [[g]] to [[0]];
among all clients, aggregating the set of clients whose credit α_i > 0 to obtain the aggregated gradient ciphertext [[g]], and updating the number of clients participating in the aggregation;
in a trap round, the first server executes a trap algorithm, specifically: the current [[g]] is retained and recorded as [[g_reserve]], [[g]] is re-initialized as a random matrix and normalized to obtain the trap gradient [[g_trap]], and the trap gradient [[g_trap]] is sent to the clients whose credit α_i > 0, inducing free-rider attackers to update in the wrong direction;
in the round immediately after the trap round, the first server sends the previously stored [[g_reserve]] to the clients whose credit α_i > 0 for updating; in the remaining normal rounds, the first server sends the aggregated gradient ciphertext [[g]] to the clients whose credit α_i > 0 for updating.
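For illustration only: the round schedule of claim 5 can be sketched in plaintext as below. Which rounds are trap rounds, the class and method names, and the use of numpy arrays instead of ciphertexts are assumptions made for the sketch.

```python
import numpy as np

rng = np.random.default_rng(2)

class Aggregator:
    """Plaintext mock of the first server's trap-round schedule described in claim 5."""

    def __init__(self, dim: int):
        self.dim = dim
        self.g_reserve = None           # aggregate withheld during a trap round
        self.send_reserve_next = False  # flag: previous round was a trap round

    def aggregate(self, gradients, credits):
        kept = [g for g, a in zip(gradients, credits) if a > 0]
        return np.mean(kept, axis=0) if kept else np.zeros(self.dim)

    def broadcast(self, gradients, credits, trap_round: bool):
        if trap_round:
            # Keep the real aggregate aside and send a normalized random trap gradient.
            self.g_reserve = self.aggregate(gradients, credits)
            g_trap = rng.normal(size=self.dim)
            self.send_reserve_next = True
            return g_trap / np.linalg.norm(g_trap)
        if self.send_reserve_next:
            # Round right after the trap: release the withheld aggregate.
            self.send_reserve_next = False
            return self.g_reserve
        # Normal round: send the freshly aggregated gradient.
        return self.aggregate(gradients, credits)
```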
6. A device for defending against free-rider attacks in federated learning with privacy protection, comprising:
a model and key initialization module, configured for a first server to initialize the model, and for a key center to generate a public-private key pair, broadcast the public key, and split the private key between the first server and a second server, as well as between the first server and each client, through a key-splitting algorithm;
a model distribution module, configured for the first server to encrypt the first initial model with the public key and partially decrypt it with its own private-key share to obtain a second initial model, and for each client to download the second initial model and execute the partial decryption algorithm and the full decryption algorithm with its own private-key share to obtain a third initial model;
a local model training module, configured for each client to perform local model training according to the third initial model to obtain its respective gradient, encrypt the gradient with the public key, and send the gradient ciphertext to the first server;
a model parameter aggregation module, configured for the first server and the second server to calculate, from each client's gradient ciphertext, the cosine similarity between the client's gradient and the average gradient, and to reward or penalize each client accordingly; for the first server to perform secure, free-rider-resistant parameter aggregation on the gradient ciphertexts to obtain the aggregated gradient ciphertext; and for each client to update according to the aggregated gradient ciphertext, wherein in a trap round a trap gradient is constructed to induce free-rider attackers to update in the wrong direction.
7. An electronic device, comprising:
one or more processors;
a memory for storing one or more programs;
the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method of any of claims 1-5.
8. A computer readable storage medium having stored thereon computer instructions which, when executed by a processor, implement the steps of the method according to any of claims 1-5.
CN202310938055.6A 2023-07-28 2023-07-28 Method and device for defending attack of taking and riding in federal study with privacy protection Active CN117077192B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310938055.6A CN117077192B (en) 2023-07-28 2023-07-28 Method and device for defending attack of taking and riding in federal study with privacy protection

Publications (2)

Publication Number Publication Date
CN117077192A true CN117077192A (en) 2023-11-17
CN117077192B CN117077192B (en) 2024-07-05

Family

ID=88703350

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310938055.6A Active CN117077192B (en) 2023-07-28 2023-07-28 Method and device for defending attack of taking and riding in federal study with privacy protection

Country Status (1)

Country Link
CN (1) CN117077192B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117828627A (en) * 2023-11-22 2024-04-05 安徽师范大学 Federal machine learning method and system with robustness and privacy protection

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113434873A (en) * 2021-06-01 2021-09-24 内蒙古大学 Federal learning privacy protection method based on homomorphic encryption
US20230047092A1 (en) * 2021-07-30 2023-02-16 Oracle International Corporation User-level Privacy Preservation for Federated Machine Learning
CN114266361A (en) * 2021-12-30 2022-04-01 浙江工业大学 Model weight alternation-based federal learning vehicle-mounted and free-mounted defense method and device

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Dong Ye; Hou Wei; Chen Xiaojun; Zeng Shuai: "基于秘密分享和梯度选择的高效安全联邦学习" (Efficient and Secure Federated Learning Based on Secret Sharing and Gradient Selection), 计算机研究与发展 (Journal of Computer Research and Development), no. 10, 9 October 2020 (2020-10-09) *

Also Published As

Publication number Publication date
CN117077192B (en) 2024-07-05

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant