CN117077192B - Method and device for defending against free-rider attacks in privacy-preserving federated learning - Google Patents

Method and device for defending against free-rider attacks in privacy-preserving federated learning

Info

Publication number
CN117077192B
CN117077192B CN202310938055.6A CN202310938055A
Authority
CN
China
Prior art keywords
gradient
server
client
ciphertext
trap
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202310938055.6A
Other languages
Chinese (zh)
Other versions
CN117077192A (en)
Inventor
张秉晟
孙嘉葳
任奎
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang University ZJU
Original Assignee
Zhejiang University ZJU
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang University ZJU filed Critical Zhejiang University ZJU
Priority to CN202310938055.6A priority Critical patent/CN117077192B/en
Publication of CN117077192A publication Critical patent/CN117077192A/en
Application granted granted Critical
Publication of CN117077192B publication Critical patent/CN117077192B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/60Protecting data
    • G06F21/62Protecting access to data via a platform, e.g. using keys or access control rules
    • G06F21/6218Protecting access to data via a platform, e.g. using keys or access control rules to a system of files or objects, e.g. local or distributed file system or database
    • G06F21/6245Protecting personal data, e.g. for financial or medical purposes
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/60Protecting data
    • G06F21/602Providing cryptographic facilities or services
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/60Protecting data
    • G06F21/64Protecting data integrity, e.g. using checksums, certificates or signatures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00Machine learning

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • Bioethics (AREA)
  • Computer Security & Cryptography (AREA)
  • Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Hardware Design (AREA)
  • Medical Informatics (AREA)
  • Computing Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Mathematical Physics (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • Storage Device Security (AREA)

Abstract

The invention discloses a method and a device for defending against free-rider attacks in privacy-preserving federated learning. A key center generates a public-private key pair, and the private key is split between a first server and a second server and between the first server and each client. The first server encrypts the first initial model with the public key and partially decrypts it with its own private key share to obtain a second initial model; each client downloads the second initial model, decrypts it with its own private key share, and performs local model training to obtain its gradient. The first server and the second server compute, from each client's gradient ciphertext, the cosine similarity between the client's gradient and the average gradient and reward or penalize each client accordingly; aggregation is performed on the gradient ciphertexts to obtain the aggregated gradient ciphertext, and each client updates according to the aggregated gradient ciphertext. In a trap round, a trap gradient is constructed to induce free-riding attackers to update in the wrong direction.

Description

Method and device for defending against free-rider attacks in privacy-preserving federated learning
Technical Field
The invention belongs to the technical field of privacy-preserving federated learning, and in particular relates to a method and a device for defending against free-rider attacks in privacy-preserving federated learning.
Background
With the rapid development of artificial intelligence, the quality of training datasets plays an increasingly critical role in model training. Data barriers are difficult to break down due to data monopolies or business differences between departments, which makes data mutually unavailable between enterprises or departments. To break these data silos, Google proposed a distributed machine learning framework in 2016: federated learning. In federated learning, all participants share a global model in each round of training; this model is maintained by a single trusted parameter server and is iteratively generated and transmitted to the participants round by round. The participants do not need to share their local datasets and only transmit their locally trained model updates, thereby protecting the privacy of participant data.
As federated learning is applied in more and more fields, many studies have found weaknesses in it and shown that threats to both fairness and confidentiality exist. This patent aims to address fairness attacks and confidentiality in federated learning, summarized as follows:
1. Fairness attacks
In a federated learning environment, guaranteeing fairness is a major problem: the finally aggregated model has high commercial value, which increases the incentive for malicious clients to obtain the aggregated model free of charge, commonly referred to as freeloading. Such malicious clients are free-riding attackers, whose motivation is that they have no local data or want to reduce local training cost and overhead. Free-riding allows low-contribution participants to obtain the same high-quality aggregated model as high-contribution participants, which greatly damages the rights and interests of the other normal participants and destroys the fairness of federated learning.
Current common free-rider attacks mainly include the following:
(1) Free-rider attack based on random Gaussian perturbation (stochastic perturbations attack, SPA). (2) Free-rider attack based on weight changes (delta weight attack, DWA): this method presumes that the attacker has prior knowledge of the training model and can learn in advance the approximate variance between the fair participants' locally updated models and the global model under each round of SGD. (3) Free-rider attack based on an auxiliary dataset (auxiliary dataset attack, ADA): this method assumes that the free-rider holds a small portion of the dataset and replaces the random Gaussian perturbation process with an Adam optimizer.
In the course of developing this patent, it was found that current methods for defending against free-rider attacks mainly compute the similarity between participants' models, for example the Euclidean distance between models or the frequency of model weight changes. However, such methods have the following problems:
(1) Participants holding non-independent and identically distributed (Non-IID) datasets are easily misidentified as free-riders, because when a benign participant holds a Non-IID dataset, the model it trains typically has low similarity to the aggregated model and is erroneously identified as a free-rider. (2) When a free-rider uses an auxiliary dataset, its disguise becomes very strong and may be difficult to distinguish by simply computing model similarity. This patent solves the above problems by setting up a trap mechanism and a reward-and-penalty mechanism in federated learning.
2. Confidentiality attacks
Confidentiality in federated learning means that sensitive information such as local data and the global model is not revealed to unauthorized users. In federated learning, an attacker cannot directly obtain the private information of a participant's dataset because the dataset is stored locally. However, if a participant's raw model update is uploaded directly to the central server, an attacker can obtain the participant's local private information by analyzing the uploaded raw model update, so a risk of privacy disclosure remains.
Current approaches to confidentiality attacks mainly include: (1) privacy-protection techniques based on homomorphic encryption (Homomorphic Encryption, HE), where the result of operating on homomorphically encrypted data and then decrypting is consistent with the result of operating directly on the plaintext; (2) privacy-protection techniques based on secure multi-party computation (Secure Multi-Party Computation, SMPC), in which multiple parties securely compute a model or function without the involvement of a trusted third party; (3) privacy enhancement based on differential privacy (Differential Privacy, DP); and (4) blockchain techniques.
Existing defenses against free-rider attacks in privacy-preserving federated learning may themselves compromise privacy protection. For example, patent CN114266361A computes the Euclidean distances between clients and flags clients whose Euclidean distance and average frequency of weight change are abnormal; when a client has been flagged as abnormal 3 times, it is regarded as a free-rider and kicked out of the federated training. However, that method does not protect the privacy of the information uploaded by the clients; at the same time, the assumed free-rider capability is weak, and if the free-rider adopts an attack based on random Gaussian perturbation or on an auxiliary dataset, it cannot be distinguished by Euclidean distance and weight-change frequency. Patent CN112714106A defends against free-riders based on smart contracts, by establishing smart contracts between task-issuing institutions and clients in a blockchain according to proofs of computation. It uses blockchain technology, which requires every party to verify and record all transactions, so its performance and scalability are challenging; blockchain technology also requires establishing smart contracts and maintaining and transmitting blockchain data, all of which demand a large amount of storage space and computational resources. Moreover, the distribution of the loss values of its WGAN-GP model is not sufficiently distinguishing, and the assumed free-rider capability is likewise weak.
In summary, conventional methods for defending against free-rider attacks have the following problems:
(1) They judge only from the similarity between the model parameters submitted by the free-rider and those of other participants, and may fail to identify free-riders with strong disguise capability;
(2) Participants with Non-IID datasets are easily misidentified as free-riders;
(3) Privacy in federated learning is not protected.
Disclosure of Invention
In view of the problems in the prior art, the embodiments of the present application aim to provide a method and a device for defending against free-rider attacks in privacy-preserving federated learning.
According to a first aspect of the embodiments of the present application, there is provided a method for defending against free-rider attacks in privacy-preserving federated learning, comprising:
the first server performs model initialization, a key center generates a public-private key pair and broadcasts the public key, and the private key is split between the first server and the second server and between the first server and each client through a key splitting algorithm;
the first server encrypts the first initial model with the public key and performs partial decryption with the private key share it holds to obtain a second initial model, and each client downloads the second initial model and performs the partial decryption algorithm and the full decryption algorithm with its own private key share to obtain a third initial model;
according to the third initial model, each client performs local model training to obtain its own gradient, encrypts the gradient with the public key, and sends the gradient ciphertext to the first server;
the first server and the second server compute the cosine similarity between each client's gradient and the average gradient according to each client's gradient ciphertext and reward or penalize each client accordingly; the first server performs secure, free-rider-resistant parameter aggregation on the gradient ciphertexts to obtain the aggregated gradient ciphertext, and each client updates according to the aggregated gradient ciphertext, wherein in a trap round a trap gradient is constructed to induce free-riding attackers to update in the wrong direction.
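To make the overall per-round flow concrete, the following is a minimal plaintext simulation of the control flow only (encryption, key splitting, and the two-server interaction are omitted); all identifiers and the numeric reward/penalty values are illustrative assumptions rather than quantities taken from the patent.

```python
# A plaintext simulation of the round structure only: no encryption, no two-server
# split; reward/penalty constants and all identifiers are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
DIM, N_CLIENTS, ROUNDS, TRAP_ROUND = 8, 5, 4, 1
SIGMA, LAMBDA, REWARD, PENALTY = 0.005, 0.0, 0.01, 0.02   # assumed hyper-parameters

model = np.zeros(DIM)                        # W_0, initialised by the first server
quota = np.full(N_CLIENTS, 0.1)              # same initial quota for every client
reserve = None

def local_gradient(model):
    """Stand-in for local training: a noisy pull towards a common optimum."""
    return (np.ones(DIM) - model) + 0.1 * rng.normal(size=DIM)

for t in range(ROUNDS):
    grads = [local_gradient(model) for _ in range(N_CLIENTS)]          # local training
    g_sum = np.sum(grads, axis=0)
    cos = [g @ g_sum / (np.linalg.norm(g) * np.linalg.norm(g_sum)) for g in grads]
    for i, c in enumerate(cos):                                        # reward / penalty
        quota[i] += (REWARD - SIGMA) if c > LAMBDA else -(PENALTY + SIGMA)
    active = [g for g, a in zip(grads, quota) if a > 0]
    update = np.mean(active, axis=0)                                   # aggregation
    if t == TRAP_ROUND:                                                # trap round
        reserve = update
        update = rng.normal(size=DIM)
        update /= np.linalg.norm(update)                               # trap gradient
    elif t == TRAP_ROUND + 1 and reserve is not None:
        update = reserve                                               # resume with g_reserve
    model = model + update                                             # clients update
print("final quotas:", np.round(quota, 3))
```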
Further, after each client performs local model training and obtains its own gradient, the gradient of the current round is approximated as a large integer.
Further, the first server and the second server compute the cosine similarity between each client's gradient and the average gradient according to each client's gradient ciphertext, comprising the following steps:
the first server computes the ciphertext of the gradient sum [[sum]] from the current gradient ciphertexts [[g_1]], …, [[g_n]];
the first server adds random perturbation to the gradient ciphertexts using random numbers, to protect them once more;
the first server uses its own key share sk_1 to execute the partial decryption algorithm on the randomly perturbed gradient ciphertexts of all clients, obtaining partially decrypted gradient ciphertexts that are still protected by the random perturbation;
the first server sends [[sum]] and the partially decrypted, perturbed values to the second server;
the second server executes the partial decryption algorithm on them and then executes the full decryption algorithm with the two partial decryptions, obtaining g_i with noise; it performs the same operations on the sum, obtaining the gradient sum with noise;
the second server computes the cosine similarity between the i-th client's gradient in randomly perturbed form and the gradient sum in randomly perturbed form;
the second server encrypts the perturbed cosine similarity with the encryption algorithm;
the second server sends it to the first server, which computes the cosine similarity [[cos_i]] with the random perturbation removed;
the first server and the second server execute the partial decryption algorithm and the full decryption algorithm to obtain the cosine similarity cos_i between the client's gradient and the average gradient.
Further, each client is rewarded or penalized accordingly, specifically:
among all clients, for clients whose cosine similarity cos_i > λ, the reward algorithm is executed to increase their quota; otherwise the penalty algorithm is executed to decrease it, where λ is the threshold, α_i is the quota of client C_i, and the penalty, the reward, and the cost σ are preset parameters.
Further, the first server performs secure, free-rider-resistant parameter aggregation to obtain the aggregated gradient ciphertext, and each client updates according to the aggregated gradient ciphertext, wherein in the trap round a trap gradient is constructed to induce free-riding attackers to update in the wrong direction, comprising the following steps:
initializing [[g]] to [[0]];
among all clients, aggregating the set of clients whose quota α_i > 0 to obtain the aggregated gradient ciphertext [[g]], and updating the quotas of the clients participating in the aggregation;
in the trap round, the first server executes the trap algorithm, specifically: the current [[g]] is retained and recorded as [[g_reserve]], [[g]] is re-initialized as a random matrix and normalized to obtain the trap gradient [[g_trap]], and [[g_trap]] is sent to the clients whose quota α_i > 0 to induce free-riding attackers to update in the wrong direction;
in the round after the trap round, the first server sends the previously stored [[g_reserve]] to the clients whose quota α_i > 0 for updating, and in the remaining normal rounds the first server sends the aggregated gradient ciphertext [[g]] to the clients whose quota α_i > 0 for updating.
According to a second aspect of the embodiments of the present application, there is provided a device for defending against free-rider attacks in privacy-preserving federated learning, comprising:
a model and key initialization module, used for the first server to perform model initialization, for the key center to generate a public-private key pair and broadcast the public key, and for the private key to be split between the first server and the second server and between the first server and each client through the key splitting algorithm;
a model distribution module, used for the first server to encrypt the first initial model with the public key and perform partial decryption with the private key share it holds to obtain a second initial model, with each client downloading the second initial model and performing the partial decryption algorithm and the full decryption algorithm with its own private key share to obtain a third initial model;
a local model training module, used for each client to perform local model training according to the third initial model to obtain its own gradient, encrypt the gradient with the public key, and send the gradient ciphertext to the first server;
a model parameter aggregation module, used for the first server and the second server to compute the cosine similarity between each client's gradient and the average gradient according to each client's gradient ciphertext and reward or penalize each client accordingly, and for the first server to perform secure, free-rider-resistant parameter aggregation on the gradient ciphertexts to obtain the aggregated gradient ciphertext, with each client updating according to the aggregated gradient ciphertext, wherein in the trap round a trap gradient is constructed to induce free-riding attackers to update in the wrong direction.
According to a third aspect of an embodiment of the present application, there is provided an electronic apparatus including:
one or more processors;
a memory for storing one or more programs;
The one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method of the first aspect.
According to a fourth aspect of embodiments of the present application there is provided a computer readable storage medium having stored thereon computer instructions which when executed by a processor perform the steps of the method according to the first aspect.
The technical scheme provided by the embodiment of the application can comprise the following beneficial effects:
As can be seen from the above embodiments, the present application designs a scheme for resisting malicious free-rider attacks in federated learning under privacy protection. Specifically, the application provides a secure ciphertext multiplication computation method that allows the participants to encrypt their model gradients and upload them to the server for aggregation, thereby protecting participant privacy. On this basis, the server executes a free-rider defense method based on a trap mechanism and a reward-and-penalty mechanism: it computes, under ciphertext, the similarity between each participant's uploaded model and the sum of the participants' models, and penalizes participants with larger differences without kicking them out immediately, until a participant has repeatedly submitted model updates that differ greatly from the overall model. This protects the rights of clients holding Non-IID data and achieves fairness of aggregation. At the same time, free-riders with strong disguise capability (those who use only a small amount of data to disguise themselves as holding as much data as the other participants) are further identified through the trap mechanism and are kicked out immediately once identified, avoiding the problem in conventional methods where such free-riders are wrongly classified as normal participants. Under these two mechanisms, resistance to free-riders in federated learning is achieved, and attackers who hold no training data or only a small amount of training data yet still participate in federated learning are prevented from obtaining the high-quality aggregated model.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the application as claimed.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the application and together with the description, serve to explain the principles of the application.
FIG. 1 is a flow chart illustrating a method for defending against free-rider attacks in privacy-preserving federated learning according to an exemplary embodiment.
Fig. 2 is a block diagram of a device for defending against free-rider attacks in privacy-preserving federated learning according to an exemplary embodiment.
Fig. 3 is a schematic diagram of an electronic device shown according to an example embodiment.
Detailed Description
Reference will now be made in detail to exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, the same numbers in different drawings refer to the same or similar elements, unless otherwise indicated. The implementations described in the following exemplary examples do not represent all implementations consistent with the application.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. As used in this specification and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any or all possible combinations of one or more of the associated listed items.
It should be understood that although the terms first, second, third, etc. may be used herein to describe various information, this information should not be limited by these terms. These terms are only used to distinguish one type of information from another. For example, without departing from the scope of the application, first information may also be referred to as second information, and similarly, second information may also be referred to as first information. Depending on the context, the word "if" as used herein may be interpreted as "when", "upon", or "in response to determining".
Explanation of terms:
(1) Federated learning
Federated learning is a distributed machine learning framework. The framework allows two or more participants to collaboratively construct a common machine learning model, where each participant's training data is kept local and does not leave the participant during model training. After the participants finish training, they upload their respective model parameters to a central server; the central server aggregates the model parameters of each participant and distributes the aggregated parameters to each participant, after which the participants continue with a new round of training, iterating this process until the model converges. Model-related information can be exchanged and transmitted between participants in encrypted form, which ensures that no participant can reverse-engineer the raw data of any other party. The performance of the final federated learning model can be sufficiently close to that of the ideal model (i.e., the model that would be obtained by pooling all training data together for training).
(2) Free-rider attack
At present, federated learning is applied ever more widely. Its two important goals are privacy protection and joint modeling, and both are targets for attackers. In federated learning, each participant trains a local model with its own local data and uploads the model to a server; because the aggregated model carries the participants' data information, it has extremely high commercial value, which also leads to the presence of free-riders in federated learning. A free-rider wants to obtain the same high-quality model as the other participants while holding no training data locally or reducing local training cost and overhead, which allows low-contribution clients to obtain the same model as high-contribution clients and undermines fairness in federated learning.
(3) Privacy-preserving federated learning
Although federated learning protects the participants' local data by having the participants and servers exchange model parameters, researchers have found that the exchanged model gradients may also reveal private information about the training data. The training mechanism of federated learning introduces new privacy risks: an attacker can obtain private information about participants' local data through methods such as membership inference, attribute inference, and eavesdropping. To prevent the participants' private information from leaking, it must be protected; current privacy-preserving federated learning algorithms can employ secure multi-party computation, differential privacy, encryption, and so on.
FIG. 1 is a flow chart of a method for defending against free-rider attacks in privacy-preserving federated learning according to an exemplary embodiment. As shown in FIG. 1, the method is applied to a terminal and may include the following steps:
(1) The first server performs model initialization, a key center generates a public-private key pair and broadcasts the public key, and the private key is split between the first server and the second server and between the first server and each client through a key splitting algorithm;
(2) The first server encrypts the first initial model with the public key and performs partial decryption with the private key share it holds to obtain a second initial model, and each client downloads the second initial model and performs the partial decryption algorithm and the full decryption algorithm with its own private key share to obtain a third initial model;
(3) According to the third initial model, each client performs local model training to obtain its own gradient, encrypts the gradient with the public key, and sends the gradient ciphertext to the first server;
(4) The first server and the second server compute the cosine similarity between each client's gradient and the average gradient according to each client's gradient ciphertext and reward or penalize each client accordingly; the first server performs secure, free-rider-resistant parameter aggregation on the gradient ciphertexts to obtain the aggregated gradient ciphertext, and each client updates according to the aggregated gradient ciphertext, wherein in a trap round a trap gradient is constructed to induce free-riding attackers to update in the wrong direction.
This scheme is based on a trap mechanism and a reward-and-penalty mechanism and, combined with a homomorphic encryption algorithm, provides a federated learning method that resists free-rider attacks under privacy protection. The scheme can be used on the server side of federated learning in various scenarios; for example, in a medical scenario where several hospitals and medical institutions jointly model a cancer prediction model, using the scheme at the central server achieves the goal of resisting free-riders (i.e., attackers who hold no medical dataset but want to obtain the high-value prediction model) while protecting the privacy of patient data. The scheme involves two central servers S_1 and S_2 that follow the specified protocol but may each attempt to violate privacy without colluding with each other, a trusted key center KC, and clients C = {C_1, …, C_n}. The scheme is as follows:
(1) Model and key initialization: the first server performs model initialization, the key center generates a public-private key pair and broadcasts the public key, and the private key is split between the first server and the second server and between the first server and each client through the key splitting algorithm;
First, S_1 initializes the model parameters by random initialization to obtain W_0. The key center KC generates and splits the key: after generating the public-private key pair (pk, sk), KC broadcasts the public key and executes the key splitting algorithm to split the private key between S_1 and S_2 and between S_1 and each C_i (i ∈ [1, n]). In earlier encryption schemes that use only a single server, once a malicious client leaks the private key, the server can decrypt the uploaded gradients of all clients.
It should be noted that the privacy-protection technique in this patent is based on the Paillier double-trapdoor cryptosystem, including a key splitting algorithm, a partial decryption algorithm, and a full decryption algorithm. Two large primes k and l are randomly selected, λ = lcm(k−1, l−1) is computed, and N = k·l.
The Paillier cryptosystem is a scheme satisfying additive homomorphism: over the plaintext space Z_N, given two ciphertexts [[x_1]], [[x_2]] and a constant k, the Paillier cryptosystem has the following properties:
[[x_1 + x_2]] = [[x_1]] · [[x_2]],
[[x_1]]^(N−1) = [[−x_1]],
[[x_1 − x_2]] = [[x_1]] · [[x_2]]^(N−1).
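To make these homomorphic properties concrete, the following is a minimal, illustrative single-key Paillier sketch with toy parameters (no double-trapdoor splitting, not secure for real use); the class and function names are our own and do not come from the patent.

```python
# Minimal textbook Paillier sketch (toy primes, insecure) illustrating the additive
# homomorphism used above: [[x1+x2]] = [[x1]]*[[x2]] mod N^2 and [[x1]]^(N-1) = [[-x1]].
import math, random

class Paillier:
    def __init__(self, k=10007, l=10009):             # two toy primes k, l
        self.N = k * l
        self.N2 = self.N * self.N
        self.lam = math.lcm(k - 1, l - 1)              # lambda = lcm(k-1, l-1)
        self.g = self.N + 1
        self.mu = pow(self.lam, -1, self.N)            # with g = N+1, mu = lambda^-1 mod N

    def enc(self, m):
        r = random.randrange(2, self.N)
        while math.gcd(r, self.N) != 1:
            r = random.randrange(2, self.N)
        return (pow(self.g, m % self.N, self.N2) * pow(r, self.N, self.N2)) % self.N2

    def dec(self, c):
        L = (pow(c, self.lam, self.N2) - 1) // self.N  # L(u) = (u - 1) / N
        return (L * self.mu) % self.N

pk = Paillier()
c1, c2 = pk.enc(15), pk.enc(27)
assert pk.dec((c1 * c2) % pk.N2) == 42                 # [[x1 + x2]] = [[x1]] * [[x2]]
assert pk.dec(pow(c1, pk.N - 1, pk.N2)) == pk.N - 15   # [[x1]]^(N-1) = [[-x1]], i.e. -15 mod N
print("additive homomorphism verified")
```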
The key splitting algorithm is as follows:
(sk) → (sk_1, sk_2): given the private key sk = λ, sk is randomly split into two private key shares sk_1 and sk_2 satisfying the splitting condition of the double-trapdoor scheme.
(2) Model distribution: the first server encrypts the first initial model with the public key and performs partial decryption with the private key share it holds to obtain a second initial model, and each client downloads the second initial model and performs the partial decryption algorithm and the full decryption algorithm with its own private key share to obtain a third initial model;
Specifically, S_1 encrypts the initial model with pk to obtain the first initial model [[W_0]]; in the medical scenario, [[W_0]] is the encrypted initial disease diagnosis model. S_1 then executes the partial decryption algorithm with its own private key share sk_1 to obtain the second initial model [W_0]_1, and distributes the first initial model [[W_0]] and the second initial model [W_0]_1 to the clients. At this point no single party can learn the concrete model parameters from the ciphertext it obtains, because the partial private key it holds only supports the partial decryption algorithm, which prevents the disclosure of information such as patient privacy in the data. Each client, i.e., each medical institution such as a hospital, then downloads [[W_0]] and [W_0]_1, executes the partial decryption algorithm locally with its private key share sk_2 to obtain [W_0]_2, and then uses [W_0]_1 and [W_0]_2 in the full decryption algorithm to obtain W_0 as the third initial model, i.e., the final plaintext initial model.
Specifically, the partial decryption algorithm is ([[x]], sk_i) → [x]_i: given a ciphertext [[x]] and the private key share sk_i of a participant, partial decryption produces the corresponding partial decryption [x]_i; in concrete implementation, the ciphertext is raised to the power sk_i modulo N², yielding the partially decrypted form.
The full decryption algorithm is ([x]_1, [x]_2) → x: given the pair of partial decryptions ([x]_1, [x]_2), the plaintext x can be obtained.
In particular, because during key splitting the private key is split between S_1 and S_2 and between S_1 and each C_i (i ∈ [1, n]), the plaintext of the model can only be obtained when the two central servers decrypt jointly or when a medical institution and a server decrypt jointly. In the security assumption model, the servers S_1 and S_2 do not collude with each other, and collusion between a medical institution and a server would infringe the privacy of that institution's own patients, so medical institutions and servers can likewise be regarded as non-colluding; the security of the scheme thus follows.
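The two-party decryption can be sketched as follows. Since the concrete splitting condition appears only as a formula image in the original text, the sketch assumes the common double-trapdoor convention sk_1 + sk_2 ≡ 0 (mod λ) and sk_1 + sk_2 ≡ 1 (mod N); toy parameters, for illustration only.

```python
# Sketch of two-party (split-key) decryption in the spirit of the double-trapdoor
# Paillier used by the patent. ASSUMPTION: the splitting condition is taken as
# sk1 + sk2 = 0 (mod lambda) and sk1 + sk2 = 1 (mod N); not the patent's exact formula.
import math, random

k, l = 10007, 10009                          # toy primes
N, lam = k * l, math.lcm(k - 1, l - 1)
N2 = N * N

def enc(m):
    r = random.randrange(2, N)
    while math.gcd(r, N) != 1:
        r = random.randrange(2, N)
    return (pow(N + 1, m % N, N2) * pow(r, N, N2)) % N2

# Key splitting: pick d with d = 0 (mod lambda) and d = 1 (mod N), share it additively.
d = lam * pow(lam, -1, N)
sk1 = random.randrange(1, lam * N)
sk2 = (d - sk1) % (lam * N)

def partial_dec(c, sk_i):                    # PDec: raise the ciphertext to sk_i mod N^2
    return pow(c, sk_i, N2)

def full_dec(share1, share2):                # TDec: combine the two partial decryptions
    u = (share1 * share2) % N2
    return (u - 1) // N                      # L(u) recovers the plaintext

c = enc(123456)
m = full_dec(partial_dec(c, sk1), partial_dec(c, sk2))
assert m == 123456
print("joint decryption by the two key shares recovers:", m)
```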
(3) Client local model training: according to the third initial model, each client performs local model training to obtain its own gradient, encrypts the gradient with the public key, and sends the gradient ciphertext to the first server;
In round t, each client trains the model on its local dataset starting from the model W_t aggregated in the current round, obtaining the gradient of this round. Since the encryption and decryption algorithms all operate on integers, the gradient must be approximated as a large integer: each component is multiplied by a magnification factor deg and rounded, where deg is determined according to the precision required of the model at prediction time and is typically set to 10^6.
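A small sketch of this integer approximation and its inverse is given below; the rounding rule and the representation of negative values modulo the plaintext modulus N are our own assumptions for illustration.

```python
# Sketch of the integer approximation of gradients before encryption (deg = 10^6)
# and the inverse mapping after decryption. Rounding and the handling of negatives
# via the plaintext modulus N are assumptions made for this illustration.
import numpy as np

DEG = 10**6          # magnification factor, typically 10^6 per the description
N = 10007 * 10009    # toy Paillier plaintext modulus (matching the sketches above)

def to_big_int(g):
    """Scale a float gradient to integers; negatives are represented mod N."""
    ints = np.rint(np.asarray(g) * DEG).astype(np.int64)
    return np.mod(ints, N)

def from_big_int(x):
    """Map integers back to floats, interpreting values above N/2 as negative."""
    x = np.asarray(x, dtype=np.int64)
    signed = np.where(x > N // 2, x - N, x)
    return signed / DEG

g = np.array([0.1234567, -0.000002, 1.5])
assert np.allclose(from_big_int(to_big_int(g)), g, atol=1 / DEG)
```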
(4) Secure, free-rider-resistant model parameter aggregation: the first server and the second server compute the cosine similarity between each client's gradient and the average gradient according to each client's gradient ciphertext and reward or penalize each client; the first server performs secure, free-rider-resistant parameter aggregation on the gradient ciphertexts to obtain the aggregated gradient ciphertext, and each client updates according to the aggregated gradient ciphertext, wherein in the trap round a trap gradient is constructed to induce free-riding attackers to update in the wrong direction;
The secure, free-rider-resistant model parameter aggregation is based on a trap mechanism and a reward-and-penalty mechanism. Each client starts with the same initial quota, which serves as its weight for participating in training; in trap rounds the trap mechanism sends random parameters to the clients, misleading free-riders who mount adaptive inference attacks from the parameters distributed with the global model. At the same time, S_1 can only obtain the encrypted gradient ciphertexts and S_2 can only obtain decrypted gradients with noise, so neither can obtain any client's true gradient.
Specifically, the initial quotas of the clients are a_1, …, a_n. Given the gradient ciphertexts [[g_1]], …, [[g_n]] of the n clients in this round and the current quotas a_1, …, a_n, S_1 and S_2 compute the cosine similarity between each client's gradient and the average gradient, reward or penalize each client, and the first server performs the secure, free-rider-resistant parameter aggregation and outputs the aggregated [[g]]; the quota of each client for the next round is α_1, …, α_n.
First, the reward-and-penalty mechanism must initialize the penalty, the reward, the cost σ, and the threshold λ. For federated learning on an electrocardiogram anomaly classification dataset with n = 10 participants, the patent sets the hyper-parameters to σ = 0.005 and λ = 0. In concrete implementations, the parameter ranges need to differ according to the needs of the dataset used. If the federated modeling task is complex, such as a pneumonia-virus severity prediction task that must jointly model multi-region data of different dimensions (number of pneumonia patients, prevalence and transmission rates, virus mutation status, and so on), then the penalty and the cost σ should be reduced; if it is a simpler task, such as classifying heart-rate anomalies from electrocardiograms, participants can generally model the data quickly and the global model converges correspondingly fast, so the penalty and the cost σ can be increased.
The aggregation process specifically comprises the following steps:
(4.1) S_1 and S_2 jointly compute, under ciphertext, the cosine similarity cos_i between the i-th client's gradient [[g_i]] and the average gradient [[g_avg]]. The scheme ensures that S_1 and S_2 perform the computation without decrypting the gradient ciphertexts, which protects the privacy of the data uploaded by the participants.
Specifically, S_1 and S_2 compute the cosine similarity between each gradient and the gradient sum while remaining doubly blind to the participants' gradients. The inputs are the n client gradient ciphertexts [[g_1]], …, [[g_n]], where [[g_i]] = ([[x_i1]], …, [[x_im]]), and the clients' average gradient ciphertext [[g_avg]]; the outputs are the cosine similarities cos_1, …, cos_n between the gradients of the n clients and the gradient sum, the updated quotas a_1, …, a_n of the clients, and the securely aggregated model ciphertext [[g]]. The steps are as follows:
(4.1.1) S_1 computes the ciphertext of the gradient sum from the current gradient ciphertexts [[g_1]], …, [[g_n]]; the ciphertext of the average gradient can be obtained from the ciphertext of the gradient sum and is subsequently used to judge malicious parties. Specifically:
[[sum]] = [[g_1 + … + g_n]] = [[g_1]] · … · [[g_n]]
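As a concrete illustration of this step, the following sketch uses the third-party python-paillier (`phe`) library, assumed to be installed; in `phe`, the homomorphic addition of ciphertexts is exposed through the `+` operator, which internally performs the ciphertext multiplication shown above. This is a plain single-key demo, not the patent's two-server double-trapdoor setting.

```python
# Illustration of [[sum]] = [[g_1]] * ... * [[g_n]] with the python-paillier (phe) library.
from phe import paillier

pub, priv = paillier.generate_paillier_keypair(n_length=1024)

gradients = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]              # toy integer gradients g_1..g_3
enc_grads = [[pub.encrypt(v) for v in g] for g in gradients]

# [[sum]]: coordinate-wise homomorphic sum of all client gradient ciphertexts
enc_sum = [sum(col, pub.encrypt(0)) for col in zip(*enc_grads)]

print([priv.decrypt(c) for c in enc_sum])                   # -> [12, 15, 18]
```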
(4.1.2) S_1 protects the gradient ciphertexts once more with random numbers, so that the gradients obtained after the other server later decrypts them still carry random perturbation; in this way, the need to compute gradient similarity does not expose the participants' data privacy. Specifically:
for the gradient ciphertext [[g_i]] of the i-th client, which has m dimensions in total, a blinding operation is executed for each dimension k ∈ [1, m];
for the sum of all client gradients [[sum]], which also has m dimensions, the same blinding operation is executed for each dimension k ∈ [1, m];
where the blinding factor is a random integer used to blind the gradient ciphertext.
(4.1.3) S_1 uses its own key share sk_1 to execute the partial decryption algorithm on the randomly perturbed gradient ciphertexts of all clients, obtaining partially decrypted values that are still protected by the random perturbation.
(4.1.4) S_1 sends [[sum]] and the partially decrypted, perturbed values to S_2.
(4.1.5) S_2 executes the decryption algorithms twice: it first executes the partial decryption algorithm, then uses the two partial decryptions in the full decryption algorithm to obtain g_i with noise; it performs the same operations on the sum to obtain the gradient sum with noise. At this point, S_2 holds the randomly perturbed gradients and the randomly perturbed gradient sum in plaintext form, and the subsequent gradient-similarity computation is performed on this basis.
(4.1.6) S_2 computes the cosine similarity between the i-th client's gradient in randomly perturbed form and the gradient sum in randomly perturbed form. This cosine value still carries the random perturbation, so S_2 cannot obtain the true cosine similarity.
(4.1.7) S_2 encrypts the perturbed cosine similarity with the encryption algorithm to obtain its ciphertext.
(4.1.8) S_2 sends the ciphertext to S_1, which computes the cosine similarity [[cos_i]] with the random perturbation removed, as follows:
because the patent is based on the Paillier additive homomorphism, the plaintext subtraction can be realized through ciphertext-to-ciphertext operations (see the homomorphic properties above):
here x_a corresponds to g_i and x_b to the gradient sum, and the ciphertext of the cosine similarity with the random perturbation removed is obtained through this formula. In this way the cosine value in perturbed form is converted into the true cosine value, while throughout the process the gradients exist only in ciphertext or perturbed form, so the participants' data privacy is never exposed.
(4.1.9) S_1 and S_2 jointly execute the partial decryption algorithm and the full decryption algorithm to obtain cos_i.
(4.2) The reward-and-penalty mechanism is executed:
among all clients, for clients with cos_i > λ the reward algorithm is executed, increasing their quota; otherwise the penalty algorithm is executed, decreasing it.
The reward-and-penalty mechanism takes the cosine similarity between each client's gradient and the average gradient, computed jointly by S_1 and S_2, as the criterion for penalizing malicious clients and rewarding normal clients; if a client's quota is exhausted by repeated penalties, it can no longer participate in the federated training. This mechanism ensures that a client holding Non-IID data is penalized when the distribution of its current training data differs greatly from the other clients' but is not immediately kicked out of training, while malicious clients are still punished.
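A small sketch of this quota bookkeeping follows. The exact reward and penalty formulas are given only as images in the original text, so the simple additive update below is an assumption made for illustration.

```python
# Sketch of the reward-and-penalty bookkeeping on quotas. The additive update
# (reward - cost on success, penalty + cost otherwise) is an assumed concretization.
def update_quotas(quotas, cos_sims, lam=0.0, sigma=0.005, reward=0.01, penalty=0.02):
    """Return updated quotas; clients whose quota drops below 0 leave training."""
    new_quotas = []
    for alpha_i, cos_i in zip(quotas, cos_sims):
        if cos_i > lam:
            alpha_i = alpha_i + reward - sigma      # reward algorithm
        else:
            alpha_i = alpha_i - penalty - sigma     # penalty algorithm
        new_quotas.append(alpha_i)
    return new_quotas

quotas = [0.1, 0.1, 0.1]
cos_sims = [0.9, 0.8, -0.3]                         # third client looks suspicious
quotas = update_quotas(quotas, cos_sims)
active = [i for i, a in enumerate(quotas) if a > 0]
print(quotas, "active clients:", active)            # one penalty does not kick a client out
```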
(4.3) S_1 executes an aggregation algorithm capable of resisting free-rider attacks. If a client launches a free-rider attack during aggregation, its quota is affected; when its quota α_i < 0 the client is kicked out of the federated training, i.e., it no longer receives the model sent by S_1 for updating, which prevents free-riders from obtaining the high-quality global model. The aggregated gradient ciphertext [[g]] is obtained, and each client updates according to it. Because the scheme identifies free-riders on the basis of model similarity, it can recognize attackers with weak attack capability, but model similarity alone cannot fully identify free-riders with strong attack capability; the scheme therefore sets up trap rounds to identify such attackers, in which S_1 induces free-riders to update in the wrong direction by constructing a trap gradient:
(4.3.1) Initialize [[g]] to [[0]];
(4.3.2) among all clients, the set of clients with α_i > 0 is aggregated to obtain the aggregated gradient ciphertext [[g]], and the quotas of the clients participating in the aggregation are updated. The quotas of all participants are normalized to obtain the weight proportion each participant should occupy in the final aggregated model: the larger α_i is, the closer that client is to the global model and the greater its final contribution to the aggregated model should be. Clients with α_i < 0 are considered to have launched a free-rider attack and therefore do not participate in the aggregation. Here deg is the expansion factor.
Since this patent is based on the Paillier additive homomorphism, S_1 can compute the plaintext addition through ciphertext-to-ciphertext operations (see the homomorphic properties above).
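A plaintext sketch of the quota-weighted aggregation in step (4.3.2) follows; in the patent this is done homomorphically on ciphertexts with the integer expansion factor deg, while the sketch shows only the weighting logic, which is our reading of the normalization.

```python
# Plaintext sketch of quota-weighted aggregation (step 4.3.2): only clients with
# quota > 0 participate, and normalized quotas serve as aggregation weights.
import numpy as np

def aggregate(gradients, quotas):
    grads = np.asarray(gradients, dtype=float)
    alphas = np.asarray(quotas, dtype=float)
    mask = alphas > 0                                    # only quota > 0 participates
    weights = alphas[mask] / alphas[mask].sum()          # normalized quota weights
    return np.average(grads[mask], axis=0, weights=weights)

grads = [[1.0, 0.0], [0.8, 0.2], [-5.0, 5.0]]
quotas = [0.12, 0.09, -0.01]                             # third client has been kicked out
print(aggregate(grads, quotas))                          # weighted by the first two only
```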
(4.3.3) In the trap round, S_1 executes the trap algorithm as follows (a sketch of this trap-round logic is given after step (4.3.4) below):
the current [[g]] is retained and recorded as [[g_reserve]]; [[g]] is re-initialized as a random matrix and normalized to obtain the trap gradient [[g_trap]], and free-riders are induced to update in the wrong direction by sending them the trap gradient [[g_trap]]. It should be noted that clients whose quota is less than 0 no longer take part in the federated training, so the trap gradient is sent only to clients whose quota is greater than 0; this step is intended to check whether more aggressive free-riders remain in the federated training. If free-riders are still present, the gradients they send to the server are constructed to mimic the updates of the server or of the other participants. Because the scheme provides privacy protection, a free-rider cannot obtain other participants' model gradients by eavesdropping on the communication and can only mimic the gradient distributed by the server. Since the trap gradient constructed by the server is random, a free-rider can easily be identified in the trap round. In general, the trap round should be set in the first few rounds of model training, so that free-riders are identified as early as possible in the federated training;
(4.3.4) a first server round after the trap round Transmitting the last stored [ [ g reserve ] ] to the client with the limit alpha i >0 for updating, and performing the first server in the rest normal roundsAnd sending the aggregated gradient ciphertext [ [ g ] ] to the client with the limit alpha i >0 for updating.
The scheme realizes resistance to free-riders in the federated learning scenario. The training data in the medical scenario contains a large amount of sensitive information such as patients' disease histories and diagnosis results; resisting free-rider attacks prevents malicious nodes from obtaining sensitive information of other nodes and protects patient privacy. With free-rider attacks resisted, all participating nodes in federated learning contribute their own data, so the model can learn the characteristics of each node more fully, improving diagnostic accuracy and performance. At the same time, resisting free-rider attacks ensures that every participating node makes a certain contribution to federated learning, prevents some nodes from losing the motivation to participate because of unfairness, and maintains the fairness and sustainability of federated learning.
Corresponding to the foregoing embodiments of the method for defending against free-rider attacks in privacy-preserving federated learning, the present application also provides embodiments of a device for defending against free-rider attacks in privacy-preserving federated learning.
Fig. 2 is a block diagram of a device for defending against free-rider attacks in privacy-preserving federated learning according to an exemplary embodiment. Referring to fig. 2, the device may include:
a model and key initialization module 21, used for the first server to perform model initialization, for the key center to generate a public-private key pair and broadcast the public key, and for the private key to be split between the first server and the second server and between the first server and each client through the key splitting algorithm;
a model distribution module 22, configured for the first server to encrypt the first initial model with the public key and perform partial decryption with the private key share held by the first server to obtain a second initial model, with each client downloading the second initial model and performing the partial decryption algorithm and the full decryption algorithm with its own private key share to obtain a third initial model;
a local model training module 23, configured for each client to perform local model training according to the third initial model to obtain its own gradient, encrypt the gradient with the public key, and send the gradient ciphertext to the first server;
a model parameter aggregation module 24, configured for the first server and the second server to compute, according to each client's gradient ciphertext, the cosine similarity between the client's gradient and the average gradient and to reward or penalize each client accordingly, and for the first server to perform secure, free-rider-resistant parameter aggregation to obtain the aggregated gradient ciphertext, with each client updating according to the aggregated gradient ciphertext, wherein in the trap round a free-riding attacker is induced to update in the wrong direction by constructing a trap gradient.
The specific manner in which the various modules in the device of the above embodiments perform their operations has been described in detail in the embodiments of the method and will not be elaborated here.
For the device embodiments, reference is made to the description of the method embodiments for the relevant points, since they essentially correspond to the method embodiments. The apparatus embodiments described above are merely illustrative, wherein the elements illustrated as separate elements may or may not be physically separate, and the elements shown as elements may or may not be physical elements, may be located in one place, or may be distributed over a plurality of network elements. Some or all of the modules may be selected according to actual needs to achieve the purposes of the present application. Those of ordinary skill in the art will understand and implement the present application without undue burden.
Correspondingly, the present application also provides an electronic device, comprising: one or more processors; a memory for storing one or more programs; the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method for defending against free-rider attacks in privacy-preserving federated learning as described above. Fig. 3 is a hardware configuration diagram of a device with data processing capability according to the present application; besides the processor, memory, and network interface shown in fig. 3, the device with data processing capability may, according to its actual function, further include other hardware, which is not described here.
Correspondingly, the present application also provides a computer-readable storage medium on which computer instructions are stored, and when the instructions are executed by a processor, the method for defending against free-rider attacks in privacy-preserving federated learning described above is implemented. The computer-readable storage medium may be an internal storage unit, such as a hard disk or a memory, of any device with data processing capability described in any of the previous embodiments. The computer-readable storage medium may also be an external storage device, such as a plug-in hard disk, a smart media card (SMC), an SD card, or a flash card provided on the device. Further, the computer-readable storage medium may include both internal storage units and external storage devices of a device with data processing capability. The computer-readable storage medium is used to store the computer program and other programs and data required by the device with data processing capability, and may also be used to temporarily store data that has been output or is to be output.
Other embodiments of the application will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure herein. This application is intended to cover any variations, uses, or adaptations of the application following, in general, the principles of the application and including such departures from the present disclosure as come within known or customary practice within the art to which the application pertains.
It is to be understood that the application is not limited to the precise arrangements and instrumentalities shown in the drawings, which have been described above, and that various modifications and changes may be effected without departing from the scope thereof.

Claims (6)

1. A method for defending against free-rider attacks in privacy-preserving federated learning, comprising:
the first server performs model initialization, a key center generates a public-private key pair and broadcasts the public key, and the private key is split between the first server and the second server and between the first server and each client through a key splitting algorithm;
the first server encrypts the first initial model with the public key and performs partial decryption with the private key share it holds to obtain a second initial model, and each client downloads the second initial model and performs the partial decryption algorithm and the full decryption algorithm with its own private key share to obtain a third initial model;
according to the third initial model, each client performs local model training to obtain its own gradient, encrypts the gradient with the public key, and sends the gradient ciphertext to the first server;
the first server and the second server compute the cosine similarity between each client's gradient and the average gradient according to each client's gradient ciphertext and reward or penalize each client accordingly; the first server performs secure, free-rider-resistant parameter aggregation on the gradient ciphertexts to obtain the aggregated gradient ciphertext, and each client updates according to the aggregated gradient ciphertext, wherein in a trap round a free-riding attacker is induced to update in the wrong direction by constructing a trap gradient;
wherein the first server and the second server compute the cosine similarity between each client's gradient and the average gradient according to each client's gradient ciphertext, comprising the following steps:
the first server computes the ciphertext of the gradient sum [[sum]] from the current gradient ciphertexts [[g_1]], …, [[g_n]];
the first server adds random perturbation to the gradient ciphertexts using random numbers, to protect them once more;
the first server uses its own key share sk_1 to execute the partial decryption algorithm on the randomly perturbed gradient ciphertexts of all clients, obtaining partially decrypted gradient ciphertexts that are still protected by the random perturbation;
the first server sends [[sum]] and the partially decrypted, perturbed values to the second server;
the second server executes the partial decryption algorithm on them and then executes the full decryption algorithm with the two partial decryptions, obtaining g_i with noise; it performs the same operations on the sum, obtaining the gradient sum with noise;
the second server computes the cosine similarity between the i-th client's gradient in randomly perturbed form and the gradient sum in randomly perturbed form;
the second server encrypts the perturbed cosine similarity with the encryption algorithm;
the second server sends it to the first server, which computes the cosine similarity [[cos_i]] with the random perturbation removed;
the first server and the second server execute the partial decryption algorithm and the full decryption algorithm to obtain the cosine similarity cos_i between the client's gradient and the average gradient;
wherein the first server performing secure, free-rider-resistant parameter aggregation to obtain the aggregated gradient ciphertext, each client updating according to the aggregated gradient ciphertext, and, in a trap round, free-rider attackers being induced to update in the wrong direction by constructing a trap gradient, comprises the following steps (a sketch of the trap-round logic also follows claim 1 below):
initializing [[g]] to [[0]];
among all clients, the gradients of the set of clients whose quota α_i > 0 are aggregated to obtain the aggregated gradient ciphertext [[g]], and the quotas of the clients participating in the aggregation are updated;
in a trap round, the first server executes the trap algorithm, specifically: the current [[g]] is saved as [[g_reserve]], [[g]] is re-initialized to a random matrix and normalized to obtain the trap gradient [[g_trap]], and [[g_trap]] is sent to the clients whose quota α_i > 0, inducing free-rider attackers to update in the wrong direction;
in the round immediately after a trap round, the first server sends the previously saved [[g_reserve]] to the clients whose quota α_i > 0 for updating; in all remaining normal rounds, the first server sends the aggregated gradient ciphertext [[g]] to the clients whose quota α_i > 0 for updating.
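For orientation, the following is a minimal plaintext sketch of the quantity the two servers jointly compute in the similarity step of claim 1: the cosine similarity between each client's gradient and the gradient sum (the average gradient up to a positive scale factor). The threshold encryption, random blinding, and two-party partial/full decryption exchange recited in the claim are deliberately omitted, so this illustrates the detection signal only, not the patent's protocol; the function name and the 1e-12 guard are illustrative choices.

    import numpy as np

    def cosine_to_gradient_sum(gradients):
        # gradients: list of 1-D numpy arrays, one per client (plaintext stand-ins
        # for the gradient ciphertexts [[g_1]], ..., [[g_n]])
        g_sum = np.sum(gradients, axis=0)  # plaintext analogue of [[sum]]
        sims = []
        for g_i in gradients:
            denom = np.linalg.norm(g_i) * np.linalg.norm(g_sum) + 1e-12
            sims.append(float(np.dot(g_i, g_sum) / denom))  # cos_i
        return sims

    # A free-rider submitting an unrelated gradient scores visibly lower than
    # honest clients whose gradients roughly agree with the sum.
    honest = [np.array([1.0, 1.0, 0.9]), np.array([0.9, 1.1, 1.0])]
    free_rider = np.array([-1.0, 0.3, -0.5])
    print(cosine_to_gradient_sum(honest + [free_rider]))

In the claimed method the same signal is obtained without either server seeing the raw gradients, via the perturbation and two-party decryption steps recited above.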
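The trap-round control flow recited at the end of claim 1 can likewise be sketched in plaintext. Ciphertext values ([[g]], [[g_reserve]], [[g_trap]]) are replaced by plain numpy arrays, the class and parameter names (TrapAggregator, quota, is_trap_round) are illustrative rather than taken from the patent, and a plain sum is used for aggregation since the claim does not fix a weighting here.

    import numpy as np

    class TrapAggregator:
        """Plaintext sketch of the trap-round logic in claim 1 (no encryption)."""

        def __init__(self):
            self.g_reserve = None         # real aggregate saved during a trap round
            self.replay_reserve = False   # the round after a trap round replays it

        def aggregate(self, gradients, quota):
            # Aggregate only clients whose quota alpha_i is still positive.
            g = np.zeros_like(gradients[0])
            for g_i, alpha_i in zip(gradients, quota):
                if alpha_i > 0:
                    g = g + g_i
            return g

        def round_output(self, gradients, quota, is_trap_round):
            g = self.aggregate(gradients, quota)
            if is_trap_round:
                self.g_reserve = g                   # keep the real aggregate (g_reserve)
                g_trap = np.random.randn(*g.shape)   # re-initialize to a random matrix ...
                g_trap /= np.linalg.norm(g_trap)     # ... and normalize: the trap gradient
                self.replay_reserve = True
                return g_trap                        # free-riders copy a bogus update
            if self.replay_reserve:
                self.replay_reserve = False
                return self.g_reserve                # honest clients resume from the saved aggregate
            return g                                 # normal round

A client that simply echoes whatever it last received will, after a trap round, submit an update aligned with the random trap gradient rather than with the genuine aggregate, which is the kind of deviation the cosine-similarity test penalizes.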
2. The method of claim 1, wherein, after each client performs local model training to obtain its gradient, the gradient of the current round is approximated as a large integer.
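Claim 2's approximation of each gradient entry as a large integer is naturally read as the usual fixed-point encoding applied before encryption with an integer-domain homomorphic scheme. A minimal sketch follows; the scaling factor of 10**6 is an illustrative choice, not a value taken from the patent.

    SCALE = 10 ** 6  # illustrative precision; the patent does not fix this value here

    def encode_gradient(gradient, scale=SCALE):
        # Map each float entry to a large integer so it can be encrypted
        # and summed homomorphically in the integer domain.
        return [int(round(x * scale)) for x in gradient]

    def decode_gradient(encoded, scale=SCALE):
        # Invert the encoding after decryption (precision limited by `scale`).
        return [x / scale for x in encoded]

    assert decode_gradient(encode_gradient([0.125, -0.5]))[0] == 0.125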
3. The method according to claim 1, characterized in that each client is given a corresponding reward or penalty, in particular:
among all clients, the reward algorithm is executed for clients whose cosine similarity cos_i > λ, and the penalty algorithm is executed otherwise, where λ is the threshold, α_i is the quota of client C_i, the remaining parameters of the two algorithms are the penalty and the reward, and σ is the cost (an illustrative sketch follows below).
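The reward and penalty formulas of claim 3 are given as images in the original publication and do not survive in this text, so the sketch below only assumes a simple additive update of each client's quota α_i with hypothetical constants REWARD, PENALTY and SIGMA; it illustrates the thresholding on cos_i > λ, not the patent's exact arithmetic.

    LAMBDA = 0.5                              # threshold lambda (illustrative value)
    REWARD, PENALTY, SIGMA = 1.0, 1.0, 0.1    # hypothetical reward, penalty and cost

    def update_quotas(quotas, cos_sims, lam=LAMBDA):
        # Clients whose gradient agrees with the average (cos_i > lambda) are
        # rewarded; the others are penalized. The additive form is an assumption.
        updated = []
        for alpha_i, cos_i in zip(quotas, cos_sims):
            if cos_i > lam:
                updated.append(alpha_i + REWARD - SIGMA)  # reward net of cost
            else:
                updated.append(alpha_i - PENALTY)
        return updated

If, as claim 3's mention of α_i suggests, the reward and penalty act on the quota, a repeatedly penalized free-rider eventually reaches α_i ≤ 0 and is excluded from both aggregation and model distribution.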
4. A device for defending against free-rider attacks in privacy-preserving federated learning, comprising:
a model and key initialization module, used for the first server to perform model initialization, for the key center to generate a public-private key pair and broadcast the public key, and for the private key to be split by the key-splitting algorithm between the first server and the second server, and between the first server and each client;
a model distribution module, used for the first server to encrypt the first initial model with the public key and partially decrypt it with its own private-key share to obtain a second initial model, and for each client to download the second initial model and, using its own private-key share, execute the partial decryption algorithm and the full decryption algorithm to obtain a third initial model;
a local model training module, used for each client to perform local model training according to the third initial model to obtain its gradient, encrypt the gradient with the public key, and send the gradient ciphertext to the first server;
a model parameter aggregation module, used for the first server and the second server to calculate, from each client's gradient ciphertext, the cosine similarity between the client gradient and the average gradient and to reward or penalize each client accordingly, and for the first server to perform secure, free-rider-resistant parameter aggregation on the gradient ciphertexts to obtain an aggregated gradient ciphertext, each client updating according to the aggregated gradient ciphertext, wherein, in a trap round, free-rider attackers are induced to update in the wrong direction by constructing a trap gradient;
wherein the first server and the second server calculating the cosine similarity between the client gradient and the average gradient from each client's gradient ciphertext comprises the following steps:
the first server computes the ciphertext of the gradient sum, [[sum]], from the current gradient ciphertexts [[g_1]], ..., [[g_n]];
the first server adds random perturbations to the gradient ciphertexts using random numbers, providing an additional layer of protection;
using its own private-key share sk_1, the first server executes the partial decryption algorithm on the randomly perturbed gradient ciphertexts of all clients, obtaining partially decrypted gradient ciphertexts still protected by the random perturbation;
the first server sends [[sum]] and the partially decrypted, perturbed gradient ciphertexts to the second server;
the second server executes the partial decryption algorithm on the received ciphertexts and, combining its result with the first server's partial decryption, executes the full decryption algorithm to obtain g_i with noise; performing the same operations on [[sum]] yields the gradient sum with noise;
the second server computes the cosine similarity between the i-th client's randomly perturbed gradient and the perturbed gradient sum;
the second server encrypts this perturbed cosine similarity using the encryption algorithm;
the encrypted value is sent to the first server, which removes the random perturbation to obtain the cosine-similarity ciphertext [[cos_i]];
the first server and the second server execute the partial decryption algorithm and the full decryption algorithm to obtain the cosine similarity cos_i between the client gradient and the average gradient;
wherein the first server performing secure, free-rider-resistant parameter aggregation to obtain the aggregated gradient ciphertext, each client updating according to the aggregated gradient ciphertext, and, in a trap round, free-rider attackers being induced to update in the wrong direction by constructing a trap gradient, comprises the following steps:
initializing [[g]] to [[0]];
among all clients, the gradients of the set of clients whose quota α_i > 0 are aggregated to obtain the aggregated gradient ciphertext [[g]], and the quotas of the clients participating in the aggregation are updated;
in a trap round, the first server executes the trap algorithm, specifically: the current [[g]] is saved as [[g_reserve]], [[g]] is re-initialized to a random matrix and normalized to obtain the trap gradient [[g_trap]], and [[g_trap]] is sent to the clients whose quota α_i > 0, inducing free-rider attackers to update in the wrong direction;
in the round immediately after a trap round, the first server sends the previously saved [[g_reserve]] to the clients whose quota α_i > 0 for updating; in all remaining normal rounds, the first server sends the aggregated gradient ciphertext [[g]] to the clients whose quota α_i > 0 for updating.
5. An electronic device, comprising:
one or more processors;
a memory for storing one or more programs;
the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method of any of claims 1-3.
6. A computer readable storage medium having stored thereon computer instructions which, when executed by a processor, implement the steps of the method of any of claims 1-3.
CN202310938055.6A 2023-07-28 2023-07-28 Method and device for defending attack of taking and riding in federal study with privacy protection Active CN117077192B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310938055.6A CN117077192B (en) 2023-07-28 2023-07-28 Method and device for defending attack of taking and riding in federal study with privacy protection

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310938055.6A CN117077192B (en) 2023-07-28 2023-07-28 Method and device for defending attack of taking and riding in federal study with privacy protection

Publications (2)

Publication Number Publication Date
CN117077192A CN117077192A (en) 2023-11-17
CN117077192B true CN117077192B (en) 2024-07-05

Family

ID=88703350

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310938055.6A Active CN117077192B (en) 2023-07-28 2023-07-28 Method and device for defending attack of taking and riding in federal study with privacy protection

Country Status (1)

Country Link
CN (1) CN117077192B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117828627A * 2023-11-22 2024-04-05 Anhui Normal University Federal machine learning method and system with robustness and privacy protection

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113434873A * 2021-06-01 2021-09-24 Inner Mongolia University Federal learning privacy protection method based on homomorphic encryption
CN114266361A * 2021-12-30 2022-04-01 Zhejiang University of Technology Model weight alternation-based federal learning vehicle-mounted and free-mounted defense method and device

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20230047092A1 (en) * 2021-07-30 2023-02-16 Oracle International Corporation User-level Privacy Preservation for Federated Machine Learning


Also Published As

Publication number Publication date
CN117077192A (en) 2023-11-17

Similar Documents

Publication Publication Date Title
Lyu et al. Towards fair and privacy-preserving federated deep models
US20210143987A1 (en) Privacy-preserving federated learning
Zhou et al. PPDM: A privacy-preserving protocol for cloud-assisted e-healthcare systems
CN112714106B (en) Block chain-based federal learning casual vehicle carrying attack defense method
CN112106322A (en) Password-based threshold token generation
CN113221105B (en) Robustness federated learning algorithm based on partial parameter aggregation
Lyu et al. Towards fair and decentralized privacy-preserving deep learning with blockchain
Sun et al. Permissioned blockchain frame for secure federated learning
Fang et al. A privacy-preserving and verifiable federated learning method based on blockchain
CN113139204B (en) Medical data privacy protection method using zero-knowledge proof and shuffling algorithm
CN117077192B (en) Method and device for defending attack of taking and riding in federal study with privacy protection
Zhang et al. A privacy protection scheme for IoT big data based on time and frequency limitation
Mou et al. A verifiable federated learning scheme based on secure multi-party computation
CN111581648B (en) Method of federal learning to preserve privacy in irregular users
Zheng et al. Towards differential access control and privacy-preserving for secure media data sharing in the cloud
Fan et al. Lightweight privacy and security computing for blockchained federated learning in IoT
Ghavamipour et al. Federated synthetic data generation with stronger security guarantees
CN117521853A (en) Privacy protection federal learning method with verifiable aggregation result and verifiable gradient quality
Zhou et al. VDFChain: Secure and verifiable decentralized federated learning via committee-based blockchain
CN116318901A (en) Privacy and verifiable internet of things data aggregation method integrating blockchain
Masuda et al. Model fragmentation, shuffle and aggregation to mitigate model inversion in federated learning
CN113472524B (en) Data aggregation signature system and method for resisting malicious transmission data attack
CN116340986A (en) Block chain-based privacy protection method and system for resisting federal learning gradient attack
Liu et al. Federated Learning with Anomaly Client Detection and Decentralized Parameter Aggregation
CN115310120A (en) Robustness federated learning aggregation method based on double trapdoors homomorphic encryption

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant