CN117436078B - Bidirectional model poisoning detection method and system in federated learning - Google Patents

Info

Publication number: CN117436078B (application number CN202311734020.7A; earlier publication CN117436078A)
Authority: CN (China)
Prior art keywords: client, gradient, central server, poisoning, verification
Legal status: Active (granted)
Other languages: Chinese (zh)
Inventors: 赵金东 (Zhao Jindong), 欧璇璇 (Ou Xuanxuan), 丁智颖 (Ding Zhiying), 王文硕 (Wang Wenshuo)
Current and original assignee: Yantai University
Application filed by Yantai University; priority to CN202311734020.7A, filed 2023-12-18

Classifications

    • G06F21/56 - Computer malware detection or handling, e.g. anti-virus arrangements (under G06F21/00 Security arrangements; G06F21/50 Monitoring to maintain platform integrity; G06F21/55 Detecting local intrusion or implementing counter-measures)
    • G06F18/22 - Pattern recognition; analysing; matching criteria, e.g. proximity measures
    • G06F18/23 - Pattern recognition; analysing; clustering techniques
    • G06N3/098 - Neural networks; learning methods; distributed learning, e.g. federated learning
Abstract

The invention relates to the technical field of federated learning, in particular to a method and a system for detecting bidirectional model poisoning in federated learning, the method comprising the following steps: collecting data and constructing a data set; the client trains a local model based on the data set, obtains its local gradient and uploads it to the central server; the central server performs aggregation based on the clients' local gradients, obtains a global gradient and sends it to a verifier; the verifier verifies the global gradient based on a preset bidirectional poisoning defense model and sends the verified global gradient to the central server; and the central server updates the global model based on the verified global gradient, completing the poisoning detection of the client and the central server. The invention takes into account the security of both the server side and the client side during federated learning, and promotes the broad application of federated learning in various fields.

Description

Bidirectional model poisoning detection method and system in federated learning
Technical Field
The invention relates to the technical field of federated learning, in particular to a method and a system for detecting bidirectional model poisoning in federated learning.
Background
A bidirectional poisoning attack (BPA) occurs on the client and the central server at the same time, severely degrading the performance of the global model and endangering user security and privacy. Traditional poisoning is defined as compromised clients attempting to break the global model during federated training, which ignores the enormous harm caused by server poisoning; moreover, data such as medical records is highly private, which further hinders the application of federated learning.
Among all security threats, poisoning attacks pose the greatest risk to the application of federated learning. In recent years, considerable effort has been made to explore effective solutions to these risks: (1) the intrusion detection framework in federated transfer learning proposed by Fan et al. (Fan et al., 2020); (2) a client-based defense framework named white blood cell federated learning (Sun et al., 2021b), which is unique in that it can not only detect a poisoning attack but also eliminate its negative effects on the global model, much as white blood cells kill bacteria; however, this framework demands high performance from the client; (3) the FL-Detector scheme proposed by Zhang et al., which identifies malicious clients by comparing the predicted local model with the actual local model to prevent model poisoning attacks (Zhang et al., 2022), but is only effective against single-direction model poisoning attacks.
The second category comprises techniques for defending against global model modification: (1) verifiable federated learning, which belongs to the field of privacy-preserving federated learning and strives to establish trust between participants (Zhang & Yu, 2022); to prove that the aggregation result has not been falsified or modified, the server needs to generate a certificate, signature or ciphertext (Mou et al., 2021; Hahn et al., 2021; Jiang et al., 2021). (2) Xu et al. first proposed a double-mask protocol to build a verifiable federated learning framework (Xu et al., 2019); in this scheme, the central server provides a proof using homomorphic hash functions and pseudo-random functions to establish the correctness of the global model, but the approach requires high communication overhead. (3) Guo et al. proposed a verifiable aggregation scheme whose communication overhead is independent of the model dimension, requiring each client to submit only the hash value of its gradient vector rather than the vector itself (Guo et al., 2020); however, the scheme is also cryptography-based and involves generating public/private keys for all clients, which carries a certain security risk. (4) Mou et al. and others proposed verifiable federated learning schemes based on secure multiparty computation (Mou et al., 2021; Gao & Yu, 2023; Su et al., 2023); however, the encryption protocols also incur significant computational overhead and time delays.
Existing research solves only specific problems in particular aspects and does not effectively address the real challenges posed by bidirectional poisoning attacks. In addition, these cryptography-based methods for detecting server poisoning attacks carry significant computation, communication and storage costs. Therefore, there is a need for a bidirectional model poisoning detection method and system in federated learning.
Disclosure of Invention
The invention aims to provide a bidirectional model poisoning detection method and system in federated learning that strengthens the defense against server-side poisoning and takes into account the security of both the server side and the client side during federated learning, thereby promoting the broad application of federated learning in various fields.
In order to achieve the above object, the present invention provides the following solutions:
A method for detecting bidirectional model poisoning in federated learning comprises the following steps:
collecting data and constructing a data set;
the client trains a local model based on the data set, acquires a local gradient of the client and uploads the local gradient to the central server;
the central server performs aggregation based on the local gradient of the client, acquires a global gradient and sends the global gradient to a verifier;
the verifier verifies the global gradient based on a preset bidirectional poisoning defense model and sends the verified global gradient to the central server;
and the central server updates the global model based on the verified global gradient, completing the poisoning detection of the client and the central server.
Preferably, the verifying the global gradient by the verifier based on a preset bi-directional poisoning defense model includes:
the verifier generates a verification vector based on the global gradient and sends the verification vector to the client and the central server;
the client and the central server acquire verification information based on the verification vector and send the verification information back to the verifier;
the verifier inputs the verification information and the global gradient into the bidirectional poisoning defense model to verify whether the client and the central server have poisoned.
Preferably, the verifier generating a verification vector based on the global gradient comprises:
randomly generating, based on the global gradient, a unit vector u satisfying |cos(g_{t-1}, u)| ≥ s, wherein g_{t-1} is the global gradient of the (t-1)-th round, u is the randomly generated vector, and s is the cosine similarity threshold; when g_{t-1}·u is smaller than 0, letting the unit vector u = -u, ensuring that g_{t-1}·u is always greater than 0;
calculating c = g_{t-1}·u, wherein c is always greater than 0;
generating the verification vector based on v_t = c·u, wherein v_t is the verification vector.
Preferably, the verification information includes a client verification information set and a server verification information set, the client verification information set being CV = {(i, p_i) | i ∈ K}, wherein i is the client number, p_i = g_i^t·v_t is the dot product value of the local gradient of client i and the verification vector, g_i^t is the local gradient of client i, and K is the client set;
the server verification information set is SV = {P, K'}, wherein P = g_t·v_t is the dot product value of the global gradient and the verification vector, and K' is the set of clients that sent local gradients to the central server.
Preferably, the verifier inputting the verification information and the global gradient into the bidirectional poisoning defense model and verifying whether the client and the central server have poisoned comprises:
the verifier filters malicious clients based on the client verification information set and identifies a poisoning client from the remaining clients;
acquiring local gradients of the malicious client and the poisoning client, verifying the local gradients of the malicious client and the poisoning client, and acquiring harmful gradients;
carrying out clearing treatment on the harmful gradient to obtain a global gradient passing verification;
verifying whether the central server has poisoned based on the verified global gradient and the verification vector.
Preferably, the method for removing the harmful gradient and obtaining the global gradient passing verification comprises the following steps:
g̃_t = (n·g_t - Σ_{i∈M} g_i^t) / (n - m);
wherein g̃_t is the verified global gradient, g_t is the unverified global gradient, n is the number of gradients involved in the aggregation, M is the total set of malicious clients and poisoning clients, and m is the total number of malicious clients and poisoning clients.
Preferably, verifying whether the central server has poisoned based on the verified global gradient and the verification vector comprises:
verifying whether the central server has poisoned through the dot product between the verified global gradient and the verification vector, the verification method being:
g̃_t · v_t = (1/n_b) Σ_{i∈B} p_i;
if the equation is satisfied, the central server passes verification; otherwise, the central server has poisoned;
wherein v_t is the verification vector, g̃_t is the verified global gradient, n_b is the number of benign clients, and B is the benign client set.
In order to further achieve the above object, the present invention further provides a bidirectional model poisoning detection system in federated learning, comprising: a data acquisition module, a detection information acquisition module and a poisoning detection module;
the data acquisition module is used for collecting data through the Internet of things equipment and constructing a data set;
the detection information acquisition module is used for the client to train a local model based on the data set and obtain the client's local gradient, and for the central server to perform aggregation based on the clients' local gradients to obtain a global gradient;
the poisoning detection module is used for verifying the global gradient based on a preset bidirectional poisoning defense model by the verifier, sending the verified global gradient to the central server, and updating the global model by the central server based on the verified global gradient to finish poisoning detection of the client and the central server.
The beneficial effects of the invention are as follows:
the invention detects the bidirectional poisoning attack based on the unencrypted bidirectional defense of the vector dot product operation, and the unencrypted design ensures lower expenditure; the angle between the continuous global gradients is quantized by using the cosine similarity, and the cosine similarity comprises dot products, so that the two-way defense of client side poisoning and server poisoning is realized by using the dot products, the safety of the server side and the client side can be considered in the federal learning process, and the wide application of federal learning in various fields is improved.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions of the prior art, the drawings that are needed in the embodiments will be briefly described below, it being obvious that the drawings in the following description are only some embodiments of the present invention, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a flow chart of the bidirectional model poisoning detection method in federated learning according to an embodiment of the present invention;
FIG. 2 is a flow chart of the bidirectional poisoning defense algorithm according to an embodiment of the present invention;
FIG. 3 shows the defense effect on the MNIST dataset according to an embodiment of the present invention, wherein (a) is the defense against DBA on MNIST and (b) is the defense against UMP on MNIST;
FIG. 4 shows the defense effect on the OrganCMNIST dataset according to an embodiment of the present invention, wherein (a) is the defense against DBA on OrganCMNIST and (b) is the defense against UMP on OrganCMNIST;
FIG. 5 shows the defense effect on the CIFAR-10 dataset according to an embodiment of the present invention, wherein (a) is the defense against DBA on CIFAR-10 and (b) is the defense against UMP on CIFAR-10.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
In order that the above-recited objects, features and advantages of the present invention will become more readily apparent, a more particular description of the invention will be rendered by reference to the appended drawings and appended detailed description.
The embodiment provides a bidirectional model poisoning detection method in federated learning, which trains local model updates based on data collected by Internet-of-Things devices and uploads them to a central server for secure data fusion, so as to generate a global model for prediction. As shown in fig. 1, the method specifically includes:
step 1, collecting data through Internet of things equipment, constructing a data set, and sending the data set to a client;
step 2, the client trains the local model based on the data set and uploads the local gradient g_i^t to the central server;
step 3, the central server performs aggregation to obtain the unverified global gradient g_t and sends g_t to the verifier;
step 4, the verifier receives and stores g_t, then generates a verification vector v_t according to the global gradient g_{t-1} of the (t-1)-th round, and sends v_t to the client and the central server;
The verification vector has two uses: first, as a filtering tool, it exposes the gradients of poisoning users through the cosine similarity between the verification vector and each local gradient; second, it is used to verify whether the server has engaged in poisoning.
The step of generating the verification vector comprises:
(1) Randomly generate a unit vector u satisfying |cos(g_{t-1}, u)| ≥ s;
(2) When g_{t-1}·u is smaller than 0, let u = -u, ensuring that g_{t-1}·u is always greater than 0;
(3) Calculate c = g_{t-1}·u, wherein c is always greater than 0;
(4) Finally, generate the verification vector according to v_t = c·u;
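The construction above can be sketched in a few lines of NumPy. This is an illustrative sketch rather than the patented implementation: the direct construction of a unit vector with a prescribed cosine similarity (instead of sampling and rejecting), the choice of the scaling factor c = g_{t-1}·u, and all function names are assumptions.

```python
import numpy as np

def generate_verification_vector(g_prev, s=0.5, rng=None):
    """Build v_t = c * u, where u is a unit vector with |cos(g_prev, u)| >= s
    and c = g_prev . u > 0 (scaling choice assumed, not from the patent)."""
    rng = rng or np.random.default_rng()
    g_hat = g_prev / np.linalg.norm(g_prev)
    # random direction orthogonal to g_prev
    w = rng.standard_normal(g_prev.shape)
    w -= (w @ g_hat) * g_hat
    w /= np.linalg.norm(w)
    cos = rng.uniform(s, 1.0)                    # target similarity, at least s
    u = cos * g_hat + np.sqrt(1.0 - cos**2) * w  # unit vector with cos(g_prev, u) = cos
    if g_prev @ u < 0:                           # flip so the dot product is positive
        u = -u
    c = float(g_prev @ u)                        # always > 0 after the flip
    return c * u
```

Since c > 0, the returned v_t points in the same direction as u, so cos(g_{t-1}, v_t) ≥ s and g_{t-1}·v_t > 0, as the construction requires.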
step 5, the client and the central server compute on the verification vector v_t to generate verification information and send it back to the verifier, the verification information including the client verification information set CV and the server verification information set SV;
The process by which the clients generate the verification information set CV is as follows:
(1) Calculate the dot product of the local gradient and the verification vector: p_i = g_i^t·v_t;
(2) Generate the verification information set: CV = {(i, p_i) | i ∈ K};
The central server generates the verification information set as SV = {P, K'}, with P = g_t·v_t;
wherein K' is the set of clients that uploaded local updates.
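For concreteness, the verification information of step 5 can be sketched as follows. This is a hedged sketch: plain averaging is assumed as the aggregation rule, and the function names are illustrative. By linearity of the dot product, an honest server's value P = g_t·v_t equals the average of the clients' values p_i, which is what makes the later consistency checks possible.

```python
import numpy as np

def client_verification_info(local_gradients, v):
    """CV: each client i reports p_i = g_i . v (local_gradients: id -> vector)."""
    return {i: float(g @ v) for i, g in local_gradients.items()}

def server_verification_info(local_gradients, v):
    """SV: the server reports P = g_t . v for its aggregate g_t, together with
    the id set K' of clients whose updates it aggregated (averaging assumed)."""
    g_t = np.mean(list(local_gradients.values()), axis=0)
    return float(g_t @ v), set(local_gradients)
```

A dishonest aggregate g_t shows up as a mismatch between P and the mean of the reported p_i values.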
step 6, the verifier feeds the client verification information set CV, the server verification information set SV and the unverified global gradient g_t into the preset bidirectional poisoning defense model, filters client-side poisoning and checks the server for poisoning; if the verification process succeeds, the verified global gradient g̃_t is sent to the central server;
the verifier has the function of thoroughly eliminating the negative influence of the poisoning client and the malicious client on the global model under the condition of not influencing the precision of the final model, and finally verifying the correctness of the aggregation result of the central server.
The bidirectional poisoning defense model is constructed based on a bidirectional poisoning defense algorithm and comprises a client side poisoning detection algorithm and a server poisoning detection algorithm.
In this embodiment, the specific process of step 6 is shown in fig. 2, and includes:
(1) The verifier filters out malicious clients;
collecting detailed description in client verification information set, and setting corresponding locally updated client set asThe corresponding verification information isCV=/>
The method for filtering the malicious client comprises the following steps:
wherein,is a set of authentication information that excludes malicious clients, < ->Is a set of authentication information for a malicious client,is a set of malicious clients, and +.>Is a collection of users that excludes malicious clients.
Malicious clients do not intentionally respond or provide erroneous authentication information, which can interfere with the proper authentication and hamper the authentication process.
(2) Executing a client poisoning detection algorithm, and identifying poisoning clients from the remaining clients;
First, the cosine similarity between each non-malicious local gradient and the verification vector is calculated as cs_i = p_i / (‖g_i^t‖·‖v_t‖);
The Gap statistic is then used to determine the number of clusters. If the result exceeds 1 cluster, the cosine similarity set is divided into 2 clusters using k-means, and the users belonging to the smaller cluster form the set of poisoning clients. Finally, the verifier computes the total harmful set M (malicious clients together with poisoning clients) and the benign client set B, and sends M to the server.
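The clustering step can be sketched as below. This is a simplified stand-in, not the patented algorithm: the Gap-statistic test for the number of clusters is omitted, and the generic k-means is replaced with an exact 1-D 2-means that tries every split of the sorted similarities; the function name is hypothetical.

```python
import numpy as np

def detect_poisoning_clients(cos_sims):
    """Split 1-D cosine similarities into two clusters and flag the smaller
    cluster as the poisoning set. cos_sims maps client id -> cosine
    similarity between that client's gradient and the verification vector."""
    ids = list(cos_sims)
    x = np.array([cos_sims[i] for i in ids])
    order = np.argsort(x)
    xs = x[order]
    best_cost, best_k = np.inf, 1
    for k in range(1, len(xs)):          # boundary between the two clusters
        lo, hi = xs[:k], xs[k:]
        cost = ((lo - lo.mean()) ** 2).sum() + ((hi - hi.mean()) ** 2).sum()
        if cost < best_cost:
            best_cost, best_k = cost, k
    # the smaller of the two clusters is flagged, as in the text above
    smaller = order[:best_k] if best_k <= len(xs) - best_k else order[best_k:]
    return {ids[j] for j in smaller}
```

In 1-D, k-means with k = 2 always yields an interval split of the sorted values, so the exhaustive split search above finds the same optimum as the iterative algorithm.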
(3) The verifier requests the local gradients of the malicious client and the poisoning client from the central server, and verifies the local gradients by using the hash value to obtain the harmful gradients;
(4) Eliminate the negative effect of the harmful gradients, namely delete the harmful gradients that were aggregated into the global gradient, thereby obtaining the verified global gradient:
g̃_t = (n·g_t - Σ_{i∈M} g_i^t) / (n - m);
(5) The verifier checks whether the server has poisoned based on the dot product between g̃_t and the verification vector, using the result of filtering the poisoning clients and the verification information, as follows:
g̃_t · v_t = (1/n_b) Σ_{i∈B} p_i;
If the equation is satisfied, verification passes; otherwise the central server has poisoned.
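Steps (4) and (5) can be sketched together as follows. This is an illustrative sketch under stated assumptions: the unverified global gradient g_t is taken to be a plain average of n local gradients, and the function names and the numerical tolerance are not from the patent. Removing the m harmful gradients from the average and renormalising yields g̃_t, and by linearity of the dot product an honest g̃_t must satisfy g̃_t·v_t = (1/n_b) Σ_{i∈B} p_i.

```python
import numpy as np

def remove_harmful(g_t, harmful, n):
    """Undo the contribution of the m harmful gradients from an n-way average:
    g_clean = (n * g_t - sum(harmful)) / (n - m)."""
    m = len(harmful)
    return (n * g_t - sum(harmful)) / (n - m)

def server_is_honest(g_clean, v, benign_p, tol=1e-9):
    """Compare g_clean . v with the mean of the benign clients' reported
    dot products p_i; a mismatch indicates server-side tampering."""
    expected = sum(benign_p) / len(benign_p)
    return abs(float(g_clean @ v) - expected) <= tol * max(1.0, abs(expected))
```

Any server-side modification of the aggregate that changes its dot product with v_t breaks the equality, which is why tampering is detected.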
step 7, the central server uses g̃_t to update the global model, obtains the new global gradient, and propagates it to all parties.
In order to further optimize the technical scheme, this embodiment also provides a bidirectional model poisoning detection system in federated learning, comprising: a data acquisition module, a detection information acquisition module and a poisoning detection module;
the data acquisition module is used for collecting data through the Internet of things equipment and constructing a data set;
the detection information acquisition module is used for the client to train the local model based on the data set, obtain the local model update parameters and upload them to the central server, and for the central server to perform aggregation based on the local model update parameters to obtain a global gradient;
and the poisoning detection module is used for verifying the global gradient based on a preset bidirectional poisoning defense model through the verifier, and if the verification is successful, the global gradient is sent to the central server to update the global model, so that the poisoning detection of the client and the central server is completed.
The method proposed in this embodiment is verified:
the results of the evaluation from three aspects of filtering the security of the malicious client, verifying the correctness of the vector and detecting the security of the server poisoning show that the embodiment has a great effect on filtering the malicious client and the poisoning client and detecting the server poisoning respectively.
(1) Security aspects of filtering malicious clients:
according to the formulaMalicious users can be effectively filtered out. The core idea of malicious client filtering is that the central server and the client generate verification information respectively. Only if the authentication information of both parties match, the client is considered to have been authenticated. The different aspects of the benign server and the malignant client prove that the embodiment has excellent filtering effect on the malicious client.
(2) Security of detecting server poisoning:
Expanding the formula g̃_t · v_t = (1/n_b) Σ_{i∈B} p_i shows that the probability that a malicious central server successfully falsifies the global model without being detected is mathematically 0.
(3) Correctness of the verification vector:
It is proved that the cosine similarity between v_t and g_{t-1} is greater than or equal to s, which establishes the correctness of the verification vector.
(II) Computational overhead of detecting server poisoning.
Since each client only needs to compute its own verification information, the verifier only needs to aggregate the verification information and compare it with the dot product of the global gradient and the verification vector; this cost is negligible compared with the server's computation cost.
The relationship between the server's computation cost, the gradient length l and the number of clients n was then studied. The results show that once the number of clients is fixed, the computation time does not change significantly as the gradient length increases, indicating that training a more complex model does not significantly increase the time required to generate the verification information set. The analysis shows that, even with computation overhead taken into account, this embodiment can handle the huge parameter complexity of deep neural networks and very large numbers of clients.
(III) Evaluation of communication overhead.
Compared with cryptography-based defenses, each client only needs to upload its own verification information, so the scheme can handle the workload of a large number of client devices.
(IV) Performance evaluation.
The method proposed in this embodiment is compared with the FL-Detector scheme in terms of detection precision, false positive rate and false negative rate; the detection results are shown in Table 1. Overall, the detection precision, false positive rate and false negative rate are superior or comparable to the FL-Detector method, and the precision is improved to some extent compared with other methods.
TABLE 1
(V) Performance in eliminating the adverse effects of malicious clients.
The performance of this embodiment in eliminating the adverse effects of malicious clients is verified on the MNIST, OrganCMNIST and CIFAR-10 datasets respectively; the test results are shown in fig. 3, fig. 4 and fig. 5, where fig. 3(a) shows the defense against DBA on MNIST, fig. 3(b) the defense against UMP on MNIST, fig. 4(a) the defense against DBA on OrganCMNIST, fig. 4(b) the defense against UMP on OrganCMNIST, fig. 5(a) the defense against DBA on CIFAR-10, and fig. 5(b) the defense against UMP on CIFAR-10.
The test results show that this embodiment performs excellently against the various attacks on the different datasets, and the defense efficiency remains stable as the number of iterations increases and the model converges. The defense on the MNIST, OrganCMNIST and CIFAR-10 datasets remains good and stable as the number of malicious clients grows, and does not degrade as malicious clients increase.
The above embodiments are merely illustrative of the preferred embodiments of the present invention, and the scope of the present invention is not limited thereto, but various modifications and improvements made by those skilled in the art to which the present invention pertains are made without departing from the spirit of the present invention, and all modifications and improvements fall within the scope of the present invention as defined in the appended claims.

Claims (4)

1. A method for detecting bidirectional model poisoning in federated learning, characterized by comprising the following steps:
collecting data and constructing a data set;
the client trains a local model based on the data set, acquires a local gradient of the client and uploads the local gradient to the central server;
the central server performs aggregation based on the local gradient of the client, acquires a global gradient and sends the global gradient to a verifier;
the verifier verifies the global gradient based on a preset bidirectional poisoning defense model and sends the verified global gradient to the central server;
the central server updates a global model based on the global gradient passing through the verification, and the poisoning detection of the client and the central server is completed;
wherein the verifying the global gradient based on a preset bi-directional poisoning defense model by the verifier comprises:
the verifier generates a verification vector based on the global gradient and sends the verification vector to the client and the central server;
the client and the central server acquire verification information based on the verification vector and send the verification information back to the verifier;
the verifier inputs the verification information and the global gradient into the bidirectional poisoning defense model to verify whether the client and the central server have poisoned;
the verifier generating a verification vector based on the global gradient comprises:
randomly generating, based on the global gradient, a unit vector u satisfying |cos(g_{t-1}, u)| ≥ s, wherein g_{t-1} is the global gradient of the (t-1)-th round, u is the randomly generated vector, and s is the cosine similarity threshold; when g_{t-1}·u is smaller than 0, letting the unit vector u = -u, ensuring that g_{t-1}·u is always greater than 0;
calculating c = g_{t-1}·u, wherein c is always greater than 0;
generating the verification vector based on v_t = c·u, wherein v_t is the verification vector;
the verification information comprises a client verification information set and a server verification information set, the client verification information set being CV = {(i, p_i) | i ∈ K}, wherein i is the client number, p_i = g_i^t·v_t is the dot product value of the local gradient of client i and the verification vector, g_i^t is the local gradient of client i, and K is the client set;
the server verification information set is SV = {P, K'}, wherein P = g_t·v_t is the dot product value of the global gradient and the verification vector, and K' is the set of clients that sent local gradients to the central server;
the verifier inputting the verification information and the global gradient into the bidirectional poisoning defense model and verifying whether the client and the central server have poisoned comprises:
the verifier filters malicious clients based on the client verification information set and identifies a poisoning client from the remaining clients;
acquiring local gradients of the malicious client and the poisoning client, verifying the local gradients of the malicious client and the poisoning client, and acquiring harmful gradients;
carrying out clearing treatment on the harmful gradient to obtain a global gradient passing verification;
verifying whether the central server has poisoned based on the verified global gradient and the verification vector.
2. The method for detecting bidirectional model poisoning in federated learning according to claim 1, wherein the method for removing the harmful gradients and obtaining the verified global gradient comprises:
g̃_t = (n·g_t - Σ_{i∈M} g_i^t) / (n - m);
wherein g̃_t is the verified global gradient, g_t is the unverified global gradient, n is the number of gradients involved in the aggregation, M is the total set of malicious clients and poisoning clients, and m is the total number of malicious clients and poisoning clients.
3. The federal learning bidirectional model poisoning detection method of claim 2, wherein verifying whether the central server is poisoned based on the verification-passed global gradient and the verification vector comprises:
verifying whether the central server is poisoned through the dot product between the verification-passed global gradient and the verification vector, namely:
$$\langle a, g' \rangle = \frac{1}{n_b} \sum_{i \in B} v_i$$
if the equation is satisfied, the central server passes verification; otherwise, the central server is poisoned;
wherein $a$ is the verification vector, $g'$ is the verification-passed global gradient, $n_b$ is the number of benign clients, and $B$ is the benign client set.
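The server-side check in claim 3 exploits linearity of the dot product: if the cleaned global gradient is truly the mean of the benign local gradients, then $\langle a, g' \rangle$ must equal the mean of the benign clients' reported dot products $v_i$, which the verifier already holds. A sketch under that assumption (names and the tolerance are illustrative):

```python
import numpy as np

def server_passes(a, g_clean, benign_dot_products, tol=1e-8):
    """Claim-3 consistency check: <a, g'> should equal the mean of the
    benign clients' reported dot products v_i, by linearity of <., .>."""
    lhs = float(np.dot(a, g_clean))
    rhs = float(np.mean(benign_dot_products))
    return abs(lhs - rhs) <= tol
```

The verifier thus never needs the benign gradients themselves at this stage; the scalar reports collected in claim 1 suffice to catch a server that tampered with the aggregate.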
4. A system for the bidirectional model poisoning detection method in federal learning according to any one of claims 1-3, comprising: a data acquisition module, a detection information acquisition module, and a poisoning detection module;
the data acquisition module is used for collecting data through the Internet of things equipment and constructing a data set;
the detection information acquisition module is used for the client to train a local model based on the data set to acquire the client local gradient, and for the central server to perform aggregation based on the client local gradients to acquire a global gradient;
the poisoning detection module is used for verifying the global gradient based on a preset bidirectional poisoning defense model by the verifier, sending the verified global gradient to the central server, and updating the global model by the central server based on the verified global gradient to finish poisoning detection of the client and the central server.
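The three modules of claim 4 form a simple pipeline: acquire data, train and aggregate, then verify. The wiring below is purely illustrative; the class name and module interfaces are assumptions, not from the patent, and each module is passed in as a callable so the claim-4 structure stays visible.

```python
class PoisonDetectionPipeline:
    """Illustrative wiring of the three modules named in claim 4."""

    def __init__(self, acquire, train_and_aggregate, verify):
        self.acquire = acquire                          # data acquisition module
        self.train_and_aggregate = train_and_aggregate  # detection information acquisition module
        self.verify = verify                            # poisoning detection module

    def run(self):
        dataset = self.acquire()
        local_grads, global_grad = self.train_and_aggregate(dataset)
        return self.verify(local_grads, global_grad)
```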
CN202311734020.7A 2023-12-18 2023-12-18 Bidirectional model poisoning detection method and system in federal learning Active CN117436078B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311734020.7A CN117436078B (en) 2023-12-18 2023-12-18 Bidirectional model poisoning detection method and system in federal learning

Publications (2)

Publication Number Publication Date
CN117436078A CN117436078A (en) 2024-01-23
CN117436078B true CN117436078B (en) 2024-03-12

Family

ID=89553710

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113807544A (en) * 2020-12-31 2021-12-17 京东科技控股股份有限公司 Method and device for training federated learning model and electronic equipment
CN114036566A (en) * 2021-11-22 2022-02-11 中央财经大学 Verifiable federal learning method and device based on block chain and lightweight commitment
CN114186237A (en) * 2021-10-26 2022-03-15 北京理工大学 Truth-value discovery-based robust federated learning model aggregation method
CN114363043A (en) * 2021-12-30 2022-04-15 华东师范大学 Asynchronous federated learning method based on verifiable aggregation and differential privacy in peer-to-peer network
CN115577360A (en) * 2022-11-14 2023-01-06 湖南大学 Gradient-independent clustering federal learning method and system
CN116629379A (en) * 2023-05-24 2023-08-22 中国电信股份有限公司北京研究院 Federal learning aggregation method and device, storage medium and electronic equipment
CN116842577A (en) * 2023-08-28 2023-10-03 杭州海康威视数字技术股份有限公司 Federal learning model poisoning attack detection and defense method, device and equipment
CN117216805A (en) * 2023-09-01 2023-12-12 淮阴工学院 Data integrity audit method suitable for resisting Bayesian and hordeolum attacks in federal learning scene

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3970074A1 (en) * 2019-05-16 2022-03-23 FRAUNHOFER-GESELLSCHAFT zur Förderung der angewandten Forschung e.V. Concepts for federated learning, client classification and training data similarity measurement

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Jian Xu et al. Byzantine-robust Federated Learning through Collaborative Malicious Gradient Filtering. 2022 IEEE 42nd International Conference on Distributed Computing Systems (ICDCS), 2022, full text. *
Kang Fei; Li Jianbin. Poisoning data detection method based on data complexity. Application Research of Computers, 2020, (Issue 07), full text. *
Huang Xiangzhou. Design and implementation of a poisoning detection system for federated learning. China Excellent Master's and Doctoral Dissertations Full-text Database (Master's), 2023, full text. *

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant