CN112668726A - Personalized federated learning method with efficient communication and privacy protection - Google Patents

Personalized federated learning method with efficient communication and privacy protection

Info

Publication number
CN112668726A
Authority
CN
China
Prior art keywords
client
personalized
federated learning
model
model parameters
Prior art date
Legal status
Granted
Application number
CN202011568563.2A
Other languages
Chinese (zh)
Other versions
CN112668726B (en)
Inventor
梅媛
肖丹阳
吴维刚
Current Assignee
Sun Yat Sen University
Original Assignee
Sun Yat Sen University
Priority date
Filing date
Publication date
Application filed by Sun Yat Sen University
Priority to CN202011568563.2A
Publication of CN112668726A
Application granted
Publication of CN112668726B
Status: Active

Landscapes

  • Information Transfer Between Computers (AREA)
  • Computer And Data Communications (AREA)

Abstract

The invention provides a personalized federated learning method with efficient communication and privacy protection, which comprises the following steps: S1: pull the current global model W_t from a central server and initialize the local models of all clients to W_i^t; S2: execute E rounds of local training to obtain the new local model W_i^t; S3: send the model parameters of W_i^t to the central server; S4: aggregate the received model parameters in the central server to obtain the aggregation result W_{t+1}; S5: update the local models of all clients to W_i^{t+1} according to W_{t+1}; S6: judge whether the preset number of iterations has been completed; if yes, the personalized federated learning is complete; if not, let t = t + 1 and return to step S2 to perform the next round of personalized federated learning. The invention provides a personalized federated learning method with efficient communication and privacy protection, and solves the problem that existing personalized federated learning methods do not achieve a balance between the personalized local models on the clients and the global model.

Description

Personalized federated learning method with efficient communication and privacy protection
Technical Field
The invention relates to the technical field of federated learning, and in particular to a personalized federated learning method with efficient communication and privacy protection.
Background
Machine learning has achieved tremendous success in fields such as computer vision, speech recognition, and natural language processing. To accomplish large-scale machine learning training tasks over massive data, distributed machine learning has been proposed and has attracted much attention. Federated learning is a new distributed machine learning approach. In federated learning, the central server aggregates model updates from the clients using the federated averaging (FedAvg) algorithm, so all parties participating in federated training obtain a single, uniform global model after training ends. Most existing federated learning algorithms focus on improving the quality of this global model. However, in a federated environment the local data on the clients are often not independently and identically distributed, so the trained global model is difficult to adapt to each client; that is, the global model may well perform worse than a model trained by a single client on its own data, which makes joint federated training pointless. Personalized federated learning is therefore necessary. However, existing personalized federated learning methods only emphasize the importance of the personalized client models and do not achieve a balance between the personalized local models and the global model, so the loss in global model quality is significant.
In the prior art, for example, the Chinese patent published on 28.08.2020 with publication number CN111600707A, a decentralized federated machine learning method under privacy protection, solves the problems that existing federated learning is vulnerable to DoS attacks and that the central parameter server is prone to single-point failure; it combines a publicly verifiable secret sharing (PVSS) protocol to protect the participants' model parameters from model inversion attacks and data membership inference attacks. In addition, parameter aggregation is carried out by different participants in each training task, and when an untrusted aggregator appears or the aggregator is attacked, the system can recover automatically, which improves the robustness of federated learning. The method also preserves the performance of federated learning and effectively improves the safety of the training environment, but it does not achieve a balance between the personalized local models on the clients and the global model.
Disclosure of Invention
The invention provides a personalized federated learning method with efficient communication and privacy protection, and aims to overcome the technical defect that existing personalized federated learning methods do not achieve a balance between the personalized local models on the clients and the global model.
In order to solve the technical problems, the technical scheme of the invention is as follows:
A personalized federated learning method with efficient communication and privacy protection comprises the following steps:
S1: pull the current global model W_t from the central server and initialize the local models of all clients to W_i^t, where i is the client index and t is the current round number of the personalized federated learning;
S2: execute E rounds of local training on client i to obtain the new local model W_i^t;
S3: based on the way of variable frequency update of the hierarchical parameter combination, will
Figure BDA0002861764750000023
Sending the model parameters of (a) to the central server;
S4: aggregate the received model parameters in the central server to obtain the aggregation result W_{t+1};
S5: based on the way of variable frequency update of the hierarchical parameter combination, according to Wt+1Update all client's local model to
Figure BDA0002861764750000024
S6: judging whether the preset iteration times are finished or not;
if yes, completing personalized federal learning;
if not, let t be t +1, and return to step S2 to perform the next round of personalized federal learning.
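To make the flow of steps S1-S6 concrete, the following Python sketch runs the plain (non-layered) skeleton of one training process on toy data. Every name in it (local_train, aggregate, the linear model, the squared loss) is an illustrative assumption rather than the patent's prescribed implementation, and the layered, variable-frequency upload of steps S3/S5 is covered separately below:

```python
import numpy as np

rng = np.random.default_rng(0)

def local_train(w, X, y, eta=0.1, E=1, B=8):
    # S2: E epochs of mini-batch SGD on one client (toy squared loss)
    for _ in range(E):
        for s in range(0, len(X), B):
            Xb, yb = X[s:s + B], y[s:s + B]
            w = w - eta * 2 * Xb.T @ (Xb @ w - yb) / len(Xb)
    return w

def aggregate(updates, sizes):
    # S4: data-size-weighted average of the uploaded parameters
    N = sum(sizes)
    return sum(n / N * w for w, n in zip(updates, sizes))

W = np.zeros(10)                       # S1: initial global model W_t
data = [(rng.normal(size=(40, 10)), rng.normal(size=40)) for _ in range(5)]
T, E, C, K = 20, 2, 0.6, len(data)
k = max(1, round(K * C))               # number of clients per round (k = C * K)

for t in range(1, T + 1):              # S6: stop after the preset T rounds
    S_t = rng.choice(K, size=k, replace=False)                 # clients of round t
    ups = [local_train(W.copy(), *data[i], E=E) for i in S_t]  # S2, S3
    W = aggregate(ups, [len(data[i][0]) for i in S_t])         # S4, S5
```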
Preferably, in step S2, only k clients are selected to perform local training in each round of personalized federated learning; the total number of clients is K, and the ratio of k to K is C.
Preferably, in step S2, the data on client i is divided into ⌈N_i/B⌉ batches according to the preset batch size B, which form the set B_i; for each b_i ∈ B_i, local training is performed according to the following formula to obtain the local model W_i^t:

W_i^t ← W_i^t − η∇l(W_i^t; b_i)

where N_i is the amount of data on client i, b_i is an element of the set B_i, W_i^t are the model parameters on client i before the update, η is the learning rate, and l is the loss function on the client.
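A minimal sketch of this local update rule, with a squared loss standing in for the generic client loss l (the patent leaves l unspecified):

```python
import math
import numpy as np

def local_update(w, X, y, eta, B):
    """Apply W_i <- W_i - eta * grad l(W_i; b_i) over the ceil(N_i / B) batches B_i.

    The squared loss below is an illustrative assumption, not the patent's choice."""
    N_i = len(X)
    for j in range(math.ceil(N_i / B)):                     # iterate over the set B_i
        Xb, yb = X[j * B:(j + 1) * B], y[j * B:(j + 1) * B] # batch b_i
        grad = 2 * Xb.T @ (Xb @ w - yb) / len(Xb)           # gradient of l on b_i
        w = w - eta * grad                                  # the update formula
    return w
```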
Preferably, when the training object of the personalized federated learning is a deep neural network model, the deep neural network model is regarded as a combination of a global layer and a personalization layer;
the shallow network part of the deep neural network model is defined as the global layer and is responsible for extracting global features of the client data; the deep network part of the deep neural network model is defined as the personalization layer and is responsible for capturing the personalized features of the client data.
Preferably, in step S3, the variable-frequency updating of the layered parameter combination is specifically:

if the current round is in the early stage of the personalized federated learning, i.e. 0 < t ≤ T·p, and t % f_earlier ≠ 0, or in the late stage, i.e. T·p < t ≤ T, and t % f_later ≠ 0, only the model parameters of the shallow part are sent to the central server;

if the current round is in the early stage, i.e. 0 < t ≤ T·p, and t % f_earlier = 0, or in the late stage, i.e. T·p < t ≤ T, and t % f_later = 0, the model parameters of all layers are sent to the central server;

where t is the current round number of the personalized federated learning, T is the total number of rounds, p is the fraction of rounds belonging to the early stage, f_earlier is the period with which the model parameters of all layers are sent to the central server in the early stage, and f_later is the corresponding period in the late stage.
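The stage/period test above reduces to a small predicate; the sketch below assumes rounds are numbered 1..T as in the text:

```python
def send_all_layers(t, T, p, f_earlier, f_later):
    """True if round t is one in which the client uploads all layers."""
    if 0 < t <= T * p:            # early stage of personalized federated learning
        return t % f_earlier == 0
    if T * p < t <= T:            # late stage
        return t % f_later == 0
    raise ValueError("round t must satisfy 0 < t <= T")
```

In every other round, only the shallow (global-layer) parameters are uploaded.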
Preferably, the method further comprises the following step: adding Gaussian noise to the model parameters sent from the clients to the central server.
Preferably, when only the model parameters of the shallow part are sent, client i computes the parameters sent to the central server according to the following formula, with Gaussian noise added:

Ŵ_i^t = M ⊙ W_i^t + dp · RN

where Ŵ_i^t are the model parameters sent from client i to the central server; M is a masking matrix used to mask the deep-layer parameters so that they do not participate in the aggregation; dp ∈ (0, 1] controls the degree of influence of the noise; and RN ~ N(0, σ²), i.e. RN obeys a normal distribution with mean 0 and variance σ².
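A sketch of both upload rules, representing the model as a list of per-layer arrays and the masking matrix M as zeroing-out of the deep layers (this list representation is an assumption for illustration; in practice the masked layers would simply not be transmitted):

```python
import numpy as np

def noised_upload(W_layers, n_global, dp, sigma, all_layers, rng=None):
    """Return the parameters client i sends: W + dp * RN with RN ~ N(0, sigma^2),
    with the deep layers masked out (the matrix M) when all_layers is False."""
    if rng is None:
        rng = np.random.default_rng()
    out = []
    for idx, layer in enumerate(W_layers):
        if all_layers or idx < n_global:
            rn = rng.normal(0.0, sigma, size=layer.shape)  # RN ~ N(0, sigma^2)
            out.append(layer + dp * rn)                    # dp in (0, 1]
        else:
            out.append(np.zeros_like(layer))               # M masks the deep layer out
    return out
```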
Preferably, when the model parameters of all layers are sent, client i computes the parameters sent to the central server according to the following formula, with Gaussian noise added:

Ŵ_i^t = W_i^t + dp · RN

where Ŵ_i^t are the model parameters sent from client i to the central server; dp ∈ (0, 1] controls the degree of influence of the noise; and RN ~ N(0, σ²), i.e. RN obeys a normal distribution with mean 0 and variance σ².
Preferably, in the personalized federated learning the clients send the model parameters of all layers to the server more frequently in the early stage than in the late stage, i.e. f_earlier < f_later.
Preferably, in step S4, in each round of personalized federated learning the model parameters received by the central server are aggregated by the following formula to obtain the aggregation result W_{t+1}:

W_{t+1} = Σ_{i=1}^{⌈K·C⌉} (N_i / N) · Ŵ_i^t

where K is the total number of clients, C is the proportion of clients participating in personalized federated learning in each round, N_i is the amount of data on client i, N is the total amount of data on all clients participating in the current round, and Ŵ_i^t are the model parameters sent to the central server.
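A direct transcription of this aggregation rule as a hedged sketch (flattened parameter vectors assumed for simplicity):

```python
import numpy as np

def aggregate(updates, data_sizes):
    """W_{t+1} = sum over participating clients of (N_i / N) * W_hat_i^t,
    where N is the sum of the N_i for the current round."""
    N = float(sum(data_sizes))
    W_next = np.zeros_like(np.asarray(updates[0], dtype=float))
    for W_hat, N_i in zip(updates, data_sizes):
        W_next += (N_i / N) * np.asarray(W_hat, dtype=float)
    return W_next
```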
Compared with the prior art, the technical scheme of the invention has the following beneficial effects:
The invention provides a personalized federated learning method with efficient communication and privacy protection, which performs personalized federated learning based on variable-frequency updating of layered parameter combinations and can effectively balance the global model and the personalized models; at the same time, it reduces the parameter traffic in personalized federated learning and achieves lightweight, efficient communication.
Drawings
FIG. 1 is a schematic flow chart of the steps for carrying out the present invention;
FIG. 2 is a schematic diagram of an image classification task using a deep neural network model according to the present invention;
FIG. 3 is a schematic diagram of the variable frequency used for personalized federated learning in the present invention;
FIG. 4 is a schematic diagram of the layered parameters used for personalized federated learning in the present invention.
Detailed Description
The drawings are for illustrative purposes only and are not to be construed as limiting the patent;
for the purpose of better illustrating the embodiments, certain features of the drawings may be omitted, enlarged or reduced, and do not represent the size of an actual product;
it will be understood by those skilled in the art that certain well-known structures in the drawings and descriptions thereof may be omitted.
The technical solution of the present invention is further described below with reference to the accompanying drawings and examples.
Example 1
As shown in fig. 1, a personalized federated learning method with efficient communication and privacy protection includes the following steps:
S1: pull the current global model W_t from the central server and initialize the local models of all clients to W_i^t, where i is the client index and t is the current round number of the personalized federated learning;
S2: execute E rounds of local training on client i to obtain the new local model W_i^t;
S3: based on the way of variable frequency update of the hierarchical parameter combination, will
Figure BDA0002861764750000043
Sending the model parameters of (a) to the central server;
S4: aggregate the received model parameters in the central server to obtain the aggregation result W_{t+1};
S5: based on the way of variable frequency update of the hierarchical parameter combination, according to Wt+1Update all client's local model to
Figure BDA0002861764750000044
S6: judging whether the preset iteration times are finished or not;
if yes, completing personalized federal learning;
if not, let t be t +1, and return to step S2 to perform the next round of personalized federal learning.
More specifically, in step S2, only k clients are selected to perform local training in each round of personalized federated learning; the total number of clients is K, and the ratio of k to K is C.
In the specific implementation process, considering the bandwidth and latency limits of the communication between the clients and the central server, a proportion C of the K clients is selected each time to form the set S_t that participates in the current round t of personalized federated learning. Each selected client needs to complete local training for the preset number of epochs E.
More specifically, in step S2, the data on client i is divided into ⌈N_i/B⌉ batches according to the preset batch size B, which form the set B_i; for each b_i ∈ B_i, local training is performed according to the following formula to obtain the local model W_i^t:

W_i^t ← W_i^t − η∇l(W_i^t; b_i)

where N_i is the amount of data on client i, b_i is an element of the set B_i, W_i^t are the model parameters on client i before the update, η is the learning rate, and l is the loss function on the client.
More specifically, when the training object of the personalized federated learning is a deep neural network model, take the task of image classification with a deep neural network model as an example: the common, general features contained in an image are typically captured by the shallow network part of the deep neural network model, while the more advanced, distinctive features are identified by the deep network part. As shown in fig. 2, the shallow network part near the input picture extracts low-order features, and the deep network part near the output extracts high-order features. In personalized federated learning, according to the definitions of the global model and the client local model, the global model focuses mainly on the general low-order features of the data on the clients, while the local model focuses mainly on the specific high-order features of the data on its own client. We therefore regard the deep neural network model used for personalized federated learning as a combination of a global model and a personalized model, where the shallow network part of the deep neural network model is defined as the global layer and the deep network part is defined as the personalization layer.
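As a hedged illustration of such a split on an image classifier (the architecture and the cut-off point are assumptions for illustration, not fixed by the patent), a PyTorch sketch:

```python
import torch.nn as nn

# Toy CNN for 32x32 RGB images: the convolutional stem extracts the general,
# low-order features (global layer); the fully connected head captures the
# high-order, client-specific features (personalization layer).
model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 8 * 8, 64), nn.ReLU(),
    nn.Linear(64, 10),
)

# Modules 0 and 3 are the conv layers -> global layer, aggregated on the server;
# the remaining parameterized modules (the Linear layers) stay personalized.
global_keys = [k for k in model.state_dict() if k.startswith(("0.", "3."))]
personal_keys = [k for k in model.state_dict() if k not in global_keys]
```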
More specifically, in step S3, the variable-frequency updating of the layered parameter combination is specifically:

if the current round is in the early stage of the personalized federated learning, i.e. 0 < t ≤ T·p, and t % f_earlier ≠ 0, or in the late stage, i.e. T·p < t ≤ T, and t % f_later ≠ 0, only the model parameters of the shallow part are sent to the central server;

if the current round is in the early stage, i.e. 0 < t ≤ T·p, and t % f_earlier = 0, or in the late stage, i.e. T·p < t ≤ T, and t % f_later = 0, the model parameters of all layers are sent to the central server;

where t is the current round number of the personalized federated learning, T is the total number of rounds, p is the fraction of rounds belonging to the early stage, f_earlier is the period with which the model parameters of all layers are sent to the central server in the early stage, and f_later is the corresponding period in the late stage.
In the specific implementation process, as shown in figs. 3-4, personalized federated learning is performed with layered parameters updated at a variable frequency: only when t % f_earlier = 0 in the early stage, or t % f_later = 0 in the late stage, does a client send all layer parameters of its model to the central server. In all other rounds, the client masks the deep-layer parameters and sends only the shallow-layer parameters to the central server, which significantly reduces the traffic and effectively lowers the communication cost. In fig. 3, the update period of the personalized federated learning is set to 4 in the early stage and to 8 in the late stage; that is, in the early stage the aggregation average of all layer parameters is executed once every 4 rounds, and in the late stage once every 8 rounds.
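With the periods of fig. 3 (f_earlier = 4, f_later = 8) and, as an assumption for illustration, T = 100 rounds with p = 0.5, the rounds in which all layers are aggregated can be listed directly:

```python
T, p = 100, 0.5            # assumed for illustration
f_earlier, f_later = 4, 8  # the periods shown in fig. 3

all_layer_rounds = [t for t in range(1, T + 1)
                    if (t <= T * p and t % f_earlier == 0)
                    or (t > T * p and t % f_later == 0)]
print(all_layer_rounds)    # 4, 8, ..., 48 in the early stage; 56, 64, ..., 96 late
```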
More specifically, the method further comprises the following step: adding Gaussian noise to the model parameters sent from the clients to the central server.
In the specific implementation process, Gaussian noise is added layer by layer, based on differential privacy, to the model parameters the client sends to the central server, so that the real parameters are obscured and the privacy of the client is further protected.
More specifically, when only the model parameters of the shallow part are sent, client i computes the parameters sent to the central server according to the following formula, with Gaussian noise added:

Ŵ_i^t = M ⊙ W_i^t + dp · RN

where Ŵ_i^t are the model parameters sent from client i to the central server; M is a masking matrix used to mask the deep-layer parameters so that they do not participate in the aggregation; dp ∈ (0, 1] controls the degree of influence of the noise; and RN ~ N(0, σ²), i.e. RN obeys a normal distribution with mean 0 and variance σ².
More specifically, when the model parameters of all layers are sent, client i computes the parameters sent to the central server according to the following formula, with Gaussian noise added:

Ŵ_i^t = W_i^t + dp · RN

where Ŵ_i^t are the model parameters sent from client i to the central server; dp ∈ (0, 1] controls the degree of influence of the noise; and RN ~ N(0, σ²), i.e. RN obeys a normal distribution with mean 0 and variance σ².
More specifically, in the personalized federated learning the clients send the model parameters of all layers to the server more frequently in the early stage than in the late stage, i.e. f_earlier < f_later.
In the specific implementation process, following the accumulated learning strategy, the focus in the early stage of personalized federated learning is on extracting global features from the clients, while in the late stage the focus is on the local, personalized model on each client. Therefore, in this embodiment the aggregation frequency of all layer parameters in the late stage of personalized federated learning is lower than in the early stage, so that the local model on each client gains personalization capability and the effects of the global model and the personalized models in personalized federated learning are balanced.
More specifically, in step S4, in each round of personalized federated learning the model parameters received by the central server are aggregated by the following formula to obtain the aggregation result W_{t+1}:

W_{t+1} = Σ_{i=1}^{⌈K·C⌉} (N_i / N) · Ŵ_i^t

where K is the total number of clients, C is the proportion of clients participating in personalized federated learning in each round, N_i is the amount of data on client i, N is the total amount of data on all clients participating in the current round, and Ŵ_i^t are the model parameters sent to the central server.
In the specific implementation, when 0 < t ≤ T·p and t % f_earlier ≠ 0, or T·p < t ≤ T and t % f_later ≠ 0, the clients send only the parameters of the shallow part of the deep neural network model for aggregation, so after the central server performs the aggregation, each client only needs to update the parameters of the shallow part of its local model in step S5, while the parameters of the deep part remain unchanged; that is, the parameters of the personalization layer on a client depend only on its own data. When 0 < t ≤ T·p and t % f_earlier = 0, or T·p < t ≤ T and t % f_later = 0, the clients send the parameters of all layers of the network model to the central server, which performs the periodic aggregation average according to the periods preset for the early and late stages (f_earlier and f_later); in this case each client needs to update the parameters of all layers in step S5.
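A sketch of the client-side update in step S5 under these two cases, again with the model as a list of per-layer arrays whose first n_global entries are the shallow layers (an assumed representation):

```python
def update_local(local_W, W_next, n_global, all_layers):
    """S5: always refresh the shallow/global layers from the aggregate W_next;
    refresh the deep/personalization layers only after an all-layer round."""
    upto = len(local_W) if all_layers else n_global
    for idx in range(upto):
        local_W[idx] = W_next[idx].copy()
    # otherwise the deep layers keep their locally trained values,
    # i.e. the personalization layer depends only on the client's own data
    return local_W
```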
It should be understood that the above-described embodiments of the present invention are merely examples intended to clearly illustrate the invention and are not intended to limit its embodiments. Other variations and modifications will be apparent to persons skilled in the art in light of the above description; it is neither necessary nor possible to exhaustively list all embodiments here. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention shall be included in the protection scope of the claims of the present invention.

Claims (10)

1. A personalized federated learning method with efficient communication and privacy protection, characterized by comprising the following steps:

S1: pull the current global model W_t from a central server and initialize the local models of all clients to W_i^t, where i is the client index and t is the current round number of the personalized federated learning;

S2: execute E rounds of local training on client i to obtain the new local model W_i^t;

S3: based on variable-frequency updating of the layered parameter combination, send the model parameters of W_i^t to the central server;

S4: aggregate the received model parameters in the central server to obtain the aggregation result W_{t+1};

S5: based on variable-frequency updating of the layered parameter combination, update the local models of all clients to W_i^{t+1} according to W_{t+1};

S6: judge whether the preset number of iterations has been completed; if yes, the personalized federated learning is complete; if not, let t = t + 1 and return to step S2 to perform the next round of personalized federated learning.
2. The method according to claim 1, wherein, in step S2, only k clients are selected to perform local training in each round of personalized federated learning; the total number of clients is K, and the ratio of k to K is C.
3. The method according to claim 1, wherein, in step S2, the data on client i is divided into ⌈N_i/B⌉ batches according to the preset batch size B, which form the set B_i; for each b_i ∈ B_i, local training is performed according to the following formula to obtain the local model W_i^t:

W_i^t ← W_i^t − η∇l(W_i^t; b_i)

where N_i is the amount of data on client i, b_i is an element of the set B_i, W_i^t are the model parameters on client i before the update, η is the learning rate, and l is the loss function on the client.
4. The method according to claim 1, wherein, when the training object of the personalized federated learning is a deep neural network model, the deep neural network model is regarded as a combination of a global layer and a personalization layer;
the shallow network part of the deep neural network model is defined as the global layer and is responsible for extracting global features of the client data; the deep network part of the deep neural network model is defined as the personalization layer and is responsible for capturing the personalized features of the client data.
5. The method according to claim 1, wherein, in step S3, the variable-frequency updating of the layered parameter combination is specifically:

if the current round is in the early stage of the personalized federated learning, i.e. 0 < t ≤ T·p, and t % f_earlier ≠ 0, or in the late stage, i.e. T·p < t ≤ T, and t % f_later ≠ 0, only the model parameters of the shallow part are sent to the central server;

if the current round is in the early stage, i.e. 0 < t ≤ T·p, and t % f_earlier = 0, or in the late stage, i.e. T·p < t ≤ T, and t % f_later = 0, the model parameters of all layers are sent to the central server;

where t is the current round number of the personalized federated learning, T is the total number of rounds, p is the fraction of rounds belonging to the early stage, f_earlier is the period with which the model parameters of all layers are sent to the central server in the early stage, and f_later is the corresponding period in the late stage.
6. The method according to claim 5, characterized by further comprising: adding Gaussian noise to the model parameters sent from the clients to the central server.
7. The method according to claim 6, wherein, when only the model parameters of the shallow part are sent, client i computes the parameters sent to the central server according to the following formula, with Gaussian noise added:

Ŵ_i^t = M ⊙ W_i^t + dp · RN

where Ŵ_i^t are the model parameters sent from client i to the central server; M is a masking matrix used to mask the deep-layer parameters so that they do not participate in the aggregation; dp ∈ (0, 1] controls the degree of influence of the noise; and RN ~ N(0, σ²), i.e. RN obeys a normal distribution with mean 0 and variance σ².
8. The method according to claim 6, wherein, when the model parameters of all layers are sent, client i computes the parameters sent to the central server according to the following formula, with Gaussian noise added:

Ŵ_i^t = W_i^t + dp · RN

where Ŵ_i^t are the model parameters sent from client i to the central server; dp ∈ (0, 1] controls the degree of influence of the noise; and RN ~ N(0, σ²), i.e. RN obeys a normal distribution with mean 0 and variance σ².
9. The method according to claim 6, wherein, in the personalized federated learning, the clients send the model parameters of all layers to the server more frequently in the early stage than in the late stage, i.e. f_earlier < f_later.
10. The method according to claim 1, wherein, in step S4, in each round of personalized federated learning the model parameters received by the central server are aggregated by the following formula to obtain the aggregation result W_{t+1}:

W_{t+1} = Σ_{i=1}^{⌈K·C⌉} (N_i / N) · Ŵ_i^t

where K is the total number of clients, C is the proportion of clients participating in personalized federated learning in each round, N_i is the amount of data on client i, N is the total amount of data on all clients participating in the current round, and Ŵ_i^t are the model parameters sent to the central server.
CN202011568563.2A 2020-12-25 2020-12-25 Personalized federated learning method with efficient communication and privacy protection Active CN112668726B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011568563.2A CN112668726B (en) 2020-12-25 2020-12-25 Personalized federated learning method with efficient communication and privacy protection

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011568563.2A CN112668726B (en) 2020-12-25 2020-12-25 Personalized federated learning method with efficient communication and privacy protection

Publications (2)

Publication Number Publication Date
CN112668726A true CN112668726A (en) 2021-04-16
CN112668726B CN112668726B (en) 2023-07-11

Family

ID=75409693

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011568563.2A Active CN112668726B (en) 2020-12-25 2020-12-25 Personalized federated learning method with efficient communication and privacy protection

Country Status (1)

Country Link
CN (1) CN112668726B (en)

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1664819A (en) * 2004-03-02 2005-09-07 微软公司 Principles and methods for personalizing newsfeeds via an analysis of information dynamics
CN101256591A (en) * 2004-03-02 2008-09-03 微软公司 Principles and methods for personalizing newsfeeds via an analysis of information novelty and dynamics
CN101639852A (en) * 2009-09-08 2010-02-03 中国科学院地理科学与资源研究所 Method and system for sharing distributed geoscience data
CN110572253A (en) * 2019-09-16 2019-12-13 济南大学 Method and system for enhancing privacy of federated learning training data
CN111079977A (en) * 2019-11-18 2020-04-28 中国矿业大学 Heterogeneous federated learning mine electromagnetic radiation trend tracking method based on SVD algorithm
CN111611610A (en) * 2020-04-12 2020-09-01 西安电子科技大学 Federal learning information processing method, system, storage medium, program, and terminal
CN111553484A (en) * 2020-04-30 2020-08-18 同盾控股有限公司 Method, device and system for federal learning

Cited By (31)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112967812A (en) * 2021-04-20 2021-06-15 钟爱健康科技(广东)有限公司 Anti-theft attack medical diagnosis model protection method based on federal learning
WO2022222816A1 (en) * 2021-04-21 2022-10-27 支付宝(杭州)信息技术有限公司 Method, system and apparatus for training privacy protection model
CN113095513A (en) * 2021-04-25 2021-07-09 中山大学 Double-layer fair federal learning method, device and storage medium
CN113344221A (en) * 2021-05-10 2021-09-03 上海大学 Federal learning method and system based on neural network architecture search
CN113268920A (en) * 2021-05-11 2021-08-17 西安交通大学 Safe sharing method for sensing data of unmanned aerial vehicle cluster based on federal learning
CN113435604A (en) * 2021-06-16 2021-09-24 清华大学 Method and device for optimizing federated learning
CN113435604B (en) * 2021-06-16 2024-05-07 清华大学 Federal learning optimization method and device
CN113361618A (en) * 2021-06-17 2021-09-07 武汉卓尔信息科技有限公司 Industrial data joint modeling method and system based on federal learning
CN113516249A (en) * 2021-06-18 2021-10-19 重庆大学 Federal learning method, system, server and medium based on semi-asynchronization
CN113361694A (en) * 2021-06-30 2021-09-07 哈尔滨工业大学 Layered federated learning method and system applying differential privacy protection
CN113361694B (en) * 2021-06-30 2022-03-15 哈尔滨工业大学 Layered federated learning method and system applying differential privacy protection
CN113378243B (en) * 2021-07-14 2023-09-29 南京信息工程大学 Personalized federal learning method based on multi-head attention mechanism
CN113378243A (en) * 2021-07-14 2021-09-10 南京信息工程大学 Personalized federal learning method based on multi-head attention mechanism
CN113645197A (en) * 2021-07-20 2021-11-12 华中科技大学 Decentralized federal learning method, device and system
CN113645197B (en) * 2021-07-20 2022-04-29 华中科技大学 Decentralized federal learning method, device and system
CN113656833A (en) * 2021-08-09 2021-11-16 浙江工业大学 Privacy stealing defense method based on evolutionary computation under vertical federal architecture
CN113642738A (en) * 2021-08-12 2021-11-12 上海大学 Multi-party secure collaborative machine learning method and system based on hierarchical network structure
CN113642738B (en) * 2021-08-12 2023-09-01 上海大学 Multi-party safety cooperation machine learning method and system based on hierarchical network structure
CN114239860A (en) * 2021-12-07 2022-03-25 支付宝(杭州)信息技术有限公司 Model training method and device based on privacy protection
WO2023109246A1 (en) * 2021-12-17 2023-06-22 新智我来网络科技有限公司 Method and apparatus for breakpoint privacy protection, and device and medium
CN114595831A (en) * 2022-03-01 2022-06-07 北京交通大学 Federal learning method integrating adaptive weight distribution and personalized differential privacy
CN114595831B (en) * 2022-03-01 2022-11-11 北京交通大学 Federal learning method integrating adaptive weight distribution and personalized differential privacy
CN114357526A (en) * 2022-03-15 2022-04-15 中电云数智科技有限公司 Differential privacy joint training method for medical diagnosis model for resisting inference attack
CN114862416A (en) * 2022-04-11 2022-08-05 北京航空航天大学 Cross-platform credit evaluation method under federated learning environment
CN114492847B (en) * 2022-04-18 2022-06-24 奥罗科技(天津)有限公司 Efficient personalized federal learning system and method
CN114492847A (en) * 2022-04-18 2022-05-13 奥罗科技(天津)有限公司 Efficient and personalized federal learning system and method
CN114863499A (en) * 2022-06-30 2022-08-05 广州脉泽科技有限公司 Finger vein and palm vein identification method based on federal learning
CN116016212A (en) * 2022-12-26 2023-04-25 电子科技大学 Decentralised federation learning method and device for bandwidth perception
CN116016212B (en) * 2022-12-26 2024-06-04 电子科技大学 Decentralised federation learning method and device for bandwidth perception
CN116227621A (en) * 2022-12-29 2023-06-06 国网四川省电力公司电力科学研究院 Federal learning model training method based on power data
CN116227621B (en) * 2022-12-29 2023-10-24 国网四川省电力公司电力科学研究院 Federal learning model training method based on power data

Also Published As

Publication number Publication date
CN112668726B (en) 2023-07-11

Similar Documents

Publication Publication Date Title
CN112668726A (en) Personalized federated learning method with efficient communication and privacy protection
CN109919920B (en) Method for evaluating quality of full-reference and no-reference images with unified structure
CN109639479B (en) Network traffic data enhancement method and device based on generation countermeasure network
CN111625820A (en) Federal defense method based on AIoT-oriented security
Qin et al. Federated learning-based network intrusion detection with a feature selection approach
CN114841364A (en) Federal learning method capable of meeting personalized local differential privacy requirements
CN110598982B (en) Active wind control method and system based on intelligent interaction
CN113179244B (en) Federal deep network behavior feature modeling method for industrial internet boundary safety
CN112560059A (en) Vertical federal model stealing defense method based on neural pathway feature extraction
CN107945199A (en) Infrared Image Segmentation and system based on bat algorithm and Otsu algorithm
CN114219732A (en) Image defogging method and system based on sky region segmentation and transmissivity refinement
CN102902674B (en) Bundle of services component class method and system
CN1658576A (en) Detection and defence method for data flous of large network station
CN114362988A (en) Network traffic identification method and device
CN111737318B (en) Phishing susceptibility crowd screening method
CN115510472B (en) Multi-difference privacy protection method and system for cloud edge aggregation system
CN114925848A (en) Target detection method based on transverse federated learning framework
CN116561622A (en) Federal learning method for class unbalanced data distribution
CN113194092B (en) Accurate malicious flow variety detection method
CN116017463A (en) Wireless sensor network malicious node identification method based on dynamic trust mechanism
CN114882582A (en) Gait recognition model training method and system based on federal learning mode
CN110009579B (en) Image restoration method and system based on brain storm optimization algorithm
CN113949653A (en) Encryption protocol identification method and system based on deep learning
CN112270233A (en) Mask classification method based on transfer learning and Mobilenet network
CN111835720A (en) VPN flow WEB fingerprint identification method based on feature enhancement

Legal Events

Code Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant