CN114564746B - Federal learning method and system based on client weight evaluation - Google Patents

Federal learning method and system based on client weight evaluation

Info

Publication number
CN114564746B
CN114564746B
Authority
CN
China
Prior art keywords
client
federal learning
local model
local
model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202210186854.8A
Other languages
Chinese (zh)
Other versions
CN114564746A (en)
Inventor
陈文智 (Chen Wenzhi)
魏成坤 (Wei Chengkun)
江鑫楠 (Jiang Xinnan)
林东宇 (Lin Dongyu)
王总辉 (Wang Zonghui)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang University ZJU
Original Assignee
Zhejiang University ZJU
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang University ZJU filed Critical Zhejiang University ZJU
Priority to CN202210186854.8A priority Critical patent/CN114564746B/en
Publication of CN114564746A publication Critical patent/CN114564746A/en
Application granted granted Critical
Publication of CN114564746B publication Critical patent/CN114564746B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00 Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/60 Protecting data
    • G06F21/62 Protecting access to data via a platform, e.g. using keys or access control rules
    • G06F21/6218 Protecting access to data via a platform, e.g. using keys or access control rules, to a system of files or objects, e.g. local or distributed file system or database
    • G06F21/6227 Protecting access to data via a platform where protection concerns the structure of data, e.g. records, types, queries
    • G06F21/6245 Protecting personal data, e.g. for financial or medical purposes
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00 Machine learning

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • Bioethics (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Databases & Information Systems (AREA)
  • Medical Informatics (AREA)
  • Computer Security & Cryptography (AREA)
  • Computer Hardware Design (AREA)
  • Computing Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Artificial Intelligence (AREA)
  • Mathematical Physics (AREA)
  • Evolutionary Computation (AREA)
  • Data Mining & Analysis (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The invention discloses a federated learning method based on client weight evaluation, comprising the following steps: the clients participating in federated learning establish secure communication channels with a central server and initialize the federated learning task; each client optimizes the parameters of its local model using its local data and then uploads the local model information of the current round to the central server, where the local model information comprises local model parameters and/or model gradients; the central server evaluates each local model's contribution rate to federated learning from the local model information using an ablation hypothesis, assigns each local model an aggregation weight according to its contribution rate, and aggregates the local model information according to these weights to obtain the global model of the current round, which is sent down to the clients for the next round of federated learning.

Description

Federal learning method and system based on client weight evaluation
Technical Field
The invention belongs to the fields of artificial intelligence and information security, and particularly relates to a federated learning method and system based on client weight evaluation.
Background
With the rise of artificial intelligence, massive data has become a precious resource for supporting machine learning and improving model training. Traditional artificial intelligence training mainly uses a centralized learning framework, in which participating users are chiefly responsible for providing data and then enjoying the resulting services. Although the centralized learning framework has advantages such as unified management and centralized control, its limitations are obvious:
(1) In reality, data from different industries mostly exists in the form of data islands. Owing to industry competition, privacy and security concerns, and complex administrative procedures, data integration often meets heavy resistance even between departments of the same company. Integrating data scattered across various institutions is therefore extremely difficult and costly.
(2) Data security and privacy have received increasing attention in recent years. In a centralized learning framework, if the third-party central server is malicious or compromised by an attacker, the private data it stores is at risk of disclosure, with harmful consequences.
(3) In the traditional centralized learning framework, model accuracy and the amount of training data are usually positively correlated. However, as the amount of training data grows, the model's demand for hardware computing power rises, the computation time consumed by model training lengthens, and the resource allocation and construction costs of the central server keep increasing. The traditional centralized learning framework is no longer the best solution for large-scale data training.
Given these three points, designing a machine learning framework that can make full use of massive data while meeting data privacy, security, and regulatory requirements has become an important subject in current artificial intelligence development. To address the problems of the traditional centralized learning framework, experts and scholars at home and abroad have proposed various possible solutions, federated learning being one of them.
Federated learning lets its participating users cooperate in training without exchanging data, while satisfying privacy protection and data security requirements, thereby improving the training effect of machine learning. Federated learning breaks down data islands so that data is fully utilized. Meanwhile, the many participants hold substantial computing resources, which federated learning can use for training at appropriate times, relieving the computing pressure on the central server.
Federated learning has many advantages over the traditional centralized machine learning framework. However, within a federated learning framework, not all clients contribute to the global model to the same degree or with the same impact. At the same time, it cannot be ruled out that some clients upload poorly performing models that hinder federated aggregation.
Disclosure of Invention
In view of the above problems in federated learning, the invention aims to provide a federated learning method and system based on client weight evaluation, which evaluate each client's degree of contribution from its non-private data and assign corresponding aggregation weights for subsequent rounds of federated learning, so as to improve the aggregation effect of federated learning.
To achieve the above object, an embodiment provides a federated learning method based on client weight evaluation, comprising the following steps:
The clients participating in federated learning establish secure communication channels with the central server and initialize the federated learning task;
Each client optimizes the parameters of its local model using its local data and then uploads the local model information of the current round to the central server, where the local model information comprises local model parameters or model gradients;
The central server evaluates each local model's contribution rate to federated learning from the local model information using an ablation hypothesis, assigns each local model an aggregation weight according to its contribution rate, and aggregates the local model information according to the aggregation weights to obtain the global model of the current round, which is sent down to the clients for the next round of federated learning.
In one embodiment, the central server evaluating the contribution rate of a local model to federated learning from the local model information using an ablation hypothesis includes:
Client m optimizes the parameters of the global model w_g, taken as its local model, using its local data D_m; the corresponding original loss function L_origin is expressed as:
L_origin = L(D_m, w_g)   (1)
Assume that client k did not participate in the previous round of federated learning, yielding the global model w_{g\k}; client m optimizes the parameters of w_{g\k}, taken as its local model, using its local data D_m, and the corresponding loss function L_ab is expressed as:
L_ab = L(D_m, w_{g\k})   (2)
The degree of influence L_com of client k on the local model of client m is obtained as the difference between the loss function L_ab under the ablation assumption and the original loss function L_origin:
L_com = L_ab - L_origin   (3)
Taking the first-order Taylor expansion of L = L(D_m, w) at w = w_g gives:
L(D_m, w) ≈ L(D_m, w_g) + ∇L(D_m, w_g)^T (w - w_g)   (4)
Substituting w = w_{g\k} into Taylor expansion (4) yields:
L_ab ≈ L_origin + ∇L(D_m, w_g)^T (w_{g\k} - w_g)   (5)
Combining formulas (3) and (5), and noting that under equal-weight aggregation w_{g\k} = (n·w_g - w_k)/(n - 1), gives:
L_com ≈ (1/(n - 1)) ∇L(D_m, w_g)^T (w_g - w_k)   (6)
where w_k represents the local model of client k and n represents the number of clients;
Analyzing formula (6): when L_com is positive, client k has a positive effect on the local model of client m; when L_com is negative, client k has a negative effect on the local model of client m; and when L_com is zero, the influence of client k on the local model of client m can be ignored;
The contribution rate of client k's local model to federated learning is determined from the mean of client k's degrees of influence on the local models of all other clients.
In one embodiment, determining the contribution rate of client k's local model to federated learning from the mean of client k's degrees of influence on all other clients' local models includes:
constructing a first mapping relation between the influence mean and the contribution rate, and determining the contribution rate corresponding to the current influence mean according to the first mapping relation, where the first mapping relation is a positive correlation: the larger the influence mean, the larger the corresponding contribution rate;
preferably, the mean of client k's degrees of influence on all other clients' local models is taken directly as the contribution rate of client k's local model to federated learning.
In one embodiment, assigning the local model's aggregation weight according to the contribution rate includes:
constructing a second mapping relation between the contribution rate and the aggregation weight, and determining the aggregation weight corresponding to the current contribution rate according to the second mapping relation, where the second mapping relation is a positive correlation: the larger the contribution rate, the larger the corresponding aggregation weight;
preferably, the contribution rate is used directly as the aggregation weight of the local model.
In one embodiment, aggregating the local model information according to the aggregation weights includes:
using a federated averaging aggregation algorithm, taking the weighted average of the clients' local model parameters according to the aggregation weights to obtain the global model.
In one embodiment, the method further comprises:
determining the homogeneity or heterogeneity between client k and client m according to the degree of influence of client k on the local model of client m: when the absolute value of the degree of influence is greater than a set threshold, client k and client m are considered homogeneous; otherwise, they are considered heterogeneous; and selecting clients to participate in federated learning under the guidance of homogeneity.
In one embodiment, selecting clients to participate in federated learning under the guidance of homogeneity comprises:
setting different threshold levels, grading homogeneity into different levels according to these thresholds, dividing the clients into sets corresponding to the different homogeneity levels, and selecting n clients from the set of each homogeneity level as representatives to participate in federated learning, with the remaining unselected clients not participating; preferably, n is at most 1/3 of the total number of clients in the set.
In one embodiment, when clients are selected as representatives to participate in federated learning, the number of clients selected increases in turn as the homogeneity level decreases.
In one embodiment, initializing the federated learning includes:
negotiating the model structure, gradient descent method, model learning rate, number of clients participating in federated learning, number of local training epochs per round on each client, and total number of communication rounds.
To achieve the above object, an embodiment of the invention further provides a federated learning system based on client weight evaluation, comprising clients and a central server that participate in federated learning, where the clients and the central server perform federated learning using the above federated learning method based on client weight evaluation.
Compared with the prior art, the beneficial effects of the invention include at least the following:
Each client's degree of influence on and contribution rate to federated learning are evaluated by an ablation hypothesis experiment from non-private information such as the local model information, so that the degree of influence and the contribution rate can be obtained while the local data is protected from disclosure;
The ablation hypothesis experiment refines the influence evaluation mechanism, so that the degree of influence of one specific client on another can be evaluated; on this basis, the homogeneity and heterogeneity between different clients are further explored, and clients are screened for participation in federated learning according to homogeneity, which can reduce the communication burden of federated learning;
Model aggregation is guided by aggregation weights determined from contribution rates, which raises the influence share of clients that benefit federated learning and reduces, as far as possible, the influence of clients whose training harms federated learning.
Drawings
To more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings required in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present invention, and a person skilled in the art can obtain other drawings from them without inventive effort.
FIG. 1 is a flowchart of the federated learning method based on client weight evaluation provided by an embodiment;
FIG. 2 is a framework diagram of the federated learning method based on client weight evaluation provided by an embodiment.
Detailed Description
The present invention is described in further detail below with reference to the drawings and embodiments, in order to make its objects, technical solutions, and advantages clearer. It should be understood that the detailed description is presented by way of example only and is not intended to limit the scope of the invention.
Within a federated learning framework, because clients differ in training data distribution, dataset size, and other conditions, the local models they train contribute to and influence the federated learning framework to different degrees. The embodiment therefore provides a federated learning method and system based on client weight evaluation, which assigns weights to clients according to their degree of contribution and gain to federated learning, so as to improve learning efficiency.
FIG. 1 is a flowchart of the federated learning method based on client weight evaluation provided by an embodiment, and FIG. 2 is its framework diagram. As shown in FIGS. 1 and 2, the embodiment provides a federated learning method based on client weight evaluation, comprising the following steps:
Step 1: the clients participating in federated learning establish secure communication channels with the central server, and federated learning is initialized.
In the embodiment, the clients participating in federated learning have different local data and hardware conditions and can train a local model. The central server, as the leader of federated learning, initiates federated learning tasks and initializes federated learning.
Before federated learning begins, the participating clients establish secure communication channels with the central server to guarantee the data communication of subsequent federated learning. Initialization of federated learning includes negotiating the model structure and the federated learning parameters, which include the gradient descent method, the model learning rate, the number of clients participating in federated learning, the number of local epochs per round of client training, the batch size of each round of client training, the total number of communication rounds (global epochs, i.e., the global communication ending condition), and so on. The initialized model structure is issued to each participating client for federated learning.
In a specific embodiment, the network parameters are updated using stochastic gradient descent (SGD), with an initial learning rate of 0.00001. The local epochs of each participating client are set to 20 and the training batch size is set to 64.
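As a minimal sketch of what this initialization and one client's local update could look like in Python with PyTorch, assuming the configuration keys, the value of global_epochs, and the model/dataset objects, none of which are prescribed by the patent:

```python
import torch
from torch import nn, optim
from torch.utils.data import DataLoader

# Hypothetical negotiated configuration (key names are illustrative).
config = {
    "optimizer": "sgd",
    "learning_rate": 1e-5,   # initial learning rate from the embodiment
    "local_epochs": 20,      # local epochs per communication round
    "batch_size": 64,        # training batch size
    "global_epochs": 100,    # total communication rounds (assumed value)
}

def local_update(model: nn.Module, dataset, config) -> dict:
    """Run one round of local training and return the updated parameters."""
    loader = DataLoader(dataset, batch_size=config["batch_size"], shuffle=True)
    opt = optim.SGD(model.parameters(), lr=config["learning_rate"])
    loss_fn = nn.CrossEntropyLoss()
    model.train()
    for _ in range(config["local_epochs"]):
        for x, y in loader:
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()
    return {k: v.detach().clone() for k, v in model.state_dict().items()}
```

Here local_update corresponds to one round of step 2 below: the client trains for the negotiated number of local epochs and returns its updated parameters for upload.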
In an embodiment, candidate model structures include LeNet, MLP, VGG-9, VGG-11, ResNet-18, and the like.
Step 2: after optimizing the parameters of its local model using its local data, each client uploads the local model information of the current round to the central server, where the local model information comprises local model parameters and/or model gradients.
Before optimizing the local model parameters on the local data, the client needs to preprocess the data, including operations such as data augmentation and normalization, to improve the quality of the data participating in federated learning training. For image data, the main augmentation methods include image rotation, image cropping, image scaling, image flipping, and so on.
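As an illustration, such a preprocessing pipeline for image data might be assembled with torchvision; the particular transforms and parameter values below are assumptions, not prescribed by the patent:

```python
from torchvision import transforms

# Illustrative augmentation + normalization pipeline for image data.
preprocess = transforms.Compose([
    transforms.RandomRotation(degrees=15),               # image rotation
    transforms.RandomResizedCrop(32, scale=(0.8, 1.0)),  # cropping + scaling
    transforms.RandomHorizontalFlip(),                   # flipping
    transforms.ToTensor(),
    transforms.Normalize(mean=(0.5, 0.5, 0.5),           # normalization
                         std=(0.5, 0.5, 0.5)),
])
```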
The client optimizes the local model parameters on the preprocessed local data and uploads the local model information after the current round of parameter optimization to the central server, where it is aggregated into the global model. In the embodiment, the uploaded local model information is either the local model parameters or the model gradient changes; either form can be combined with the global model to compute the corresponding degree of influence L_com described in step 3.
Step 3: the central server evaluates each local model's contribution rate to federated learning from the local model information using an ablation hypothesis, assigns each local model an aggregation weight according to its contribution rate, and aggregates the local model information according to the aggregation weights to obtain the global model of the current round.
In the embodiment, evaluating the contribution rate of a local model to federated learning from the local model information using an ablation hypothesis experiment includes:
Client m optimizes the parameters of the global model w_g, taken as its local model, using its local data D_m; the corresponding original loss function L_origin is expressed as:
L_origin = L(D_m, w_g)   (1)
Assume that client k did not participate in the previous round of federated learning, yielding the global model w_{g\k}; client m optimizes the parameters of w_{g\k}, taken as its local model, using its local data D_m, and the corresponding loss function L_ab is expressed as:
L_ab = L(D_m, w_{g\k})   (2)
The degree of influence L_com of client k on the local model of client m is obtained as the difference between the loss function L_ab under the ablation assumption and the original loss function L_origin:
L_com = L_ab - L_origin   (3)
Taking the first-order Taylor expansion of L = L(D_m, w) at w = w_g gives:
L(D_m, w) ≈ L(D_m, w_g) + ∇L(D_m, w_g)^T (w - w_g)   (4)
Substituting w = w_{g\k} into Taylor expansion (4) yields:
L_ab ≈ L_origin + ∇L(D_m, w_g)^T (w_{g\k} - w_g)   (5)
Combining formulas (3) and (5), and noting that under equal-weight aggregation w_{g\k} = (n·w_g - w_k)/(n - 1), gives:
L_com ≈ (1/(n - 1)) ∇L(D_m, w_g)^T (w_g - w_k)   (6)
where w_k represents the local model of client k and n represents the number of clients;
Analyzing formula (6): when L_com is positive, i.e., the loss function after removing client k is larger than the original loss function, client k has a positive effect on the local model of client m; when L_com is negative, i.e., the loss function after removing client k is smaller than the original loss function, client k has a negative effect on the local model of client m; and when L_com is zero, client k has no effect on the local model of client m;
The contribution rate of client k's local model to federated learning is determined from the mean of client k's degrees of influence on the local models of all other clients. In the embodiment, a first mapping relation between the influence mean and the contribution rate may be constructed, and the contribution rate corresponding to the current influence mean is determined according to this mapping, which is a positive correlation: the larger the influence mean, the larger the corresponding contribution rate. In one possible implementation, the mean of client k's degrees of influence on all other clients' local models is taken directly as the contribution rate of client k's local model to federated learning.
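A minimal sketch of this server-side evaluation, assuming each client's model parameters are flattened into NumPy vectors and that the server can obtain the gradient ∇L(D_m, w_g) for each client m from the uploaded model information; all function and variable names are illustrative:

```python
import numpy as np

def influence(grad_m: np.ndarray, w_g: np.ndarray, w_k: np.ndarray, n: int) -> float:
    """Degree of influence of client k on client m's local model, formula (6):
    L_com ≈ (1 / (n - 1)) * grad_m · (w_g - w_k), for n >= 2 clients."""
    return float(grad_m @ (w_g - w_k)) / (n - 1)

def contribution_rates(grads: dict, w_g: np.ndarray, w_locals: dict) -> dict:
    """Contribution rate of each client = mean influence on all other clients."""
    n = len(w_locals)
    rates = {}
    for k, w_k in w_locals.items():
        vals = [influence(grads[m], w_g, w_k, n) for m in w_locals if m != k]
        rates[k] = float(np.mean(vals))
    return rates
```

With the mean influence taken directly as the contribution rate, contribution_rates implements the preferred first mapping described above.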
After the contribution rate of each client's local model to federated learning is determined, each local model is assigned an aggregation weight according to its contribution rate. Specifically, a second mapping relation between the contribution rate and the aggregation weight is constructed, and the aggregation weight corresponding to the current contribution rate is determined according to this second mapping, which is a positive correlation: the larger the contribution rate, the larger the corresponding aggregation weight. In one possible implementation, the contribution rate is used directly as the aggregation weight of the local model.
In the embodiment, when the local model information is aggregated according to the aggregation weights, a federated averaging (FedAvg) algorithm is used: the clients' local model parameters are weighted and averaged according to the aggregation weights to obtain the global model of the current round. This emphasizes the weight share of local models that bring larger gains to federated learning and improves the training effect of federated learning.
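A sketch of this weighted federated-averaging step under contribution-based weights; clipping negative contribution rates to zero and renormalizing is an assumption here, since the patent only requires the contribution-to-weight mapping to be positively correlated:

```python
import numpy as np

def aggregate(w_locals: dict, weights: dict) -> np.ndarray:
    """Weighted FedAvg: global model = sum_k a_k * w_k with normalized weights.
    Negative contribution rates are clipped to zero (an illustrative choice)."""
    a = {k: max(w, 0.0) for k, w in weights.items()}
    total = sum(a.values()) or 1.0
    return sum((a[k] / total) * w_locals[k] for k in w_locals)
```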
Step 4: the central server sends the global model of the current round down to the clients for the next round of federated learning.
In the embodiment, in each round of federated learning, the central server sends the global model of the current round down to the clients; each client takes the global model as its current local model and performs the next round of federated learning on it using its local data.
Step 5: steps 2 to 4 are repeated until the total number of communication rounds (global epochs) is reached, at which point federated learning ends and the final global model is obtained.
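Putting steps 2 to 5 together, an orchestration loop might look as follows; this is only a sketch in which the client interface (train, loss_gradient, receive) is assumed, and contribution_rates and aggregate refer to the helpers sketched above:

```python
# Illustrative orchestration of steps 2-5 on the central server.
def run_federated_learning(initial_weights, clients, config):
    w_g = initial_weights                       # flat global parameter vector
    for _ in range(config["global_epochs"]):    # step 5: repeat until done
        # Step 2: each client trains locally and uploads its model info.
        w_locals = {c.cid: c.train(w_g, config) for c in clients}
        grads = {c.cid: c.loss_gradient(w_g) for c in clients}
        # Step 3: evaluate contributions and aggregate with those weights.
        weights = contribution_rates(grads, w_g, w_locals)
        w_g = aggregate(w_locals, weights)
        # Step 4: send the new global model down for the next round.
        for c in clients:
            c.receive(w_g)
    return w_g
```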
The federated learning method based on client weight evaluation provided in the above embodiment further includes:
determining the homogeneity or heterogeneity between client k and client m according to the degree of influence of client k on the local model of client m: when the absolute value of the degree of influence is greater than a set threshold, client k and client m are considered homogeneous; otherwise, they are considered heterogeneous; and selecting clients to participate in federated learning under the guidance of homogeneity.
Specifically, selecting clients to participate in federated learning under the guidance of homogeneity includes:
setting different threshold levels, grading homogeneity into different levels according to these thresholds, dividing the clients into sets corresponding to the different homogeneity levels, and selecting n clients from the set of each homogeneity level as representatives to participate in federated learning, with the remaining unselected clients not participating; preferably, n is at most 1/3 of the total number of clients in the set. For example, 3 thresholds are set, homogeneity is graded into 3 levels accordingly, and the homogeneous clients are divided into 3 sets, so that a small number of clients can be selected from each of the 3 sets as representatives to participate in federated learning.
In one possible embodiment, when clients are selected as representatives to participate in federated learning, the number of clients selected increases in turn as the homogeneity level decreases. Suppose there are 3 homogeneity levels: high, medium, and low. The numbers of representative clients selected from the 3 corresponding sets then increase in order, for example 1, 2, and 3 respectively. In this way, the accuracy of the global model can be guaranteed while the data communication overhead is reduced.
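An illustrative sketch of this homogeneity-guided selection, simplified to a per-client homogeneity score (the mean absolute influence on other clients); the threshold values are assumptions, and the per-level representative counts (1, 2, 3) follow the example above:

```python
def select_representatives(influences: dict,
                           thresholds=(0.3, 0.2, 0.1),
                           picks=(1, 2, 3)) -> list:
    """Group clients by homogeneity level and pick representatives per level.
    `influences` maps client id -> mean absolute influence; thresholds are
    sorted from high to low homogeneity. Clients below every threshold are
    treated as heterogeneous and not selected."""
    groups = {level: [] for level in range(len(thresholds))}
    for cid, score in influences.items():
        for level, t in enumerate(thresholds):   # level 0 = high homogeneity
            if abs(score) > t:
                groups[level].append(cid)
                break
    selected = []
    for level, members in groups.items():
        selected += sorted(members)[: picks[level]]  # 1, 2, 3 reps per level
    return selected
```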
The embodiment also provides a federated learning system based on client weight evaluation, comprising clients and a central server that participate in federated learning, where the clients and the central server perform federated learning using the federated learning method based on client weight evaluation described above.
The foregoing describes in detail preferred embodiments of the invention and their advantages. It should be understood that the description is merely illustrative of the presently preferred embodiments of the invention, and that all changes, additions, substitutions, and equivalents made within the scope of the invention are intended to be included within its protection scope.

Claims (12)

1. A federated learning method based on client weight evaluation, characterized by comprising the following steps:
The clients participating in federated learning establish secure communication channels with a central server and initialize the federated learning task;
Each client optimizes the parameters of its local model using its local data and then uploads the local model information of the current round to the central server, where the local model information comprises local model parameters or model gradients;
The central server evaluates each local model's contribution rate to federated learning from the local model information using an ablation hypothesis, assigns each local model an aggregation weight according to its contribution rate, and aggregates the local model information according to the aggregation weights to obtain the global model of the current round, which is sent down to the clients for the next round of federated learning;
wherein the central server evaluating the contribution rate of a local model to federated learning from the local model information using an ablation hypothesis comprises:
Client m optimizes the parameters of the global model w_g, taken as its local model, using its local data D_m; the corresponding original loss function L_origin is expressed as:
L_origin = L(D_m, w_g)   (1)
Assume that client k did not participate in the previous round of federated learning, yielding the global model w_{g\k}; client m optimizes the parameters of w_{g\k}, taken as its local model, using its local data D_m, and the corresponding loss function L_ab is expressed as:
L_ab = L(D_m, w_{g\k})   (2)
The degree of influence L_com of client k on the local model of client m is obtained as the difference between the loss function L_ab under the ablation assumption and the original loss function L_origin:
L_com = L_ab - L_origin   (3)
Taking the first-order Taylor expansion of L = L(D_m, w) at w = w_g gives:
L(D_m, w) ≈ L(D_m, w_g) + ∇L(D_m, w_g)^T (w - w_g)   (4)
Substituting w = w_{g\k} into Taylor expansion (4) yields:
L_ab ≈ L_origin + ∇L(D_m, w_g)^T (w_{g\k} - w_g)   (5)
Combining formulas (3) and (5), with w_{g\k} = (n·w_g - w_k)/(n - 1) under equal-weight aggregation, gives:
L_com ≈ (1/(n - 1)) ∇L(D_m, w_g)^T (w_g - w_k)   (6)
where w_k represents the local model of client k and n represents the number of clients;
Analyzing formula (6): when L_com is positive, client k has a positive effect on the local model of client m; when L_com is negative, client k has a negative effect on the local model of client m; and when L_com is zero, the influence of client k on the local model of client m can be ignored;
And the contribution rate of client k's local model to federated learning is determined from the mean of client k's degrees of influence on the local models of all other clients.
2. The federated learning method based on client weight evaluation according to claim 1, wherein determining the contribution rate of client k's local model to federated learning from the mean of client k's degrees of influence on all other clients' local models comprises:
constructing a first mapping relation between the influence mean and the contribution rate, and determining the contribution rate corresponding to the current influence mean according to the first mapping relation, wherein the first mapping relation is a positive correlation: the larger the influence mean, the larger the corresponding contribution rate.
3. The federated learning method based on client weight evaluation according to claim 2, wherein the mean of client k's degrees of influence on all other clients' local models is taken as the contribution rate of client k's local model to federated learning.
4. The federated learning method based on client weight evaluation according to claim 1, 2, or 3, wherein assigning the local model's aggregation weight according to the contribution rate comprises:
constructing a second mapping relation between the contribution rate and the aggregation weight, and determining the aggregation weight corresponding to the current contribution rate according to the second mapping relation, wherein the second mapping relation is a positive correlation: the larger the contribution rate, the larger the corresponding aggregation weight.
5. The federated learning method based on client weight evaluation according to claim 4, wherein the contribution rate is used as the aggregation weight of the local model.
6. The federated learning method based on client weight evaluation according to claim 1, wherein aggregating the local model information according to the aggregation weights comprises:
using a federated averaging aggregation algorithm, taking the weighted average of the clients' local model parameters according to the aggregation weights to obtain the global model.
7. The federated learning method based on client weight evaluation according to claim 2, further comprising:
determining the homogeneity or heterogeneity between client k and client m according to the degree of influence of client k on the local model of client m: when the absolute value of the degree of influence is greater than a set threshold, client k and client m are considered homogeneous; otherwise, client k and client m are considered heterogeneous; and selecting clients to participate in federated learning under the guidance of homogeneity.
8. The federated learning method based on client weight evaluation according to claim 7, wherein selecting clients to participate in federated learning under the guidance of homogeneity comprises:
setting different threshold levels, grading homogeneity into different levels according to the thresholds, dividing the clients into sets corresponding to the different homogeneity levels, and selecting n clients from the set of each homogeneity level as representatives to participate in federated learning, with the remaining unselected clients not participating in federated learning.
9. The federated learning method based on client weight evaluation according to claim 8, wherein n is at most 1/3 of the total number of clients in the set.
10. The federated learning method based on client weight evaluation according to claim 8 or 9, wherein, when clients are selected as representatives to participate in federated learning, the number of clients selected increases in turn as the homogeneity level decreases.
11. The federated learning method based on client weight evaluation according to claim 1, wherein initializing the federated learning comprises:
negotiating the model structure, gradient descent method, model learning rate, number of clients participating in federated learning, number of local training epochs per round on each client, and total number of communication rounds.
12. A federated learning system based on client weight evaluation, comprising clients and a central server that participate in federated learning, characterized in that the clients and the central server perform federated learning using the federated learning method based on client weight evaluation according to any one of claims 1 to 11.
CN202210186854.8A 2022-02-28 2022-02-28 Federal learning method and system based on client weight evaluation Active CN114564746B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210186854.8A CN114564746B (en) 2022-02-28 2022-02-28 Federal learning method and system based on client weight evaluation

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210186854.8A CN114564746B (en) 2022-02-28 2022-02-28 Federal learning method and system based on client weight evaluation

Publications (2)

Publication Number Publication Date
CN114564746A CN114564746A (en) 2022-05-31
CN114564746B true CN114564746B (en) 2024-05-14

Family

ID=81716586

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210186854.8A Active CN114564746B (en) 2022-02-28 2022-02-28 Federal learning method and system based on client weight evaluation

Country Status (1)

Country Link
CN (1) CN114564746B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117675596A (en) * 2022-08-09 2024-03-08 华为技术有限公司 Data analysis method and device
CN116306910B (en) * 2022-09-07 2023-10-03 北京交通大学 Fair privacy calculation method based on federal node contribution
CN115423208A (en) * 2022-09-27 2022-12-02 深圳先进技术研究院 Electronic insurance value prediction method and device based on privacy calculation
CN117131951A (en) * 2023-02-16 2023-11-28 荣耀终端有限公司 Federal learning method and electronic equipment

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111340453A (en) * 2020-02-28 2020-06-26 深圳前海微众银行股份有限公司 Federal learning development method, device, equipment and storage medium
CN112506753A (en) * 2020-12-14 2021-03-16 德清阿尔法创新研究院 Efficient contribution evaluation method in federated learning scene
CN112949837A (en) * 2021-04-13 2021-06-11 中国人民武装警察部队警官学院 Target recognition federal deep learning method based on trusted network
CN112966763A (en) * 2021-03-17 2021-06-15 北京邮电大学 Training method and device for classification model, electronic equipment and storage medium
WO2021152329A1 (en) * 2020-01-30 2021-08-05 Vision Semantics Limited De-centralised learning for re-identification
CN113222179A (en) * 2021-03-18 2021-08-06 北京邮电大学 Federal learning model compression method based on model sparsification and weight quantization
CN113435604A (en) * 2021-06-16 2021-09-24 清华大学 Method and device for optimizing federated learning
CN113947156A (en) * 2021-10-22 2022-01-18 河南大学 Health crowd-sourcing perception system and federal learning method for cost optimization thereof
CN114091356A (en) * 2022-01-18 2022-02-25 北京邮电大学 Method and device for federated learning

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10810469B2 (en) * 2018-05-09 2020-10-20 Adobe Inc. Extracting material properties from a single image

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2021152329A1 (en) * 2020-01-30 2021-08-05 Vision Semantics Limited De-centralised learning for re-identification
CN111340453A (en) * 2020-02-28 2020-06-26 深圳前海微众银行股份有限公司 Federal learning development method, device, equipment and storage medium
CN112506753A (en) * 2020-12-14 2021-03-16 德清阿尔法创新研究院 Efficient contribution evaluation method in federated learning scene
CN112966763A (en) * 2021-03-17 2021-06-15 北京邮电大学 Training method and device for classification model, electronic equipment and storage medium
CN113222179A (en) * 2021-03-18 2021-08-06 北京邮电大学 Federal learning model compression method based on model sparsification and weight quantization
CN112949837A (en) * 2021-04-13 2021-06-11 中国人民武装警察部队警官学院 Target recognition federal deep learning method based on trusted network
CN113435604A (en) * 2021-06-16 2021-09-24 清华大学 Method and device for optimizing federated learning
CN113947156A (en) * 2021-10-22 2022-01-18 河南大学 Health crowd-sourcing perception system and federal learning method for cost optimization thereof
CN114091356A (en) * 2022-01-18 2022-02-25 北京邮电大学 Method and device for federated learning

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Jaehong Yoon et al., "Federated Continual Learning with Weighted Inter-client Transfer," Proceedings of Machine Learning Research, 2021, vol. 139, pp. 12073-12086. *
Zhou Ziqin et al., "Limited-sample multi-view 3D shape recognition algorithm based on multi-task learning," Computer Science, 2020, vol. 47, no. 4, pp. 125-130. *
Yin Xiang, "Research on federated learning algorithms based on convolutional neural networks," China Masters' Theses Full-text Database, Information Science and Technology, 2022-01-15, no. 1, I140-425. *

Also Published As

Publication number Publication date
CN114564746A (en) 2022-05-31

Similar Documents

Publication Publication Date Title
CN114564746B (en) Federal learning method and system based on client weight evaluation
CN113762530B (en) Precision feedback federal learning method for privacy protection
CN111629380B (en) Dynamic resource allocation method for high concurrency multi-service industrial 5G network
WO2021128805A1 (en) Wireless network resource allocation method employing generative adversarial reinforcement learning
CN113112027A (en) Federal learning method based on dynamic adjustment model aggregation weight
CN110022531B (en) Localized differential privacy urban garbage data report and privacy calculation method
CN109214119A (en) Bridge Earthquake Resistance Design method based on response surface model
CN116681144A (en) Federal learning model aggregation method based on dynamic self-adaptive knowledge distillation
CN115761378B (en) Power inspection image classification and detection method and system based on federal learning
CN114580662A (en) Federal learning method and system based on anchor point aggregation
CN115081532A (en) Federal continuous learning training method based on memory replay and differential privacy
CN114462509A (en) Distributed Internet of things equipment anomaly detection method
CN113691594A (en) Method for solving data imbalance problem in federal learning based on second derivative
CN115374479A (en) Federal learning privacy protection method under non-independent same distributed data scene
CN115879542A (en) Federal learning method oriented to non-independent same-distribution heterogeneous data
CN116187469A (en) Client member reasoning attack method based on federal distillation learning framework
CN117171814B (en) Federal learning model integrity verification method, system, equipment and medium based on differential privacy
CN101986608B (en) Method for evaluating heterogeneous overlay network load balance degree
Han et al. Adaptive Batch Homomorphic Encryption for Joint Federated Learning in Cross-Device Scenarios
Singhal et al. Greedy Shapley Client Selection for Communication-Efficient Federated Learning
CN116227547A (en) Federal learning model optimization method and device based on self-adaptive differential privacy
CN115002031B (en) Federal learning network flow classification model training method, model and classification method based on unbalanced data distribution
CN115118591B (en) Cluster federation learning method based on alliance game
CN115129888A (en) Active content caching method based on network edge knowledge graph
CN115659212B (en) Federal learning efficiency evaluation method based on TDD communication under cross-domain heterogeneous scene

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant