CN115640305A - Fair and trusted federated learning method based on blockchain - Google Patents


Info

Publication number: CN115640305A (granted as CN115640305B)
Application number: CN202211651581.6A
Authority: CN (China)
Prior art keywords: model, miners, local, model parameters, client
Legal status: Granted; Active
Other languages: Chinese (zh)
Inventors: 古天龙, 王梦圆, 李龙, 李晶晶, 郝峰锐
Original and current assignee: Jinan University
Application filed by Jinan University

Landscapes

  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The invention discloses a blockchain-based fair and trusted federated learning method, comprising the following steps: a model demander publishes a training task on the blockchain, and a trading contract transmits the task; each client trains the global model to generate local model parameters, which are encrypted and transmitted to the corresponding miner on the blockchain; the corresponding miners propagate and verify the encrypted local model parameters; the corresponding miners aggregate the verified local model parameters, update the global model according to the aggregation result, generate a new block based on the update result, and broadcast the new block; all miners verify the new block and reach consensus; based on the consensus result, the incentive contract computes each client's contribution and generates the latest global model; steps S2-S6 are repeated until the training end condition is met, yielding the optimized model; finally, the trading contract delivers the optimized global model to the model demander.

Description

Fair and trusted federated learning method based on blockchain
Technical Field
The invention relates to the technical field of mobile communication, and in particular to a blockchain-based fair and trusted federated learning method.
Background
Federated learning was first proposed by Google in 2016 to address the privacy-disclosure problem that arises when a server collects data to train models. It is essentially a distributed machine learning framework that can complete global model training without directly acquiring client data, thereby effectively protecting data privacy. Moreover, because client data need not be transmitted to the server, the communication overhead of data transfer is reduced. Federated learning enables the design and training of machine learning models and algorithms across institutions and departments, and has therefore received widespread attention and been applied in fields such as industrial manufacturing, healthcare, and product sales. However, conventional federated learning also has obvious limitations, such as the single-point-of-failure problem of the server, an untraceable training process, and inefficient incentive mechanisms.
Blockchain, as a data-sharing technology, has the advantages of decentralization, tamper-resistance, and traceability, and has attracted wide attention in recent years. It has been used to improve the implementation of federated learning, yielding blockchain-based federated learning (BCFL). In the BCFL framework, the decentralized architecture of the blockchain eliminates traditional federated learning's dependence on a central server and realizes decentralized model aggregation. In addition, the distributed ledger ensures that records of the federated learning training process cannot be tampered with or repudiated, and its traceability can be used to track and audit clients' data, improving the trustworthiness of the system and of the clients. Furthermore, a blockchain incentive mechanism can reward clients according to their performance during training (e.g., the amount of data provided, local model performance, and communication delay) through payments of digital currency, reputation improvements, or access to a better-performing global model, encouraging them to participate actively in training and to provide more training data and better local models, thereby improving the performance of the global model.
As a new paradigm of machine learning, BCFL has typical advantages but also unsolved problems. From the standpoint of trusted AI, machine learning needs to satisfy requirements such as privacy, accountability, and fairness. BCFL provides privacy and traceability, but it does not guarantee attribute fairness: the global model obtained by BCFL training may exhibit bias or discrimination when applied to decisions or predictions in various domains. The prior art alleviates such bias by using the sensitive attribute values of all users, but the aim of BCFL is to protect privacy precisely by not granting access to users' data; moreover, characteristics such as cross-device deployment and data heterogeneity in various application fields make it very challenging to ensure the attribute fairness of a model trained by BCFL. Nevertheless, to strengthen the trust of society and the public in BCFL and to deepen and broaden the application of AI in people's work and life, it is necessary to enhance the attribute fairness of the global model while protecting the privacy of users' local data.
Disclosure of Invention
In order to solve the lack of attribute fairness of the global model in the prior art, the invention provides a blockchain-based fair and trusted federated learning method, which not only ensures the privacy of clients' local data and the traceability of training, but also significantly enhances the attribute fairness of the trained model obtained by the model demander when it is used for decision making.
In order to achieve this technical purpose, the invention provides the following technical scheme: a blockchain-based fair and trusted federated learning method, comprising the following steps:
S1, a model demander publishes a training task on the blockchain, and a trading contract transmits the task, wherein the trading contract includes an initial global model and a training end condition;
S2, each client trains the global model to generate local model parameters, encrypts them, and transmits them to the corresponding miner on the blockchain, wherein the global model is either the initial global model or the latest global model: the first round of training uses the initial global model, and subsequent rounds use the corresponding latest global model;
S3, the corresponding miners propagate and verify the encrypted local model parameters;
S4, the corresponding miners aggregate the verified local model parameters, update the corresponding global model according to the aggregation result, generate a new block based on the update result, and broadcast the new block;
S5, all miners verify the new block and reach consensus;
S6, based on the consensus result, the incentive contract computes each client's contribution and generates the latest global model;
S7, S2-S6 are repeated until the training end condition is met, yielding the optimized model;
S8, the trading contract transmits the optimized model to the model demander.
Optionally, the trading contract further includes a learning rate and an optimizer.
Optionally, the process of the client training the global model includes:
at the client, obtaining the local data and the global model, and using the local data to train the global model through a fairness sampling algorithm and a fairness constraint function, generating the local model parameters.
Optionally, the process of the corresponding miners propagating and verifying the encrypted local model parameters includes:
the corresponding miner broadcasts the encrypted local model parameters to all miners through a gossip protocol; the miners decrypt and verify the encrypted local model parameters, checking whether their signatures and components are valid; if so, the local model parameters pass verification, and the verified local model parameters are added to the corresponding transaction pool.
Optionally, the process of the corresponding miners aggregating the verified local model parameters includes:
when the number of verified local model parameters in the corresponding miner's transaction pool reaches a threshold, performing model aggregation on the local model parameters in that transaction pool through a dual-weight-based aggregation mechanism, wherein the aggregation mechanism adaptively adjusts the aggregation weight of each set of local model parameters according to its attribute fairness metric and its degree of delay.
Optionally, the process of all miners verifying the new block and reaching consensus includes:
the corresponding miner broadcasts the new block to all miners; the miners receiving the new block check its information and reach consensus by voting, wherein the information of the new block includes: the signature of the new block, the data of the local model parameters, the verification status of the local model parameters, and the aggregation status of the global model.
Optionally, the process of the incentive contract computing the clients' contributions and generating the latest global model includes:
after consensus is reached, the incentive contract is triggered; the incentive contract obtains the performance of each client's local model and the miners' scores for that client, computes the client's contribution in the current round of training from the performance and the scores, generates the latest global model according to the contribution results, and distributes the latest global model to the clients.
Optionally, after the trading contract transmits the optimized model to the model demander, the method further includes:
the model demander, through the trading contract, issues monetary rewards to different clients according to the clients' contributions.
The invention has the following technical effects:
1. The method takes the attribute fairness index into account in multiple stages, such as model aggregation and client contribution evaluation, improving the attribute fairness of the model obtained by the model demander. The dual-weight-based aggregation mechanism also considers the staleness of the local models, improving the aggregation efficiency of the global model. The multidimensional evaluation index, which combines a self-report-based method and an evaluation-based method, measures client contributions objectively and comprehensively. The incentive mechanism provides two kinds of rewards, currency and models, meeting the needs of different clients.
2. The invention constructs a blockchain-based fair and trusted federated learning system, solving the problems that federated learning depends on a central server (causing single points of failure) and that lack of trust makes clients unwilling to participate in training. It can significantly improve the attribute fairness of the global model in decision making while ensuring model accuracy, and has good aggregation efficiency.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings required in the embodiments will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art that other drawings can be obtained according to these drawings without creative efforts.
FIG. 1 is a schematic diagram of an overall structure provided by an embodiment of the present invention;
fig. 2 is a schematic data structure diagram of a distributed ledger provided by an embodiment of the present invention;
fig. 3 is a schematic diagram of an overall work flow provided by the embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The invention discloses a blockchain-based fair and trusted federated learning method, relating to the technical field of mobile communication. The method includes the following steps: the model demander publishes a training task, and a trading contract transmits the task; each client trains a local model locally and sends the encrypted local model parameters to the blockchain miners; the miners propagate and verify the local model parameters; a miner aggregates the local model parameters, generates a new block, and broadcasts it to the other miners; the other miners verify the block and reach consensus; the incentive contract computes the clients' contributions and distributes model rewards; the above steps are repeated until the training end condition is met; the trading contract returns the model to the model demander and distributes monetary rewards to the clients. By means of the blockchain, the method realizes a decentralized and traceable training process, guaranteeing the privacy, accountability, and stability of trusted AI; meanwhile, through the clients' local training and the blockchain's aggregation and incentives, the system attains attribute fairness, finally realizing fair and trusted federated learning.
In order to achieve this technical purpose, the invention discloses a blockchain-based fair and trusted federated learning method, comprising the following steps:
Step 1: The model demander publishes a training task, and a trading contract transmits the task. The trading contract specifies the important matters of the federated learning training, such as the initial model, learning rate, optimizer, and training end condition. If the content provided by the model demander is incomplete, the task is not initiated, i.e., the trading contract execution fails.
Step 2: The client trains locally to generate a local model and sends the encrypted local model parameters to the blockchain miners. The client trains its own local model using local data and the latest global model, and improves the attribute fairness of the local model through a fairness sampling algorithm and a fairness constraint function.
Step 3: The miners propagate and verify the local model parameters. After receiving local model parameters, a miner diffuses them among the miners using a gossip protocol; the miners decrypt and verify the local model parameters, checking whether their signatures, components, and so on are valid. If the local model parameters are valid, they are put into the miner's own transaction pool.
Step 4: A miner aggregates the local model parameters, generates a new block, and broadcasts it to the other miners. When the miner's transaction pool contains a certain number of local model parameters, the miner performs model aggregation using a dual-weight-based aggregation mechanism. The aggregation mechanism adaptively adjusts the aggregation weight of each set of local model parameters according to its attribute fairness metric and its degree of delay.
Step 5: The other miners verify the block and reach consensus on it. After receiving the new block, a miner verifies it, checking the block's signature, the number of local model parameters, the verification status of the local model parameters, and the aggregation status of the global model. The miners reach consensus by voting, i.e., each miner votes for a block generator after verifying that its block is valid, and then stops verifying the same batch of blocks. The miner with the most votes obtains the bookkeeping right, and the other miners store the block in their distributed ledgers.
Step 6: The incentive contract computes the clients' contributions and distributes model rewards. The incentive contract collects the performance of the clients' local models and the miners' reputation scores for the clients, computes each client's contribution in the current round of training, and distributes correspondingly sparsified global models to the clients according to the results.
Step 7: Steps 2-6 are repeated until the training end condition is met.
Step 8: The trading contract returns the model to the model demander and distributes monetary rewards to the clients. The trading contract downloads the latest global model, returns it to the model demander, and distributes monetary rewards according to each client's total contribution over the whole training process.
The above will now be explained in detail:
fig. 1 is a schematic diagram of the overall structure of the blockchain-based fair and trusted federated learning method in this embodiment. As shown in the figure, the system mainly comprises entities such as the model demander, the clients, and the blockchain network. Since the system mainly uses the miners, smart contracts, and distributed ledger in the blockchain network to complete model training, these are introduced in turn:
1. The model demander is the publisher of the federated learning task, i.e., it proposes jointly building a machine learning model for prediction or decision making. The model demander also needs to provide the monetary reward.
2. The clients are the trainers of the local models in federated learning. A client's main work is to train a local model, based on the received global model, using the training data it has collected and stored, and to send the updated local model parameters to the blockchain network for global model aggregation.
3. The miners are the maintainers of the blockchain, responsible for receiving and verifying local model parameters, performing model aggregation, and generating new blocks that record the global model. A new block is appended to the chain after all miners verify it and reach consensus.
4. Smart contracts are described in a computer language and can execute automatically according to preset trigger conditions, without a trusted third party. The system mainly designs two smart contracts: the trading contract is used to publish the federated learning task, return the global model, and distribute monetary rewards; the incentive contract is used to compute each client's contribution in the federated learning and to distribute model rewards according to the results.
5. The distributed ledger is used to store blocks; its content is shared and synchronized by all miners, and its data structure is shown in fig. 2. A block consists of a block header and a block body. The block header includes the block's number, the global model, the hash value of the previous block, and the ID and signature of the miner who mined the block. The block body contains the verified local model parameters used for aggregating the global model. To prevent the local model parameters from being tampered with, each set of local model parameters carries, in addition to the client's signature, the signature of the miner who verified it, i.e., double protection.
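The block layout in fig. 2 can be sketched as a simple data structure. This is an illustrative sketch only; the field and class names (`SignedLocalUpdate`, `Block`, `header_hash`) are our assumptions, not identifiers from the patent:

```python
import hashlib
import json
from dataclasses import dataclass, field

@dataclass
class SignedLocalUpdate:
    """A verified local update carries two signatures: the training client's
    and the verifying miner's (the 'double protection' described above)."""
    client_id: str
    params: list
    client_signature: str
    miner_signature: str

@dataclass
class Block:
    """Header fields follow fig. 2: block number, global model, previous
    block's hash, and the mining miner's ID and signature; the body holds
    the verified local updates used for aggregation."""
    number: int
    global_model: list
    prev_hash: str
    miner_id: str
    miner_signature: str
    body: list = field(default_factory=list)

    def header_hash(self) -> str:
        # Deterministic hash over the header fields, used as prev_hash
        # by the next block in the chain.
        header = json.dumps(
            [self.number, self.global_model, self.prev_hash, self.miner_id],
            sort_keys=True).encode()
        return hashlib.sha256(header).hexdigest()
```

Linking each block to its predecessor via `header_hash` is what makes the recorded training history tamper-evident.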
Fig. 3 is the overall workflow diagram of the blockchain-based fair and trusted federated learning method of this embodiment. The flow of this embodiment is as follows:
S1: The model demander triggers the trading contract and proposes a federated learning model training task according to the contract's specification, which stipulates the important contents of the training, such as the initial model, learning rate, optimizer, and training end condition. If the content provided by the model demander is incomplete, the task is not initiated, i.e., the trading contract execution fails. If the task is initiated successfully, the trading contract automatically delivers the training task to all online clients and miners.
S2: After receiving the task, the client trains with its local data and the latest global model (i.e., the global model with client-specific compression from step S6; the first round of training uses the initial model) to obtain a local model, signs the local model parameters with its private key, and sends them to the blockchain miners:
S2.1: After obtaining the latest global model, the client tests it with local data and dynamically adjusts the sampling frequency of each sensitive-attribute group according to the global model's performance in terms of attribute fairness. By increasing the sampling frequency of sensitive-attribute groups with poor test results and reducing that of groups with good results, the numbers of samples of the sensitive-attribute groups are balanced, improving the attribute fairness of the local model.
S2.2: the client trains the local model by means of a fairness constraint function as shown in equation (1). The function is composed of two parts, wherein the first part is used for measuring the difference degree between the predicted value and the actual value of the local model on the whole data, and the second part is used for measuring the difference degree between the predicted value and the actual value of the local model on each attribute group data.
Figure 606854DEST_PATH_IMAGE001
wherein ,
Figure 930519DEST_PATH_IMAGE002
as a function of the loss at the client,
Figure 585622DEST_PATH_IMAGE003
is as followsqThe authenticity of the label of the individual specimen,
Figure 793881DEST_PATH_IMAGE004
is as followsqThe characteristics of the individual samples are such that,
Figure 936149DEST_PATH_IMAGE005
is to
Figure 774923DEST_PATH_IMAGE004
The predicted value of (a) is obtained,
Figure 448481DEST_PATH_IMAGE006
in order to train the total number of samples,
Figure 975277DEST_PATH_IMAGE007
represents the cross entropy loss between the true and predicted values of the overall data, [ Y [ ]]A value set is taken for the real label,
Figure 457205DEST_PATH_IMAGE008
is as followsqThe sensitivity of the individual samples is given by their properties,
Figure 716148DEST_PATH_IMAGE009
represents the average of the sensitivity attributes of the population of samples,
Figure 549106DEST_PATH_IMAGE010
when the predicted value isyWhen the temperature of the water is higher than the set temperature,
Figure 692643DEST_PATH_IMAGE005
is compared to the average prediction probability.
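A minimal sketch of a fairness-constrained loss in the spirit of equation (1): cross-entropy over all samples plus a covariance-style penalty between the sensitive attribute and the prediction probability. The penalty form and the trade-off coefficient `lam` are our assumptions, since the patent's exact equation is shown only as an image:

```python
import math

def fair_loss(y_true, p_pred, sensitive, lam=1.0):
    """Binary cross-entropy over all samples plus |cov(a, p)|, which grows
    when predictions correlate with the sensitive attribute a."""
    q = len(y_true)
    ce = -sum(y * math.log(p) + (1 - y) * math.log(1 - p)
              for y, p in zip(y_true, p_pred)) / q
    a_bar = sum(sensitive) / q
    p_bar = sum(p_pred) / q
    # Empirical covariance between sensitive attribute and prediction prob.
    cov = sum((a - a_bar) * (p - p_bar)
              for a, p in zip(sensitive, p_pred)) / q
    return ce + lam * abs(cov)
```

With this form, a model whose confident predictions track the sensitive attribute pays a higher loss than one whose errors are spread evenly across groups.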
S3: after receiving the local model parameters, miners diffuse the local model parameters among the miners by using the eight diagrams protocol, decrypt and verify the local model parameters, and check whether signatures, components and the like of the local model parameters are effective. If the local model parameters are valid, the local model parameters are put into the own trading pool.
S4: and when the transaction pool of the miners contains a certain amount of local model parameters, the miners adopt an aggregation mechanism to perform model aggregation to obtain a global model. Further, miners use a private key to sign the local model parameters used for aggregation and the obtained global model, pack the local model parameters to generate a new block, and broadcast the new block to other miners:
s4.1: when the miners' trading pool containsLAnd when the local model parameters are acquired, the miners calculate the fairness weight related to the fairness of the local model parameters according to the formula (2), and calculate the time weight related to the training delay degree according to the formula (3).
Figure 481607DEST_PATH_IMAGE011
wherein ,
Figure 583556DEST_PATH_IMAGE012
representing a clientiIn the first placetThe fair weight of the wheel is given to,
Figure 107072DEST_PATH_IMAGE013
is a clientiIn the first placetThe local model obtained by the round of training,
Figure 523141DEST_PATH_IMAGE014
is a local model
Figure 228929DEST_PATH_IMAGE015
The accuracy of the process is improved by the accuracy of the process,
Figure 642724DEST_PATH_IMAGE016
is a local model
Figure 433962DEST_PATH_IMAGE015
Is fair.
[Equation (3) appears only as an image in the original.] In equation (3), the time weight of client i in round t is computed from the model reward obtained by client i in round t-1, the time at which client i received that reward, and the upload time of client i's local model in round t. S(·) is the time-weight function, which can be computed by equation (4).
[Equation (4) appears only as an image in the original.] In equation (4), b > 0 controls the degree of difference in time weight among different clients.
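One plausible concrete choice for the time-weight function S(·) of equation (4) is exponential decay in the client's delay, with b > 0 controlling how sharply weights differ between fast and slow clients. This specific form is our assumption; the patent shows the function only as an image:

```python
import math

def time_weight(t_received: float, t_uploaded: float, b: float = 0.5) -> float:
    """Weight a local update by how promptly the client uploaded it after
    receiving its previous model reward: longer delay, smaller weight."""
    delay = t_uploaded - t_received
    return math.exp(-b * delay)
```

Stale updates thus contribute less to aggregation, which matches the stated goal of down-weighting delayed local models.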
S4.2: the miners polymerized according to formula (5)LAnd obtaining the latest local model aggregation according to the local model parameters.
Figure 940169DEST_PATH_IMAGE024
wherein ,
Figure 115936DEST_PATH_IMAGE025
representing the minersjIn the first placetAnd polymerizing the obtained local models.
S4.3: the miners update the global model by means of equation (6).
Figure 854216DEST_PATH_IMAGE026
wherein
Figure 140841DEST_PATH_IMAGE027
Representing the minersjIn the first placetThe global model obtained in the turn is obtained,α t the average delay parameter can be obtained from equation (7).
Figure 280966DEST_PATH_IMAGE028
This formula relies on mixing hyper-parametersαControlling the influence of the average delay degree on the model aggregation weight,α∈(0, 1)。
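A convex-combination reading of the update in equation (6) can be sketched as follows; the exact blending rule is shown only as an image, so this form is an assumption:

```python
def update_global(prev_global, aggregate, alpha_t):
    """Blend the miner's fresh local-model aggregate into the previous
    global model; alpha_t (the average-delay parameter) decides how far
    the global model moves toward the new aggregate."""
    return [(1 - alpha_t) * g + alpha_t * a
            for g, a in zip(prev_global, aggregate)]
```

A small alpha_t (many stale updates in the round) keeps the global model close to its previous state, while a large alpha_t lets fresh aggregates dominate.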
s5: after receiving the new block, other miners verify the block, and check the signature of the block, the number of local model parameters, the verification condition of the local model parameters and the aggregation condition of the global model. Miners agree by voting, i.e., vote a block generator after verifying that a block is valid, and terminate the verification of the same batch of blocks. The miners who obtain the highest ticket number obtain the accounting right, and other miners store the blocks in the distributed account books of the miners.
S6: after the new block is confirmed, an incentive contract is triggered, collects the performance of the client local model and the reputation score of miners to the client, calculates the contribution of each client in the training process, and distributes a corresponding sparse global model to the clients according to the calculation result:
s6.1: aiming at the multidimensional property of the contribution of the client, the incentive contract combines a self-report-based method and an evaluation-based method, and the contribution of the client is evaluated through a multidimensional evaluation index shown in a formula (8).
[Equation (8) appears only as an image in the original.] In equation (8), the contribution of client i in round t combines: the contribution value of client i in round t under the self-report-based method, computed by equation (9), together with its normalized value, computed by equation (10); and the contribution value of client i in round t under the evaluation-based method, computed by equation (11), together with its normalized value, computed by equation (12).
[Equations (9) and (10) appear only as images in the original.] In them, N is the number of clients, max{·} and min{·} return the maximum and minimum of the elements in a set, and CT_t denotes the set of clients participating in round t of the training process.
[Equation (11) appears only as an image in the original.] In equation (11), the scores denote, respectively, miner j's training-time score, interaction-frequency score, and signature-validity score for client i; their weighted average represents the miners' evaluation of the client's contribution, i.e., the client's reputation. K is the number of miners.
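The max{·}/min{·} definitions above suggest ordinary min-max normalization over the clients of round t for equations (10) and (12); a sketch (the degenerate all-equal case is handled by an assumed convention):

```python
def min_max_normalize(contributions: dict) -> dict:
    """Min-max normalize raw contribution values over the clients CT_t
    that participated in round t, mapping them into [0, 1]."""
    lo, hi = min(contributions.values()), max(contributions.values())
    span = hi - lo
    if span == 0:
        # All clients contributed equally; give each the full score.
        return {c: 1.0 for c in contributions}
    return {c: (v - lo) / span for c, v in contributions.items()}
```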
S6.2: and (3) after the global model aggregation is completed, issuing model rewards according to the contribution of the clients, wherein the clients with different contributions obtain global model parameters with different compression degrees, as shown in a formula (13).
Figure 49650DEST_PATH_IMAGE038
wherein ,
Figure 232501DEST_PATH_IMAGE039
representing a clientiIn the first placetModel reward of the wheel, spark (-) is a sparsification function, GM t Is as followstWheel finalThe global model of (a) is,
Figure 919834DEST_PATH_IMAGE040
is based on clientiContribution of the firsttThe wheel global model compression ratio can be calculated by equation (14).
Figure 326676DEST_PATH_IMAGE041
Wherein, the index (·) and sort (·) functions sort clients according to contribution degree, so as to ensure that clients with large contribution obtain a global model with low compression rate,βfor controlling the degree of difference in compression rate.
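A sketch of contribution-ranked model rewards in the spirit of equations (13) and (14), using magnitude pruning as the sparsification function. The rank-to-compression mapping and the role of `beta` are our assumptions for the image-only equations:

```python
def model_rewards(global_model, contributions, beta=0.5):
    """Rank clients by contribution; higher contributors receive a less
    compressed (more complete) copy of the global model."""
    order = sorted(contributions, key=contributions.get, reverse=True)
    n = len(order)
    rewards = {}
    for rank, client in enumerate(order):          # rank 0 = top contributor
        compress = beta * rank / max(n - 1, 1)     # compression rate in [0, beta]
        keep = max(1, round(len(global_model) * (1 - compress)))
        # Magnitude pruning: keep the `keep` largest-magnitude parameters.
        idx = sorted(range(len(global_model)),
                     key=lambda i: abs(global_model[i]), reverse=True)[:keep]
        rewards[client] = [v if i in idx else 0.0
                           for i, v in enumerate(global_model)]
    return rewards
```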
S7: and repeating the second step to the sixth step until the training end condition is met.
S8: when the federal learning task is finished, the trading contract is triggered again, the latest global model is downloaded and returned to the model demander, the total contribution of the client in the whole training process is calculated according to the formula (15), and corresponding monetary rewards are issued for the client at one time.
[Equation (15), rendered as an image in the source, not reproduced here]

where CY_i denotes the monetary reward received by client i, CY is the total amount of money submitted by the model demander, and C_i is the total contribution of client i, which can be calculated by Equation (16).

[Equation (16), rendered as an image in the source, not reproduced here]

where T is the total number of training iteration rounds.
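As an illustrative sketch of Equations (15)-(16): both are images in the source, so the simple per-round sum for C_i and the proportional split of the total payment CY are assumptions consistent with the surrounding text, not the patented formulas.

```python
def total_contribution(per_round):
    # C_i: client i's per-round contributions summed over the T rounds
    # (a plain sum is assumed, since Equation (16) is an image).
    return sum(per_round)

def monetary_rewards(total_pool, contributions):
    # CY_i: split the model demander's total payment CY among clients in
    # proportion to total contribution (assumed form of Equation (15)).
    total = sum(contributions)
    return [total_pool * c / total for c in contributions]

# T = 3 rounds, two clients (per-round contribution values are made up).
C = [total_contribution(r) for r in [[0.5, 0.7, 0.8], [0.2, 0.3, 0.5]]]
payouts = monetary_rewards(100.0, C)  # client 0 earns twice client 1's reward
```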
The foregoing illustrates and describes the principles, principal features, and advantages of the present invention. Those skilled in the art will understand that the invention is not limited to the embodiments described above; the embodiments and the description merely illustrate the principle of the invention, and various changes and improvements may be made without departing from the spirit and scope of the invention, all of which fall within the scope of the claimed invention. The scope of the invention is defined by the appended claims and their equivalents.

Claims (8)

1. A blockchain-based fair and credible federated learning method, characterized by comprising the following steps:
S1, a model demander issues a training task on the blockchain and a trading contract transmits the task, wherein the trading contract comprises an initial global model and a training end condition;
S2, a client trains a global model to generate local model parameters, and the local model parameters are encrypted and transmitted to the corresponding miners on the blockchain, wherein the global model comprises the initial global model and a latest global model: the first round of training targets the initial global model, and subsequent rounds target the corresponding latest global model;
S3, the corresponding miners propagate and verify the encrypted local model parameters;
S4, the corresponding miners aggregate the local model parameters that pass verification, update the corresponding global model according to the aggregation result, generate a new block based on the update result, and broadcast the new block;
S5, all miners verify the new block and reach consensus;
S6, the incentive contract calculates the contribution of each client and generates the latest global model based on the consensus result;
S7, S2-S6 are repeated until the training end condition is met, yielding an optimization model;
and S8, the trading contract transmits the optimization model to the model demander, thereby realizing fair optimization of the initial global model.
2. The blockchain-based fair and credible federated learning method as claimed in claim 1, wherein:
the trading contract further includes a learning rate and an optimizer.
3. The blockchain-based fair and credible federated learning method as claimed in claim 1, wherein:
the process of the client training the global model comprises the following steps:
at the client, local data and the global model are obtained, and the global model is trained on the local data through a fair sampling algorithm and a fairness constraint function to generate the local model parameters.
4. The blockchain-based fair and credible federated learning method as claimed in claim 1, wherein:
the process of the corresponding miners propagating and verifying the encrypted local model parameters comprises the following steps:
the corresponding miners broadcast the encrypted local model parameters to all miners through a gossip protocol, and the miners decrypt and verify the encrypted local model parameters, wherein verification checks whether the signature and the components of the local model parameters are valid; if so, the local model parameters pass verification, and local model parameters that pass verification are added to the corresponding transaction pools.
5. The blockchain-based fair and credible federated learning method as claimed in claim 1, wherein:
the process of the corresponding miners aggregating the local model parameters that pass verification comprises the following steps:
when the verified local model parameters in a miner's transaction pool reach a threshold, the miner performs model aggregation on the local model parameters in its transaction pool through a dual-weight aggregation mechanism, wherein the aggregation mechanism adaptively adjusts the aggregation weight of each set of local model parameters according to its attribute-fairness metric and its degree of delay.
6. The blockchain-based fair and credible federated learning method as claimed in claim 1, wherein:
the process of all miners verifying the new block and reaching consensus comprises the following steps:
the corresponding miners broadcast the new block to all miners, the miners that receive the new block check the information of the new block, and consensus is reached by voting; wherein the information of the new block comprises: the signature of the new block, the data of the local model parameters, the verification status of the local model parameters, and the aggregation status of the global model.
7. The blockchain-based fair and credible federated learning method as claimed in claim 1, wherein:
the process of the incentive contract calculating the client contributions and generating the latest global model comprises the following steps:
after consensus is reached, the incentive contract is triggered; the incentive contract obtains the performance of the local model corresponding to each client and the miners' scores for that client, calculates the client's contribution in the current training round from the performance and the scores, generates the latest global model according to the contribution results, and distributes the latest global model to the clients.
8. The blockchain-based fair and credible federated learning method as claimed in claim 1, wherein:
after the trading contract transmits the optimization model to the model demander, the method further comprises the following step:
the model demander issues monetary rewards to the respective clients according to their contributions through the trading contract.
CN202211651581.6A 2022-12-22 2022-12-22 Fair and reliable federal learning method based on blockchain Active CN115640305B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211651581.6A CN115640305B (en) 2022-12-22 2022-12-22 Fair and reliable federal learning method based on blockchain


Publications (2)

Publication Number Publication Date
CN115640305A true CN115640305A (en) 2023-01-24
CN115640305B CN115640305B (en) 2023-09-29

Family

ID=84948229

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211651581.6A Active CN115640305B (en) 2022-12-22 2022-12-22 Fair and reliable federal learning method based on blockchain

Country Status (1)

Country Link
CN (1) CN115640305B (en)


Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113467927A (en) * 2021-05-20 2021-10-01 杭州趣链科技有限公司 Block chain based trusted participant federated learning method and device
US20220210140A1 (en) * 2020-12-30 2022-06-30 Atb Financial Systems and methods for federated learning on blockchain
CN115426353A (en) * 2022-08-29 2022-12-02 广东工业大学 Method for constructing federated learning architecture integrating block chain state fragmentation and credit mechanism


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
王超: "基于区块链的联邦学习安全机制研究", pages 8 - 52 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116597498A (en) * 2023-07-07 2023-08-15 暨南大学 Fair face attribute classification method based on blockchain and federal learning
CN116597498B (en) * 2023-07-07 2023-10-24 暨南大学 Fair face attribute classification method based on blockchain and federal learning

Also Published As

Publication number Publication date
CN115640305B (en) 2023-09-29

Similar Documents

Publication Publication Date Title
Baza et al. B-ride: Ride sharing with privacy-preservation, trust and fair payment atop public blockchain
Lavi et al. Redesigning bitcoin’s fee market
CN112348204B (en) Safe sharing method for marine Internet of things data under edge computing framework based on federal learning and block chain technology
Feng et al. MCS-Chain: Decentralized and trustworthy mobile crowdsourcing based on blockchain
CN112434280B (en) Federal learning defense method based on blockchain
CN103746957B (en) Trust evaluation system based on privacy protection and construction method thereof
CN109993647A (en) A kind of pay taxes credit investigation system and processing method based on block chain
CN114048515B (en) Medical big data sharing method based on federal learning and block chain
CN114386043A (en) Method for evaluating depocenter privacy keeping credit facing crowd sensing
CN115396442A (en) Calculation force sharing system and method for urban rail transit
Zhu et al. Blockchain technology in internet of things
CN115640305A (en) Fair and credible federal learning method based on block chain
Jain et al. Blockchain for the common good: A digital currency for citizen philanthropy and social entrepreneurship
Liu et al. A survey on blockchain-enabled federated learning and its prospects with digital twin
CN101242410B (en) Grid subjective trust processing method based on simple object access protocol
Ali et al. Incentive-driven federated learning and associated security challenges: A systematic review
Chen et al. A blockchain-based creditable and distributed incentive mechanism for participant mobile crowdsensing in edge computing
Wang et al. Towards a Smart Privacy‐Preserving Incentive Mechanism for Vehicular Crowd Sensing
Li et al. An incentive mechanism for nondeterministic vehicular crowdsensing with blockchain
CN116975817A (en) Web3.0-based multi-element entity digital identity tracing method
Dong et al. DAON: A decentralized autonomous oracle network to provide secure data for smart contracts
Sharma et al. Introduction to blockchain and distributed systems—fundamental theories and concepts
Zhang et al. Integrating blockchain and deep learning into extremely resource-constrained IoT: an energy-saving zero-knowledge PoL approach
Jain et al. A security analysis of lightweight consensus algorithm for wearable kidney
Zhao et al. Safe and Efficient Delegated Proof of Stake Consensus Mechanism Based on Dynamic Credit in Electronic Transaction

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant