CN115037477A - Block chain-based federated learning privacy protection method - Google Patents

Block chain-based federated learning privacy protection method

Info

Publication number: CN115037477A
Application number: CN202210599679.5A
Authority: CN (China)
Prior art keywords: model, participant, transaction, ciphertext
Inventors: 马海英, 黄双龙, 郭嘉乐, 曹东杰
Assignee (original and current): Nantong University
Application filed by Nantong University; priority to CN202210599679.5A
Legal status: Withdrawn
Other languages: Chinese (zh)

Classifications

    • H04L9/3247 — cryptographic mechanisms involving digital signatures
    • H04L9/008 — cryptographic mechanisms involving homomorphic encryption
    • H04L9/3239 — message authentication using non-keyed cryptographic hash functions (e.g. SHA)
    • H04L63/0428 — network security: confidential data exchange with protected (encrypted) payload
    • H04L67/104 — peer-to-peer [P2P] networks
    • H04L67/1095 — replication or mirroring of data between network nodes
    • H04L67/1097 — distributed storage of data in networks (e.g. NFS, SAN, NAS)
    • G06N20/00 — machine learning
    • G06N3/02 — neural networks
    • Y02D30/50 — reducing energy consumption in wire-line communication networks


Abstract

The invention provides a blockchain-based federated learning privacy protection method, belonging to the technical fields of federated learning, privacy protection and blockchain. The method comprises the following steps: global initialization, in which a system administrator constructs a blockchain and distributes security parameters to all participants of a task; the participants train local models and encrypt the model parameters using the security parameters and the Chinese remainder theorem; a leader selected by the consensus mechanism performs ciphertext aggregation; the participants decrypt the aggregate ciphertext and update their local models; the model owner obtains the final model and the participants invoke an incentive contract to receive rewards. The beneficial effects of the invention are: the blockchain records the federated learning process and realizes security parameter distribution and model aggregation; privacy is protected with the Chinese remainder theorem and a blinding technique, which prevents the aggregator and a subset of participants from colluding to steal other participants' private data and strengthens security throughout the federated learning process.

Description

Block chain-based federated learning privacy protection method
Technical Field
The invention relates to the technical fields of federated learning, privacy protection and blockchain, and in particular to a blockchain-based federated learning privacy protection method.
Background
With the development of Internet of Things and artificial intelligence technologies, large numbers of IoT devices are deployed in industrial environments to collect potentially privacy-sensitive data, which is then shared to the cloud for machine-learning-based intelligent decision making. However, this data may be highly private to industrial organizations.
In this context, Federated Learning (FL) is receiving increasing attention from both academia and industry because it enables participants to collaboratively train models without revealing their original training data. Specifically, after agreeing on the initial global model, each participant uses its local dataset to compute a gradient (or model parameters) and uploads it to a server. The server acts as an aggregator: it aggregates all received local results to update the global model and distributes the new model to the participants. While FL avoids the data exposure that comes with uploading raw data for model training, current research shows that even publicly shared gradients may leak sensitive information.
To address this problem, several notable solutions have been proposed. Their main idea is to combine FL with privacy protection technologies such as Differential Privacy (DP), secure Multi-Party Computation (MPC), and Homomorphic Encryption (HE) to further improve data security. DP-based schemes reduce privacy leakage by adding noise to the uploaded gradients, which requires careful selection of the noise-generating parameters to balance privacy against model accuracy. In MPC-based schemes, the local gradients are masked with random matrices whose generating seeds are secret-shared among the participants using a threshold secret sharing scheme; when the server aggregates the masked gradients, the random matrices cancel out, yielding the true result. However, the intermediate global models remain available to the server, so inference attacks can still cause privacy leaks. HE-based schemes can prevent such attacks: participants encrypt their local gradients before uploading, and the server aggregates them in ciphertext form. However, HE incurs a large computational overhead, which makes such schemes impractical in IIoT applications given the prevalence of resource-constrained IoT devices.
In addition, a single aggregation server may be attacked by a malicious organization, or simply fail. To address this, blockchain technology has become an option due to its decentralized, publicly transparent, auditable and tamper-proof nature. Because the model parameter file of a convolutional neural network is typically large, it is usually stored off-chain in a distributed file system (such as the InterPlanetary File System, IPFS); the hash of the file content locates the file, only this hash is recorded on the blockchain, and participants download the model parameter file from IPFS using the hash.
Beyond the above problems, applying FL in IIoT systems faces another challenge. Because training the model inevitably consumes client resources such as computing power, communication bandwidth and energy, clients may be reluctant to participate and share their model updates unless they obtain sufficient rewards. If not enough clients take part in the training process, the performance of the FL model may suffer.
How to solve the above technical problems is the subject of the present invention.
Disclosure of Invention
The invention aims to provide a blockchain-based federated learning privacy protection method that protects data privacy using the Chinese remainder theorem and a blinding technique, and that realizes security parameter distribution, ciphertext aggregation and incentive distribution by means of a blockchain.
The idea of the invention is as follows: a system administrator SM constructs a blockchain, and a model owner MO and users register on it; MO issues a federated learning task, and users voluntarily join the task to become participants; SM generates the security parameters needed for encryption for all participants. Each participant encrypts its local model using the Chinese remainder theorem and a blinding technique, and uploads the model ciphertext to the blockchain network. The consensus mechanism selects one blockchain node as the aggregator, which aggregates all model ciphertexts received in each round. The blockchain records the federated learning task, the security parameters, the participants' model ciphertexts, the aggregate model ciphertext and the incentive results, and the related ciphertexts are stored off-chain in IPFS.
The concrete contents are as follows: the system administrator SM builds a blockchain on which users and model owners register to obtain accounts they can use to generate transactions. The model owner MO issues an FL task and participants voluntarily join it; once the number of participants meets the task requirement, the model owner generates the participant List for the task, and the system administrator generates the security parameters required for encryption and decryption for the participants in the List and sends them through secret channels. Each participant in the List downloads the initial model and security parameters, trains the model on its local dataset to obtain local model parameters, encrypts them, uploads the ciphertext to IPFS, and generates a transaction containing the IPFS address which it uploads to the blockchain; at upload time a time contract is called to check that the upload occurs before the deadline. The blockchain's consensus protocol selects a subset of consensus nodes to form a committee and selects one committee member as leader; the committee verifies the validity of all participants' transactions, after which the leader performs the ciphertext aggregation. Once aggregation is complete, the leader generates a transaction containing the address of the aggregate model parameter ciphertext and packs it into a new block, and the committee members verify the correctness of the aggregation result; when more than 2/3 of the members agree with the block, the committee reaches consensus on it and broadcasts it to the blockchain. The participants in the List query the transaction to obtain the address of the aggregate ciphertext in IPFS, download the ciphertext, then decrypt, reflect and decode it to obtain the true aggregate model parameters, and finally replace their local model parameters to complete the update. After a certain number of rounds, the participants in the List test whether the model accuracy meets MO's requirement and upload a completion transaction once it does. When all participants have uploaded completion transactions, SM decrypts the last round's aggregate model parameter ciphertext and sends it to MO through a secret channel; MO obtains the final model parameters and tests the model accuracy, the task ends once the requirement is met, and the participants in the List call the incentive contract to receive their rewards; otherwise all participants in the List continue with the next round of model training until they all meet the given termination condition.
In order to achieve the purpose of the invention, the invention adopts the following technical scheme: a blockchain-based federated learning privacy protection method comprising the following steps:
S10, system global initialization: first, the system administrator SM constructs a blockchain; participants and the model owner register on the blockchain to obtain their own accounts and public/private key pairs; the model owner MO issues a federated learning task, and participants meeting the task requirements join the task. When the number of participants meets the task requirement, the model owner determines a participant List; the system administrator generates the security parameters SP of the federated learning task according to the participant List and sends SP to all members of the List through secret channels;
S20, model training and encryption: participants in the List download the initial model and security parameters and train the model on their local datasets to obtain local model parameters; they then encode, blind and encrypt the local model parameters, upload the local model parameter ciphertext ct_i^(r) to the distributed file system IPFS, and generate a transaction for this round of model training containing its IPFS address. When a participant in the List sends the transaction, a time-checking function is called to verify that the upload time is before this round's deadline; once all participants in the List finish uploading their ciphertexts before the deadline, the ciphertext aggregation stage begins;
S30, aggregation of model parameter ciphertexts: the Algorand consensus protocol uses a verifiable random function to select a subset of workers to form a committee and selects a leader from the committee; all committee members verify the validity of the participants' transactions, and after verification passes, the leader aggregates: it performs the addition operation on the ciphertexts {ct_i^(r) | i = 1,2,···,N} to obtain this round's aggregate ciphertext C_r. After aggregation, the leader stores the aggregate ciphertext C_r in IPFS, generates a transaction containing its address, and packs it into a new block block_r; the committee members verify the aggregate ciphertext C_r, and when more than 2/3 of the members agree with block_r, the committee reaches consensus on block_r and broadcasts it to the blockchain;
S40, participants update the local model: each participant in the List queries the transaction in block_r to obtain the address of the aggregate ciphertext C_r in IPFS, downloads the ciphertext C_r, and decrypts, reflects and decodes it to obtain the true aggregate model parameters, which replace the local model parameters to complete the update. The participants then perform the next round of model training and encryption, repeating steps S20, S30 and S40 until the model accuracy meets the model owner's requirement, and then proceed to step S50;
S50, the model owner obtains the final model parameters: after a certain number of rounds, each participant in the List tests whether the model accuracy meets MO's requirement and uploads a completion transaction once it does. When all participants have uploaded completion transactions, SM decrypts the last round's aggregate ciphertext and sends it to MO through a secret channel; MO obtains the final model parameters and tests the model accuracy. If the requirement is met, the task ends and the participants in the List call the incentive contract to compute and receive their rewards; otherwise, all participants in the List continue with the next round of model training until they all meet the given termination condition;
the block chain-based federal learning privacy protection method comprises five entities, namely a block chain, a system administrator, a model owner, a participant and an IPFS. IPFS is a peer-to-peer distributed file storage system that enables distributed computing devices to connect to the same file system and locate file locations using hash values.
Further, the step S10 includes:
S101, the system administrator SM constructs a blockchain and determines that the consensus protocol adopted is Algorand. Participants and the model owner MO can register on the blockchain, each holding an account, a public/private key pair {pk, sk}, a wallet address wa, a unique identity id, and funds; participants and MO use their wallet addresses to generate transactions. All participants, MO and workers must lock part of their funds on the blockchain as a deposit, and a block is created on the blockchain containing the transactions recording the ownership declarations of these deposits. In addition, the public/private key pairs establish secret channels between senders and receivers;
S102, the model owner issues a federated learning (FL) task by publishing an asset declaration transaction. The task includes: initial model parameters W_0, model number mid, learning rate η, per-round training time t, model accuracy requirement θ, and required number of participants N. Suppose a model owner MO_j issues the asset declaration transaction: TX_MOj = {mid_j, t_j, H(W_0^(j)), σ(sk_j, H(W_0^(j))), N_j, η_j, θ_j, "Keywords"}, where H(W_0^(j)) is the hash address of the initial model parameters stored in IPFS, sk_j is MO_j's private key, σ(sk_j, H(W_0^(j))) is a signature proving that MO_j does own the model, and "Keywords" is the model description of this FL task. For ease of notation, the subscript j of {mid_j, t_j, N_j, η_j, θ_j} is dropped in the following description, which refers only to MO_j;
S103, participants voluntarily join the FL task by issuing data asset declaration transactions. Suppose a participant P_i issues the data asset declaration transaction to join MO_j's FL task: TX_Di = {sid, H(D_i), σ(sk_i, H(D_i)), H(TX_MOj), "Keywords"}, where sid is the number of participant P_i's dataset D_i, sk_i is P_i's private key, H(D_i) is the hash value of dataset D_i, σ(sk_i, H(D_i)) is a signature proving that P_i does own dataset D_i, and H(TX_MOj) is the hash value of transaction TX_MOj, i.e. the FL task of MO_j that P_i joins;
S104, the model owner MO_j searches the blockchain for the number of participants; when N or more participants have joined the FL task, MO_j queries all transactions, selects N participants, generates a List of the wallet addresses and public keys of the N participants, and then uploads an employment transaction to the blockchain: TX_employ = {List, H(TX_MOj), H(List), σ(sk_j, H(List)), "Keywords"};
S105, the system administrator SM obtains the information of the FL task from MO_j's transaction TX_MOj: the initial model parameters W_0, model number mid, learning rate η, per-round training time t (assumed long enough for all participants to finish model training within it), model accuracy requirement θ, and required number of participants N. SM then selects a complexity parameter m (m ≥ 3), which can be set according to the participants' needs, a positive integer l controlling the computation precision, and a training start time t_0, and generates m+1 pairwise coprime positive primes p_0, p_1, p_2, ···, p_m with gcd(p_i, p_j) = 1 (i ≠ j);
S106, the system administrator SM randomly generates an N×N positive-integer seed matrix M and provides participant P_i (i = 1,2,···,N) with two seed vectors {M_i^(j,0) | j = 1,2,···,N} and {M_j^(i,0) | j = 1,2,···,N}, which are the i-th row and i-th column of the matrix M respectively; SM then selects a pseudo-random number generator PRG(·) satisfying PRG(a) + PRG(b) ≡ PRG(a + b) (mod p_0);
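The PRG property required in S106 (additivity modulo p_0) can be realized, for instance, by a linear map. A minimal sketch under assumed parameters — the constant c and the modulus p0 below are illustrative choices, not values from the patent, and a real deployment would need a cryptographically stronger construction with the same property:

```python
# Sketch of an additively homomorphic PRG: PRG(x) = c * x mod p0.
# c and p0 are illustrative; the patent does not fix a concrete construction.
p0 = 2_147_483_647          # a large prime modulus (2**31 - 1)
c = 1_103_515_245           # fixed nonzero multiplier mod p0

def prg(x: int) -> int:
    """Linear PRG satisfying prg(a) + prg(b) ≡ prg(a + b) (mod p0)."""
    return (c * x) % p0

a, b = 12345, 67890
# The additive property used when the blinding masks are combined:
assert (prg(a) + prg(b)) % p0 == prg(a + b) % p0
```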
S107, the system administrator SM reads the participant List from MO_j's transaction TX_employ to obtain the wallet addresses and public keys {P_i.wa, P_i.pk | i = 1,2,···,N} of the N participants. SM then sends the security parameters {m, l, p_0, p_1, p_2, ···, p_m}, the two seed vectors and PRG(·) to the N participants through their secret channels and records them on the blockchain. For the i-th participant, SM:
(1) encrypts {m, l, p_0, p_1, p_2, ···, p_m, {M_i^(j,0) | j = 1,2,···,N}, {M_j^(i,0) | j = 1,2,···,N}, PRG(·)} with participant P_i's public key P_i.pk to obtain the ciphertext SP_i;
(2) stores the ciphertext SP_i in IPFS and generates a transaction: TX_Pi = {P_i.wa, H(TX_MOj), H(SP_i), σ(SM.sk, H(SP_i)), "Keywords"}, where H(SP_i) is the hash address of ciphertext SP_i in IPFS;
(3) sends TX_Pi to the blockchain.
Further, the step S20 includes:
S201, participant P_i (i = 1,2,···,N) downloads the initial model parameters W_0^(j) from IPFS via MO_j's transaction TX_MOj and obtains the model accuracy requirement θ and learning rate η; it then uses its own wallet address P_i.wa to query transaction TX_Pi on the blockchain, obtains the hash address of the security parameter ciphertext SP_i in IPFS, downloads the ciphertext SP_i, and decrypts it with its own private key sk_i;
S202, participant P_i trains the model using its local dataset D_i. Let f(x, W_r) be a neural network model, where x is the input and W_r are the model parameters of round r; the cross-entropy function is used as the loss function:

L_f(W_r) = −(1/n) Σ_{k=1}^{n} y_k · log f(x_k, W_r)      (1)

where <x_k, y_k> ∈ D_i, x_k is an input, y_k is its label, and n is the size of dataset D_i. P_i then computes the r-th round local model parameters W_r^(j,i) from MO_j's model parameters W_{r−1}^(j).

First, P_i computes the gradient of the r-th round loss function:

g_r^(i) = (1/|D_i*|) Σ_{<x_k,y_k>∈D_i*} ∇L_f(W_{r−1}^(j), x_k, y_k)      (2)

where ∇L_f is the gradient of the loss function L_f and D_i* is a subset of dataset D_i. P_i then obtains the local model parameters by gradient descent:

W_r^(j,i) = W_{r−1}^(j) − η · g_r^(i)      (3)

where W_r^(j,i) denotes P_i's r-th round local model parameters trained from MO_j's model parameters W_{r−1}^(j);
S203, participant P_i encodes the model parameters, converting W_r^(j,i) from real numbers to integers:

Ŵ_r^(j,i) = ⌊W_r^(j,i) × 10^l⌋      (4)

where l is a positive integer and adjusting the value of l controls the computation precision; ⌊x × 10^l⌋ is the largest integer not exceeding x × 10^l. After this computation the encoded parameters Ŵ_r^(j,i) are obtained;
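The fixed-point encoding of step S203 can be sketched as follows; the helper names `encode`/`decode` are mine, not from the patent:

```python
import math

def encode(w: float, l: int) -> int:
    """Step S203: the largest integer not exceeding w * 10**l."""
    return math.floor(w * 10 ** l)

def decode(x: int, l: int) -> float:
    """Inverse map applied after aggregation: divide by 10**l."""
    return x / 10 ** l

l = 6
w = 0.1234567
x = encode(w, l)
assert x == 123456                        # floor of 123456.7
assert abs(decode(x, l) - w) < 10 ** -l   # precision is bounded by l
```

Increasing l tightens the precision but enlarges the encoded integers, which is why S205 later requires the primes p_k to satisfy p_k >> N × 10^l.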
S204, participant P_i applies PRG(·) to the two seed vectors {M_i^(j,0) | j = 1,2,···,N} and {M_j^(i,0) | j = 1,2,···,N} to generate two round-r sequences {M_i^(j,r)} and {M_j^(i,r)}, where M_i^(j,r) and M_j^(i,r) are generated as PRG(M_i^(j,r−1)) and PRG(M_j^(i,r−1)). P_i then uses these two sequences to blind the encoded parameters Ŵ_r^(j,i):

W̃_r^(j,i) = Ŵ_r^(j,i) + Σ_{j=1}^{N} M_i^(j,r) − Σ_{j=1}^{N} M_j^(i,r) (mod p_0)      (5)

where W̃_r^(j,i) are the blinded model parameters;
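The blinding in step S204 is constructed so that the row-seed and column-seed masks cancel when all N blinded values are summed: each matrix entry M[i][j] is added once (by participant i) and subtracted once (by participant j). A minimal sketch with random integers standing in for the round-r PRG outputs; all concrete values are illustrative:

```python
import random

random.seed(0)
p0 = 2_147_483_647
N = 4
params = [11, 22, 33, 44]                      # encoded parameters of N participants
M = [[random.randrange(p0) for _ in range(N)]  # M[i][j]: round-r seed shared
     for _ in range(N)]                        # between participants i and j

def blind(i: int) -> int:
    """Participant i adds its row of seeds and subtracts its column (S204)."""
    mask = sum(M[i][j] for j in range(N)) - sum(M[j][i] for j in range(N))
    return (params[i] + mask) % p0

# Every pairwise mask cancels in the aggregate, leaving only the true sum:
aggregate = sum(blind(i) for i in range(N)) % p0
assert aggregate == sum(params) % p0
```

Individually, each blinded value is statistically masked; only the sum over all N participants reveals anything, which is the collusion-resistance property claimed in the abstract.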
S205, participant P_i encrypts and packs W̃_r^(j,i) using the Chinese Remainder Theorem (CRT). First, P_i randomly partitions W̃_r^(j,i) into m parts {b_k^(i,r) | k = 1,2,···,m} satisfying W̃_r^(j,i) = Σ_{k=1}^{m} b_k^(i,r). Then the m parts {b_k^(i,r) | k = 1,2,···,m} and the primes {p_k | k = 1,2,···,m} are used to construct the following system of congruences:

ct_i^(r) ≡ b_1^(i,r) (mod p_1)
ct_i^(r) ≡ b_2^(i,r) (mod p_2)
···
ct_i^(r) ≡ b_m^(i,r) (mod p_m)      (6)

By the CRT, a unique solution modulo S is obtained:

ct_i^(r) = Σ_{k=1}^{m} b_k^(i,r) S_k T_k (mod S)      (7)

where S = Π_{k=1}^{m} p_k, S_k = S/p_k, and T_k is the inverse of S_k modulo p_k. Looking at formula (7): since b_k^(i,r) S_k T_k ≡ b_k^(i,r) × 1 ≡ b_k^(i,r) (mod p_k), and for k ≠ j, b_j^(i,r) S_j T_j ≡ 0 (mod p_k), the ciphertext ct_i^(r) satisfies:

ct_i^(r) ≡ b_k^(i,r) (mod p_k), k = 1,2,···,m      (8)

Obviously ct_i^(r) mod p_k equals b_k^(i,r) mod p_k, so each b_k^(i,r) can be recovered from ct_i^(r) by the CRT mod operation. The mod operation maps the integers Z to the finite field GF(p_k) = {0, 1, 2, ···, p_k − 1} (k = 1,2,···,m); the specific mapping is:

b_k^(i,r) mod p_k = b_k^(i,r) − ⌊b_k^(i,r)/p_k⌋ × p_k      (9)

where the b_k^(i,r) are the partitions of the blinded parameter W̃_r^(j,i). To prevent overflow errors during computation, the primes {p_k | k = 1,2,···,m} must be large enough to satisfy p_k >> N × 10^l, so that b_k^(i,r) ∈ [−(p_k − 1)/2N, (p_k − 1)/2N]. For convenience of description, ct_i^(r) denotes the ciphertext CRT[b_1^(i,r), b_2^(i,r), ···, b_m^(i,r)];
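The CRT packing and unpacking of step S205 can be sketched as follows. A blinded value is split into m random parts, the parts are packed into a single residue via the CRT solution (the document's formula (7)), and each part is recovered by reduction modulo the corresponding prime. The primes are toy values far smaller than the p_k >> N × 10^l the patent requires:

```python
import random
from math import prod

random.seed(1)
primes = [10007, 10009, 10037]           # p_1..p_m (toy sizes for illustration)
S = prod(primes)

def crt_pack(shares):
    """CRT packing: the unique ct mod S with ct ≡ b_k (mod p_k) for every k."""
    ct = 0
    for b_k, p_k in zip(shares, primes):
        S_k = S // p_k
        T_k = pow(S_k, -1, p_k)          # inverse of S_k modulo p_k
        ct += b_k * S_k * T_k
    return ct % S

def crt_unpack(ct):
    """Recover each share by reduction modulo p_k."""
    return [ct % p_k for p_k in primes]

w_blind = 1234                                    # a blinded parameter to split
parts = [random.randrange(500), random.randrange(500)]
parts.append(w_blind - sum(parts))                # the m parts sum to w_blind
shares = [b % p for b, p in zip(parts, primes)]   # reduce each part into GF(p_k)
ct = crt_pack(shares)
assert crt_unpack(ct) == shares                   # every residue is recovered
assert all((ct - b) % p == 0 for b, p in zip(parts, primes))
```

Because the split into parts is random, a single residue reveals nothing about w_blind on its own; all m residues together, plus the mask cancellation of S204, are needed to reconstruct the aggregate.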
S206, participant P_i stores the model parameter ciphertext ct_i^(r) in IPFS and packs the hash address H(ct_i^(r)) into a transaction sent to the blockchain: TX_{r,i} = {r, H(ct_i^(r)), σ(sk_i, H(ct_i^(r))), H(TX_MOj), H(TX_Pi), "Keywords"}. In addition, when participant P_i sends transaction TX_{r,i}, it calls the CheckTime function of the time contract to check whether the upload happens before the cutoff time t_r. If some participants fail to finish uploading before the deadline, a penalty mechanism is executed: part of their deposit is confiscated and awarded to the honestly executing participants, and the timed-out participants re-execute the ciphertext uploading step. Once all N participants have uploaded their parameter ciphertexts, the parameter aggregation stage begins.
Further, the step S30 includes:
S301, after participant P_i sends transaction TX_r,i to the blockchain, the workers check the digital signature of the transaction, confirm that it comes from a legal participant, and put it into the designated transaction pool; the blockchain consensus protocol randomly selects a subset of all workers through a Verifiable Random Function (VRF) to form a committee, and then selects one member from the committee to become the leader;
S302, the leader executes the ciphertext addition operation:

C_r = (Σ_{i=1}^{N} ct_i^(r)) mod S  (10)

Since the CRT satisfies the additive homomorphism property:

C_r mod p_k ≡ Σ_{i=1}^{N} b_k^(i,r) (mod p_k), k = 1, 2, ..., m  (11)

wherein C_r is the aggregate ciphertext of round r, i.e. C_r = CRT[Σ_{i=1}^{N} b_1^(i,r), Σ_{i=1}^{N} b_2^(i,r), ..., Σ_{i=1}^{N} b_m^(i,r)];
S303, after the calculation is finished, the leader stores the aggregate ciphertext C_r in IPFS and generates a transaction TX_Cr = {mid, r, H(C_r), σ(sk, H(C_r)), H(TX_MOj), "Keywords"}, where sk is the leader's private key and H(C_r) is the hash address of the aggregate ciphertext in IPFS; finally, all transactions of this round are packed into a new block: block_r = {TX_Cr, TX_r,1, TX_r,2, ..., TX_r,N};
S304, the committee members verify block_r and vote on it; a member that agrees with block_r generates a transaction: TX_vote = {H(block_r), σ(sk, H(block_r)), H(TX_MOj), "Keywords"};
S305, if more than 2/3 of the committee members agree with block_r, the block is admitted, the leader receives the reward, and all committee members broadcast this block; otherwise, the punishment mechanism confiscates the leader's deposit and awards it to the other committee members, then one member is selected from the committee to become the new leader, and steps S302, S303, S304 and S305 are executed again.
Further, the step S40 includes:
S401, participant P_i reads the hash address H(C_r) of the aggregate ciphertext in IPFS from transaction TX_Cr and downloads the ciphertext C_r;

S402, participant P_i decrypts the aggregate ciphertext C_r by a modulo operation with the primes {p_k | k = 1, 2, ..., m}, obtaining m discrete values:

B_k^(r) = C_r mod p_k, k = 1, 2, ..., m  (12)

S403, participant P_i uses the function g^(-1)(B_k^(r)) to convert {B_k^(r) | k = 1, 2, ..., m} from the finite field back to the integer field (GF(p_k) → [-(p_k−1)/2, (p_k−1)/2], k = 1, 2, ..., m); g^(-1)(B_k^(r)) is the inverse function of g(b_k^(i,r)): let

B̃_k^(r) = g^(-1)(B_k^(r)) = B_k^(r) if B_k^(r) ≤ (p_k − 1)/2, and B_k^(r) − p_k otherwise,  (13)

so that B̃_k^(r) = Σ_{i=1}^{N} b_k^(i,r);

S404, participant P_i sums the values {B̃_k^(r) | k = 1, 2, ..., m} to obtain Σ_{k=1}^{m} B̃_k^(r), then decodes this sum to obtain the real aggregation parameter W_r^(j) of round r:

W_r^(j) = (Σ_{k=1}^{m} B̃_k^(r)) / (N × 10^l)  (14)

Next, participant P_i replaces the local model parameters W_r^(j,i) with W_r^(j), completing the update;
s405, participant P i The model training of the next round is continued, and step S202 and the subsequent steps are executed.
Further, the step S50 includes:
S501, after r rounds of model training (for example, r ≥ 50), the N participants use their own test sets to check whether the model accuracy meets MO's requirement; if it does, they upload a transaction;
S502, only when all N participants meet the requirement and have uploaded the transaction does the system administrator SM download the last round's aggregate ciphertext C_r from transaction TX_Cr. SM decrypts the aggregate ciphertext with the security parameters {m, l, p_0, p_1, p_2, ..., p_m} to obtain the final model parameters; SM then encrypts the final model parameters with MO's public key and the session key and sends the ciphertext to MO.
S503, the model owner MO decrypts the ciphertext with its private key and checks whether the model parameters meet the requirements. If the accuracy requirement is met, MO generates a transaction certifying that the FL task is completed, and each participant then calls the incentive contract to calculate its reward according to the size and distance of its local data set: (1) a work structure packs the task completion flag finished, the local data set size size and the distance dis; (2) the work structure is uploaded to the contract contri, which calculates the reward as a function of size and dis, where u and v are the incentive coefficients; (3) the reward result is packed into a transaction TX_award[Pi] and sent to the blockchain; (4) the blockchain automatically distributes the reward according to the recorded transaction TX_award[Pi]. If the model accuracy requirement is not met, all participants continue with the next round of model training.
For a numerical data set, the centroid distance is used to measure data set quality: briefly, a clustering method finds the center of each category of the data set, and the distances between the centers are then calculated. For an image data set, the EMD distance is used to measure data set quality. Taking a grayscale image data set as an example, the calculation of the EMD distance is described in detail below:

Suppose the data set contains several categories; its EMD distance is calculated as follows: (1) in each category, randomly select one image as a reference and calculate the EMD distances between the other images and it; (2) sum the EMD distances within each category; (3) finally, compute the average EMD distance of the data set, which serves as the basis of the incentive. The EMD (Earth Mover's Distance) measures the distance between two distributions. The EMD distance between two images is calculated from their histogram distributions as follows:
(1) Let H = {h_i} and K = {k_i} (1 ≤ i ≤ 256) be the histogram distributions of the two images, where {h_i} is a one-dimensional array of length 256 representing the 256 gray levels; each gray level is called a bin, and h_i is the number (or probability) of pixels of the image at gray level i, and likewise for {k_i}. For ease of understanding, H can be viewed as heaps of earth distributed in space and K as holes distributed in space; the EMD distance is the minimum amount of work required to fill the holes of K with the earth of H.
(2) For histogram H, a set of feature clusters is defined as P = {(p_1, w_p1), (p_2, w_p2), ..., (p_m, w_pm)}, where p_i is the central value m_i of the i-th bin and w_pi = h_i is its weight; histogram K is treated in the same way. Transforming histograms H and K thus yields two feature clusters:

P = {(p_1, w_p1), ..., (p_m, w_pm)},  Q = {(q_1, w_q1), ..., (q_n, w_qn)},  m = n = 256.
(3) Let D = [d_ij] be a 256 × 256 distance matrix, where d_ij is the Euclidean distance between p_i and q_j:

d_ij = ||p_i − q_j||
(4) Then find a "transport" matrix F = [f_ij], where f_ij represents the amount "transported" from p_i to q_j, such that the total work W is minimized:

W = min Σ_{i=1}^{m} Σ_{j=1}^{n} f_ij d_ij  (17)

subject to f_ij ≥ 0 (17.1), Σ_{j=1}^{n} f_ij ≤ w_pi (17.2), Σ_{i=1}^{m} f_ij ≤ w_qj (17.3), and Σ_{i=1}^{m} Σ_{j=1}^{n} f_ij = min(Σ_i w_pi, Σ_j w_qj) (17.4). Constraint 17.1 means earth is only moved from P to Q and never in reverse; constraint 17.2 means the number of pixels moved from gray level i to K cannot exceed the number of pixels at gray level i; constraint 17.3 means the total number of pixels gray level j receives from H cannot exceed the number of pixels at gray level j; constraint 17.4 means the total amount "transported" is the minimum of the total pixel count of H and the total pixel count of K. Once the "transport" problem is solved, an optimal matrix F* = [f*_ij] is found, giving the EMD distance:

EMD(P, Q) = (Σ_{i=1}^{m} Σ_{j=1}^{n} f*_ij d_ij) / (Σ_{i=1}^{m} Σ_{j=1}^{n} f*_ij)  (18)

where m = n = 256. The normalization in equation (18) prevents the EMD distance from being affected by the total "transport" amount.
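For the special case used here — grayscale histograms whose ground distance is the one-dimensional distance between bin centers — the "transport" problem need not be solved with a general linear-programming solver: the EMD of two 1-D distributions of equal total mass equals the L1 distance between their cumulative distributions. A minimal sketch (the helper name emd_1d and the toy histograms are illustrative, not from the patent):

```python
# Sketch of the EMD computation for 1-D histograms (e.g. 256-bin grayscale
# histograms). For 1-D histograms with ground distance d_ij = |i - j| and
# equal total mass, optimal transport has a closed form: the EMD is the
# L1 distance between the two cumulative distributions.

def emd_1d(h, k):
    """EMD between two 1-D histograms of equal length."""
    assert len(h) == len(k)
    total_h, total_k = sum(h), sum(k)
    # Normalize so both distributions carry the same total "earth".
    h = [x / total_h for x in h]
    k = [x / total_k for x in k]
    emd, cdf_diff = 0.0, 0.0
    for hi, ki in zip(h, k):
        cdf_diff += hi - ki        # running difference of the CDFs
        emd += abs(cdf_diff)       # earth that must cross this bin boundary
    return emd

# Toy 4-bin example: moving all mass one bin to the right costs exactly 1.
print(emd_1d([1, 0, 0, 0], [0, 1, 0, 0]))  # -> 1.0
```

For general 2-D signatures a transport solver as described in steps (3)-(4) is still required; the closed form only holds in one dimension.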
Compared with the prior art, the invention has the beneficial effects that:
(1) the invention records the federal learning process by using the block chain, realizes the security parameter distribution and the model aggregation, protects the privacy by using the Chinese remainder theorem and the blinding technology, prevents the aggregation party and part of the participants from conspiring to steal the privacy of other participants, and enhances the security in the federal learning process.
(2) The invention provides a block chain-based federated learning privacy protection method that encrypts a participant's local model with an algorithm based on the Chinese remainder theorem. In traditional federated learning, uploading the model in plaintext easily causes privacy disclosure; the invention enables the aggregator to aggregate in the ciphertext state, ensuring that the participants' privacy cannot be leaked.
(3) The block chain-based federal learning privacy protection method provided by the invention utilizes a blinding technology to cover a local model of a participant, and prevents a malicious participant from stealing the privacy of other participants.
(4) The block chain-based federated learning privacy protection method provided by the invention records the federated learning process by using the block chain technology, randomly selects one block chain node as an aggregator in each round, and ensures the correctness of the aggregation result by a consensus mechanism.
(5) Experiments show that, compared with existing homomorphic encryption methods, the block chain-based federated learning privacy protection method reduces computation and communication overhead by several orders of magnitude and is therefore better suited to lightweight Internet of Things devices.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description serve to explain the principles of the invention and not to limit the invention.
Fig. 1 is a flowchart of the block chain-based federated learning privacy protection method according to embodiment 1 of the present invention.
Fig. 2 is a structural diagram of a block chain-based federal learning privacy protection method in embodiment 1 of the present invention.
Fig. 3 is an initial round sequence chart of embodiment 1 of the present invention.
Fig. 4 is a schematic diagram of a participant cryptographic model parameter algorithm according to embodiment 1 of the present invention.
Fig. 5 is a schematic diagram of the time contract algorithm of embodiment 1 of the present invention.
Fig. 6 is a schematic diagram of the leader aggregation algorithm in embodiment 1 of the present invention.
Fig. 7 is a schematic diagram of an algorithm for decrypting the aggregated ciphertext by the participant in embodiment 1 of the present invention.
Fig. 8 is a schematic diagram of a participant incentive contract algorithm according to embodiment 1 of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. Of course, the specific embodiments described herein are merely illustrative of the invention and are not intended to be limiting.
Example 1
Referring to fig. 1 to 8, the technical solution provided by the present invention is a block chain-based federal learning privacy protection method, as shown in fig. 1, including:
s10, system global initialization, firstly, a system administrator SM constructs a block chain, participants and a model owner register on the block chain to obtain own account numbers and a pair of public and private keys, a model owner MO issues a federal learning task, and the participants meeting the task requirements participate in the task; when the number of people participating in the task meets the task requirement, the model owner determines a participant List; a system administrator generates a safety parameter SP of the Federal learning task according to the participant List List and sends the SP to all members of the participant List List through a secret channel;
s20, model training and encryption, wherein participants in the List download initial models and security parameters, use local data set to train the models to obtain local model parameters, then encode, blind and encrypt the local model parameters, and upload local model parameter ciphertext ct i (r) Generating a transaction of the model training of the current round according to the address in the IPFS in the distributed file system IPFS; when the participants in the List send transactions, a function of detection time is called, whether the transaction uploading time is before the deadline of the round is detected, and when all the participants in the List finish ciphertext uploading before the deadline, a ciphertext aggregation stage is started;
S30, aggregating the model parameter ciphertexts: the Algorand consensus protocol uses a verifiable random function to select a subset of the workers to form a committee and selects a leader from the committee. All committee members verify the validity of all participants' transactions; after verification passes, the leader aggregates the ciphertexts {ct_i^(r) | i = 1, 2, ..., N} by performing the addition operation to obtain this round's aggregate ciphertext C_r. After aggregation, the leader stores the aggregate ciphertext C_r in IPFS, generates a transaction containing its address and packs it into a new block block_r. The committee members verify the correctness of the aggregate ciphertext C_r; when more than 2/3 of the members agree with block_r, the committee reaches consensus on block_r and broadcasts block_r to the blockchain;
S40, the participants update the local model: each participant in the List queries the transactions in block_r to obtain the address of the aggregate ciphertext C_r in IPFS, downloads the ciphertext C_r, and decrypts, inversely maps and decodes C_r to obtain the real aggregated model parameters, then completes the update of the local model parameters. Next, the participants perform the next round of model training and encryption; steps S20, S30 and S40 are repeated until the model accuracy meets the model owner's requirement, after which stage S50 begins;
s50, obtaining a final model parameter by a model owner, after a certain round, testing whether the accuracy of the model meets the requirement of the MO by each participant in the List, uploading a completed transaction after the participants in the List meet the requirement of the model training of the MO, when all participants upload the completed transaction, decrypting the aggregation ciphertext of the last round by the SM, sending the decrypted aggregation ciphertext to the MO through a secret channel, obtaining the final model parameter by the MO, testing the accuracy of the model, finishing the task after the requirement is met, calling an incentive contract to calculate and obtaining a corresponding reward by the participants in the List; otherwise, continuing the next round of model training by all participants in the List until all the participants meet the given termination condition;
the block chain-based federated learning privacy protection method comprises five entities, namely a block chain, a system administrator, a model owner, a participant and an IPFS; IPFS is a peer-to-peer distributed file storage system that enables distributed computing devices to connect to the same file system and locate file locations using hash values.
As shown in fig. 2, the block chain-based federated learning privacy protection method involves five entities: the system administrator SM, the model owner MO, the participants, the blockchain and IPFS. The system administrator SM is a trusted entity (such as a government agency) that builds the blockchain-based federated learning system, generates and distributes the security parameters needed for model training to each participant, and can decrypt the final model parameters. The model owner MO builds the initial model, distributes it to all participants and pays rewards for the participants' training contributions. The participants hold local data containing privacy; they train the model, encrypt their model parameters and upload the ciphertexts to the blockchain, and finally decrypt the aggregate ciphertext and update the local model. The blockchain permanently records the main flow of the FL task, and the leader selected by the consensus protocol aggregates the participants' ciphertexts. IPFS, a peer-to-peer distributed file storage system, stores the main parameters of the model training process (such as the security parameters and model parameter ciphertexts) and locates files by their hash values. The main sequence of interactions between the five entities in the initial round is shown in fig. 3.
In this embodiment, as shown in fig. 4, after the training of the model is completed, the participant encrypts the local model parameters and uploads the parameters to the blockchain, and the time contract of the blockchain checks the uploading time of the participant, as shown in fig. 5. Next, as shown in fig. 6, the leader selected by the blockchain consensus protocol will be responsible for aggregating the model parameter ciphertexts of all participants and uploading the aggregated ciphertexts to the blockchain. As shown in fig. 7, the participant receives the aggregate ciphertext, decrypts it, and updates the local model to enter the next round of training. After a plurality of rounds of training, the participants and the MO check whether the model meets the requirements, FL is completed after the requirements are met, and otherwise, the training is continued. When the FL task is completed, the participants calculate their contribution and are rewarded, as shown in fig. 8.
The step S10 includes:
S101, the system administrator SM constructs a blockchain and determines that the consensus protocol adopted is Algorand. Participants and the model owner MO register on the blockchain, each owning an account, a pair of public and private keys {pk, sk}, a wallet address wa, a unique identity id and funds; transactions are generated using the wallet addresses of the participants, MO and workers. All participants, MO and workers must lock a part of their funds on the blockchain as a deposit, and a block is created on the blockchain containing the transactions recording the deposit ownership statements of the participants, MO and workers. In addition, the public and private key pairs establish a secret channel between a sender and a receiver;
S102, the model owner issues a federated learning (FL) task by issuing an asset declaration transaction. The task includes: initial model parameters W_0, model number mid, learning rate η, per-round training time t, model accuracy requirement θ and participant number requirement N. Suppose a model owner MO_j issues an asset declaration transaction: TX_MOj = {mid_j, t_j, H(W_0^(j)), σ(sk_j, H(W_0^(j))), N_j, η_j, θ_j, "Keywords"}, where H(W_0^(j)) is the hash address of the initial model parameters stored in IPFS, sk_j is MO_j's private key, σ(sk_j, H(W_0^(j))) is a signature proving that it does have the model, and "Keywords" is the model description of this FL task. For ease of notation, the subscript j of {mid_j, t_j, N_j, η_j, θ_j} is dropped in the following description, which refers only to MO_j;
S103, the participants voluntarily join the FL task by issuing data asset declaration transactions. Suppose a participant P_i issues a data asset declaration transaction to join MO_j's FL task: TX_Di = {sid, H(D_i), σ(sk_i, H(D_i)), H(TX_MOj), "Keywords"}, where sid is the number of participant P_i's data set D_i, sk_i is participant P_i's private key, H(D_i) is the hash value of data set D_i, σ(sk_i, H(D_i)) is a signature proving that participant P_i does have data set D_i, and H(TX_MOj) is the hash value of the transaction TX_MOj of MO_j's FL task that participant P_i joins;
s104, model owner MO j The number of participants is retrieved in the blockchain, and when N or more participants join the FL task, the MO j Query all transactions and select N participants, generate a List of wallet addresses and public keys of the N participants, and then upload an employment transaction to the blockchain: TX employ ={List,H(TX MOj ),H(List),σ(sk j , H(List)),“Keywords”};
S105, the system administrator SM obtains the FL task information from MO_j's transaction TX_MOj: the initial model parameters W_0, model number mid, learning rate η, per-round training time t (assumed long enough for all participants to finish model training within it), model accuracy requirement θ and participant number requirement N. SM selects a complexity parameter m (m ≥ 3, settable according to the participants' needs), a positive integer l controlling calculation accuracy, and a training start time t_0, then generates m + 1 pairwise coprime positive primes p_0, p_1, p_2, ..., p_m, i.e. gcd(p_i, p_j) = 1 for i ≠ j;
S106, the system administrator SM randomly generates an N × N positive-integer seed matrix M and assigns each participant P_i (i = 1, 2, ..., N) two seed vectors {M_i^(j,0) | j = 1, 2, ..., N} and {M_j^(i,0) | j = 1, 2, ..., N}, which are the i-th row and i-th column of matrix M respectively; SM then selects a pseudo-random number generator PRG(·) satisfying the additive property PRG(a) + PRG(b) ≡ PRG(a + b) (mod p_0);
S107, the system administrator SM reads the participant List from MO_j's transaction TX_employ to get the wallet addresses and public keys {P_i.wa, P_i.pk | i = 1, 2, ..., N} of the N participants; then SM sends the security parameters {m, l, p_0, p_1, p_2, ..., p_m}, the two seed vectors and PRG(·) to the N participants through their secret channels and records them on the blockchain. To the i-th participant, SM sends:
(1) SM uses participant P_i's public key P_i.pk to encrypt {m, l, p_0, p_1, p_2, ..., p_m, {M_i^(j,0) | j = 1, 2, ..., N}, {M_j^(i,0) | j = 1, 2, ..., N}, PRG(·)}, obtaining the ciphertext SP_i;
(2) SM stores the ciphertext SP_i in IPFS and then generates a transaction: TX_Pi = {P_i.wa, H(TX_MOj), H(SP_i), σ(SM.sk, H(SP_i)), "Keywords"}, where H(SP_i) is the hash address of the ciphertext SP_i in IPFS;
(3) SM sends TX_Pi to the blockchain.
The step S20 includes:
S201, participant P_i (i = 1, 2, ..., N) downloads the initial model parameters W_0^(j) from IPFS via MO_j's transaction TX_MOj and obtains the model accuracy requirement θ and learning rate η; then P_i uses its wallet address P_i.wa to query transaction TX_Pi in the blockchain, obtains the hash address of the security parameter ciphertext SP_i in IPFS, downloads the ciphertext SP_i and decrypts it with its own private key sk_i;
S202, participant P_i trains the model using the local data set D_i. Let f(x, W_r) be the neural network model, where x is the input and W_r are the model parameters of round r; the cross-entropy function is used as the loss function:

L_f(W_r) = −(1/n) Σ_{k=1}^{n} y_k log f(x_k, W_r)  (1)

wherein <x_k, y_k> ∈ D_i, x_k is the input, y_k is the label, and n is the size of data set D_i. Then P_i uses MO_j's model parameters W_{r-1}^(j) to compute the r-th round local model parameters W_r^(j,i). First, P_i calculates the gradient of the r-th round loss function:

∇L_f(W_{r-1}^(j,i), D_i*)  (2)

wherein ∇L_f is the gradient of the loss function L_f and D_i* is a batch of the data set D_i. Then P_i obtains the local model parameters by a gradient update:

W_r^(j,i) = W_{r-1}^(j,i) − η∇L_f(W_{r-1}^(j,i), D_i*)  (3)

wherein W_r^(j,i) represents the r-th round local model parameters obtained after P_i trains with MO_j's model parameters W_{r-1}^(j,i);
S203, participant P_i encodes the model parameters, using the floor function ⌊·⌋ to convert the model parameters W_r^(j,i) from real numbers to integers:

W̄_r^(j,i) = ⌊W_r^(j,i) × 10^l⌋  (4)

wherein l is a positive integer whose value controls the calculation accuracy, and ⌊x × 10^l⌋ is the largest integer not exceeding x × 10^l; after the calculation, the encoded parameters W̄_r^(j,i) are obtained;
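The encoding of step S203 can be sketched as follows; the precision l = 4 and the sample parameter values are illustrative:

```python
import math

# Sketch of the S203 encoding step: scale each real parameter by 10^l and
# take the floor, so later CRT arithmetic works over integers.

def encode(w, l):
    return math.floor(w * 10 ** l)

def decode(w_bar, l):
    return w_bar / 10 ** l

l = 4
weights = [0.123456, -0.5, 2.0]
encoded = [encode(w, l) for w in weights]
print(encoded)                            # -> [1234, -5000, 20000]
decoded = [decode(e, l) for e in encoded]
print(decoded)                            # -> [0.1234, -0.5, 2.0]
```

Larger l retains more decimal places of each parameter at the cost of larger integers, which is why the primes must satisfy p_k >> N × 10^l.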
S204, participant P_i generates two sequences {M_i^(j,r)} and {M_j^(i,r)} from the two seed vectors {M_i^(j,0) | j = 1, 2, ..., N} and {M_j^(i,0) | j = 1, 2, ..., N} using PRG(·); M_i^(j,r) and M_j^(i,r) are generated by PRG(M_i^(j,r-1)) and PRG(M_j^(i,r-1)). Then P_i uses the two sequences {M_i^(j,r)}, {M_j^(i,r)} to blind the encoded parameters W̄_r^(j,i):

W̃_r^(j,i) = W̄_r^(j,i) + Σ_{j=1}^{N} PRG(M_i^(j,r)) − Σ_{j=1}^{N} PRG(M_j^(i,r)) (mod p_0)  (5)

wherein W̃_r^(j,i) is the blinded model parameter;
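The blinding of step S204 can be illustrated as follows. The SHA-256-based PRG, the prime standing in for p_0 and the toy seed matrix are placeholders (the patent does not fix a concrete PRG here); the sketch only shows the key property that the row masks added and the column masks subtracted by all N participants cancel in the aggregate:

```python
import hashlib

# Sketch of the S204 blinding step: participant i adds PRG masks derived
# from row i of the seed matrix M and subtracts masks derived from column i.
# Every entry M[i][j] is added once (by participant i) and subtracted once
# (by participant j), so the masks cancel when all blinded values are summed.

P0 = 2**61 - 1  # a large prime standing in for p0 (illustrative)

def prg(seed: int) -> int:
    # Toy PRG: hash the seed and reduce mod p0.
    return int.from_bytes(hashlib.sha256(str(seed).encode()).digest(), "big") % P0

N = 3
M = [[100 * i + j for j in range(N)] for i in range(N)]  # toy seed matrix
w = [11, 22, 33]                                          # encoded parameters

blinded = []
for i in range(N):
    mask = sum(prg(M[i][j]) for j in range(N)) - sum(prg(M[j][i]) for j in range(N))
    blinded.append((w[i] + mask) % P0)

# Masks cancel in the aggregate, so the blinded sum equals the plain sum mod p0.
print(sum(blinded) % P0 == sum(w) % P0)  # -> True
```

Any single blinded value reveals nothing about w[i] without the seeds, yet the aggregator still recovers the correct sum.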
S205, participant P_i encrypts and packages W̃_r^(j,i) using the Chinese Remainder Theorem (CRT). First, P_i randomly partitions W̃_r^(j,i) into m parts {b_k^(i,r) | k = 1, 2, ..., m} satisfying

W̃_r^(j,i) = Σ_{k=1}^{m} b_k^(i,r)

Then the m parts {b_k^(i,r) | k = 1, 2, ..., m} and the primes {p_k | k = 1, 2, ..., m} are used to construct the following system of congruences:

ct_i^(r) ≡ b_k^(i,r) (mod p_k), k = 1, 2, ..., m  (6)

From the CRT, a unique solution in the sense of modulo S is obtained:

ct_i^(r) = (Σ_{k=1}^{m} b_k^(i,r) S_k T_k) mod S  (7)

wherein S = ∏_{k=1}^{m} p_k, S_k = S/p_k, and T_k is the inverse of S_k modulo p_k, i.e. S_k T_k ≡ 1 (mod p_k). Looking at formula (7), since b_k^(i,r) S_k T_k ≡ b_k^(i,r) × 1 ≡ b_k^(i,r) (mod p_k) and, for all j ≠ k, b_j^(i,r) S_j T_j ≡ 0 (mod p_k), ct_i^(r) satisfies:

ct_i^(r) ≡ b_k^(i,r) (mod p_k), k = 1, 2, ..., m  (8)

Obviously ct_i^(r) mod p_k equals b_k^(i,r) mod p_k. In the CRT calculation, the mod operation maps b_k^(i,r) from the integer set [-(p_k−1)/2, (p_k−1)/2] to the finite field GF(p_k), where GF(p_k) = {0, 1, 2, ..., p_k − 1} (k = 1, 2, ..., m); the specific mapping is:

g(b_k^(i,r)) = b_k^(i,r) mod p_k  (9)

wherein b_k^(i,r) is a share of the blinded parameter W̃_r^(j,i). To prevent overflow errors during the calculation, the primes {p_k | k = 1, 2, ..., m} must be large enough to satisfy p_k >> N × 10^l, so that b_k^(i,r) ∈ [-(p_k−1)/2N, (p_k−1)/2N]. For convenience of description, ct_i^(r) denotes the ciphertext CRT[b_1^(i,r), b_2^(i,r), ..., b_m^(i,r)];
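The share-splitting and CRT packing of step S205 can be sketched as follows; the toy primes and share ranges are illustrative and far smaller than the p_k >> N × 10^l required above:

```python
# Sketch of the S205 encryption: split a blinded value into m additive
# shares, then pack the shares into one ciphertext ct with the CRT so that
# ct mod p_k recovers the k-th share.
from functools import reduce
import random

primes = [10007, 10009, 10037]          # pairwise coprime p_1..p_m (toy-sized)
S = reduce(lambda a, b: a * b, primes)  # S = p_1 * p_2 * ... * p_m

def crt_pack(shares):
    ct = 0
    for b_k, p_k in zip(shares, primes):
        S_k = S // p_k
        T_k = pow(S_k, -1, p_k)         # inverse of S_k modulo p_k
        ct += b_k * S_k * T_k
    return ct % S

random.seed(0)
w_tilde = 123456                        # blinded parameter to encrypt
shares = [random.randint(-1000, 1000) for _ in range(2)]
shares.append(w_tilde - sum(shares))    # shares sum to w_tilde

ct = crt_pack(shares)
# Each share is recovered (mod p_k) by a single modular reduction:
print(all((ct - b_k) % p_k == 0 for b_k, p_k in zip(shares, primes)))  # -> True
```

The random partition hides which portion of W̃ sits behind each prime, while the CRT packing compresses the m shares into a single integer.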
S206, participant P_i stores the model parameter ciphertext ct_i^(r) in IPFS and packages the hash address H(ct_i^(r)) into a transaction sent to the blockchain: TX_r,i = {r, H(ct_i^(r)), σ(sk_i, H(ct_i^(r))), H(TX_MOj), H(TX_Pi), "Keywords"}. In addition, when participant P_i sends transaction TX_r,i, the CheckTime function of the time contract is called to check whether the upload occurs before the cut-off time point t_r. If some participants fail to finish uploading before the deadline, the punishment mechanism is executed: part of those participants' deposit is confiscated and awarded to the participants who executed honestly, and the overtime participants then execute the ciphertext uploading step again. Once all N participants have uploaded their parameter ciphertexts, the parameter aggregation stage begins.
The step S30 includes:
S301, after participant P_i sends transaction TX_r,i to the blockchain, the workers check the digital signature of the transaction, confirm that it comes from a legal participant, and put it into the designated transaction pool; the blockchain consensus protocol randomly selects a subset of all workers through a Verifiable Random Function (VRF) to form a committee, and then selects one member from the committee to become the leader;
S302, the leader executes the ciphertext addition operation:

C_r = (Σ_{i=1}^{N} ct_i^(r)) mod S  (10)

Since the CRT satisfies the additive homomorphism property:

C_r mod p_k ≡ Σ_{i=1}^{N} b_k^(i,r) (mod p_k), k = 1, 2, ..., m  (11)

wherein C_r is the aggregate ciphertext of round r, i.e. C_r = CRT[Σ_{i=1}^{N} b_1^(i,r), Σ_{i=1}^{N} b_2^(i,r), ..., Σ_{i=1}^{N} b_m^(i,r)];
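The leader's aggregation in step S302 can be sketched with toy primes, showing that each residue of the summed ciphertext equals the corresponding per-prime share sum:

```python
# Sketch of the S302 aggregation: summing the participants' CRT ciphertexts
# mod S yields a ciphertext whose residue mod p_k is the sum of the k-th
# shares of all participants. Primes and share values are illustrative.
from functools import reduce

primes = [10007, 10009, 10037]
S = reduce(lambda a, b: a * b, primes)

def crt_pack(shares):
    ct = 0
    for b_k, p_k in zip(shares, primes):
        S_k = S // p_k
        ct += b_k * S_k * pow(S_k, -1, p_k)  # b_k * S_k * T_k
    return ct % S

# Three participants, each holding m = 3 shares.
all_shares = [[1, 2, 3], [10, 20, 30], [100, 200, 300]]
C = sum(crt_pack(s) for s in all_shares) % S  # the leader's aggregation

# Residues of the aggregate equal the per-prime share sums: 111, 222, 333.
print([C % p_k for p_k in primes])  # -> [111, 222, 333]
```

The leader never sees an individual share in the clear; it only manipulates the packed ciphertexts.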
S303, after the calculation is finished, the leader stores the aggregate ciphertext C_r in IPFS and generates a transaction TX_Cr = {mid, r, H(C_r), σ(sk, H(C_r)), H(TX_MOj), "Keywords"}, where sk is the leader's private key and H(C_r) is the hash address of the aggregate ciphertext in IPFS; finally, all transactions of this round are packed into a new block: block_r = {TX_Cr, TX_r,1, TX_r,2, ..., TX_r,N};
S304, the committee members verify block_r and vote on it; a member that agrees with block_r generates a transaction: TX_vote = {H(block_r), σ(sk, H(block_r)), H(TX_MOj), "Keywords"};
S305, if more than 2/3 of the committee members agree with block_r, the block is admitted, the leader receives the reward, and all committee members broadcast this block; otherwise, the punishment mechanism confiscates the leader's deposit and awards it to the other committee members, then one member is selected from the committee to become the new leader, and steps S302, S303, S304 and S305 are executed again.
The step S40 includes:
S401, participant P_i reads the hash address H(C_r) of the aggregate ciphertext in IPFS from transaction TX_Cr and downloads the ciphertext C_r;

S402, participant P_i decrypts the aggregate ciphertext C_r by a modulo operation with the primes {p_k | k = 1, 2, ..., m}, obtaining m discrete values:

B_k^(r) = C_r mod p_k, k = 1, 2, ..., m  (12)

S403, participant P_i uses the function g^(-1)(B_k^(r)) to convert {B_k^(r) | k = 1, 2, ..., m} from the finite field back to the integer field (GF(p_k) → [-(p_k−1)/2, (p_k−1)/2], k = 1, 2, ..., m); g^(-1)(B_k^(r)) is the inverse function of g(b_k^(i,r)): let

B̃_k^(r) = g^(-1)(B_k^(r)) = B_k^(r) if B_k^(r) ≤ (p_k − 1)/2, and B_k^(r) − p_k otherwise,  (13)

so that B̃_k^(r) = Σ_{i=1}^{N} b_k^(i,r);

S404, participant P_i sums the values {B̃_k^(r) | k = 1, 2, ..., m} to obtain Σ_{k=1}^{m} B̃_k^(r), then decodes this sum to obtain the real aggregation parameter W_r^(j) of round r:

W_r^(j) = (Σ_{k=1}^{m} B̃_k^(r)) / (N × 10^l)  (14)

Next, participant P_i replaces the local model parameters W_r^(j,i) with W_r^(j), completing the update;
s405, participant P i The model training of the next round is continued, and step S202 and the subsequent steps are executed.
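The decryption pipeline of steps S402-S404 can be sketched end to end; the toy primes, the participant count N and the aggregate share values are illustrative:

```python
# Sketch of steps S402-S404: recover each aggregate share with a mod-p_k
# reduction, map residues above (p_k-1)/2 back to negative integers (the
# inverse mapping g^{-1}), sum the shares, and divide by N*10^l to undo
# the encoding and average over participants.
from functools import reduce

primes = [10007, 10009, 10037]
S = reduce(lambda a, b: a * b, primes)
N, l = 2, 3  # two participants, precision 10^3

def g_inv(b, p):
    # Finite field -> integer field: large residues encode negatives.
    return b if b <= (p - 1) // 2 else b - p

# Aggregate shares summing to 3234 (e.g. encoded parameters 1.5 and 1.734
# from two participants, with l = 3), packed into one ciphertext via CRT:
agg_shares = [700, -200, 2734]
C = 0
for b_k, p_k in zip(agg_shares, primes):
    S_k = S // p_k
    C += b_k * S_k * pow(S_k, -1, p_k)
C %= S

recovered = sum(g_inv(C % p_k, p_k) for p_k in primes)
print(recovered)                   # -> 3234
print(recovered / (N * 10 ** l))   # -> 1.617  (the averaged model parameter)
```

Note how the negative share -200 survives the round trip: its residue 9809 mod 10009 exceeds (p−1)/2 and is mapped back below zero by g^(-1).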
The step S50 includes:
S501, after r rounds of model training, the N participants use their own test sets to check whether the model accuracy meets MO's requirement; if it does, they upload a transaction;
S502, only when all N participants meet the requirement and have uploaded the transaction does the system administrator SM download the last round's aggregate ciphertext C_r from transaction TX_Cr. SM decrypts the aggregate ciphertext with the security parameters {m, l, p_0, p_1, p_2, ..., p_m} to obtain the final model parameters; SM then encrypts the final model parameters with MO's public key and the session key and sends the ciphertext to MO;
S503, the model owner MO decrypts the ciphertext with its private key and checks whether the model parameters meet the requirement. If the parameters meet the accuracy requirement, the MO generates a transaction certifying that the FL task is complete, and each participant then calls the incentive contract to compute its reward from the size of its local data set and its distance dis: (1) the task-completion flag finished, the local data set size and the distance dis are packed into a work structure; (2) work is uploaded to conttri, and the contract computes the reward from size and dis:

reward = F(size, dis; u, v)

where u and v are the incentive coefficients (the concrete form of the incentive function F is given as an equation image in the original filing); (3) the reward result is packed into a transaction TX_award[Pi] and sent to the blockchain; (4) the blockchain automatically distributes the reward according to the recorded transaction TX_award[Pi]. If the model accuracy requirement is not met, all participants continue with the next round of model training.
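The concrete incentive formula is an equation image in the original and is not recoverable here; as a loudly hypothetical stand-in, the sketch below assumes a linear form reward = u·size + v·dis. The Work structure and all numbers are illustrative only:

```python
from dataclasses import dataclass

# Hypothetical incentive settlement for S503.  The real contract formula is
# an equation image in the patent; the linear form u*size + v*dis below is
# an ASSUMPTION for illustration, not the patented incentive function.

@dataclass
class Work:
    finished: bool   # task-completion flag
    size: int        # local data set size
    dis: float       # distance measure reported for the participant

def reward(work: Work, u: float, v: float) -> float:
    """Compute one participant's reward (hypothetical linear incentive)."""
    if not work.finished:
        return 0.0
    return u * work.size + v * work.dis

# (3)-(4): pack the reward result into a transaction object for the chain.
tx_award = {"participant": "P_i", "award": reward(Work(True, 100, 0.25), u=2.0, v=4.0)}
```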
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and should not be taken as limiting the scope of the present invention, which is intended to cover any modifications, equivalents, improvements, etc. within the spirit and scope of the present invention.

Claims (6)

1. A block chain-based federated learning privacy protection method is characterized by comprising the following steps:
S10, system global initialization: first, a system administrator SM constructs a blockchain; participants and the model owner register on the blockchain to obtain their own accounts and a pair of public and private keys; the model owner MO issues a federated learning task, and participants meeting the task requirements join the task; when the number of participants meets the task requirement, the model owner determines a participant List; the system administrator generates the security parameters SP of the federated learning task according to the participant List and sends SP to all members of the participant List through secret channels;
S20, model training and encryption: participants in the List download the initial model and security parameters, train the model with their local data sets to obtain local model parameters, then encode, blind and encrypt the local model parameters, upload the local model parameter ciphertext ct_i^{(r)} to the distributed file system IPFS, and generate a transaction for this round of model training containing its address in IPFS; when a participant in the List sends the transaction, a time-checking function is called to verify that the upload time is before this round's deadline; when all participants in the List finish uploading their ciphertexts before the deadline, the ciphertext aggregation stage begins;
S30, aggregation of model parameter ciphertexts: the Algorand consensus protocol uses a verifiable random function to select a subset of workers to form a committee and selects a leader from the committee; all committee members verify the validity of all participants' transactions, and after verification passes the leader aggregates: for the ciphertexts {ct_i^{(r)} | i = 1,2,···,N} it executes the addition operation to obtain this round's aggregate ciphertext C_r; after aggregation, the leader stores the aggregate ciphertext C_r in IPFS, generates a transaction containing its address and packs it into a new block block_r; committee members verify the correctness of the aggregate ciphertext C_r, and when more than 2/3 of the members approve block_r, the committee reaches consensus on block_r and broadcasts block_r to the blockchain;
S40, participants update the local model: participants in the List query the transaction in block_r to obtain the address of the aggregate ciphertext C_r in IPFS, download the ciphertext C_r, and decrypt, inverse-map and decode C_r to obtain the true aggregated model parameters, then complete the update of the local model parameters; next, the participants carry out the next round of model training and encryption, repeating steps S20, S30 and S40 until the model accuracy meets the model owner's requirement, and then entering step S50;
S50, the model owner obtains the final model parameters: after a certain round, each participant in the List tests whether the model accuracy meets the MO's requirement and uploads a completion transaction once the MO's training requirement is met; when all participants have uploaded completion transactions, SM decrypts the last round's aggregate ciphertext and sends the result to the MO through a secret channel; the MO obtains the final model parameters and tests the model accuracy; once the requirement is met, the task ends and the participants in the List call the incentive contract to compute and receive their corresponding rewards; otherwise, all participants in the List continue with the next round of model training until the given termination condition is met;
the block chain-based federated learning privacy protection method comprises five entities, namely a block chain, a system administrator, a model owner, a participant and an IPFS; IPFS is a peer-to-peer distributed file storage system that enables distributed computing devices to connect to the same file system and locate file locations using hash values.
2. The block chain-based federated learning privacy protection method of claim 1, wherein the step S10 comprises the following steps:
S101, the system administrator SM constructs a blockchain and determines that the consensus protocol used is Algorand; participants and the model owner MO can register on the blockchain, each owning an account, a pair of public and private keys {pk, sk}, a wallet address wa, a unique identity id and assets; transactions are generated using the wallet addresses of the participants, the MO and the workers; all participants, the MO and the workers must lock part of their assets on the blockchain as a deposit; a block is created on the blockchain containing the transactions that record the deposit-ownership declarations of the participants, the MO and the workers; in addition, public and private key pairs create secret channels between senders and receivers;
S102, the model owner issues a federated learning (FL) task by publishing an asset declaration transaction. The task includes: initial model parameters W_0, model number mid, learning rate η, training time t of each round, model accuracy requirement θ, and required number of participants N. Suppose a model owner MO_j issues an asset declaration transaction: TX_MOj = {mid_j, t_j, H(W_0^{(j)}), σ(sk_j, H(W_0^{(j)})), N_j, η_j, θ_j, "Keywords"}, where H(W_0^{(j)}) is the hash address of the initial model parameters stored in IPFS, sk_j is MO_j's private key, σ(sk_j, H(W_0^{(j)})) is a signature used to prove that MO_j does have the model, and "Keywords" is the model description of this FL task; for ease of notation and description, the subscript j of {mid_j, t_j, N_j, η_j, θ_j} is dropped in the following, which refers to MO_j only;
S103, participants voluntarily join the FL task by publishing data asset declaration transactions. Suppose a participant P_i publishes a data asset declaration transaction to join MO_j's FL task: TX_Di = {sid, H(D_i), σ(sk_i, H(D_i)), H(TX_MOj), "Keywords"}, where sid is the number of participant P_i's data set D_i, sk_i is participant P_i's private key, H(D_i) is the hash value of data set D_i, σ(sk_i, H(D_i)) is a signature used to prove that participant P_i does have data set D_i, and H(TX_MOj) is the hash value of transaction TX_MOj, indicating that participant P_i joins MO_j's task;
S104, the model owner MO_j searches the blockchain for the number of participants; when N or more participants have joined the FL task, MO_j queries all transactions and selects N participants, generates a List of the wallet addresses and public keys of the N participants, and then uploads an employment transaction to the blockchain: TX_employ = {List, H(TX_MOj), H(List), σ(sk_j, H(List)), "Keywords"};
S105, the system administrator SM obtains the information of the FL task from MO_j's transaction TX_MOj: initial model parameters W_0, model number mid, learning rate η, training time t of each round (assumed long enough for all participants to finish model training within it), model accuracy requirement θ, and required number of participants N; SM selects a complexity parameter m (m ≥ 3), which can be set according to the participants' needs, a positive integer l that controls the calculation precision, and the training start time t_0; SM then generates m + 1 pairwise coprime positive primes p_0, p_1, p_2, ···, p_m with gcd(p_i, p_j) = 1 (i ≠ j);
S106, the system administrator SM randomly generates an N × N positive integer seed matrix M and provides participant P_i (i = 1,2,···,N) with two seed vectors {M_i^{(j,0)} | j = 1,2,···,N} and {M_j^{(i,0)} | j = 1,2,···,N}, which are the i-th row and the i-th column of the matrix M respectively; SM then selects a pseudo-random number generator PRG(·) whose output values are taken modulo p_0;
S107, the system administrator SM reads the participant List from MO_j's transaction TX_employ to obtain the wallet addresses and public keys of the N participants {P_i.wa, P_i.pk | i = 1,2,···,N}; then SM sends the security parameters {m, l, p_0, p_1, p_2, ···, p_m}, the two seed vectors and PRG(·) to the N participants through their secret channels and records them on the blockchain. To the i-th participant SM sends:
(1) SM uses participant P_i's public key P_i.pk to encrypt {m, l, p_0, p_1, p_2, ···, p_m, {M_i^{(j,0)} | j = 1,2,···,N}, {M_j^{(i,0)} | j = 1,2,···,N}, PRG(·)}, obtaining the ciphertext SP_i;
(2) SM stores the ciphertext SP_i in IPFS and then generates a transaction: TX_Pi = {P_i.wa, H(TX_MOj), H(SP_i), σ(SM.sk, H(SP_i)), "Keywords"}, where H(SP_i) is the hash address of ciphertext SP_i in IPFS;
(3) SM sends TX_Pi to the blockchain.
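A minimal sketch (not from the patent) of the parameter generation in S105: distinct primes are automatically pairwise coprime, and p_1, ···, p_m are chosen comfortably above N × 10^l to prevent the overflow discussed in S205. The concrete sizes of N, l and m are assumptions:

```python
# Sketch of S105: generate m+1 pairwise coprime positive primes.
# Distinct primes are automatically pairwise coprime; for the CRT packing
# in S205, p_1..p_m must satisfy p_k >> N * 10**l to avoid overflow.
# Parameter choices (N, l, m) are illustrative, not from the patent.

def is_prime(n: int) -> bool:
    if n < 2:
        return False
    for d in range(2, int(n ** 0.5) + 1):
        if n % d == 0:
            return False
    return True

def next_prime(n: int) -> int:
    while not is_prime(n):
        n += 1
    return n

def gen_params(m: int, N: int, l: int):
    bound = N * 10 ** l
    primes = [next_prime(2 * bound)]       # p_1, comfortably above the bound
    for _ in range(m - 1):                 # p_2 .. p_m, all distinct
        primes.append(next_prime(primes[-1] + 1))
    p0 = next_prime(bound)                 # mask modulus p_0
    return p0, primes

p0, primes = gen_params(m=3, N=4, l=5)
```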
3. The block chain-based federated learning privacy protection method as claimed in claim 1 or 2, wherein the step S20 comprises the following steps:
S201, participant P_i (i = 1,2,···,N) downloads the initial model parameters W_0^{(j)} from IPFS according to MO_j's transaction TX_MOj and obtains the model accuracy requirement θ and the learning rate η; P_i then uses its wallet address P_i.wa to query transaction TX_Pi on the blockchain, obtains the hash address of the security parameter ciphertext SP_i in IPFS, downloads the ciphertext SP_i and decrypts it with its own private key sk_i;
S202, participant P_i trains the model with local data set D_i. Let f(x, W_r) be a neural network model, where x is the input and W_r are the model parameters of round r; the cross-entropy function is used as the loss function:

L_f(W_r) = -(1/n) Σ_{k=1}^{n} y_k · log f(x_k, W_r)

where <x_k, y_k> ∈ D_i, x_k is an input, y_k is its label and n is the size of data set D_i; P_i then computes the r-th round local model parameters W_r^{(j,i)} from MO_j's model parameters W_{r-1}^{(j)}:

First, P_i computes the gradient of the r-th round loss function:

∇_r = ∇L_f(W_{r-1}^{(j,i)}; D_i^*)

where ∇L_f(·) is the gradient of the loss function L_f(·) and D_i^* is a mini-batch of data set D_i; then P_i obtains the local model parameters by a gradient step:

W_r^{(j,i)} = W_{r-1}^{(j,i)} - η · ∇_r

where W_r^{(j,i)} denotes the r-th round local model parameters obtained after P_i trains with MO_j's model parameters W_{r-1}^{(j,i)};
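The local training step of S202 can be illustrated on a toy logistic model; the model, batch and learning rate below are invented for illustration, since the patent leaves the network f(x, W) unspecified:

```python
import math

# Sketch of S202: one local gradient step W_r = W_{r-1} - eta * grad on a
# toy logistic model f(x, W) = sigmoid(W * x) with cross-entropy loss.
# The model, data and learning rate are illustrative, not from the patent.

def sigmoid(z: float) -> float:
    return 1.0 / (1.0 + math.exp(-z))

def local_update(W_prev: float, batch, eta: float) -> float:
    # gradient of cross-entropy for the logistic model: (f(x) - y) * x
    g = sum((sigmoid(W_prev * x) - y) * x for x, y in batch) / len(batch)
    return W_prev - eta * g

batch = [(1.0, 1), (2.0, 1), (-1.0, 0)]   # a mini-batch D_i^*
W1 = local_update(W_prev=0.0, batch=batch, eta=0.1)
```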
S203, participant P_i encodes the model parameters, using

W̃_r^{(j,i)} = ⌊W_r^{(j,i)} × 10^l⌋

to convert the model parameters W_r^{(j,i)} from real numbers to integers, where l is a positive integer and adjusting the value of l controls the calculation precision; ⌊x × 10^l⌋ is the maximum integer not exceeding x × 10^l; after the calculation, the encoded parameter W̃_r^{(j,i)} is obtained;
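The encoding of S203, together with the matching decode used in S404, can be sketched as follows; the decode's division by N × 10^l assumes the aggregate is averaged over the N participants, which is my reading of the decoding step:

```python
import math

# Sketch of S203 (fixed-point encoding with precision parameter l) and the
# matching decode applied after aggregation in S404.  Values illustrative.

def encode(w: float, l: int) -> int:
    """Largest integer not exceeding w * 10**l."""
    return math.floor(w * 10 ** l)

def decode(total: int, N: int, l: int) -> float:
    """Recover the averaged real parameter from a sum of N encodings
    (assumes the aggregate is the average over N participants)."""
    return total / (N * 10 ** l)

ws = [0.123456, -0.5, 0.25]
total = sum(encode(w, l=5) for w in ws)
avg = decode(total, N=3, l=5)   # close to sum(ws)/3, up to 10**-5 precision
```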
S204, participant P_i generates two sequences {M_i^{(j,r)}} and {M_j^{(i,r)}} from the two seed vectors {M_i^{(j,0)} | j = 1,2,···,N} and {M_j^{(i,0)} | j = 1,2,···,N} using PRG(·); M_i^{(j,r)} and M_j^{(i,r)} are generated by PRG(M_i^{(j,r-1)}) and PRG(M_j^{(i,r-1)}); then P_i uses the two sequences {M_i^{(j,r)}} and {M_j^{(i,r)}} to blind the encoded parameter W̃_r^{(j,i)}:

W̄_r^{(j,i)} = W̃_r^{(j,i)} + Σ_{j=1}^{N} M_i^{(j,r)} - Σ_{j=1}^{N} M_j^{(i,r)}

where W̄_r^{(j,i)} is the blinded model parameter;
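The pairwise blinding of S204 can be checked numerically: summed over all participants, every mask M_i^{(j,r)} appears once added (by participant i) and once subtracted (by participant j), so the masks cancel. Modeling PRG(·) with Python's seeded generator is my simplification:

```python
import random

# Sketch of S204: pairwise blinding with a shared seed matrix M.
# Participant i adds its row masks and subtracts its column masks; summed
# over all participants, each M[i][j] appears once with + and once with -,
# so the masks cancel.  PRG is modeled by a seeded Random here
# (illustrative; the patent only requires a shared PRG).

def prg(seed: int) -> int:
    return random.Random(seed).randrange(2 ** 31)

N = 4
M0 = [[random.randrange(2 ** 31) for _ in range(N)] for _ in range(N)]
M1 = [[prg(M0[i][j]) for j in range(N)] for i in range(N)]  # round-1 masks

encoded = [11, 22, 33, 44]   # stand-ins for the encoded parameters
blinded = [encoded[i] + sum(M1[i]) - sum(M1[j][i] for j in range(N))
           for i in range(N)]
assert sum(blinded) == sum(encoded)  # masks cancel in the aggregate
```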
S205, participant P_i encrypts and packages W̄_r^{(j,i)} using the Chinese Remainder Theorem (CRT). First, P_i randomly splits W̄_r^{(j,i)} into m parts {b_k^{(i,r)} | k = 1,2,···,m} satisfying

W̄_r^{(j,i)} = Σ_{k=1}^{m} b_k^{(i,r)}

Each blinded part b_k^{(i,r)} is mapped from the integer set into the finite field GF(p_k) = {0, 1, 2, ···, p_k - 1} (k = 1,2,···,m) by the mod operation; the specific mapping is:

g(b_k^{(i,r)}) = b_k^{(i,r)}, if b_k^{(i,r)} ≥ 0; b_k^{(i,r)} + p_k, if b_k^{(i,r)} < 0

Then the m parts {b_k^{(i,r)} | k = 1,2,···,m} and the primes {p_k | k = 1,2,···,m} are used to construct the following system of congruences:

ct_i^{(r)} ≡ g(b_1^{(i,r)}) (mod p_1), ct_i^{(r)} ≡ g(b_2^{(i,r)}) (mod p_2), ···, ct_i^{(r)} ≡ g(b_m^{(i,r)}) (mod p_m)

By the CRT, a unique solution modulo S is obtained:

ct_i^{(r)} = Σ_{k=1}^{m} g(b_k^{(i,r)}) S_k T_k (mod S)

where S = Π_{k=1}^{m} p_k, S_k = S/p_k and T_k ≡ S_k^{-1} (mod p_k), i.e. T_k is the inverse of S_k modulo p_k. Since g(b_k^{(i,r)}) S_k T_k ≡ g(b_k^{(i,r)}) × 1 ≡ g(b_k^{(i,r)}) (mod p_k) and, for j ≠ k, g(b_j^{(i,r)}) S_j T_j ≡ 0 (mod p_k), the ciphertext ct_i^{(r)} satisfies:

ct_i^{(r)} mod p_k = g(b_k^{(i,r)}) mod p_k, k = 1,2,···,m

so each b_k^{(i,r)} can be recovered by the CRT. To prevent overflow errors during the calculation, the primes {p_k | k = 1,2,···,m} must be large enough to satisfy p_k >> N × 10^l, so that b_k^{(i,r)} ∈ [-(p_k - 1)/2N, (p_k - 1)/2N]; for convenience of description, ct_i^{(r)} denotes the ciphertext CRT[b_1^{(i,r)}, b_2^{(i,r)}, ···, b_m^{(i,r)}];
S206, participant P_i stores the model parameter ciphertext ct_i^{(r)} in IPFS and packs the hash address H(ct_i^{(r)}) into a transaction sent to the blockchain: TX_r,i = {r, H(ct_i^{(r)}), σ(sk_i, H(ct_i^{(r)})), H(TX_MOj), H(TX_Pi), "Keywords"}; in addition, when participant P_i sends transaction TX_r,i, the CheckTime function of the time contract is called to check whether it is uploaded before the cut-off time t_r; if some participants fail to finish uploading before the deadline, a penalty mechanism is executed: part of their deposit is confiscated and awarded to the participants who executed honestly, and the overdue participants then re-execute the ciphertext upload step; after the N participants have uploaded their parameter ciphertexts, the parameter aggregation stage begins.
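The CRT packing of S205 can be sketched directly from the congruence system; the primes and share ranges below are small illustrative values, far below the p_k >> N × 10^l required in practice:

```python
from functools import reduce
import random

# Sketch of S205: CRT packing of a blinded parameter.  The blinded value is
# split into m random shares, each share is mapped into GF(p_k) (negative
# values wrap to b + p_k), and the CRT solution modulo S = p_1*...*p_m is
# the ciphertext.  Primes and share ranges are small, for illustration.

def crt_encrypt(blinded, primes):
    m = len(primes)
    shares = [random.randint(-100, 100) for _ in range(m - 1)]
    shares.append(blinded - sum(shares))        # shares sum to blinded
    S = reduce(lambda a, b: a * b, primes)
    ct = 0
    for b_k, p_k in zip(shares, primes):
        g = b_k % p_k                           # map into GF(p_k)
        S_k = S // p_k
        T_k = pow(S_k, -1, p_k)                 # inverse of S_k mod p_k
        ct = (ct + g * S_k * T_k) % S
    return ct, shares, S

primes = [10007, 10009, 10037]
ct, shares, S = crt_encrypt(1234, primes)
for b_k, p_k in zip(shares, primes):
    assert ct % p_k == b_k % p_k                # residues recover the shares
```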
4. The block chain-based federated learning privacy protection method of any of claims 1-3, wherein the step S30 comprises the following steps:
S301, after participant P_i sends transaction TX_r,i to the blockchain, workers check the digital signature of the transaction, confirm that it comes from a legitimate participant and put it into the designated transaction pool; the blockchain consensus protocol randomly selects a subset of all workers through a verifiable random function (VRF) to form a committee, and then selects one member of the committee to be the leader;
S302, the leader executes the ciphertext addition operation:

C_r = Σ_{i=1}^{N} ct_i^{(r)} (mod S)

Since the CRT satisfies the additive homomorphic property:

C_r mod p_k = (Σ_{i=1}^{N} g(b_k^{(i,r)})) mod p_k, k = 1,2,···,m

where C_r is the aggregate ciphertext of round r and S = Π_{k=1}^{m} p_k;
S303, after the calculation is finished, the leader stores the aggregate ciphertext C_r in IPFS and generates a transaction TX_Cr = {mid, r, H(C_r), σ(sk, H(C_r)), H(TX_MOj), "Keywords"}, where sk is the leader's private key and H(C_r) is the hash address of the aggregate ciphertext in IPFS; finally, all transactions of this round are packed into a new block block_r = {TX_Cr, TX_r,1, TX_r,2, ···, TX_r,N};
S304, committee members verify block_r and vote on it; a member that approves block_r generates a transaction: TX_vote = {H(block_r), σ(sk, H(block_r)), H(TX_MOj), "Keywords"};
S305, if more than 2/3 of the committee members approve block_r, the block is admitted, the leader receives a reward, and all committee members broadcast the block; otherwise, the penalty mechanism confiscates the leader's deposit and awards it to the other committee members, a new leader is selected from the committee, and steps S302, S303, S304 and S305 are re-executed.
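The additive homomorphism used by the leader in S302 is easy to verify on toy values: adding two CRT ciphertexts modulo S adds the packed shares modulo each p_k. Primes and shares here are illustrative:

```python
from functools import reduce

# Sketch of S302: additive homomorphism of the CRT packing.
primes = [10007, 10009, 10037]
S = reduce(lambda a, b: a * b, primes)

def pack(shares):
    """CRT-pack a list of shares, one per prime (maps negatives via mod)."""
    ct = 0
    for b_k, p_k in zip(shares, primes):
        S_k = S // p_k
        ct = (ct + (b_k % p_k) * S_k * pow(S_k, -1, p_k)) % S
    return ct

ct1 = pack([3, -7, 20])
ct2 = pack([10, 5, -4])
C = (ct1 + ct2) % S                         # the leader's aggregation
assert [C % p for p in primes] == [(3 + 10) % 10007,
                                   (-7 + 5) % 10009,
                                   (20 - 4) % 10037]
```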
5. The block chain-based federated learning privacy protection method of any of claims 1-4, wherein the step S40 comprises the following steps:
S401, participant P_i reads the hash address H(C_r) of the aggregate ciphertext in IPFS from transaction TX_Cr and downloads the ciphertext C_r;
S402, participant P_i decrypts the aggregate ciphertext C_r by reducing it modulo each prime {p_k | k = 1,2,···,m}, obtaining m values:

B_k^{(r)} = C_r mod p_k

where k = 1,2,···,m;
S403, participant P_i uses the inverse function g^{-1}(·) to convert {B_k^{(r)} | k = 1,2,···,m} from the finite field back to the integer field (GF(p_k) → [-(p_k - 1)/2, (p_k - 1)/2], k = 1,2,···,m), where g^{-1}(B_k^{(r)}) is the inverse of the mapping g(b_k^{(i,r)}); let

b_k^{(r)} = g^{-1}(B_k^{(r)}) = B_k^{(r)}, if 0 ≤ B_k^{(r)} ≤ (p_k - 1)/2; B_k^{(r)} - p_k, if (p_k - 1)/2 < B_k^{(r)} < p_k

Because the pairwise blinding terms cancel in the aggregation, b_k^{(r)} = Σ_{i=1}^{N} b_k^{(i,r)};
S404, participant P_i sums the values {b_k^{(r)} | k = 1,2,···,m} to obtain

W̃_r^{(j)} = Σ_{k=1}^{m} b_k^{(r)}

and then decodes W̃_r^{(j)} to obtain the true aggregation parameter W_r^{(j)} of round r:

W_r^{(j)} = W̃_r^{(j)} / (N × 10^l)

Next, participant P_i replaces the local model parameters W_r^{(j,i)} with W_r^{(j)}, completing the update;
S405, participant P_i continues with the next round of model training, executing step S202 and the subsequent steps.
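Putting steps S202–S405 together for N participants on a single scalar parameter (a sketch under assumed sizes, with averaging as the decode, not a faithful implementation of the patent):

```python
from functools import reduce
import random

# End-to-end sketch of S203-S405 for N participants on one scalar
# parameter: fixed-point encode, pairwise blind, CRT-encrypt, aggregate,
# then decrypt, inverse-map and decode the averaged parameter.
# All concrete sizes (N, l, m, primes, mask range) are illustrative.

N, l, m = 3, 4, 3
primes = [2000003, 2000029, 2000039]          # each >> N * 10**l
S = reduce(lambda a, b: a * b, primes)
rng = random.Random(7)
M = [[rng.randrange(1000) for _ in range(N)] for _ in range(N)]  # masks

def encrypt(w: float, i: int) -> int:
    enc = int(w * 10 ** l)                    # S203 encode (exact here)
    blinded = enc + sum(M[i]) - sum(M[j][i] for j in range(N))  # S204
    shares = [rng.randrange(-50, 50) for _ in range(m - 1)]
    shares.append(blinded - sum(shares))      # S205 random split
    ct = 0
    for b_k, p_k in zip(shares, primes):
        S_k = S // p_k
        ct = (ct + (b_k % p_k) * S_k * pow(S_k, -1, p_k)) % S
    return ct

def g_inv(B: int, p: int) -> int:             # S403 inverse mapping
    return B if B <= (p - 1) // 2 else B - p

ws = [0.5, -0.25, 0.75]
C = sum(encrypt(w, i) for i, w in enumerate(ws)) % S   # S302 aggregate
total = sum(g_inv(C % p_k, p_k) for p_k in primes)     # S402-S404
W = total / (N * 10 ** l)                              # decode (average)
assert abs(W - sum(ws) / N) < 1e-9
```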
6. The block chain-based federated learning privacy protection method of any of claims 1-5, wherein the step S50 comprises the following steps:
S501, after r rounds of model training, the N participants use their own test sets to check whether the model accuracy meets the MO's requirement, and upload a completion transaction if it does;
S502, only when all N participants meet the requirement and have uploaded the completion transaction does the system administrator SM download the final round's aggregate ciphertext C_r from transaction TX_Cr; SM decrypts the aggregate ciphertext with the security parameters {m, l, p_0, p_1, p_2, ···, p_m} to obtain the final model parameters, then encrypts the final model parameters with the MO's public key and session key and sends the ciphertext to the MO;
S503, the model owner MO decrypts the ciphertext with its private key and checks whether the final model parameters meet the requirement; if the parameters meet the accuracy requirement, the MO generates a transaction certifying that the federated learning task is complete, and the participants then call the incentive contract to compute and receive their rewards; if the model accuracy requirement is not met, all participants continue with the next round of model training.
CN202210599679.5A 2022-05-30 2022-05-30 Block chain-based federated learning privacy protection method Withdrawn CN115037477A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210599679.5A CN115037477A (en) 2022-05-30 2022-05-30 Block chain-based federated learning privacy protection method


Publications (1)

Publication Number Publication Date
CN115037477A true CN115037477A (en) 2022-09-09

Family

ID=83120745




Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115329385A (en) * 2022-10-11 2022-11-11 北京理工大学 Model training method and device based on block chain cross-chain privacy protection
CN115329385B (en) * 2022-10-11 2022-12-16 北京理工大学 Model training method and device based on block chain cross-chain privacy protection
CN115629783A (en) * 2022-10-27 2023-01-20 北方工业大学 Model updating method for keeping privacy and resisting abnormal data in mobile crowd sensing
CN115629783B (en) * 2022-10-27 2023-05-26 北方工业大学 Model updating method for protecting privacy and resisting abnormal data in mobile crowd sensing
CN115622800A (en) * 2022-11-30 2023-01-17 山东区块链研究院 Federal learning homomorphic encryption system and method based on Chinese remainder representation
CN115766295A (en) * 2023-01-05 2023-03-07 成都墨甲信息科技有限公司 Industrial internet data secure transmission method, device, equipment and medium
CN115795518A (en) * 2023-02-03 2023-03-14 西华大学 Block chain-based federal learning privacy protection method
CN115795518B (en) * 2023-02-03 2023-04-18 西华大学 Block chain-based federal learning privacy protection method
CN116049680A (en) * 2023-03-31 2023-05-02 天聚地合(苏州)科技股份有限公司 Model training method and system based on block chain
CN116049680B (en) * 2023-03-31 2023-08-04 天聚地合(苏州)科技股份有限公司 Model training method and system based on block chain
CN116957110A (en) * 2023-09-20 2023-10-27 中国科学技术大学 Trusted federation learning method and system based on federation chain
CN116957110B (en) * 2023-09-20 2024-01-05 中国科学技术大学 Trusted federation learning method and system based on federation chain


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication

Application publication date: 20220909