CN111930698B - Data security sharing method based on hashgraph and federated learning - Google Patents


Info

Publication number
CN111930698B
CN111930698B CN202010625680.1A CN202010625680A
Authority
CN
China
Prior art keywords
node
model
witness
event
round
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010625680.1A
Other languages
Chinese (zh)
Other versions
CN111930698A (en)
Inventor
张秀贤 (Zhang Xiuxian)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing Xiaozhuang University
Original Assignee
Nanjing Xiaozhuang University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing Xiaozhuang University filed Critical Nanjing Xiaozhuang University
Priority to CN202010625680.1A priority Critical patent/CN111930698B/en
Publication of CN111930698A publication Critical patent/CN111930698A/en
Application granted granted Critical
Publication of CN111930698B publication Critical patent/CN111930698B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/10 File systems; File servers
    • G06F16/17 Details of further file system functions
    • G06F16/176 Support for shared access to files; File sharing support
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20 Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/27 Replication, distribution or synchronisation of data between databases or within a distributed database system; Distributed database system architectures therefor
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00 Machine learning
    • G06N20/20 Ensemble learning
    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H50/00 ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
    • G16H50/80 ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics, e.g. flu

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • Public Health (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Pathology (AREA)
  • Epidemiology (AREA)
  • General Health & Medical Sciences (AREA)
  • Primary Health Care (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

A data security sharing method based on hashgraph and federated learning adds detection of the federated-learning local model to the hashgraph consensus algorithm of blockchain 3.0 technology in order to prevent dishonest nodes from providing an erroneous model, and realizes the federated-learning data model by weighted aggregation of the local models. 1) Detection of the federated-learning local model is added to the blockchain 3.0 hashgraph consensus algorithm to prevent dishonest nodes from providing an erroneous model. 2) The hashgraph dishonest-node detection flow mainly comprises: generating events, communicating with the Gossip protocol, and reaching consensus with the virtual-voting algorithm. Based on this data security sharing model combining hashgraph and federated learning, dishonest nodes are successfully detected during the federated-learning process.

Description

Data security sharing method based on hashgraph and federated learning
Technical Field
The invention provides a data security sharing method based on hashgraph and federated learning, which is suitable for mobile edge computing networks and belongs to the technical field of information communication.
Background
When the private data of many users is involved, the data cannot simply be uploaded to the cloud for model training. Federated learning addresses this by obtaining a central model on a server through aggregation of models trained locally on the clients. In federated learning, each distributed local device computes a local model from its local data samples and sends it to a central server, which trains the shared model by aggregating the local models from the different devices. Because the raw data always remains on the local device during training, user privacy is effectively protected. Federated learning therefore achieves both data sharing and privacy protection, but it also has limitations. First, there is no guarantee that a learned model is not leaked while being transmitted over the network. Second, dishonest users can harm the learning model by providing low-quality local models. In addition, users lack incentives to contribute their own computing resources and data to federated learning. Finally, there is the problem of network overload: a large number of models are transmitted simultaneously during federated learning, so the network becomes overloaded under bandwidth limits. In recent years, many researchers have combined federated learning with blockchain to address these problems. In one line of work, the blockchain stores the retrieval data and access rights so that malicious users cannot tamper with the model, while a differential-privacy algorithm protects personal data; however, differential privacy causes a sharp drop in data availability because of the random-noise interference. The prior art has also proposed new solutions combining federated learning with blockchain channels.
Federated-learning requests are served within a single channel so that individual user privacy is guaranteed across different channels, but privacy protection between users within the same channel is not addressed. The prior art also couples blockchain with federated learning to guarantee the privacy of user data: the trained model parameters can be stored securely and immutably on the blockchain to prevent unauthorized access and malicious behavior. Y. J. Kim et al. propose blockchain-based federated learning that relies on the blockchain's native transaction attributes and immutable ledger to provide an incentive mechanism and to prevent malicious users from altering the model; they also propose a joint-learning model with fast and stable convergence to a target precision in order to reduce network overload. Although some blockchain-based FL studies exist, their results do not take dishonest model providers into account. A dishonest model provider strongly affects the accuracy and reliability of the learned model, so detecting whether a client participating in learning is a dishonest model provider has become a major problem that the development of the Internet of Things must solve.
The invention therefore provides a hashgraph-based model-provider detection method, which solves the problem that dishonest model providers adversely affect model generation during the federated-learning process.
Disclosure of Invention
The invention aims to provide a data security sharing method based on hashgraph and federated learning, which solves the problem that dishonest model providers adversely affect model generation during federated learning and improves the accuracy of federated-learning training.
The technical scheme is as follows: the invention provides a data security sharing method based on hashgraph and federated learning, which comprises the following steps:
(1) Detection of the federated-learning local model is added to the blockchain 3.0 hashgraph consensus algorithm to prevent dishonest nodes from providing an erroneous model; the method comprises the following steps:
the system can be divided into a blockchain platform and a communication network, and the blockchain platform adopts hashcraph in order to reduce consensus time and avoid network overload. In particular, the blockchain platform is used to record local model retrieval (original model parameters stored in the local device), availability of the local model, and all shared data events that can track the use of the data for further auditing. The communication network is responsible for data communication.
All users wishing to provide data-sharing services may apply to join the blockchain platform. A data-sharing requester sends a request to the platform, and the blockchain checks whether the request has been processed before. If it has, the request is forwarded to the node that cached the result, and the cached result is returned to the requester as the response. Otherwise, a new federated-learning request with a data class and an incentive mechanism is issued on the blockchain, and every node in the blockchain chooses whether to join the federated learning according to how well its own data matches the requested data and the incentive mechanism. All nodes participating in the federated learning are regarded as committee nodes responsible for driving consensus in the blockchain.
Each client trains a local model on its local data and propagates it through the blockchain network with the gossip algorithm. The accuracy of each model is voted on with the virtual-voting algorithm, and the voting results of all other users in the blockchain are collected: if more than 1/2 of the participants vote that the model is accurate, the model provider is considered reliable and the model is usable; otherwise the provider is considered unreliable and is removed from the blockchain. Finally, every reliable local model is weighted to obtain the federated-learning training model.
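The acceptance rule just described (a provider is kept only if more than 1/2 of the collected votes call its model accurate) can be sketched as follows. This is an illustrative sketch, not the patent's code; the function and variable names are hypothetical.

```python
# Illustrative sketch of the majority-vote acceptance rule described above.
# Names (accept_providers, votes) are hypothetical, not from the patent.

def accept_providers(votes):
    """votes maps a provider id to the list of boolean votes collected from
    the other participants; a provider is reliable iff strictly more than
    half of its collected votes are positive."""
    accepted = {}
    for provider, ballot in votes.items():
        positive = sum(1 for v in ballot if v)
        accepted[provider] = positive * 2 > len(ballot)  # > 1/2 of the votes
    return accepted

print(accept_providers({"A": [True, True, False], "B": [True, False, False]}))
# {'A': True, 'B': False}
```

Providers judged unreliable here would then be dropped from the blockchain and excluded from the weighted aggregation.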
(2) The hashgraph dishonest-node detection flow mainly comprises: generating events, communicating with the Gossip protocol, and reaching consensus with the virtual-voting algorithm. The flow is as follows:
the event of generating dishonest node detection described in (2-1) mainly comprises: the system comprises a time stamp, a digital signature, a parent hash of the node, parent hashes of other nodes and event content, wherein the event content comprises: the data type, local model index, the number of endorsements obtained, whether the local model is valid.
(2-2) The main flow of the gossip algorithm: after a local node generates an event, it randomly selects another node as the destination and transmits to it the data the local node knows but the selected node does not. When a node receives data containing new information, it first executes all the new transactions it has not yet executed, checks the availability of the local model and votes on it, and then repeats the same process with another randomly chosen node, until all nodes have received the event.
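The gossip exchange above amounts to repeated random pairwise synchronization. The following is a minimal illustration under assumed data structures (node ids and event-id sets); event creation, signatures, and voting are omitted.

```python
import random

# Minimal sketch (illustrative, not the patent's code) of one gossip step:
# a random sender forwards to a random peer the events the peer lacks.

def gossip_step(nodes, known, rng):
    """nodes: list of node ids; known: dict node id -> set of event ids.
    Performs one gossip exchange and returns what was transferred."""
    sender = rng.choice(nodes)
    receiver = rng.choice([n for n in nodes if n != sender])
    delta = known[sender] - known[receiver]  # events new to the receiver
    known[receiver] |= delta                 # receiver records them
    return sender, receiver, delta

rng = random.Random(0)
known = {"A": {"e1", "e2"}, "B": {"e1"}, "C": set()}
for _ in range(10):
    gossip_step(["A", "B", "C"], known, rng)
# after enough steps, every node tends to know every event
```

No events are ever lost or invented by a step: each node's set only grows, and the union over all nodes stays constant.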
(2-3) The main flow of the virtual-voting algorithm: determining the round, determining the famous witnesses, collecting model-reliability votes, and determining the consensus round number and consensus time:
(a) Round determination: the first event sent by a node is a witness event, and it marks the beginning of a round r for that node. Suppose node B, after receiving event X sent by node A, selects node C as the receiving node; node B then creates event Y (containing the data B knows but C does not) and sends Y to C. Before creating event Y, node B checks whether a new round should start: if event X can see most of the witnesses of round r, then event Y starts round r+1 and is a witness of round r+1; otherwise event Y remains in round r.
(b) Famous-witness determination and model-reliability vote collection: whether a witness of round r is famous is judged with the witnesses of round r+1, and the witnesses of round r+2 then count the fame votes and the votes on whether the local model contained in the round-r witness event is reliable. If the witness of node B in round r+1 can see the witness of node A in round r, the round-r+1 witness of B casts a fame vote for the round-r witness of A. The round-r+2 witness of node C collects, from the round-r+1 witnesses it can strongly see (node B or others), the votes declaring A's witness famous; when the vote count exceeds two-thirds of the number of nodes, A's witness is famous. The local model is valid when the number of reliability votes it collects exceeds 1/2 of the number of nodes.
(c) Determination of the consensus round number and consensus time: once every witness of round r has been determined famous or not, the receive round of every event that can be seen by all famous witnesses of round r is r. For an event x, each node whose famous witness sees x has an earliest event that sees x. For example, if x is visible to nodes A, B, and C: the earliest event of node A seeing x is x itself, the earliest event of node B is the event that first transmitted x to B, and similarly for node C. The median of the timestamps of these three events is the consensus timestamp of event x. The consensus timestamp, the consensus round number, the number of endorsements obtained by the local model, and whether the local model is valid are then stored in the blockchain.
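The consensus-timestamp rule above reduces to a median computation once the per-node earliest-seeing timestamps have been collected. A minimal sketch, assuming that collection has already happened (the function name is hypothetical):

```python
import statistics

# Illustrative sketch: the consensus timestamp of an event x is the median
# of the timestamps of, for each node whose famous witness sees x, the
# earliest event by that node which sees x.

def consensus_timestamp(earliest_seeing_timestamps):
    """earliest_seeing_timestamps: one timestamp per relevant node, taken
    from the earliest of that node's events that sees x."""
    return statistics.median(earliest_seeing_timestamps)

print(consensus_timestamp([3.0, 9.0, 5.0]))  # 5.0
```

Using the median rather than the mean keeps a single node with a skewed clock (or a dishonest timestamp) from shifting the agreed time.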
(3) All local models are multiplied by weighting coefficients to obtain the federated-learning model. The weighting coefficients are improved here; the specific improvement is as follows: as shown in fig. 4, a local model w_i(t) is computed with a deep-learning algorithm; w_i(t) is encrypted with a homomorphic-encryption algorithm to obtain w'_i(t); w'_i(t) is sent to the hashgraph for detection; if w'_i(t) is detected as coming from a dishonest provider, then m_i = 0, otherwise m_i = 1. The weight coefficient of w'_i(t) is:
where k_i is the local data volume of the i-th model provider, k is the total data volume of all model providers, N_i is the number of votes awarded to w'_i(t), and I is the number of model providers. The federated-learning model is then:
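The formula images themselves are not reproduced in this text. From the stated definitions of k_i, k, N_i, I, and m_i, and the statement in claim 1 that the coefficient combines the data-volume ratio with the endorsement ratio, one plausible reconstruction (an assumption, not the patent's verbatim formula) is:

```latex
% Assumed form: the honesty flag m_i gates a product of the two ratios,
% normalized so the weights of the accepted models sum to one.
\rho_i(t) = \frac{m_i \,\frac{k_i}{k}\,\frac{N_i}{I}}
                 {\sum_{j=1}^{I} m_j \,\frac{k_j}{k}\,\frac{N_j}{I}},
\qquad
w(t) = \sum_{i=1}^{I} \rho_i(t)\, w'_i(t)
```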
the invention has the advantages that: the invention realizes successful detection of dishonest nodes in the federal learning process. By adding the detection of the federal learning local model into the hashsraph consensus algorithm of the blockchain 3.0 technology, the error model provided by dishonest nodes can be successfully detected, and the model convergence speed is improved. The invention provides a method for realizing a federal learning data model by carrying out weighted aggregation on a local model, and improves the accuracy of model training.
(1) The invention provides a data security sharing model based on hashgraph and federated learning, which protects the privacy of local users' data models.
(2) The hashgraph-based detection method for federated-learning local-model providers solves the problem that dishonest model providers adversely affect model generation during federated learning, improves the accuracy of federated learning, reduces learning time, and effectively prevents network overload.
(3) The weighting coefficients of the models in federated learning are improved, which improves the learning precision.
Drawings
Fig. 1 is the data security sharing model based on hashgraph and federated learning.
Fig. 2 shows the event structure.
Fig. 3 shows the gossip protocol.
Fig. 4 is the federated-learning model diagram.
Detailed Description
The invention provides a data security sharing model based on hashgraph and federated learning, which successfully detects dishonest nodes during the federated-learning process. By adding detection of the federated-learning local model to the blockchain 3.0 hashgraph consensus algorithm, erroneous models provided by dishonest nodes are successfully detected and the model convergence speed is improved. A method of realizing the federated-learning data model by weighted aggregation of the local models is proposed; the weighting coefficients mainly comprise the ratio of the local model's data volume to the total data volume and the ratio of the number of endorsements obtained by the local model to the total number of participating model clients, which improves the accuracy of model training.
(1) Detection of the federated-learning local model is added to the blockchain 3.0 hashgraph consensus algorithm to prevent dishonest nodes from providing an erroneous model; the method comprises the following steps:
the system can be divided into a blockchain platform and a communication network, as shown in fig. 1. To reduce consensus time and avoid network overload, blockchain platforms employ hashcraphs. In particular, the blockchain platform is used to record local model retrieval (original model parameters stored in the local device), availability of the local model, and all shared data events that can track the use of the data for further auditing. The communication network is responsible for data communication.
All users wishing to provide data-sharing services may apply to join the blockchain platform. A data-sharing requester sends a request to the platform, and the blockchain checks whether the request has been processed before. If it has, the request is forwarded to the node that cached the result, and the cached result is returned to the requester as the response. Otherwise, a new federated-learning request with a data class and an incentive mechanism is issued on the blockchain, and every node in the blockchain chooses whether to join the federated learning according to how well its own data matches the requested data and the incentive mechanism. All nodes participating in the federated learning are regarded as committee nodes responsible for driving consensus in the blockchain.
Each client trains a local model on its local data and propagates it through the blockchain network with the gossip algorithm. The accuracy of each model is voted on with the virtual-voting algorithm, and the voting results of all other users in the blockchain are collected: if more than 1/2 of the participants vote that the model is accurate, the model provider is considered reliable and the model is usable; otherwise the provider is considered unreliable and is removed from the blockchain. Finally, every reliable local model is weighted to obtain the federated-learning training model.
(2) The hashgraph dishonest-node detection flow mainly comprises: generating events, communicating with the Gossip protocol, and reaching consensus with the virtual-voting algorithm. The flow is as follows:
the event of generating dishonest node detection of (2-1), as shown in fig. 2, mainly comprises: the system comprises a time stamp, a digital signature, a parent hash of the node, parent hashes of other nodes and event content, wherein the event content comprises: the data type, local model index, the number of endorsements obtained, whether the local model is valid.
(2-2) The main flow of the gossip algorithm, as shown in fig. 3: after a local node generates an event, it randomly selects another node as the destination and transmits to it the data the local node knows but the selected node does not. When a node receives data containing new information, it first executes all the new transactions it has not yet executed, checks the availability of the local model and votes on it, and then repeats the same process with another randomly chosen node, until all nodes have received the event.
(2-3) The main flow of the virtual-voting algorithm: determining the round, determining the famous witnesses, collecting model-reliability votes, and determining the consensus round number and consensus time:
(a) Round determination: the first event sent by a node is a witness event, and it marks the beginning of a round r for that node. Suppose node B, after receiving event X sent by node A, selects node C as the receiving node; node B then creates event Y (containing the data B knows but C does not) and sends Y to C. Before creating event Y, node B checks whether a new round should start: if event X can see most of the witnesses of round r, then event Y starts round r+1 and is a witness of round r+1; otherwise event Y remains in round r.
(b) Famous-witness determination and model-reliability vote collection: whether a witness of round r is famous is judged with the witnesses of round r+1, and the witnesses of round r+2 then count the fame votes and the votes on whether the local model contained in the round-r witness event is reliable. If the witness of node B in round r+1 can see the witness of node A in round r, the round-r+1 witness of B casts a fame vote for the round-r witness of A. The round-r+2 witness of node C collects, from the round-r+1 witnesses it can strongly see (node B or others), the votes declaring A's witness famous; when the vote count exceeds two-thirds of the number of nodes, A's witness is famous. The local model is valid when the number of reliability votes it collects exceeds 1/2 of the number of nodes.
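The two vote thresholds above (fame requires strictly more than 2/3 of the nodes, model validity strictly more than 1/2) reduce to integer comparisons. A minimal sketch with hypothetical names; integer arithmetic avoids floating-point edge cases at the boundaries.

```python
# Illustrative sketch of the two thresholds described above (names are
# hypothetical): fame needs > 2/3 of nodes, validity needs > 1/2.

def is_famous(fame_votes, n_nodes):
    """True iff strictly more than two-thirds of the nodes vouch."""
    return 3 * fame_votes > 2 * n_nodes

def model_is_valid(reliability_votes, n_nodes):
    """True iff strictly more than half of the nodes vote reliable."""
    return 2 * reliability_votes > n_nodes

print(is_famous(7, 10), model_is_valid(6, 10))  # True True
```

With 10 nodes, 7 fame votes suffice (21 > 20) but 6 do not (18 < 20), matching the strict "exceeds two-thirds" wording.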
(c) Determination of the consensus round number and consensus time: once every witness of round r has been determined famous or not, the receive round of every event that can be seen by all famous witnesses of round r is r. For an event x, each node whose famous witness sees x has an earliest event that sees x. For example, if x is visible to nodes A, B, and C: the earliest event of node A seeing x is x itself, the earliest event of node B is the event that first transmitted x to B, and similarly for node C. The median of the timestamps of these three events is the consensus timestamp of event x. The consensus timestamp, the consensus round number, the number of endorsements obtained by the local model, and whether the local model is valid are then stored in the blockchain.
(3) All local models are multiplied by weighting coefficients to obtain the federated-learning model. The weighting coefficients are improved by the following specific method: as shown in fig. 4, a local model w_i(t) is computed with a deep-learning algorithm; w_i(t) is encrypted with a homomorphic-encryption algorithm to obtain w'_i(t); w'_i(t) is sent to the hashgraph for detection; if w'_i(t) is detected as coming from a dishonest provider, then m_i = 0, otherwise m_i = 1. The weight coefficient of w'_i(t) is:
where k_i is the local data volume of the i-th model provider, k is the total data volume of all model providers, N_i is the number of votes awarded to w'_i(t), and I is the number of model providers. The federated-learning model is then:
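Putting the pieces together, the weighted aggregation can be sketched as below. Since the formula images are not reproduced in the text, this assumes (as one plausible reading of the definitions above and of claim 1) that each accepted model's weight multiplies the data-volume ratio k_i/k by the endorsement ratio N_i/I, gated by the honesty flag m_i and normalized so the weights sum to one; all names are hypothetical.

```python
# Illustrative sketch of the weighted aggregation described above, under the
# assumed weight form m_i * (k_i/k) * (N_i/I), normalized over providers.

def aggregate(models, k_local, votes, m, n_providers):
    """models: list of parameter vectors (lists of floats); k_local: data
    volume k_i per provider; votes: endorsement counts N_i; m: 0/1 honesty
    flags; n_providers: I. Returns the aggregated parameter vector."""
    k_total = sum(k_local)
    raw = [mi * (ki / k_total) * (Ni / n_providers)
           for mi, ki, Ni in zip(m, k_local, votes)]
    z = sum(raw) or 1.0                       # normalize accepted weights
    weights = [r / z for r in raw]
    dim = len(models[0])
    return [sum(w * mdl[j] for w, mdl in zip(weights, models))
            for j in range(dim)]

global_model = aggregate(
    models=[[1.0, 2.0], [3.0, 4.0]],
    k_local=[10, 30], votes=[2, 2], m=[1, 1], n_providers=2)
print(global_model)  # [2.5, 3.5]
```

A provider flagged dishonest (m_i = 0) contributes nothing: its raw weight is zero, so the remaining honest models are renormalized among themselves.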

Claims (2)

1. The data security sharing method based on hashgraph and federated learning is characterized in that detection of the federated-learning local model is added to the hashgraph consensus algorithm of blockchain 3.0 technology so as to prevent dishonest nodes from providing an erroneous model, and the federated-learning data model is realized by weighted aggregation of the local models; the method specifically comprises the following steps:
(1) Detection of the federated-learning local model is added to the blockchain 3.0 hashgraph consensus algorithm to prevent dishonest nodes from providing an erroneous model; the method comprises the following steps:
the client trains a local model locally by using local data, propagates the local model in a blockchain network by adopting an eight diagrams algorithm, votes the accuracy of the model by using a virtual voting algorithm, collects voting results of all other users in the blockchain, considers a model provider to be reliable if more than 1/2 participants agree on the vote, and considers the model to be usable, otherwise considers the model provider to be unreliable and removes the model provider from the blockchain; finally, weighting each reliable local model to obtain a federal learning training model;
(2) The hashgraph dishonest-node detection flow comprises the following steps: generating the dishonest-node detection event, communicating with the Gossip protocol, and reaching consensus with the virtual-voting algorithm; the flow is as follows:
the event of (2-1) generating a dishonest node detection: each event includes: the node comprises a timestamp, a parent hash of the node, parent hashes of other nodes and event content, wherein the event content comprises: data type, local model index, number of endorsements obtained, whether the local model is valid;
(2-2) the flow of the gossip algorithm: after a local node generates an event, it randomly selects another node as the destination node and transmits to it the data known to the local node but not to the selected node; when a node receives data containing new information, it first executes all the new transactions it has not yet executed, checks the availability of the local model and votes on it, and then repeats the same process with another randomly selected node until all nodes have received the event;
(2-3) the flow of the virtual-voting algorithm is divided into: determining the round, determining the famous witnesses, collecting model-reliability votes, and determining the consensus round number and consensus time:
(a) round determination: the first event sent by a node is a witness event, and it marks the beginning of a round r for that node; assuming that after node B receives event X sent by node A, node B selects node C as the receiving node, node B creates event Y, where event Y contains the data node B knows but node C does not, and sends event Y to node C; before creating event Y, node B checks whether a new round should start: if event X can see most of the witnesses of round r, event Y starts round r+1 and is a witness of round r+1; otherwise event Y remains in round r;
(b) famous-witness determination and model-reliability vote collection: whether a witness of round r is famous is judged with the witnesses of round r+1, and the witnesses of round r+2 then count the fame votes and the votes on whether the local model contained in the round-r witness event is reliable; if the witness of node B in round r+1 can see the witness of node A in round r, the round-r+1 witness of B casts a fame vote for the round-r witness of A; the round-r+2 witness of node C collects the votes in which the round-r+1 witness of node B declares A's witness famous, and when the vote count exceeds two-thirds of the number of nodes, A's witness is famous; the local model is valid when the number of reliability votes it collects exceeds 1/2 of the number of nodes;
(c) determination of the consensus round number and consensus time: once every witness of round r has been determined famous or not, the receive round of every event that can be seen by all famous witnesses of round r is r; for an event x, each node whose famous witness sees x has an earliest event that sees x; for example, if x is visible to nodes A, B, and C, the earliest event of node A seeing x is x itself, the earliest event of node B is the event that first transmitted x to node B, and similarly for node C; the median of the timestamps of these three events is the consensus timestamp of event x; the consensus timestamp, the consensus round number, the number of endorsements obtained by the local model, and whether the local model is valid are stored in the blockchain;
(3) All local models are multiplied by weighting coefficients to obtain the federated-learning model, wherein each weighting coefficient consists of the following two parts: the ratio of the local model's data volume to the total data volume, and the ratio of the number of endorsements obtained by the local model to the total number of clients providing models.
2. The method of claim 1, wherein the local model w_i(t) is computed with a deep-learning algorithm; w_i(t) is encrypted with a homomorphic-encryption algorithm to obtain w'_i(t); w'_i(t) is sent to the hashgraph for detection; if w'_i(t) is detected as coming from a dishonest provider, then m_i = 0, otherwise m_i = 1; the weight coefficient of w'_i(t) is:
wherein k_i is the local data volume of the i-th model provider, k is the total data volume of all model providers, N_i is the number of votes awarded to w'_i(t), and I is the number of model providers; the federated-learning model is then:
CN202010625680.1A 2020-07-01 2020-07-01 Data security sharing method based on hashgraph and federated learning Active CN111930698B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010625680.1A CN111930698B (en) 2020-07-01 2020-07-01 Data security sharing method based on hash map and federal learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010625680.1A CN111930698B (en) 2020-07-01 2020-07-01 Data security sharing method based on hash map and federal learning

Publications (2)

Publication Number Publication Date
CN111930698A CN111930698A (en) 2020-11-13
CN111930698B true CN111930698B (en) 2024-03-15

Family

ID=73317444





Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant