CN114187006A - Blockchain supervision-based federated learning method - Google Patents

Blockchain supervision-based federated learning method

Info

Publication number
CN114187006A
Authority
CN
China
Prior art keywords
model
transaction
participant
aggregation server
model training
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111294450.2A
Other languages
Chinese (zh)
Inventor
李潇
李安
付春凤
张婷
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Advanced Institute of Information Technology AIIT of Peking University
Hangzhou Weiming Information Technology Co Ltd
Original Assignee
Advanced Institute of Information Technology AIIT of Peking University
Hangzhou Weiming Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Advanced Institute of Information Technology AIIT of Peking University, Hangzhou Weiming Information Technology Co Ltd filed Critical Advanced Institute of Information Technology AIIT of Peking University
Priority to CN202111294450.2A
Publication of CN114187006A
Legal status: Pending



Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q 20/00 Payment architectures, schemes or protocols
    • G06Q 20/38 Payment protocols; Details thereof
    • G06Q 20/40 Authorisation, e.g. identification of payer or payee, verification of customer or shop credentials; Review and approval of payers, e.g. check credit lines or negative lists
    • G06Q 20/401 Transaction verification
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/20 Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F 16/27 Replication, distribution or synchronisation of data between databases or within a distributed database system; Distributed database system architectures therefor
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 21/00 Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F 21/60 Protecting data
    • G06F 21/602 Providing cryptographic facilities or services
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Accounting & Taxation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computer Security & Cryptography (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Software Systems (AREA)
  • Molecular Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Biophysics (AREA)
  • Evolutionary Biology (AREA)
  • Mathematical Physics (AREA)
  • Biomedical Technology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Computational Linguistics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Finance (AREA)
  • Strategic Management (AREA)
  • General Business, Economics & Management (AREA)
  • Bioethics (AREA)
  • Computer Hardware Design (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The embodiment of the invention provides a federated learning method based on blockchain supervision. In the invention, the blockchain and federated learning have separate functions: the federated learning module performs model training, while the blockchain supervises the training process. The method proceeds as follows: a participant publishes data dictionary variables on the platform; a model demander retrieves the dictionary variables according to its own data requirements and then sends a data usage request to the participant; model training can begin once the data usage rights are obtained. After acquiring the data usage rights of every participant, the model demander can carry out federated learning. The invention is easy to deploy, placing relatively low requirements on participant nodes; communication overhead and waiting time during model training are comparatively small; and the training process is supervised through the blockchain while the operating parameters and training results stay confidential, visible only to the nodes participating in training, thereby balancing openness and privacy.

Description

Blockchain supervision-based federated learning method
Technical Field
The invention relates to the technical field of machine learning, and in particular to a federated learning method based on blockchain supervision.
Background
A blockchain is essentially a distributed database: it packs data into blocks in chronological order, links the blocks sequentially into a chained data structure, and uses cryptographic techniques to guarantee that the data can be neither tampered with nor forged. In the broad sense, blockchain also refers to the distributed ledger technologies implemented on top of this structure, including distributed consensus, privacy and security protection, peer-to-peer communication, network protocols, and smart contracts. Blockchain technology uses a cryptographically chained block structure to verify and store data, and a distributed node consensus algorithm to generate and update it. A consensus algorithm is a mathematical procedure by which the different nodes of a blockchain system establish trust and acquire rights and interests. By participant, blockchains fall into three types: the public chain, the consortium chain, and the private chain. A public chain is open to everyone; any node with access to the public network can write and read data, as in Bitcoin and Ethereum. A consortium chain has an admission mechanism, so only authorized users can access on-chain data. A private chain is open only to a single organization or entity.
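As a concrete illustration of the chained structure just described, the following minimal Python sketch (illustrative only, not part of the patent) links each block to the hash of its predecessor, so that tampering with any historical block invalidates every later link:

```python
import hashlib
import json
import time


def block_hash(block: dict) -> str:
    """Hash a block's contents deterministically."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode("utf-8")).hexdigest()


def append_block(chain: list, data: dict) -> None:
    """Append a new block linked to the hash of the previous one."""
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"timestamp": time.time(), "data": data, "prev_hash": prev})


def verify_chain(chain: list) -> bool:
    """Recompute every link; any tampering breaks all subsequent links."""
    return all(chain[i]["prev_hash"] == block_hash(chain[i - 1])
               for i in range(1, len(chain)))


chain: list = []
append_block(chain, {"tx": "genesis"})
append_block(chain, {"tx": "model training request"})
assert verify_chain(chain)
chain[0]["data"]["tx"] = "forged"   # tampering with an old block...
assert not verify_chain(chain)      # ...is detected by re-verification
```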
Federated learning aims to build a distributed machine learning model: each organization holding data trains a local model and uploads it to an aggregation server to obtain a global model, and the aggregation server sends the global model's parameters back to each organization to update its local model, until the global model stabilizes. Throughout this process the data never leave the data owner, which preserves data privacy and security. By resolving the data-ownership problem, federated learning makes more data holders willing to share data, which increases the volume of training data and thus the accuracy of the model. According to the feature space and sample space of the data, federated learning can be classified into horizontal federated learning, vertical federated learning, and federated transfer learning; the architecture of horizontal federated learning can be further divided into a client-server architecture and a peer-to-peer network architecture. The client-server architecture for horizontal federated learning generally consists of data owners and a data aggregator (the aggregation server).
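One round of the client-server process described above can be sketched as follows; the least-squares gradient, the two-participant setup, and all function names are illustrative assumptions, not details taken from the patent.

```python
import numpy as np


def local_update(weights: np.ndarray, X: np.ndarray, y: np.ndarray,
                 lr: float = 0.1, epochs: int = 5) -> np.ndarray:
    """Each organization trains on its own data; the raw data never leave it."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)   # least-squares gradient
        w -= lr * grad
    return w


def aggregate(updates: list[np.ndarray], sizes: list[int]) -> np.ndarray:
    """The aggregation server averages local models, weighted by data volume."""
    total = sum(sizes)
    return sum(w * (n / total) for w, n in zip(updates, sizes))


rng = np.random.default_rng(0)
global_w = np.zeros(3)
# Two organizations, each holding a private local dataset.
datasets = [(rng.normal(size=(50, 3)), rng.normal(size=50)) for _ in range(2)]
for _ in range(10):   # repeat until the global model stabilizes
    updates = [local_update(global_w, X, y) for X, y in datasets]
    global_w = aggregate(updates, [len(y) for _, y in datasets])
```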
Although data remain local throughout federated learning and data ownership does not migrate, the data owner still cannot effectively supervise how the federated learning provider uses the data. The blockchain's key properties of decentralization, openness, tamper resistance, and traceability can effectively solve these problems when combined with federated learning technology. Swarm Learning is one method that combines blockchain and federated learning; it decentralizes the model aggregation process through the blockchain, with a coordination mechanism selecting which of several nodes performs model aggregation. This approach, however, does not provide supervision of the federated learning process. The invention provides a federated learning method based on blockchain supervision, in which the blockchain nodes and the federated learning nodes are decoupled and the blockchain records the model training process, so that the federated learning process can be supervised and traced.
Disclosure of Invention
In view of this, the present invention is directed to a federated learning method based on blockchain supervision that solves the problems described above.
Based on the above purpose, the invention provides a federated learning method based on blockchain supervision, which comprises the following steps:
after obtaining the data usage rights, the model demander sends a model training request to the aggregation server, assembles the request information into a first transaction, and sends it to the blockchain network; after the transaction is verified, a node with the accounting right is selected by the blockchain consensus algorithm and the transaction is recorded on the blockchain;
in the first iteration, the aggregation server receives the model training request, evaluates the network and server configuration of each participant, screens out the participant nodes that meet the requirements, sends them the initial model parameters, and starts model training; after each iteration, the aggregation server receives the encrypted gradient information of each participant, decrypts it, aggregates it into global gradient information, and encrypts the result before sending it to each participant; during each iteration, the aggregation server assembles the encrypted global gradient information and the information of each participant into a second transaction and sends it to the blockchain network; after the transaction is verified, a node with the accounting right is selected by the blockchain consensus algorithm and the transaction is recorded on the blockchain;
each participant receives the model training request, starts model training, obtains the initial model parameters from the aggregation server, and sends a third transaction to the blockchain; after the transaction is verified, a node with the accounting right is selected by the blockchain consensus algorithm and the transaction is recorded on the blockchain;
each participant trains the model with local data, obtains a new gradient after multiple iterations, and encrypts it;
each participant sends the encrypted gradient to the aggregation server and sends a fourth transaction to the blockchain; after the transaction is verified, a node with the accounting right is selected by the blockchain consensus algorithm and the transaction is recorded on the blockchain;
after the iteration termination condition is reached, the aggregation server sends the final model to the model demander.
Further, the model training request information sent by the model demander to the aggregation server includes the demander's information, the initial model parameters, and the unique identification information of the participants allowed to take part in the current training.
Further, the information of the first transaction is assembled in plaintext form.
Further, the global gradient information is encrypted before being sent to each participant, and is no longer sent once the iteration termination condition has been reached.
Further, the third transaction includes the received gradient, the time of receipt, the aggregation server's identification information, and the participant's identification information.
Further, the fourth transaction includes the new encrypted gradient, the participant's identification, and the aggregation server's identification.
Further, the participant publishes data dictionary variables on the aggregation server; the model demander retrieves the dictionary variables according to its own data requirements, then sends a data usage request to the participant, and starts model training after obtaining the data usage rights.
Further, after receiving a model training instruction from the aggregation server, the participant downloads the initial model parameters from the aggregation server, trains the model with local data, uploads the parameters to the aggregation server, obtains the server's global parameters to update its local model, and ends training once the iteration termination condition is met.
Further, the model demander retrieves the required dictionary variables on the aggregation server, sends an application to the participants, and acquires the data usage rights; after obtaining a sufficient amount of data, it sends a model training request to the aggregation server, which starts the training process; when model training ends, the demander obtains the final model.
Further, when the aggregation server receives a model training request, it encrypts the request and the request information and places them on the blockchain, then selects suitable participants according to network conditions and starts model training. The contents of the four transactions above are sketched as data structures below.
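To make the four transactions concrete, the following sketch renders them as plain Python dataclasses; the fields follow the clauses above, while the names, types, and any serialization or signing format are assumptions not specified by the patent.

```python
from dataclasses import dataclass, field


@dataclass
class TrainingRequestTx:          # first transaction, assembled in plaintext
    demander_info: str
    initial_model_params: bytes
    allowed_participant_ids: list[str] = field(default_factory=list)


@dataclass
class AggregationTx:              # second transaction, one per iteration
    encrypted_global_gradient: bytes
    participant_infos: list[str] = field(default_factory=list)


@dataclass
class ReceiptTx:                  # third transaction
    received_gradient: bytes
    received_at: float
    aggregation_server_id: str
    participant_id: str


@dataclass
class GradientUploadTx:           # fourth transaction
    encrypted_new_gradient: bytes
    participant_id: str
    aggregation_server_id: str
```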
In general, the invention brings the following advantages to the user:
(1) The on-chain updates of each participant and the aggregator during federated learning are recorded on the blockchain in real time, so the models can be audited. In addition, the parameters generated during model training are encrypted before being sent to the blockchain, ensuring both the transparency of the training process and the confidentiality of the operating parameters.
(2) Easy deployment: the federated learning aggregation server is the platform itself, so the deployment requirements on participant nodes are relatively low.
(3) Reduced communication overhead: the large volume of parameters transmitted during model training does not pass through the blockchain; the aggregator and the participants communicate directly, which reduces waiting time.
(4) The model training process is transparent, but the operating parameters are confidential: the training process is recorded on the blockchain with its parameters encrypted, so the process is transparent to the whole consortium chain while the training results are visible only to the nodes that participated in the training.
(5) The parameters transmitted during model training are encrypted, effectively protecting the model from attacks.
Drawings
In the drawings, like reference numerals refer to the same or similar parts or elements throughout the several views unless otherwise specified. The figures are not necessarily to scale. It is appreciated that these drawings depict only some embodiments in accordance with the disclosure and are therefore not to be considered limiting of its scope.
Fig. 1 is a schematic diagram of the system architecture of the present invention.
Fig. 2 is a flowchart of participant model training.
Fig. 3 is a flowchart of model-demander training.
Fig. 4 is a flowchart of the model training algorithm.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the relevant invention and not restrictive of the invention. It should be noted that, for convenience of description, only the portions related to the related invention are shown in the drawings.
It should be noted that the embodiments and features of the embodiments may be combined with each other without conflict. The present invention will be described in detail below with reference to the embodiments with reference to the attached drawings.
Although data remain local to the participants throughout federated learning, which protects data ownership from migrating to some extent, data leakage caused by malicious behavior of some participating nodes during model training cannot be ruled out. To address this, combining federated learning with a blockchain has been proposed, with the information to be certified during model training recorded on the chain; but because transaction information stored on the blockchain is public, every member of the consortium chain can obtain the training result, not only the nodes that participated in training. Alternatively, if the federated learning nodes are deployed as blockchain nodes and the aggregation server is produced by coordination among the participants, deployment becomes harder and the communication cost between participants and the aggregator rises. The invention therefore deploys the blockchain nodes and the federated learning nodes independently. The federated learning system is responsible for model training, and the platform that the invention intends to realize serves directly as the central aggregation server, which reduces deployment cost; communication between the participants and the aggregator no longer passes through the blockchain, which reduces communication overhead; and an encryption method is introduced for the parameters transmitted between participants and the aggregator during model training, which prevents model extraction attacks. The blockchain is responsible for recording the information that needs to be certified, and the model parameters are encrypted before being put on-chain so that nodes not participating in the training cannot obtain them.
The invention separates the functions of the blockchain and federated learning: federated learning is responsible for model training, the platform itself serves as the aggregation server, and the blockchain is responsible for recording the model training process. Because the blockchain cannot be tampered with, the operations of every node throughout federated learning are recorded on it, which guarantees the traceability of data operations and lets the parties trust each other.
In the invention, a data owner (i.e., a participant) publishes data dictionary variables on the platform; a model demander retrieves the dictionary variables according to its own data requirements, then sends a data usage request to the data owner, and can start model training once the data usage rights are obtained. The model demander starts the federated learning process after acquiring the data usage rights of each participant; each participant's model training on its local data set is executed in a trusted environment. The model training process, together with the on-chain recording of its operations, is shown in Fig. 1 and comprises the following steps:
Step 1: after obtaining a sufficient number of data usage rights, the model demander sends a model training request to the aggregation server and assembles the request information into a transaction sent to the blockchain network. The model demander plays two roles. As a platform user, it sends the model training request to the aggregation server; the request includes the demander's information, the initial model parameters, and the unique identification information of the participants allowed to take part in this training. As a member of the consortium chain, it assembles the aforementioned information into a transaction in plaintext form and sends it to the blockchain. After the transaction is verified, a node with the accounting right is selected by the blockchain consensus algorithm and the transaction is recorded on the blockchain.
Step 2: in the first iteration, the aggregation server receives the model training request, evaluates the network and server configuration of each participant, screens out the participant nodes that meet the requirements, sends them the initial model parameters, and starts model training. In each iteration, the aggregation server receives each participant's encrypted gradient information, decrypts it, aggregates it into global gradient information, and encrypts the result before sending it to each participant. Note that, to prevent possible attacks during model training, the invention encrypts the parameters to be transmitted. Once the iteration termination condition is reached, gradient information is no longer sent to the participants.
In addition, at each model iteration the aggregation server must assemble the global gradient information and the information of each participant into a transaction and send it to the blockchain network; after the transaction is verified, a node with the accounting right is selected by the blockchain consensus algorithm and the transaction is recorded on the blockchain. A sketch of this encrypt-aggregate-record cycle follows.
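A minimal sketch of the cycle, under an assumed shared symmetric key (Fernet from the `cryptography` package): the patent prescribes encryption of the transmitted parameters but does not name a scheme, so the cipher choice and key handling here are illustrative only.

```python
import numpy as np
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # assumption: shared with participants out-of-band
cipher = Fernet(key)


def encrypt_gradient(grad: np.ndarray) -> bytes:
    return cipher.encrypt(grad.astype(np.float64).tobytes())


def decrypt_gradient(token: bytes) -> np.ndarray:
    return np.frombuffer(cipher.decrypt(token), dtype=np.float64)


# Participants upload encrypted gradients; the aggregation server decrypts,
# averages them into the global gradient, and re-encrypts the result both
# for the participants and for the on-chain record.
uploads = [encrypt_gradient(np.full(4, v)) for v in (1.0, 3.0)]
global_gradient = np.mean([decrypt_gradient(t) for t in uploads], axis=0)
encrypted_global = encrypt_gradient(global_gradient)
assert np.allclose(decrypt_gradient(encrypted_global), 2.0)
```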
Step 3: each participant receives the model training request and starts model training. It obtains the global model parameters from the aggregation server (the initial model parameters in the first iteration) and sends a transaction (the received gradient, the time of receipt, the aggregation server's identification information, and the participant's identification information) to the blockchain; after the transaction is verified, a node with the accounting right is selected by the blockchain consensus algorithm and the transaction is recorded on the blockchain.
Step 4: each participant trains the model with local data, obtains a new gradient after multiple local iterations, and encrypts it.
Step 5: each participant sends the encrypted gradient to the aggregation server and sends a transaction (the new encrypted gradient, the participant's identification, the aggregation server's identification, etc.) to the blockchain. After the transaction is verified, a node with the accounting right is selected by the blockchain consensus algorithm and the transaction is recorded on the blockchain.
Step 6: after the iteration termination condition is reached, the aggregation server sends the final model to the model demander.
The platform realized by the invention serves users in three roles: the participant (data owner), the model demander, and the aggregator (the aggregation server, which is also the server hosting the platform). Each role takes part in model training as follows:
Fig. 2 is the flowchart of participant model training. All users registered on the platform are participants. A participant first uploads the dictionary variables of its local data to the platform; upon receiving a data usage request, it decides internally whether to approve or reject the application. After receiving a model training instruction from the aggregation server, the participant downloads the initial model parameters from the aggregator, trains the model with local data, uploads the parameters to the aggregation server, obtains the server's global parameters to update its local model, and finishes training once the iteration termination condition is met.
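The participant loop of Fig. 2 might look like the sketch below; `PlatformStub` stands in for the aggregation server's interface and `local_update` for the local training routine, both of which the patent leaves abstract.

```python
import numpy as np


class PlatformStub:
    """Toy aggregation-server interface, for illustration only."""

    def __init__(self, dim: int, rounds: int):
        self.w, self.rounds_left = np.zeros(dim), rounds

    def download_initial_params(self) -> np.ndarray:
        return self.w.copy()

    def upload_params(self, w: np.ndarray) -> None:
        self.w = w                        # a real server aggregates many uploads

    def fetch_global_params(self) -> tuple[np.ndarray, bool]:
        self.rounds_left -= 1
        return self.w.copy(), self.rounds_left <= 0


def local_update(w: np.ndarray) -> np.ndarray:
    return w + 0.1                        # placeholder for real local training


def participant_loop(server: PlatformStub) -> None:
    weights = server.download_initial_params()
    while True:
        weights = local_update(weights)   # train on local data only
        server.upload_params(weights)     # parameters leave; raw data never do
        weights, done = server.fetch_global_params()
        if done:                          # iteration termination condition met
            break


participant_loop(PlatformStub(dim=3, rounds=5))
```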
Fig. 3 is the flowchart of model-demander training. The model demander retrieves the required dictionary variables on the platform, sends an application to the data owners, and acquires the data usage rights. After obtaining a sufficient amount of data, it sends a model training request (comprising the demander's information, the initial model parameters, and the information of the participants allowed to take part in this training) to the platform (the model aggregator), and the platform starts the training process. When model training ends, the demander obtains the final model.
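The dictionary publication, retrieval, and authorization flow just described could be sketched as follows; every interface name (`publish_dictionary`, `search`, `request_usage`) is a hypothetical stand-in, since the patent defines no concrete API.

```python
class Platform:
    """Toy data-dictionary registry, for illustration only."""

    def __init__(self):
        self.dictionaries: dict[str, list[str]] = {}   # participant -> variables
        self.grants: set[tuple[str, str]] = set()      # (demander, participant)

    def publish_dictionary(self, participant: str, variables: list[str]) -> None:
        self.dictionaries[participant] = variables

    def search(self, wanted: set[str]) -> list[str]:
        """Return participants whose dictionary covers the wanted variables."""
        return [p for p, vs in self.dictionaries.items() if wanted <= set(vs)]

    def request_usage(self, demander: str, participant: str) -> bool:
        # In the real system the participant decides internally whether to
        # approve; here every application is approved for illustration.
        self.grants.add((demander, participant))
        return True


platform = Platform()
platform.publish_dictionary("hospital_a", ["age", "blood_pressure", "outcome"])
candidates = platform.search({"age", "outcome"})
granted = all(platform.request_usage("model_demander", p) for p in candidates)
# Training may begin only once enough usage rights have been granted.
```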
Fig. 4 is a flowchart of the algorithm model training. When the aggregator receives the model training request, the aggregator encrypts the request and the request information and places the encrypted request and the encrypted request information on the blockchain. And then selecting proper participants according to conditions such as a network and the like, and starting the model training. In the training process, the aggregation server takes charge of two things, namely, receiving parameters uploaded by the participants, aggregating to obtain global parameters and sending the global parameters to the participants; second, the operation records of the broadcast participants and the aggregators are recorded to the blockchain network. And after the iteration termination condition is met, sending the result to the model demander.
It should be noted that:
the algorithms and displays presented herein are not inherently related to any particular computer, virtual machine, or other apparatus. Various general purpose systems may also be used with the teachings herein. The required structure for constructing such a system will be apparent from the description above. Moreover, the present invention is not directed to any particular programming language. It is appreciated that a variety of programming languages may be used to implement the teachings of the present invention as described herein, and any descriptions of specific languages are provided above to disclose the best mode of the invention.
In the description provided herein, numerous specific details are set forth. It is understood, however, that embodiments of the invention may be practiced without these specific details. In some instances, well-known methods, structures and techniques have not been shown in detail in order not to obscure an understanding of this description.
Similarly, it should be appreciated that in the foregoing description of exemplary embodiments of the invention, various features of the invention are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the invention and aiding in the understanding of one or more of the various inventive aspects. However, the disclosed method should not be interpreted as reflecting an intention that: that the invention as claimed requires more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single foregoing disclosed embodiment. Thus, the claims following the detailed description are hereby expressly incorporated into this detailed description, with each claim standing on its own as a separate embodiment of this invention.
Those skilled in the art will appreciate that the modules in the device in an embodiment may be adaptively changed and disposed in one or more devices different from the embodiment. The modules or units or components of the embodiments may be combined into one module or unit or component, and furthermore they may be divided into a plurality of sub-modules or sub-units or sub-components. All of the features disclosed in this specification (including any accompanying claims, abstract and drawings), and all of the processes or elements of any method or apparatus so disclosed, may be combined in any combination, except combinations where at least some of such features and/or processes or elements are mutually exclusive. Each feature disclosed in this specification (including any accompanying claims, abstract and drawings) may be replaced by alternative features serving the same, equivalent or similar purpose, unless expressly stated otherwise.
Furthermore, those skilled in the art will appreciate that while some embodiments described herein include some features included in other embodiments, rather than other features, combinations of features of different embodiments are meant to be within the scope of the invention and form different embodiments. For example, in the following claims, any of the claimed embodiments may be used in any combination.
The various component embodiments of the invention may be implemented in hardware, or in software modules running on one or more processors, or in a combination thereof. Those skilled in the art will appreciate that a microprocessor or digital signal processor may be used in practice to implement some or all of the functionality of some or all of the components in a virtual machine creation system in accordance with embodiments of the present invention. The present invention may also be embodied as apparatus or system programs (e.g., computer programs and computer program products) for performing a portion or all of the methods described herein. Such programs implementing the present invention may be stored on computer-readable media or may be in the form of one or more signals. Such a signal may be downloaded from an internet website or provided on a carrier signal or in any other form.
It should be noted that the above-mentioned embodiments illustrate rather than limit the invention, and that those skilled in the art will be able to design alternative embodiments without departing from the scope of the appended claims. In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word "comprising" does not exclude the presence of elements or steps not listed in a claim. The word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. The invention may be implemented by means of hardware comprising several distinct elements, and by means of a suitably programmed computer. In the unit claims enumerating several systems, several of these systems may be embodied by one and the same item of hardware. The use of the words first, second, third, etc. does not indicate any ordering; these words may be interpreted as names.
The above description is only for the specific embodiment of the present invention, but the scope of the present invention is not limited thereto, and any person skilled in the art can easily conceive various changes or substitutions within the technical scope of the present invention, and these should be covered by the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the appended claims.

Claims (10)

1. A federated learning method based on blockchain supervision, characterized by comprising:
after obtaining the data usage rights, the model demander sends a model training request to the aggregation server, assembles the request information into a first transaction, and sends it to the blockchain network; after the transaction is verified, a node with the accounting right is selected by the blockchain consensus algorithm and the transaction is recorded on the blockchain;
in the first iteration, the aggregation server receives the model training request, evaluates the network and server configuration of each participant, screens out the participant nodes that meet the requirements, sends them the initial model parameters, and starts model training; after each iteration, the aggregation server receives the encrypted gradient information of each participant, decrypts it, aggregates it into global gradient information, and encrypts the result before sending it to each participant; during each iteration, the aggregation server assembles the global gradient information and the information of each participant into a second transaction and sends it to the blockchain network; after the transaction is verified, a node with the accounting right is selected by the blockchain consensus algorithm and the transaction is recorded on the blockchain;
each participant receives the model training request, starts model training, obtains the initial model parameters from the aggregation server, and sends a third transaction to the blockchain; after the transaction is verified, a node with the accounting right is selected by the blockchain consensus algorithm and the transaction is recorded on the blockchain;
each participant trains the model with local data, obtains a new gradient after multiple iterations, and encrypts it;
each participant sends the encrypted gradient to the aggregation server and sends a fourth transaction to the blockchain; after the transaction is verified, a node with the accounting right is selected by the blockchain consensus algorithm and the transaction is recorded on the blockchain;
after the iteration termination condition is reached, the aggregation server sends the final model to the model demander.
2. The method of claim 1,
wherein the model training request information sent by the model demander to the aggregation server comprises the demander's information, the initial model parameters, and the unique identification information of the participants allowed to take part in the training.
3. The method according to claim 1 or 2,
wherein the information of the first transaction is assembled in plaintext form.
4. The method according to claim 1 or 2,
wherein the global gradient information is encrypted before being sent to each participant, and is no longer sent to the participants once the iteration termination condition has been reached.
5. The method according to claim 1 or 2,
wherein the third transaction comprises the received gradient, the time of receipt, the aggregation server's identification information, and the participant's identification information.
6. The method according to claim 1 or 2,
wherein the fourth transaction comprises the new encrypted gradient, the participant's identification, and the aggregation server's identification.
7. The method according to claim 1 or 2,
wherein the participant publishes data dictionary variables on the aggregation server; the model demander retrieves the dictionary variables according to its own data requirements, then sends a data usage request to the participant, and starts model training after obtaining the data usage rights.
8. The method according to claim 1 or 2,
wherein, after receiving a model training instruction from the aggregation server, the participant downloads the initial model parameters from the aggregation server, trains the model with local data, uploads the parameters to the aggregation server, obtains the server's global parameters to update its local model, and finishes training once the iteration termination condition is met.
9. The method according to claim 1 or 2,
wherein the model demander retrieves the required dictionary variables on the aggregation server, sends an application to the participants, and acquires the data usage rights; after obtaining a sufficient amount of data, it sends a model training request to the aggregation server, which starts the training process; when model training ends, the demander obtains the final model.
10. The method according to claim 1 or 2,
wherein, when the aggregation server receives a model training request, it encrypts the request and the request information and records them on the blockchain, then selects suitable participants according to network conditions and starts model training.
CN202111294450.2A 2021-11-03 2021-11-03 Block chain supervision-based federal learning method Pending CN114187006A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111294450.2A CN114187006A (en) 2021-11-03 2021-11-03 Block chain supervision-based federal learning method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111294450.2A CN114187006A (en) 2021-11-03 2021-11-03 Block chain supervision-based federal learning method

Publications (1)

Publication Number Publication Date
CN114187006A 2022-03-15

Family

ID=80540631

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111294450.2A Pending CN114187006A (en) 2021-11-03 2021-11-03 Block chain supervision-based federal learning method

Country Status (1)

Country Link
CN (1) CN114187006A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114648131A (en) * 2022-03-22 2022-06-21 中国电信股份有限公司 Federal learning method, device, system, equipment and medium
CN115271089A (en) * 2022-06-15 2022-11-01 京信数据科技有限公司 Block chain-based federal learning credible training method and device
CN115329032A (en) * 2022-10-14 2022-11-11 杭州海康威视数字技术股份有限公司 Federal dictionary based learning data transmission method, device, equipment and storage medium
CN116828453A (en) * 2023-06-30 2023-09-29 华南理工大学 Unmanned aerial vehicle edge computing privacy protection method based on self-adaptive nonlinear function

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110837653A (en) * 2019-11-07 2020-02-25 深圳前海微众银行股份有限公司 Label prediction method, device and computer readable storage medium
CN111125779A (en) * 2019-12-17 2020-05-08 山东浪潮人工智能研究院有限公司 Block chain-based federal learning method and device
CN111698322A (en) * 2020-06-11 2020-09-22 福州数据技术研究院有限公司 Medical data safety sharing method based on block chain and federal learning
CN112860800A (en) * 2021-02-22 2021-05-28 深圳市星网储区块链有限公司 Trusted network application method and device based on block chain and federal learning
CN113052331A (en) * 2021-02-19 2021-06-29 北京航空航天大学 Block chain-based Internet of things personalized federal learning method
WO2021155671A1 (en) * 2020-08-24 2021-08-12 平安科技(深圳)有限公司 High-latency network environment robust federated learning training method and apparatus, computer device, and storage medium
WO2021174778A1 (en) * 2020-07-30 2021-09-10 平安科技(深圳)有限公司 Blockchain secure transaction method, computer device, and readable storage medium
WO2021189906A1 (en) * 2020-10-20 2021-09-30 平安科技(深圳)有限公司 Target detection method and apparatus based on federated learning, and device and storage medium
CN113536382A (en) * 2021-08-09 2021-10-22 北京理工大学 Block chain-based medical data sharing privacy protection method by using federal learning
CN113570065A (en) * 2021-07-08 2021-10-29 国网河北省电力有限公司信息通信分公司 Data management method, device and equipment based on alliance chain and federal learning

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110837653A (en) * 2019-11-07 2020-02-25 深圳前海微众银行股份有限公司 Label prediction method, device and computer readable storage medium
CN111125779A (en) * 2019-12-17 2020-05-08 山东浪潮人工智能研究院有限公司 Block chain-based federal learning method and device
CN111698322A (en) * 2020-06-11 2020-09-22 福州数据技术研究院有限公司 Medical data safety sharing method based on block chain and federal learning
WO2021174778A1 (en) * 2020-07-30 2021-09-10 平安科技(深圳)有限公司 Blockchain secure transaction method, computer device, and readable storage medium
WO2021155671A1 (en) * 2020-08-24 2021-08-12 平安科技(深圳)有限公司 High-latency network environment robust federated learning training method and apparatus, computer device, and storage medium
WO2021189906A1 (en) * 2020-10-20 2021-09-30 平安科技(深圳)有限公司 Target detection method and apparatus based on federated learning, and device and storage medium
CN113052331A (en) * 2021-02-19 2021-06-29 北京航空航天大学 Block chain-based Internet of things personalized federal learning method
CN112860800A (en) * 2021-02-22 2021-05-28 深圳市星网储区块链有限公司 Trusted network application method and device based on block chain and federal learning
CN113570065A (en) * 2021-07-08 2021-10-29 国网河北省电力有限公司信息通信分公司 Data management method, device and equipment based on alliance chain and federal learning
CN113536382A (en) * 2021-08-09 2021-10-22 北京理工大学 Block chain-based medical data sharing privacy protection method by using federal learning

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
XUDONG ZHU ET AL.: "Privacy-preserving Decentralized Federated Deep Learning", 31 July 2021 (2021-07-31), pages 33-38, XP058658075, DOI: 10.1145/3472634.3472642 *
LI Lingxiao et al.: "Survey of Federated Learning Technology Based on Blockchain" (基于区块链的联邦学习技术综述), Application Research of Computers (计算机应用研究), vol. 38, no. 11, 25 March 2021 (2021-03-25), pages 3222-3230 *
LI Zixiao et al.: "Preliminary Application of Blockchain Technology in Evaluating the Quality of Medical Care for Ischemic Stroke" (基于区块链技术的缺血性卒中医疗质量评价应用初探), Chinese Journal of Stroke (中国卒中杂志), vol. 15, no. 06, 20 June 2020 (2020-06-20), pages 577-586 *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114648131A (en) * 2022-03-22 2022-06-21 中国电信股份有限公司 Federal learning method, device, system, equipment and medium
CN115271089A (en) * 2022-06-15 2022-11-01 京信数据科技有限公司 Block chain-based federal learning credible training method and device
CN115329032A (en) * 2022-10-14 2022-11-11 杭州海康威视数字技术股份有限公司 Federal dictionary based learning data transmission method, device, equipment and storage medium
CN116828453A (en) * 2023-06-30 2023-09-29 华南理工大学 Unmanned aerial vehicle edge computing privacy protection method based on self-adaptive nonlinear function
CN116828453B (en) * 2023-06-30 2024-04-16 华南理工大学 Unmanned aerial vehicle edge computing privacy protection method based on self-adaptive nonlinear function

Similar Documents

Publication Publication Date Title
US11836616B2 (en) Auditable privacy protection deep learning platform construction method based on block chain incentive mechanism
Sookhak et al. Security and privacy of smart cities: a survey, research issues and challenges
CN112232527B (en) Safe distributed federal deep learning method
Alamri et al. Blockchain for Internet of Things (IoT) research issues challenges & future directions: A review
Campanile et al. Designing a GDPR compliant blockchain-based IoV distributed information tracking system
CN114187006A (en) Block chain supervision-based federal learning method
CN113127916B (en) Data set processing method, data processing method, device and storage medium
CN110795755B (en) Building project scene type evidence storing and non-tampering method and system based on block chain
EP3070630A2 (en) Data system and method
CN109740384A (en) Data based on block chain deposit card method and apparatus
Zhong et al. Privacy-protected blockchain system
CN111460400A (en) Data processing method and device and computer readable storage medium
CN114697048A (en) Carbon emission data sharing method and system based on block chain
DE112022002623T5 (en) TRUSTED AND DECENTRALIZED AGGREGATION FOR FEDERATED LEARNING
CN112688775B (en) Management method and device of alliance chain intelligent contract, electronic equipment and medium
CN116502732B (en) Federal learning method and system based on trusted execution environment
CN109740319A (en) Digital identity verification method and server
Feng et al. Autonomous vehicles' forensics in smart cities
Mengjun et al. Privacy-preserving distributed location proof generating system
Garrigues et al. Protecting mobile agents from external replay attacks
CN102349076B (en) For protecting the method for the content protective system of personal content, device and computer program
Sonya et al. An effective blockchain‐based smart contract system for securing electronic medical data in smart healthcare application
Asiri A blockchain-based IoT trust model
Pavlov Security aspects of digital twins in IoT platform
Zhu et al. Multimedia fusion privacy protection algorithm based on iot data security under network regulations

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination