CN112348063A - Model training method and device based on federated transfer learning in the Internet of Things


Info

Publication number
CN112348063A
Authority
CN
China
Prior art keywords
request
machine learning
data
training
model
Prior art date
Legal status
Granted
Application number
CN202011163727.3A
Other languages
Chinese (zh)
Other versions
CN112348063B (en)
Inventor
曾瑛
张珮明
王力
李伟坚
陈宇航
施展
付佳佳
卢建刚
Current Assignee
Electric Power Dispatch Control Center of Guangdong Power Grid Co Ltd
Original Assignee
Electric Power Dispatch Control Center of Guangdong Power Grid Co Ltd
Priority date
Filing date
Publication date
Application filed by Electric Power Dispatch Control Center of Guangdong Power Grid Co Ltd
Priority to CN202011163727.3A
Publication of CN112348063A
Application granted
Publication of CN112348063B
Active legal status
Anticipated expiration legal status

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/21: Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214: Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/08: Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computational Linguistics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Evolutionary Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The invention provides a model training method and device based on federated transfer learning in the Internet of Things. The method comprises the following steps: a data holder sends a request to a local server to query for an available machine learning training model; the local server queries whether an available machine learning training model matching the data holder's request exists; if no available model is found at the local server, the data training request sent by the data holder is recorded and forwarded to a federated transfer learning server; the federated transfer learning server queries whether an available machine learning training model matching the data holder's request exists; and when the federated transfer learning server finds an available model matching the request, a corresponding transfer learning process is carried out. With this technical scheme, the value of small data can be mined while the privacy of user data is preserved.

Description

Model training method and device based on federated transfer learning in the Internet of Things
Technical Field
The invention relates to the field of communication technology, and in particular to a model training method and device based on federated transfer learning in the Internet of Things.
Background
With the continued spread of Internet of Things technology, the number of IoT devices has grown explosively. According to a report by International Data Corporation (IDC), the number of IoT devices will reach 41.6 billion by 2025 and will generate 79.4 ZB of data. These data come from a wide range of sources, belong to different data holders and are of diverse types, so efficient data processing and analysis techniques are urgently needed to mine the value of such massive data. In recent years, the rapid rise of artificial intelligence has promoted its application in many fields. In the IoT scenario, however, privacy-protection concerns among different data holders leave the data in isolated silos, so its value cannot be fully mined and utilized.
Patent CN108833464A, "Bonder type multi-domain Internet of Things coordination system and method, smart city, smart home": this patent is based on blockchain technology, and an identification management center performs authority verification on the data identifiers of each domain, so as to realize data sharing and service cooperation across a multi-domain Internet of Things.
In this scheme, the risk of leaking users' private data still exists, a high degree of data sharing cannot be achieved, and the value of the data is difficult to mine.
Patent CN110992936A, "Method and apparatus for model training using private data": the method disclosed in this patent is executed by a server device. The server issues a global model to multiple client devices, each client device trains the current global model with its local private data, and the server device updates the current global model according to the model updates collected from the client devices.
This scheme draws on the mechanism of federated learning, so users' private data can participate in training the global model without leaving the local device, but it does not address a training method for small data.
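For context, the federated-learning mechanism referred to above can be illustrated with a minimal sketch (this is not taken from patent CN110992936A; the linear model, synthetic client data and weighting scheme are illustrative assumptions): a server broadcasts global model parameters, each client computes an update on its local private data, and the server aggregates the collected updates.

```python
import numpy as np

def local_update(global_weights, local_X, local_y, lr=0.1, epochs=5):
    """Client-side step: fit a linear model on local private data only.

    Only the updated weights leave the client; the raw data never does.
    """
    w = global_weights.copy()
    for _ in range(epochs):
        grad = local_X.T @ (local_X @ w - local_y) / len(local_y)
        w -= lr * grad
    return w

def server_aggregate(client_weights, client_sizes):
    """Server-side step: weighted average of client updates (FedAvg-style)."""
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

# Illustrative run with two synthetic clients (assumed data, 3 features).
rng = np.random.default_rng(0)
global_w = np.zeros(3)
clients = [(rng.normal(size=(40, 3)), rng.normal(size=40)),
           (rng.normal(size=(60, 3)), rng.normal(size=60))]
for round_idx in range(10):
    updates = [local_update(global_w, X, y) for X, y in clients]
    global_w = server_aggregate(updates, [len(y) for _, y in clients])
print("aggregated global weights:", global_w)
```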
As can be seen from this analysis, the prior art does not address how to process and analyse small data while ensuring data privacy, so the application value of small data cannot be fully exploited.
In addition, in some specific IoT application fields such as smart healthcare, the amount of data, or of labelled data, available to an application is very small, and labelling the data would incur enormous cost. Data existing in this form is defined here as "small data". This situation limits the application of artificial intelligence technology in the Internet of Things.
Disclosure of Invention
The embodiment of the invention provides a model training method and device based on federated transfer learning in the Internet of Things, which complete the collaborative training process of a model while guaranteeing the privacy of user data, so that the value of small data can be mined.
In order to solve the above technical problem, an embodiment of the present invention provides the following technical solutions:
A model training method based on federated transfer learning in the Internet of Things comprises the following steps:
a data holder sends a request to a local server to query for an available machine learning training model;
the local server queries whether an available machine learning training model matching the data holder's request exists; if no available machine learning training model is found at the local server, the data training request sent by the data holder is recorded and forwarded to a federated transfer learning server;
the federated transfer learning server queries whether an available machine learning training model matching the data holder's request exists;
and when the federated transfer learning server finds an available machine learning training model matching the data holder's request, a corresponding transfer learning process is carried out.
Optionally, the local server querying whether there is an available machine learning training model matching the data holder's request further includes:
if an available machine learning training model is found at the local server, carrying out a transfer learning process directly on the basis of the available model, and transmitting the model parameters directly to the data holder.
Optionally, the federated transfer learning server querying whether there is an available machine learning training model matching the data holder's request further includes:
if no available machine learning training model is found at the federated transfer learning server, recording a data training request matching the data holder's request and feeding it back to the local server.
Optionally, if no available model is found at the federated transfer learning server, recording a data training request matching the data holder's request and feeding it back to the local server includes:
querying for an available machine learning training model matching the data holder's request at both the local server and the federated transfer learning server simultaneously, at an interval of 24 hours.
Optionally, the query for an available machine learning training model matching the data holder's request, performed at both the local server and the federated transfer learning server at an interval of 24 hours, includes any one of the following steps:
when an available machine learning training model matching the data holder's request is found at the local server, returning the parameters of the available model to the data holder for small data training;
when an available machine learning training model matching the data holder's request is found at the federated transfer learning server, carrying out a federated transfer learning process, and encrypting the parameters of the available model before transmitting them to the data holder.
Optionally, when the federated transfer learning server finds an available model matching the data holder's request, carrying out the corresponding transfer learning process further includes:
encrypting the parameters of the available machine learning training model and transmitting them to the data holder.
Optionally, encrypting the parameters of the available machine learning training model and then transmitting them to the data holder includes the following steps:
generating a secret key;
homomorphic encryption;
homomorphic decryption;
and homomorphic assignment.
The embodiment of the invention also provides a model training device based on federated transfer learning in the Internet of Things, comprising:
the data holder sends a request to the local server to query for an available machine learning training model;
a first query module, by which the local server queries whether an available machine learning training model matching the data holder's request exists; if no available machine learning training model is found at the local server, the data training request sent by the data holder is recorded and forwarded to a federated transfer learning server;
a second query module, configured to query at the federated transfer learning server whether an available machine learning training model matching the data holder's request exists; and
a feedback module, configured to carry out a corresponding transfer learning process when the federated transfer learning server finds an available machine learning training model matching the data holder's request.
Optionally, the local server querying whether there is an available machine learning training model matching the data holder's request further includes:
if an available machine learning training model is found at the local server, carrying out a transfer learning process directly on the basis of the available machine learning training model, and transmitting its parameters directly to the data holder.
Optionally, the federated transfer learning server querying whether there is an available machine learning training model matching the data holder's request further includes:
if no available machine learning training model is found at the federated transfer learning server, recording a data training request matching the data holder's request and feeding it back to the local server.
The above technical scheme of the invention has the following beneficial effect:
with this scheme, the collaborative training process of the model is completed while the data privacy of the user is guaranteed, and the value of small data can be mined.
Drawings
FIG. 1 is a schematic flow chart of an embodiment of the present invention;
FIG. 2 is a schematic view of the framework structure of the present invention.
Detailed Description
Exemplary embodiments of the present invention will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the invention are shown in the drawings, it should be understood that the invention can be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the invention to those skilled in the art.
As shown in FIG. 1, an embodiment of the invention provides a model training method based on federated transfer learning in the Internet of Things, comprising:
S1, a data holder sends a request to a local server to query for an available machine learning training model;
S2, the local server queries whether an available machine learning training model matching the data holder's request exists; if no available machine learning training model is found at the local server, the data training request sent by the data holder is recorded and forwarded to a federated transfer learning server;
S3, the federated transfer learning server queries whether an available machine learning training model matching the data holder's request exists;
S4, when the federated transfer learning server finds an available machine learning training model matching the data holder's request, a corresponding transfer learning process is carried out.
Specifically, first assume that there are n data holders in an Internet of Things scenario S, denoted DH_i (i = 1, 2, 3, ..., n), where n is a positive integer. Three interacting parties are involved: the data holders, the local server, and the federated transfer learning server.
In step S2, a machine learning training model refers to model information, stored at the local server, obtained by each data holder training with its local data; it includes the source of the model, the training task it targets, and the training method used. The parameters of the machine learning training model are encrypted and then transmitted to the federated transfer learning server for storage. A machine learning training model is a model obtained by training with any of various machine learning algorithms and suited to a particular training task.
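A minimal sketch of how such stored model information and the matching query in steps S1-S3 could be represented is given below. The record fields (source, task, training method, encrypted parameters) follow the description above, while the class and function names and the exact matching rule (task-name equality) are illustrative assumptions, not details specified by the patent.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class ModelRecord:
    """Model information kept at a server: source, target task, training method,
    and the (encrypted) parameter blob forwarded for storage."""
    source: str            # which data holder trained the model
    task: str              # training task the model targets
    method: str            # training method / algorithm used
    encrypted_params: bytes

class ModelRegistry:
    """Registry held by the local or federated transfer learning server."""
    def __init__(self) -> None:
        self.records: List[ModelRecord] = []
        self.pending_requests: List[str] = []   # recorded data training requests

    def register(self, record: ModelRecord) -> None:
        self.records.append(record)

    def query(self, requested_task: str) -> Optional[ModelRecord]:
        """Return an available model matching the request, or None (assumed
        matching rule: exact task-name match)."""
        for record in self.records:
            if record.task == requested_task:
                return record
        # No match: record the data training request for later matching.
        self.pending_requests.append(requested_task)
        return None

# Example: the data holder's request is checked locally first, then federated.
local_registry, federated_registry = ModelRegistry(), ModelRegistry()
federated_registry.register(ModelRecord("DH_3", "rare-disease-screening",
                                         "logistic-regression", b"<ciphertext>"))
hit = local_registry.query("rare-disease-screening") or \
      federated_registry.query("rare-disease-screening")
print("matched model source:", hit.source if hit else None)
```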
In this embodiment of the invention, the collaborative training process of the model is completed while the data privacy of the user is ensured, and the value of small data can be mined.
In an optional embodiment of the invention, in step S2, the local server querying whether there is an available machine learning training model matching the data holder's request further includes:
if an available machine learning training model is found at the local server, a transfer learning process is carried out directly on the basis of the available model, and the model parameters are transmitted directly to the data holder.
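The transfer learning process based on an available model can be sketched as follows. The two-layer network, the freezing of the transferred feature layers, and the small synthetic dataset are illustrative assumptions rather than details specified by the patent; the received state dict stands in for the model parameters transmitted by the server.

```python
import torch
import torch.nn as nn

# Assumed architecture: a shared feature extractor whose parameters come from
# the available (pre-trained) model, plus a small task head trained locally.
feature_extractor = nn.Sequential(nn.Linear(16, 32), nn.ReLU())
task_head = nn.Linear(32, 2)

# Received parameters of the available model (here: the current weights cloned
# as a stand-in for the parameters transmitted by the server).
received_state = {k: v.clone() for k, v in feature_extractor.state_dict().items()}
feature_extractor.load_state_dict(received_state)

# Freeze the transferred layers; only the head is trained on the small data.
for p in feature_extractor.parameters():
    p.requires_grad = False

X_small = torch.randn(30, 16)            # small local dataset (assumed shape)
y_small = torch.randint(0, 2, (30,))
optimizer = torch.optim.Adam(task_head.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(20):
    optimizer.zero_grad()
    logits = task_head(feature_extractor(X_small))
    loss = loss_fn(logits, y_small)
    loss.backward()
    optimizer.step()
print("final fine-tuning loss:", loss.item())
```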
In an optional embodiment of the invention, in step S3, the federated transfer learning server querying whether there is an available machine learning training model matching the data holder's request further includes:
if no available machine learning training model is found at the federated transfer learning server, a data training request matching the data holder's request is recorded and fed back to the local server.
In an optional embodiment of the invention, in step S3, if no available model is found at the federated transfer learning server, recording a data training request matching the data holder's request and feeding it back to the local server includes:
querying for an available machine learning training model matching the data holder's request at both the local server and the federated transfer learning server simultaneously, at an interval of 24 hours.
In an optional embodiment of the invention, in step S4, the query for an available machine learning training model matching the data holder's request, performed at both the local server and the federated transfer learning server at an interval of 24 hours, includes any one of the following steps:
when an available machine learning training model matching the data holder's request is found at the local server, the parameters of the available model are returned to the data holder for small data training;
when an available machine learning training model matching the data holder's request is found at the federated transfer learning server, a federated transfer learning process is carried out, and the parameters of the available model are encrypted and transmitted to the data holder.
In an optional embodiment of the invention, in step S3, when the federated transfer learning server finds an available model matching the data holder's request, carrying out the corresponding transfer learning process further includes:
encrypting the parameters of the available machine learning training model and transmitting them to the data holder.
In an optional embodiment of the invention, in step S3, encrypting the parameters of the available machine learning training model and then transmitting them to the data holder includes the following steps:
generating a secret key;
homomorphic encryption;
homomorphic decryption;
and homomorphic assignment.
Specifically, the encryption process satisfies equations (1) and (2), where equation (1) indicates that the homomorphic encryption satisfies additive homomorphism and equation (2) indicates that it satisfies scalar homomorphism.
[[u]]+[[v]]=[[u+v]] (1)
n·[[u]]=[[n·u]] (2)
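As a concrete illustration of key generation and of the homomorphic properties in equations (1) and (2), the sketch below uses the Paillier cryptosystem via the python-paillier (`phe`) package. The patent does not name a specific encryption scheme, so the choice of Paillier and of this library, as well as the example values, are assumptions.

```python
# pip install phe  (python-paillier; assumed choice of homomorphic scheme)
from phe import paillier

# Key generation.
public_key, private_key = paillier.generate_paillier_keypair(n_length=2048)

# Homomorphic encryption of two model parameters u and v.
u, v = 0.75, -1.25
enc_u, enc_v = public_key.encrypt(u), public_key.encrypt(v)

# Additive homomorphism, equation (1): [[u]] + [[v]] = [[u + v]].
enc_sum = enc_u + enc_v
assert abs(private_key.decrypt(enc_sum) - (u + v)) < 1e-9

# Scalar homomorphism, equation (2): n * [[u]] = [[n * u]].
n = 3
enc_scaled = n * enc_u
assert abs(private_key.decrypt(enc_scaled) - n * u) < 1e-9

# Homomorphic decryption at the receiving data holder.
print("decrypted sum:", private_key.decrypt(enc_sum))
```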
As shown in FIG. 2, an embodiment of the foregoing technical scheme of the invention may be as follows:
suppose that there are three medical institutions in a smart-healthcare Internet of Things scenario, numbered medical institution 1, medical institution 2 and medical institution 3.
Medical institution 2 generates a small data training request, i.e. a data training requirement for a certain rare disease.
Medical institution 2 sends the data training request to its local server;
the local server queries whether an available machine learning training model exists;
if no available machine learning training model is found at the local server, the request is forwarded to the federated transfer learning server;
after receiving the data training request, the federated transfer learning server queries whether a matching available machine learning training model exists;
if no available machine learning training model is found at the federated transfer learning server, the request response is returned to the local server corresponding to medical institution 2;
model matching is then performed simultaneously at the local server corresponding to medical institution 2 and at the federated transfer learning server, at an interval of 24 hours;
if a matching model is found at the local server, the parameter information of the available machine learning training model is returned to medical institution 2 for small data training.
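The periodic matching in this example can be sketched as a simple polling loop. The 24-hour interval comes from the description above, while the function names, the stop condition and the stub query functions are illustrative assumptions (the interval is shortened in the demo call so the sketch finishes immediately).

```python
import time
from typing import Callable, Optional

def poll_for_model(query_local: Callable[[], Optional[dict]],
                   query_federated: Callable[[], Optional[dict]],
                   interval_seconds: float = 24 * 3600,
                   max_rounds: int = 3) -> Optional[dict]:
    """Query the local and federated transfer learning servers for a matching
    model every `interval_seconds`, until one of them returns a match."""
    for _ in range(max_rounds):
        local_hit = query_local()
        if local_hit is not None:
            # Local match: parameters are returned directly for small data training.
            return {"origin": "local", **local_hit}
        federated_hit = query_federated()
        if federated_hit is not None:
            # Federated match: parameters would be encrypted before transmission.
            return {"origin": "federated", **federated_hit}
        time.sleep(interval_seconds)
    return None

# Demo with stub query functions standing in for medical institution 2's servers.
result = poll_for_model(query_local=lambda: None,
                        query_federated=lambda: {"task": "rare-disease-screening"},
                        interval_seconds=0.0)
print(result)
```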
The embodiment of the invention also provides a model training device based on federated transfer learning in the Internet of Things, comprising:
the data holder sends a request to the local server to query for an available machine learning training model;
a first query module, by which the local server queries whether an available machine learning training model matching the data holder's request exists; if no available machine learning training model is found at the local server, the data training request sent by the data holder is recorded and forwarded to a federated transfer learning server;
a second query module, configured to query at the federated transfer learning server whether an available machine learning training model matching the data holder's request exists; and
a feedback module, configured to carry out a corresponding transfer learning process when the federated transfer learning server finds an available machine learning training model matching the data holder's request.
Optionally, the local server querying whether there is an available machine learning training model matching the data holder's request further includes:
if an available machine learning training model is found at the local server, a transfer learning process is carried out directly on the basis of the available model, and its parameters are transmitted directly to the data holder.
Optionally, the federated transfer learning server querying whether there is an available machine learning training model matching the data holder's request further includes:
if no available machine learning training model is found at the federated transfer learning server, a data training request matching the data holder's request is recorded and fed back to the local server.
Optionally, if no available model is found at the federated transfer learning server, recording a data training request matching the data holder's request and feeding it back to the local server includes:
querying for an available machine learning training model matching the data holder's request at both the local server and the federated transfer learning server simultaneously, at an interval of 24 hours.
Optionally, the query for an available machine learning training model matching the data holder's request, performed at both the local server and the federated transfer learning server at an interval of 24 hours, includes any one of the following steps:
when an available machine learning training model matching the data holder's request is found at the local server, the parameters of the available model are returned to the data holder for small data training;
when an available machine learning training model matching the data holder's request is found at the federated transfer learning server, a federated transfer learning process is carried out, and the parameters of the available model are encrypted and transmitted to the data holder.
Optionally, when the federated transfer learning server finds an available model matching the data holder's request, carrying out the corresponding transfer learning process further includes:
encrypting the parameters of the available machine learning training model and transmitting them to the data holder.
Optionally, encrypting the parameters of the available machine learning training model and then transmitting them to the data holder includes the following steps:
generating a secret key;
homomorphic encryption;
homomorphic decryption;
and homomorphic assignment.
With the technical scheme of this embodiment of the invention, the collaborative training process of the model is completed while the privacy of user data is ensured, and the value of small data can be mined.
It should be noted that this device corresponds to the method embodiment shown in FIG. 1, and all implementations of the method embodiment are applicable to this device embodiment, so the same technical effects can be achieved.
Furthermore, it is to be noted that in the device and method of the invention, it is obvious that the individual components or steps can be decomposed and/or recombined. These decompositions and/or recombinations are to be regarded as equivalents of the present invention. Also, the steps of performing the series of processes described above may naturally be performed chronologically in the order described, but need not necessarily be performed chronologically, and some steps may be performed in parallel or independently of each other. It will be understood by those skilled in the art that all or any of the steps or elements of the method and apparatus of the present invention may be implemented in any computing device (including processors, storage media, etc.) or network of computing devices, in hardware, firmware, software, or any combination thereof, which can be implemented by those skilled in the art using their basic programming skills after reading the description of the present invention.
Thus, the objects of the invention may also be achieved by running a program or a set of programs on any computing device. The computing device may be a general purpose device as is well known. The object of the invention is thus also achieved solely by providing a program product comprising program code for implementing the method or the apparatus. That is, such a program product also constitutes the present invention, and a storage medium storing such a program product also constitutes the present invention. It is to be understood that the storage medium may be any known storage medium or any storage medium developed in the future. It is further noted that in the apparatus and method of the present invention, it is apparent that each component or step can be decomposed and/or recombined. These decompositions and/or recombinations are to be regarded as equivalents of the present invention. Also, the steps of executing the series of processes described above may naturally be executed chronologically in the order described, but need not necessarily be executed chronologically. Some steps may be performed in parallel or independently of each other.
While the preferred embodiments of the present invention have been described, it will be understood by those skilled in the art that various changes and modifications may be made without departing from the spirit and scope of the invention as defined in the following claims.

Claims (10)

1. A model training method based on federated transfer learning in the Internet of Things, characterized by comprising the following steps:
a data holder sends a request to a local server to query for an available machine learning training model;
the local server queries whether an available machine learning training model matching the data holder's request exists; if no available machine learning training model is found at the local server, the data training request sent by the data holder is recorded and forwarded to a federated transfer learning server;
the federated transfer learning server queries whether an available machine learning training model matching the data holder's request exists;
and when the federated transfer learning server finds an available machine learning training model matching the data holder's request, a corresponding transfer learning process is carried out.
2. The model training method based on federated transfer learning in the Internet of Things of claim 1, wherein the local server querying whether there is an available machine learning training model matching the data holder's request further comprises:
if an available machine learning training model is found at the local server, carrying out a transfer learning process directly on the basis of the available model and transmitting the model parameters directly to the data holder.
3. The model training method based on federated transfer learning in the Internet of Things of claim 1, wherein the federated transfer learning server querying whether there is an available machine learning training model matching the data holder's request further comprises:
if no available machine learning training model is found at the federated transfer learning server, recording a data training request matching the data holder's request and feeding it back to the local server.
4. The model training method based on federated transfer learning in the Internet of Things of claim 3, wherein, if no available model is found at the federated transfer learning server, recording a data training request matching the data holder's request and feeding it back to the local server comprises:
querying for an available machine learning training model matching the data holder's request at both the local server and the federated transfer learning server simultaneously, at an interval of 24 hours.
5. The model training method based on federated transfer learning in the Internet of Things of claim 4, wherein the query for an available machine learning training model matching the data holder's request, performed at both the local server and the federated transfer learning server at an interval of 24 hours, comprises any one of the following steps:
when an available machine learning training model matching the data holder's request is found at the local server, returning the parameters of the available model to the data holder for small data training;
when an available machine learning training model matching the data holder's request is found at the federated transfer learning server, carrying out a federated transfer learning process and transmitting the parameters of the available model to the data holder after encryption.
6. The model training method based on federated transfer learning in the Internet of Things of claim 5, wherein, when the federated transfer learning server finds an available model matching the data holder's request, carrying out the corresponding transfer learning process further comprises:
encrypting the parameters of the available machine learning training model and transmitting them to the data holder.
7. The model training method based on federated transfer learning in the Internet of Things of claim 6, wherein encrypting the parameters of the available machine learning training model and then transmitting them to the data holder comprises the following steps:
generating a secret key;
homomorphic encryption;
homomorphic decryption;
and homomorphic assignment.
8. A model training device based on federated transfer learning in the Internet of Things, characterized by comprising:
the data holder sends a request to the local server to query for an available machine learning training model;
a first query module, by which the local server queries whether an available machine learning training model matching the data holder's request exists; if no available machine learning training model is found at the local server, the data training request sent by the data holder is recorded and forwarded to a federated transfer learning server;
a second query module, configured to query at the federated transfer learning server whether an available machine learning training model matching the data holder's request exists; and
a feedback module, configured to carry out a corresponding transfer learning process when the federated transfer learning server finds an available machine learning training model matching the data holder's request.
9. The model training device based on federated transfer learning in the Internet of Things of claim 8, wherein the local server querying whether there is an available machine learning training model matching the data holder's request further comprises:
if an available machine learning training model is found at the local server, carrying out a transfer learning process directly on the basis of the available machine learning training model and transmitting its parameters directly to the data holder.
10. The model training device based on federated transfer learning in the Internet of Things of claim 8, wherein the federated transfer learning server querying whether there is an available machine learning training model matching the data holder's request further comprises:
if no available machine learning training model is found at the federated transfer learning server, recording a data training request matching the data holder's request and feeding it back to the local server.
CN202011163727.3A, filed 2020-10-27 (priority 2020-10-27): Model training method and device based on federated transfer learning in the Internet of Things. Active; granted as CN112348063B.

Priority Applications (1)

Application CN202011163727.3A, priority date 2020-10-27, filing date 2020-10-27: Model training method and device based on federated transfer learning in the Internet of Things (granted as CN112348063B).


Publications (2)

CN112348063A (en), published 2021-02-09
CN112348063B (en), granted, published 2024-06-11

Family

ID=74359126

Family Applications (1)

CN202011163727.3A (priority date 2020-10-27, filing date 2020-10-27), Active, granted as CN112348063B: Model training method and device based on federated transfer learning in the Internet of Things

Country Status (1)

Country Link
CN (1) CN112348063B (en)

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110168495A (en) * 2016-01-27 2019-08-23 伯尼塞艾公司 It can be re-used, reconfigure and be reassembled as the housebroken artificial intelligence object of artificial intelligence model
US20190303785A1 (en) * 2018-03-29 2019-10-03 Azimuth1, LLC Forecasting soil and groundwater contamination migration
US20200196210A1 (en) * 2018-12-12 2020-06-18 Akamai Technologies, Inc. Intelligently pre-positioning and migrating compute capacity in an overlay network, with compute handoff and data consistency
CN110399742A (en) * 2019-07-29 2019-11-01 深圳前海微众银行股份有限公司 A kind of training, prediction technique and the device of federation's transfer learning model
CN110490305A (en) * 2019-08-22 2019-11-22 腾讯科技(深圳)有限公司 Machine learning model processing method and node based on block chain network
CN110572253A (en) * 2019-09-16 2019-12-13 济南大学 Method and system for enhancing privacy of federated learning training data
CN111723947A (en) * 2020-06-19 2020-09-29 深圳前海微众银行股份有限公司 Method and device for training federated learning model

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113159283A (en) * 2021-03-31 2021-07-23 华为技术有限公司 Model training method based on federal transfer learning and computing node
CN113705825A (en) * 2021-07-16 2021-11-26 杭州医康慧联科技股份有限公司 Data model sharing method suitable for multi-party use

Also Published As

Publication number Publication date
CN112348063B (en) 2024-06-11

Similar Documents

Publication Publication Date Title
US11263344B2 (en) Data management method and registration method for an anonymous data sharing system, as well as data manager and anonymous data sharing system
CN112270597A (en) Business processing and credit evaluation model training method, device, equipment and medium
CN109495592A (en) Data collaborative method and electronic equipment
CN112348063A (en) Model training method and device based on federal transfer learning in Internet of things
Zanetti et al. Privacy-preserving clone detection for RFID-enabled supply chains
CN105850100A (en) Systems and methods for audience measurement
US9544378B2 (en) Correlation of activities across a distributed system
US11093565B2 (en) Methods and systems for identifying multiple devices belonging to a single user by merging deterministic and probabilistic data to generate a cross device data structure
CN113362048B (en) Data label distribution determining method and device, computer equipment and storage medium
CN113449048B (en) Data label distribution determining method and device, computer equipment and storage medium
Wang et al. Blockchain-enabled fish provenance and quality tracking system
US12021972B2 (en) Aggregating encrypted network values
EP3966988B1 (en) Generating sequences of network data while preventing acquisition or manipulation of time data
CN115422570B (en) Data processing method and system for distributed storage
US20240214219A1 (en) Preventing data manipulation using multiple aggregation servers
CN107770276A (en) It is a kind of to realize that user data manages the network system and method with renewal independently
CN110213202B (en) Identification encryption matching method and device, and identification processing method and device
CN116527709A (en) Electronic medical record safe sharing system and method combining quantum key and blockchain
CN107196918A (en) A kind of method and apparatus of matched data
CN115643090A (en) Longitudinal federal analysis method, device, equipment and medium based on privacy retrieval
CN115544572A (en) Multi-party privacy data and financial privacy data processing method based on privacy protection
CN113810421A (en) Block chain-based PRE Internet of things data sharing method and system
CN113554315A (en) Service data tracking method and device, computer equipment and storage medium
CN114490704A (en) Data processing method, device, equipment and storage medium
Zhao Privacy Preservation and Verifiability for Federated Learning

Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination
GR01: Patent grant