CN111814189B - Distributed learning privacy protection method based on differential privacy - Google Patents

Distributed learning privacy protection method based on differential privacy Download PDF

Info

Publication number
CN111814189B
CN111814189B CN202010847611.5A CN202010847611A CN111814189B CN 111814189 B CN111814189 B CN 111814189B CN 202010847611 A CN202010847611 A CN 202010847611A CN 111814189 B CN111814189 B CN 111814189B
Authority
CN
China
Prior art keywords
user node
iteration
node
ith
parameter
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010847611.5A
Other languages
Chinese (zh)
Other versions
CN111814189A (en
Inventor
陈志立
孙晨
张顺
仲红
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Anhui University
Original Assignee
Anhui University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Anhui University filed Critical Anhui University
Priority to CN202010847611.5A priority Critical patent/CN111814189B/en
Publication of CN111814189A publication Critical patent/CN111814189A/en
Application granted granted Critical
Publication of CN111814189B publication Critical patent/CN111814189B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/60Protecting data
    • G06F21/62Protecting access to data via a platform, e.g. using keys or access control rules
    • G06F21/6218Protecting access to data via a platform, e.g. using keys or access control rules to a system of files or objects, e.g. local or distributed file system or database
    • G06F21/6245Protecting personal data, e.g. for financial or medical purposes
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/90Details of database functions independent of the retrieved data types
    • G06F16/95Retrieval from the web
    • G06F16/953Querying, e.g. by the use of web search engines
    • G06F16/9536Search customisation based on social or collaborative filtering
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/22Matching criteria, e.g. proximity measures

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioethics (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Biology (AREA)
  • Medical Informatics (AREA)
  • Computer Hardware Design (AREA)
  • Computer Security & Cryptography (AREA)
  • Software Systems (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
  • Complex Calculations (AREA)

Abstract

The invention discloses a distributed learning privacy protection method based on differential privacy, which is applied to n user nodes in a network, wherein each user has a group of data samples independently distributed, and the method comprises the following steps: s1, an initialization stage; s2, a local learning stage of the user node; s3, the user node acquires neighbor node information and updates the neighbor node information; s4, adding noise and disturbing; step S5, broadcasting. The invention can solve the privacy protection problem in the current distributed learning, so that the user node updates the local parameters of the user node through the neighbor nodes and sends the parameters processed by noise to the neighbor nodes, thereby protecting the personal sensitive data of the user from being leaked in a decentralized network environment.

Description

Distributed learning privacy protection method based on differential privacy
Technical Field
The invention belongs to the field of machine learning safety, and particularly relates to a distributed learning privacy protection method based on differential privacy.
Background
Networked personal devices can collect large amounts of personal data. This information can be used to provide useful personalized services to the user through machine learning. A common approach is to centralize the data generated by these users to a central server, which then performs a global optimization. While this approach is beneficial to the learning process, it may cause serious privacy concerns. On the other hand, if the user learns alone on his own device, doing so, while preserving privacy, is less accurate, especially for users who do not have too much local data.
In order to solve the above problems, the document [ Decentralized Collaborative Learning of Personalized Models over Networks,2017] considers the Decentralized Collaborative Learning problem of a personal model, but they do not consider any privacy constraint, although the data exchanged between the user and the neighbor has only parameters after each iteration and no direct data exchange, but the iteration sequence broadcasted by the user node may reveal the information of its private data set through the gradient of the local loss function. There has been a great deal of work on protecting centralized machine learning user privacy, particularly based on differential privacy. Existing privacy protection methods rely on a central trusted server. There is one central node connecting multiple nodes. The central node aggregates the stochastic gradients computed by all other nodes and updates the model parameters, e.g., weights of the neural network. A potential bottleneck in a centralized network topology is the blocking of communication traffic by the central node, since all nodes need to communicate with the central node iteratively and concurrently. When the network bandwidth is low, the performance may be significantly degraded.
Disclosure of Invention
In order to overcome the defects of the prior art, the invention provides a distributed learning privacy protection method based on differential privacy, which aims to solve the privacy protection problem in the current distributed learning, so that a user node updates own local parameters through a neighbor node and sends the parameters subjected to noise processing to the neighbor node, and personal sensitive data of the user can be protected from being leaked in a decentralized network environment.
The invention adopts the following technical scheme for solving the technical problems:
the invention relates to a distributed learning privacy protection method based on differential privacy, which is characterized by being applied to a weighted network graph G (V, E) formed by n user nodes, wherein V represents the weighted network graphA set of n user nodes in G; e is a relation edge connected between the user nodes; let W be the same as R n×n Is a symmetric nonnegative weight matrix associated with the weighted network graph G;
let i, j be the serial numbers of any two user nodes in the weighted network graph G, define W ij A relation edge (i, j) connected between the ith user node and the jth user node belongs to the weight of E, and the weight W ij Satisfies the following conditions: w ij ∈[0,1],W ij =W ji When W is ij If =0, the relationship edge (i, j) between the ith user node and the jth user node is not communicated; when W is ij When the number is not equal to 0, the relation edge (i, j) between the ith user node and the jth user node is communicated;
the ith user node has a local data distribution U i Define satisfying distribution U i Is D i
Defining the loss function of the ith user node as l (theta) i ;D i ) Wherein, θ i A parameter representing an ith user node;
the distributed learning privacy protection method is carried out according to the following steps:
step S1, an initialization stage:
setting the total number of iterations as K, the number of current iterations as K, and initializing K =1;
defining the parameter of the ith user node of the kth iteration as
Figure GDA0003805283440000021
Defining the learning rate of the k-th iteration as eta k And initializing eta k = η, define the weight matrix of the kth iteration as W k And initialize W k = W; setting the privacy budget to be epsilon, setting the difference privacy invalidation probability to be delta and setting the clipping threshold value to be C;
s2, a local learning stage of the user node:
step S2.1, from sample set D of ith user node i Randomly extracting a group of local data samples of the k-th iteration, and recording the group of local data samples as
Figure GDA0003805283440000022
S2.2, the ith user node iterates according to the parameters of the k-1 th round
Figure GDA0003805283440000023
And local data samples for the kth iteration
Figure GDA0003805283440000024
Calculating the gradient of the k-th iteration by using the formula (1)
Figure GDA0003805283440000025
Figure GDA0003805283440000026
In the formula (1), when k =1, let
Figure GDA0003805283440000027
S3, the user node acquires the neighbor node information and updates:
s3.1, in the k-th iteration, the ith user node obtains the parameters of the k-1 iteration transmitted by the jth neighbor node
Figure GDA0003805283440000028
Wherein j belongs to V, and (i, j) belongs to E; calculating a weighted average of the ith user node of the kth iteration using equation (2)
Figure GDA0003805283440000029
Figure GDA00038052834400000210
In the formula (2), when k =1, let
Figure GDA0003805283440000031
Step S3.2, the weighted average value of the ith user node is calculated by using the gradient descent method shown in the formula (3)
Figure GDA0003805283440000032
Optimizing to obtain the parameter of the ith user node of the kth iteration
Figure GDA0003805283440000033
Figure GDA0003805283440000034
S3.3, updating the weight matrix W of the kth iteration k And the learning rate eta of the kth iteration k
S4, noise disturbance adding stage:
step S4.1, the parameter of the ith iteration of the ith user node by using the formula (4)
Figure GDA0003805283440000035
Cutting to obtain the parameters after the kth round of iterative cutting
Figure GDA0003805283440000036
Figure GDA0003805283440000037
Step S4.2, setting the noise generated by the ith user node in the kth iteration as
Figure GDA0003805283440000038
And the noise satisfies the Gaussian distribution N (0, sigma) with the position parameter of 0 and the scale parameter of sigma 2 ) And has the following components:
Figure GDA0003805283440000039
in equation (5), Δ s is the local sensitivity and has:
Figure GDA00038052834400000310
step S4.3, utilizing the formula (7) to pair the parameters after cutting
Figure GDA00038052834400000311
Adding noise
Figure GDA00038052834400000312
Obtaining the parameter theta after the noise is added in the k-th iteration ik
Figure GDA00038052834400000313
Step S5, broadcasting:
step S5.1, the ith user node adds noise
Figure GDA00038052834400000314
Latter parameters
Figure GDA00038052834400000315
Sending the information to the jth neighbor node of the own node;
step S5.2, assigning k +1 to k and judging k>If K is true, it indicates that the ith user node obtains its loss function l (theta) i ;D i ) Minimized parameter
Figure GDA00038052834400000316
And finishing differential privacy protection; otherwise, the step S2 is returned to and executed in sequence.
Compared with the prior art, the invention has the beneficial effects that:
1. the invention allows users to collaboratively learn to optimize local parameters, and the parameters broadcasted by the users are not real values but are subjected to disturbance processing, thereby not only protecting personal sensitive data of the users from leakage, but also making up the condition that the parameter optimization effect is poor due to insufficient personal local data quantity. Meanwhile, the decentralized structure breaks through the potential bottleneck of communication flow blockage of the centralized network topology center node, and the efficiency of parameter optimization is higher.
2. According to the invention, a differential privacy technology is introduced for protecting the privacy of the users participating in distributed learning, before the user nodes broadcast the parameters to the own neighbor nodes, the parameters are cut to the preset threshold value, and Gaussian noise is added for disturbance, so that the users can not leak personal information of the users to the neighbor parameters through broadcasting, and the parameter optimization is completed in a safe environment.
Drawings
FIG. 1 is a schematic diagram of a user node communication structure according to the present invention;
FIG. 2 is a flow chart of the method of the present invention.
Detailed Description
In this embodiment, a distributed learning privacy protection method based on differential privacy is applied to a weighted network graph G (V, E) composed of n user nodes, where V represents a set of n user nodes in the weighted network graph G; e is a relation edge connected between the user nodes; let W be the same as R n×n Is a symmetric nonnegative weight matrix associated with the weighted network graph G;
let i, j be the serial numbers of any two user nodes in the weighted network graph G, define W ij The relation edge (i, j) connected between the ith user node and the jth user node belongs to the weight value of E, and the weight value W ij Satisfies the following conditions: w is a group of ij ∈[0,1],W ij =W ji When W is ij If =0, the relationship edge (i, j) between the ith user node and the jth user node is not communicated; when W ij And when the number is not equal to 0, the relation edge (i, j) between the ith user node and the jth user node is communicated.
The ith user node has a distribution U of local data i Defining a distribution satisfying U i Is D i
Defining the loss function of the ith user node as l (theta) i ;D i ) Wherein θ i Representing the ith user node parameter;
consider a book recommendation system. Each user node scores a small part of books on a smart phone application program, and hopes that the application program carries out personalized recommendation on new books. To develop a reliable recommendation system for each user, not only limited user data but also information from users of similar tastes is relied upon. Firstly, a graph among users is constructed, and it is assumed that four user nodes exist in the network, namely users a, B, C and D are shown in fig. 1. User a wants to optimize his book recommendation system using the local data of the other three users. There is a weight between every two nodes. The weight reflects the similarity between users, and the higher the similarity is, the higher the contribution degree is when the parameters are optimized. The more similar the user trains a recommendation system when the iteration is completed. In this network, the four users represent preferences for a group of books, assuming that the user similarity is calculated according to the user's preferences. The higher the user's preference for a book, the higher the score it will be, ranging from 1 to 5. For the current user score, shown by Table 1 below, the rows represent the user and the columns represent the book.
TABLE 1 user rating Table
Figure GDA0003805283440000051
As shown in fig. 2, the distributed learning privacy protection method is performed according to the following steps:
step S1, an initialization stage:
setting the total number of iterations as K, the number of current iterations as K, and initializing K =0;
define the parameters of the k-th iteration as
Figure GDA0003805283440000052
Defining privacy budget to be set as epsilon, setting the differential privacy invalidation probability as delta and setting the clipping threshold value as C;
define and initialize the learning rate of the k-th iteration to be eta k Initialization of eta k = η, we set η to be in this example
Figure GDA0003805283440000053
With the increase of the iteration number k, the learning rate eta is reduced, the parameters are gradually updated to optimal values, and the weight matrix of the kth iteration is defined as W k And initialize W k = W, in this embodiment, the similarity between two user nodes is calculated by using cosine similarity, and then the weight matrix W can be obtained by processing the similarity, where the calculation formula of cosine similarity is
Figure GDA0003805283440000054
r u A score set (one row of score data in table 1) representing user u, r v A score set representing user v and i representing the book category.
According to cosine similarity, a user similarity matrix can be obtained
Figure GDA0003805283440000055
The user weight matrix can be obtained according to the proportion
Figure GDA0003805283440000056
S2, a local learning stage of the user node:
step S2.1, from sample set D of ith user node i Randomly extracting a group of local data samples of the k-th iteration, and recording the group of local data samples as
Figure GDA0003805283440000061
The sample in this embodiment corresponds to a score of a certain book by a user;
step S2.2, the ith user node iterates the parameter based on the k-1 th round
Figure GDA0003805283440000062
And local data samples
Figure GDA0003805283440000063
Calculating the gradient of the kth iteration by using the formula (1)
Figure GDA0003805283440000064
Figure GDA0003805283440000065
In the formula (1), when k =1, let
Figure GDA0003805283440000066
S3, the user node acquires neighbor node information and updates:
s3.1, in the k-th iteration, the ith user node obtains the parameters of the k-1 iteration transmitted by the jth neighbor node
Figure GDA0003805283440000067
Wherein j belongs to V, and (i, j) belongs to E; calculating a weighted average of the ith user node of the kth iteration using equation (2)
Figure GDA0003805283440000068
Figure GDA0003805283440000069
In the formula (2), when k =1, let
Figure GDA00038052834400000610
This step can be computed in parallel with the computation of the gradient of step S2.2;
step S3.2, the weighted average value of the ith user node is calculated by using the gradient descent method shown in the formula (3)
Figure GDA00038052834400000611
Optimizing to obtain the parameter of the ith user node of the kth iteration
Figure GDA00038052834400000612
Figure GDA00038052834400000613
S3.3, updating the weight matrix W of the kth iteration k And the learning rate eta of the kth iteration k
S4, noise disturbance adding stage:
step S4.1, the parameter of the ith iteration of the ith user node by using the formula (4)
Figure GDA00038052834400000614
Cutting to obtain the parameters after the kth round of iterative cutting
Figure GDA00038052834400000615
Figure GDA00038052834400000616
In order to limit the sensitivity, the current local parameters are cut, if the current local parameters are smaller than a threshold value C, the current local parameters are reserved, and if the current local parameters are not smaller than the threshold value C, the current local parameters are cut to C;
step S4.2, setting the noise generated by the ith user node in the kth iteration as
Figure GDA0003805283440000071
And the noise satisfies the Gaussian distribution N (0, sigma) with the position parameter of 0 and the scale parameter of sigma 2 ) And has the following components:
Figure GDA0003805283440000072
in equation (5), Δ s is the local sensitivity and has:
Figure GDA0003805283440000073
s4.3, utilizing the formula (7) to pair the parameters after cutting
Figure GDA0003805283440000074
Adding noise
Figure GDA0003805283440000075
Obtaining parameters after adding noise
Figure GDA0003805283440000076
Figure GDA0003805283440000077
Step S5, broadcasting:
step S5.1, the ith user node adds noise
Figure GDA0003805283440000078
Latter parameters
Figure GDA0003805283440000079
Sending the information to the jth neighbor node of the self;
step S5.2, assigning k +1 to k and judging k>If K is true, it indicates that the ith user node obtains the loss function l (theta) i ;D i ) Minimized parameter
Figure GDA00038052834400000710
And differential privacy protection is completed; otherwise, the step S2 is returned to and executed in sequence.

Claims (1)

1. A distributed learning privacy protection method based on differential privacy is characterized by being applied to a weighted network graph G (V, E) formed by n user nodes, wherein V represents a set of n user nodes in the weighted network graph G; e is a relation edge connected between the user nodes; let W be the same as R n×n Is a symmetric nonnegative weight matrix associated with the weighted network graph G;
let i, j beWeighting the sequence numbers of any two user nodes in the network graph G, defining W ij A relation edge (i, j) connected between the ith user node and the jth user node belongs to the weight of E, and the weight W ij Satisfies the following conditions: w ij ∈[0,1],W ij =W ji When W is ij When =0, the relationship edge (i, j) between the ith user node and the jth user node is not communicated; when W is ij When the number is not equal to 0, the relation edge (i, j) between the ith user node and the jth user node is communicated;
the ith user node has a local data distribution U i Define satisfying distribution U i Is D i
Defining the loss function of the ith user node as
Figure FDA0003805283430000018
Wherein, theta i A parameter representing an ith user node;
the distributed learning privacy protection method is carried out according to the following steps:
step S1, an initialization stage:
setting the total number of iterations as K, the number of current iterations as K, and initializing K =1;
defining the parameter of the ith user node of the kth iteration as
Figure FDA0003805283430000011
Defining the learning rate of the k-th iteration as eta k And initialize η k = η, defining the weight matrix of the kth iteration as W k And initialize W k = W; setting the privacy budget to be epsilon, setting the difference privacy invalidation probability to be delta and setting the clipping threshold value to be C;
s2, a local learning stage of the user node:
step S2.1, from sample set D of ith user node i Randomly extracting a group of local data samples of the k-th iteration, and recording the group of local data samples as
Figure FDA0003805283430000012
S2.2, the ith user node iterates according to the parameters of the k-1 th round
Figure FDA0003805283430000013
And local data samples for the k-th iteration
Figure FDA0003805283430000014
Calculating the gradient of the k-th iteration by using the formula (1)
Figure FDA0003805283430000015
Figure FDA0003805283430000016
In the formula (1), when k =1, let
Figure FDA0003805283430000017
S3, the user node acquires neighbor node information and updates:
s3.1, in the k-th iteration, the ith user node obtains the parameters of the k-1 iteration transmitted by the jth neighbor node
Figure FDA0003805283430000021
Wherein j belongs to V, and (i, j) belongs to E; calculating a weighted average of the ith user node of the kth iteration by using equation (2)
Figure FDA0003805283430000022
Figure FDA0003805283430000023
In the formula (2), when k =1, let
Figure FDA0003805283430000024
Step S3.2, the weighted average value of the ith user node is calculated by using the gradient descent method shown in the formula (3)
Figure FDA0003805283430000025
Optimizing to obtain the parameter of the ith user node of the kth iteration
Figure FDA0003805283430000026
Figure FDA0003805283430000027
Step S3.3, updating the weight matrix W of the kth iteration k And the learning rate eta of the k-th iteration k
S4, noise disturbance adding stage:
step S4.1, the parameter of the ith iteration of the ith user node is calculated by using the formula (4)
Figure FDA0003805283430000028
Cutting to obtain a parameter theta 'after the kth round of iterative cutting' i k
Figure FDA0003805283430000029
Step S4.2, setting the noise generated by the ith user node in the kth iteration as
Figure FDA00038052834300000210
And the noise satisfies the Gaussian distribution N (0, sigma) with the position parameter of 0 and the scale parameter of sigma 2 ) And has the following components:
Figure FDA00038052834300000211
in equation (5), Δ s is the local sensitivity and has:
Figure FDA00038052834300000212
step S4.3, utilizing formula (7) to pair the trimmed parameter theta' i k Adding noise
Figure FDA00038052834300000213
Obtaining the parameter theta' of the kth round after the noise is added in an iterative way i k
Figure FDA00038052834300000214
Step S5, broadcasting:
step S5.1, the ith user node adds noise
Figure FDA0003805283430000031
The latter parameter θ ″) i k Sending the information to the jth neighbor node of the own node;
step S5.2, assigning k +1 to k and judging k>If K is true, it means that the ith user node obtains the loss function
Figure FDA0003805283430000032
Minimized parameter
Figure FDA0003805283430000033
And differential privacy protection is completed; otherwise, the step S2 is returned to and executed in sequence.
CN202010847611.5A 2020-08-21 2020-08-21 Distributed learning privacy protection method based on differential privacy Active CN111814189B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010847611.5A CN111814189B (en) 2020-08-21 2020-08-21 Distributed learning privacy protection method based on differential privacy

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010847611.5A CN111814189B (en) 2020-08-21 2020-08-21 Distributed learning privacy protection method based on differential privacy

Publications (2)

Publication Number Publication Date
CN111814189A CN111814189A (en) 2020-10-23
CN111814189B true CN111814189B (en) 2022-10-18

Family

ID=72859654

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010847611.5A Active CN111814189B (en) 2020-08-21 2020-08-21 Distributed learning privacy protection method based on differential privacy

Country Status (1)

Country Link
CN (1) CN111814189B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112465301B (en) * 2020-11-06 2022-12-13 山东大学 Edge smart power grid cooperation decision method based on differential privacy mechanism
CN112749403B (en) * 2021-01-19 2022-03-18 山东大学 Edge data encryption method suitable for edge Internet of things agent device
CN115081024B (en) * 2022-08-16 2023-01-24 杭州金智塔科技有限公司 Decentralized business model training method and device based on privacy protection
CN116805082B (en) * 2023-08-23 2023-11-03 南京大学 Splitting learning method for protecting private data of client

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109766710A (en) * 2018-12-06 2019-05-17 广西师范大学 The difference method for secret protection of associated social networks data
CN109952582A (en) * 2018-09-29 2019-06-28 区链通网络有限公司 A kind of training method, node, system and the storage medium of intensified learning model
CN110910218A (en) * 2019-11-21 2020-03-24 南京邮电大学 Multi-behavior migration recommendation method based on deep learning
CN111177781A (en) * 2019-12-30 2020-05-19 北京航空航天大学 Differential privacy recommendation method based on heterogeneous information network embedding

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11455427B2 (en) * 2018-07-24 2022-09-27 Arizona Board Of Regents On Behalf Of Arizona State University Systems, methods, and apparatuses for implementing a privacy-preserving social media data outsourcing model

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109952582A (en) * 2018-09-29 2019-06-28 区链通网络有限公司 A kind of training method, node, system and the storage medium of intensified learning model
CN109766710A (en) * 2018-12-06 2019-05-17 广西师范大学 The difference method for secret protection of associated social networks data
CN110910218A (en) * 2019-11-21 2020-03-24 南京邮电大学 Multi-behavior migration recommendation method based on deep learning
CN111177781A (en) * 2019-12-30 2020-05-19 北京航空航天大学 Differential privacy recommendation method based on heterogeneous information network embedding

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Individual Differential Privacy: A Utility-Preserving Formulation of Differential Privacy Guarantees;Jordi Soria-Comas 等;《IEEE Transactions on Information Forensics and Security》;20170202;第12卷(第6期);第1418-1429页 *
Traditional and Deep Learning Based Methods for Mammographic Image Analysis;Feng Xing 等;《2018 14th International Conference on Natural Computation, Fuzzy Systems and Knowledge Discovery (ICNC-FSKD)》;20190411;第317-324页 *
面向深度神经网络训练的数据差分隐私保护随机梯度下降算法;李英 等;《计算机应用与软件》;20200430;第37卷(第4期);第252-259页 *

Also Published As

Publication number Publication date
CN111814189A (en) 2020-10-23

Similar Documents

Publication Publication Date Title
CN111814189B (en) Distributed learning privacy protection method based on differential privacy
CN105468742B (en) The recognition methods of malice order and device
EP2498440B1 (en) Configuration method and system of complex network and configuration and management module of server resources
CN109617888B (en) Abnormal flow detection method and system based on neural network
CN110059144B (en) Trajectory owner prediction method based on convolutional neural network
CN113422695B (en) Optimization method for improving robustness of topological structure of Internet of things
CN115798598B (en) Hypergraph-based miRNA-disease association prediction model and method
CN110991789B (en) Method and device for determining confidence interval, storage medium and electronic device
CN112990276A (en) Federal learning method, device, equipment and storage medium based on self-organizing cluster
CN105956925B (en) Important user discovery method and device based on propagation network
US20160314129A1 (en) System and method for matching dynamically validated network data
CN116862023A (en) Robust federal learning abnormal client detection method based on spectral clustering
CN114048838A (en) Knowledge migration-based hybrid federal learning method
AU2021102006A4 (en) A system and method for identifying online rumors based on propagation influence
CN110910261A (en) Network community detection countermeasure enhancement method based on multi-objective optimization
CN107291860B (en) Seed user determination method
CN105159918A (en) Trust correlation based microblog network community discovery method
Lou et al. Local communities obstruct global consensus: Naming game on multi-local-world networks
CN115801897B (en) Message dynamic processing method of edge proxy
CN116010832A (en) Federal clustering method, federal clustering device, central server, federal clustering system and electronic equipment
CN113516163B (en) Vehicle classification model compression method, device and storage medium based on network pruning
Qi et al. Micro-blog user community discovery using generalized SimRank edge weighting method
CN103260060B (en) A kind of digital television program recommending method based on community discovery
CN113486933B (en) Model training method, user identity information prediction method and device
CN115392058A (en) Method for constructing digital twin model based on evolutionary game in industrial Internet of things

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant