CN116663052A - Power data privacy protection method, system, equipment and medium under multiparty collaboration - Google Patents

Power data privacy protection method, system, equipment and medium under multiparty collaboration

Info

Publication number
CN116663052A
Authority
CN
China
Prior art keywords
gradient
model
data
privacy
function
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310582461.3A
Other languages
Chinese (zh)
Inventor
王晓辉
李道兴
郭鹏天
季知祥
程凯
杨会峰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
State Grid Corp of China SGCC
China Electric Power Research Institute Co Ltd CEPRI
Information and Telecommunication Branch of State Grid Hebei Electric Power Co Ltd
Original Assignee
State Grid Corp of China SGCC
China Electric Power Research Institute Co Ltd CEPRI
Information and Telecommunication Branch of State Grid Hebei Electric Power Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by State Grid Corp of China SGCC, China Electric Power Research Institute Co Ltd CEPRI, Information and Telecommunication Branch of State Grid Hebei Electric Power Co Ltd filed Critical State Grid Corp of China SGCC
Priority to CN202310582461.3A priority Critical patent/CN116663052A/en
Publication of CN116663052A publication Critical patent/CN116663052A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/60Protecting data
    • G06F21/62Protecting access to data via a platform, e.g. using keys or access control rules
    • G06F21/6218Protecting access to data via a platform, e.g. using keys or access control rules to a system of files or objects, e.g. local or distributed file system or database
    • G06F21/6245Protecting personal data, e.g. for financial or medical purposes
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/60Protecting data
    • G06F21/602Providing cryptographic facilities or services
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/082Learning methods modifying the architecture, e.g. adding, deleting or silencing nodes or connections
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/098Distributed learning, e.g. federated learning
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y04INFORMATION OR COMMUNICATION TECHNOLOGIES HAVING AN IMPACT ON OTHER TECHNOLOGY AREAS
    • Y04SSYSTEMS INTEGRATING TECHNOLOGIES RELATED TO POWER NETWORK OPERATION, COMMUNICATION OR INFORMATION TECHNOLOGIES FOR IMPROVING THE ELECTRICAL POWER GENERATION, TRANSMISSION, DISTRIBUTION, MANAGEMENT OR USAGE, i.e. SMART GRIDS
    • Y04S10/00Systems supporting electrical power generation, transmission or distribution
    • Y04S10/50Systems or methods supporting the power network operation or management, involving a certain degree of interaction with the load-side end user applications

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Bioethics (AREA)
  • Computer Hardware Design (AREA)
  • Biophysics (AREA)
  • Mathematical Physics (AREA)
  • Computing Systems (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Computer Security & Cryptography (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Molecular Biology (AREA)
  • Medical Informatics (AREA)
  • Databases & Information Systems (AREA)
  • Remote Monitoring And Control Of Power-Distribution Networks (AREA)

Abstract

The invention discloses a method, a system, equipment and a medium for protecting the privacy of electric power data under multiparty collaboration. The method comprises the following steps: updating, through gradient values, the parameters of an energy credit rating model constructed under multiparty collaboration based on federal learning; finding, through the model loss function, the input data gradient for which the credit rating model with updated parameters has the highest accuracy; performing gradient clipping on the input data according to the minimum risk function in the model training process and adding disturbance; and protecting the clipped and perturbed input data with homomorphic encryption. The invention adopts federal learning to protect data security and user privacy and makes full use of scattered data sources to improve model performance. Meanwhile, clipping the input data gradients and adding disturbance effectively guarantee the data privacy of model training under multiparty collaborative data sharing. In addition, homomorphic encryption effectively protects user privacy and data security in federal learning.

Description

Power data privacy protection method, system, equipment and medium under multiparty collaboration
Technical Field
The invention belongs to the technical field of power data analysis, and particularly relates to a power data privacy protection method, system, equipment and medium under multiparty cooperation.
Background
Machine learning and deep learning techniques require large amounts of data for model training, which makes the data sharing requirements of lateral collaboration between units and longitudinal penetration between departments increasingly urgent. Meanwhile, because of the sensitivity and privacy protection requirements of marketing data, sharing such data among the units of a company is very difficult, which easily creates data islands between units, hinders model training, makes it hard to comprehensively and fully mine the inherent value of the marketing data, and wastes data value. Therefore, how to guarantee the privacy and security of each unit's data and users during multi-party data sharing, realize collaborative training on that basis, improve the effectiveness and accuracy of the data model, and achieve secure data sharing is a problem to be solved.
Federal learning is a distributed machine learning technology. By performing distributed model training among multiple data sources holding local data, and exchanging only model parameters or intermediate results without exchanging local individual or sample data, a global model based on virtually fused data is built, balancing data privacy protection and shared data computation. However, while federal learning solves the "data island" problem between units, it also exposes data privacy risks during multiparty data sharing, because an attacker can reversely deduce participant data from the intermediate training parameters.
Disclosure of Invention
Aiming at the problems in the prior art, the invention provides a method, a system, equipment and a medium for protecting the privacy of electric power data under multiparty collaboration, which can protect the safety of the data and the privacy of users by adopting federal learning and can fully utilize scattered data sources to improve the effectiveness and the accuracy of a data model.
In order to achieve the above purpose, the present invention has the following technical scheme:
in a first aspect, a method for protecting privacy of power data under multiparty collaboration is provided, including:
updating parameters of an energy credit rating model under multiparty cooperation constructed based on federal learning through gradient values;
finding out the input data gradient with highest accuracy of the credit rating model after parameter updating through a model loss function;
performing gradient clipping on input data according to a minimum risk function in the model training process, and adding disturbance;
and protecting the input data subjected to gradient clipping and disturbance addition by using homomorphic encryption.
Preferably, the steps of constructing the credit rating model under multiparty cooperation based on federal learning are as follows:
analyzing a business target of a sharing scene of credit investigation data, and analyzing the inherent relevance between the electric energy data of the electric network of each participant and the business target;
Carding the basic information of each power grid and each partner;
according to the inherent relevance of the electric energy data of the electric network of each participant and the business target, establishing a corresponding scoring rule for the basic information of each of the carded electric network and the cooperators, and calculating a corresponding score by a privacy calculation method;
and training an enterprise default condition scoring/enterprise payment capability scoring model based on federal learning to obtain a credit rating model under multiparty collaboration.
Preferably, the step of updating parameters of the credit rating model under multiparty cooperation constructed based on federal learning through gradient values includes:
training the credit scoring model by adopting a neural network optimization algorithm, iteratively calculating the gradient of the optimized nonlinear function and updating parameters to reduce the gradient until the algorithm converges to a local optimum or reaches a maximum iteration value;
let the parameter vector w_j be the one-dimensional vector obtained by flattening all parameters of the credit scoring model, and let E_i be the loss function; the partial derivative of the loss function with respect to each parameter in the model parameter vector, i.e. the gradient value of the model, is calculated by the back-propagation algorithm, and the parameters of the credit scoring model are updated by the gradient values according to:
w_j ← w_j − η·(∂E_i/∂w_j)
where η is the learning rate.
Preferably, the finding the input data gradient with the highest accuracy of the credit rating model after parameter updating through the model loss function includes:
the loss function of the credit scoring model is:
E = (1/2)·(h_{w,b}(x) − y_label)², with h_{w,b}(x) = f(Σ_i w_i·x_i + b)
where f(·) is the activation function, b is the bias term of the credit scoring model, y_label is the true label value, w is the weight parameter vector, w_i is one weight parameter in the weight parameter vector, x is the input data vector, and x_i is one input datum;
the weight parameters that minimize the loss function are sought: the smaller the loss function value, the smaller the difference between the trained value h_{w,b}(x) and the true label value y_label, and the higher the accuracy of the credit scoring model; the gradient associated with the kth input datum x_k is calculated as:
∂E/∂w_k = (h_{w,b}(x) − y_label)·f′(Σ_i w_i·x_i + b)·x_k
the gradient of the bias term b is calculated as:
∂E/∂b = (h_{w,b}(x) − y_label)·f′(Σ_i w_i·x_i + b)
if an attacker obtains the gradient information of the kth input value x_k and of the bias term b, the private information x_k of the participant can be learned, causing a privacy disclosure problem, through the relation:
x_k = (∂E/∂w_k) / (∂E/∂b)
preferably, the step of clipping the input data gradient according to the minimum risk function in the model training process and adding disturbance comprises the following steps:
assuming a total of n participants, the total dataset is D = {d_1, …, d_n}; for the kth participant, the minimum risk function in the corresponding training process is:
min_w F_k(w) = (1/|d_k|)·Σ_i ℓ_i^k(w, d_i^k)
where ℓ_i^k(w, d_i^k) represents the loss function of participant k in the ith round of iterative training and d_i^k represents the data of participant k in the ith round of iterative training; the gradient optimization function of participant k is:
g_t^k = ∇_w F_k(w_t)
the gradient clipping mode is:
ḡ_t^k = g_t^k / max(1, ‖g_t^k‖₂ / C)
letting g_t = g_t^k, the gradient optimization iterative calculation expression is:
G[g²]_t ← ρ·(g_t)² + (1−ρ)·G[g²]_{t−1}
where G[g²]_{t−1} is the accumulated square used to estimate the historical gradient, ρ is the attenuation coefficient, and ε₀ is 10⁻⁸, introduced to keep the denominator non-zero;
an approximately optimal clipping effect is achieved when the clipping threshold is close to the expected norm of the gradient, so G[g²]_{t−1} is used to predict the global gradient and to form the clipping threshold C_t^k of the current round, where β is the local clipping factor; C_t^k is calculated as:
C_t^k = β·sqrt(G[g²]_{t−1} + ε₀)
the disturbance is added as follows:
in the tth round of training, the kth participant locally clips the gradient and then adds zero-mean Gaussian noise whose scale is determined by the Gaussian disturbance value σ, the clipping threshold C and the model size s_k of the kth participant, so that the added disturbance information is known only to the corresponding participant;
where C is the gradient clipping threshold used when prior knowledge of the gradient is insufficient in the early stage of training; as the model continues to converge, the clipping threshold C gradually decreases, so the noise added to the gradient gradually decreases.
Preferably, the step of protecting the input data of gradient clipping and adding disturbance by using homomorphic encryption comprises the following steps:
homomorphic encryption is divided into a triple {K, E, D}: a key pair {pk, sk} is obtained through the key generation function K, where pk is the public key used to encrypt the plaintext and sk is the private key used for decryption; given the plaintext m and the public key pk, the encryption function E generates the ciphertext c = E_pk(m), and the plaintext is recovered through the decryption function D and the private key sk as m = D_sk(c);
key generation, encryption and decryption are carried out with the Paillier algorithm, specifically:
key generation: select two large prime numbers p and q, compute n = p·q and γ = lcm(p−1, q−1), where lcm denotes the least common multiple; define the function L(x) = (x−1)/n; randomly select g ∈ Z*_{n²} such that g and n satisfy gcd(L(g^γ mod n²), n) = 1, where gcd denotes the greatest common divisor (checking this greatest common divisor ensures that the two prime numbers are of equal length); the public key pk is (n, g) and the private key sk is γ;
encryption: given a plaintext m ∈ Z_n, randomly select r ∈ Z*_n and compute the ciphertext with the public key pk as c = E_pk(m) = g^m·r^n mod n²;
decryption: given the ciphertext c and the private key sk, recover the plaintext as m = D_sk(c) = L(c^γ mod n²) / L(g^γ mod n²) mod n.
And the homomorphic encryption method is used for carrying out aggregation updating on the intermediate parameters when training by using the credit scoring model, so that the user privacy and the data safety of each participant in the model training process are protected.
In a second aspect, a power data privacy protection system under multiparty collaboration is provided, including:
the parameter updating module is used for updating parameters of the energy credit rating model under multiparty cooperation constructed based on federal learning through the gradient values;
the gradient calculation module is used for finding out the input data gradient with highest accuracy of the credit rating model after parameter updating through the model loss function;
the gradient cutting and disturbance adding module is used for carrying out gradient cutting on input data according to a minimum risk function in the model training process and adding disturbance;
and the homomorphic encryption module is used for protecting the input data subjected to gradient clipping and disturbance addition by using homomorphic encryption.
Preferably, the power data privacy protection system under multiparty collaboration further comprises a credit rating model building module for:
analyzing a business target of a sharing scene of credit investigation data, and analyzing the inherent relevance between the electric energy data of the electric network of each participant and the business target;
carding the basic information of each power grid and each partner;
according to the inherent relevance of the electric energy data of the electric network of each participant and the business target, establishing a corresponding scoring rule for the basic information of each of the carded electric network and the cooperators, and calculating a corresponding score by a privacy calculation method;
And training an enterprise default condition scoring/enterprise payment capability scoring model based on federal learning to obtain an energy credit scoring model under multiparty collaboration.
Preferably, the parameter updating module trains the credit scoring model with a neural network optimization algorithm, iteratively calculating the gradient of the optimized nonlinear function and updating the parameters to reduce the gradient until the iteration termination condition is met;
let the parameter vector w_j be the one-dimensional vector obtained by flattening all parameters of the credit scoring model, and let E_i be the loss function; the partial derivative of the loss function with respect to each parameter in the model parameter vector, i.e. the gradient value of the model, is calculated by the back-propagation algorithm, and the parameters of the credit scoring model are updated by the gradient values according to:
w_j ← w_j − η·(∂E_i/∂w_j)
where η is the learning rate.
Preferably, the gradient calculating module finds out the input data gradient with the highest accuracy of the credit rating model after parameter updating through the model loss function, and the gradient calculating module comprises the following steps:
the loss function of the credit scoring model is:
E = (1/2)·(h_{w,b}(x) − y_label)², with h_{w,b}(x) = f(Σ_i w_i·x_i + b)
where f(·) is the activation function, b is the bias term of the credit scoring model, y_label is the true label value, w is the weight parameter vector, w_i is one weight parameter in the weight parameter vector, x is the input data vector, and x_i is one input datum;
the weight parameters that minimize the loss function are sought: the smaller the loss function value, the smaller the difference between the trained value h_{w,b}(x) and the true label value y_label, and the higher the accuracy of the credit scoring model; the gradient associated with the kth input datum x_k is calculated as:
∂E/∂w_k = (h_{w,b}(x) − y_label)·f′(Σ_i w_i·x_i + b)·x_k
the gradient of the bias term b is calculated as:
∂E/∂b = (h_{w,b}(x) − y_label)·f′(Σ_i w_i·x_i + b)
if an attacker obtains the gradient information of the kth input value x_k and of the bias term b, the private information x_k of the participant can be learned, causing a privacy disclosure problem, through the relation:
x_k = (∂E/∂w_k) / (∂E/∂b)
preferably, the step of performing input data gradient clipping and adding disturbance according to the minimum risk function in the model training process by the gradient clipping and disturbance adding module includes:
assuming a total of n participants, the total dataset is D = {d_1, …, d_n}; for the kth participant, the minimum risk function in the corresponding training process is:
min_w F_k(w) = (1/|d_k|)·Σ_i ℓ_i^k(w, d_i^k)
where ℓ_i^k(w, d_i^k) represents the loss function of participant k in the ith round of iterative training and d_i^k represents the data of participant k in the ith round of iterative training; the gradient optimization function of participant k is:
g_t^k = ∇_w F_k(w_t)
the gradient clipping mode is:
ḡ_t^k = g_t^k / max(1, ‖g_t^k‖₂ / C)
letting g_t = g_t^k, the gradient optimization iterative calculation expression is:
G[g²]_t ← ρ·(g_t)² + (1−ρ)·G[g²]_{t−1}
where G[g²]_{t−1} is the accumulated square used to estimate the historical gradient, ρ is the attenuation coefficient, and ε₀ is 10⁻⁸, introduced to keep the denominator non-zero;
an approximately optimal clipping effect is achieved when the clipping threshold is close to the expected norm of the gradient, so G[g²]_{t−1} is used to predict the global gradient and to form the clipping threshold C_t^k of the current round, where β is the local clipping factor; C_t^k is calculated as:
C_t^k = β·sqrt(G[g²]_{t−1} + ε₀)
the disturbance is added as follows:
in the tth round of training, the kth participant locally clips the gradient and then adds zero-mean Gaussian noise whose scale is determined by the Gaussian disturbance value σ, the clipping threshold C and the model size s_k of the kth participant, so that the added disturbance information is known only to the corresponding participant;
where C is the gradient clipping threshold used when prior knowledge of the gradient is insufficient in the early stage of training; as the model continues to converge, the clipping threshold C gradually decreases, so the noise added to the gradient gradually decreases.
Preferably, the step of protecting the input data of gradient clipping and disturbance adding by the homomorphic encryption module by using homomorphic encryption includes:
homomorphic encryption is divided into a triple {K, E, D}: a key pair {pk, sk} is obtained through the key generation function K, where pk is the public key used to encrypt the plaintext and sk is the private key used for decryption; given the plaintext m and the public key pk, the encryption function E generates the ciphertext c = E_pk(m), and the plaintext is recovered through the decryption function D and the private key sk as m = D_sk(c);
key generation, encryption and decryption are carried out with the Paillier algorithm, specifically:
key generation: select two large prime numbers p and q, compute n = p·q and γ = lcm(p−1, q−1), where lcm denotes the least common multiple; define the function L(x) = (x−1)/n; randomly select g ∈ Z*_{n²} such that g and n satisfy gcd(L(g^γ mod n²), n) = 1, where gcd denotes the greatest common divisor (checking this greatest common divisor ensures that the two prime numbers are of equal length); the public key pk is (n, g) and the private key sk is γ;
encryption: given a plaintext m ∈ Z_n, randomly select r ∈ Z*_n and compute the ciphertext with the public key pk as c = E_pk(m) = g^m·r^n mod n²;
decryption: given the ciphertext c and the private key sk, recover the plaintext as m = D_sk(c) = L(c^γ mod n²) / L(g^γ mod n²) mod n.
And the homomorphic encryption method is used for carrying out aggregation updating on the intermediate parameters when training by using the credit scoring model, so that the user privacy and the data safety of each participant in the model training process are protected.
In a third aspect, an electronic device is provided, including a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor implements the steps of the method for protecting privacy of power data under multi-party collaboration when the computer program is executed.
In a fourth aspect, a computer readable storage medium is provided, where a computer program is stored, where the computer program, when executed by a processor, implements the steps of the method for protecting privacy of electric power data under the cooperation of multiple parties.
Compared with the prior art, the first aspect of the invention has at least the following beneficial effects:
the method for protecting the electric power data privacy under multiparty collaboration adopts federal learning, so that the data safety and the user privacy can be protected, and the performance of a model can be improved by fully utilizing scattered data sources. Meanwhile, input data gradient clipping is carried out according to the minimum risk function in the model training process, disturbance is added, and data privacy safety of model training under multiparty cooperative data sharing is effectively guaranteed. Although federal learning can realize training a model on the premise of not sharing data, federal learning needs to share the trained model, so that malicious parties can still adjust input data according to different parameters of federal learning models in each round, gradually approach real parameters, so that sensitive information of users is deduced, and threat is formed to privacy of the users. The invention utilizes privacy protection technology and homomorphic encryption to effectively protect user privacy and data security in federal learning.
It will be appreciated that the advantages of the second to fourth aspects may be found in the relevant description of the first aspect and are not repeated here.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are needed in the embodiments or the description of the prior art will be briefly described below, it being obvious that the drawings in the following description are only some embodiments of the present application, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a flow chart of a method for protecting privacy of electric power data under multiparty collaboration according to an embodiment of the application;
fig. 2 is a block diagram of a power data privacy protection system under multiparty collaboration according to an embodiment of the present application.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system configurations, techniques, etc. in order to provide a thorough understanding of embodiments of the present application. It will be apparent, however, to one skilled in the art that the present application may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present application with unnecessary detail.
Furthermore, the terms "first," "second," "third," and the like in the description of the present specification and in the appended claims, are used for distinguishing between descriptions and not necessarily for indicating or implying a relative importance.
Example 1
Referring to fig. 1, the method for protecting privacy of electric power data under multiparty collaboration according to the embodiment of the application specifically includes the following steps:
s1, updating parameters of a credit rating model under multiparty cooperation constructed based on federal learning through gradient values;
s2, finding out the input data gradient with highest accuracy of the credit rating model after parameter updating through a model loss function;
s3, performing gradient clipping on input data according to a minimum risk function in the model training process and adding disturbance;
s4, protecting input data subjected to gradient clipping and disturbance addition by using homomorphic encryption.
In one possible implementation, the step of constructing the credit rating model under the multiparty collaboration in step S1 based on federal learning is as follows:
firstly, analyzing a business target of a sharing scene of credit-evaluating data, and analyzing the internal relevance between the electric energy data of the power grid of each participant and the business target;
secondly, combing basic information such as characteristic dimensions, label data, sample scale and the like of the power grid and the partners;
Thirdly, establishing corresponding scoring rules for the respective basic information of the carded power grid and the cooperators according to the inherent relevance between the power grid electric energy data of each participator and the business targets, and calculating corresponding scores through a privacy calculation method;
and finally, training an enterprise default condition scoring/enterprise payment capability scoring model based on federal learning to obtain an energy credit scoring model under multiparty collaboration.
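For illustration only, the following Python sketch shows one way the federated training procedure described in the preceding steps could be organized; the linear scoring model, the FedAvg-style averaging, and names such as LocalParty and local_update are assumptions of this sketch rather than part of the claimed method.

```python
# A minimal federated-training sketch: each grid participant trains a local
# scoring model on its private data and only parameters are exchanged.
import numpy as np

class LocalParty:
    def __init__(self, features, labels, lr=0.01):
        self.x = features          # participant's private feature matrix
        self.y = labels            # enterprise default / payment-capability labels
        self.lr = lr

    def local_update(self, global_w, local_epochs=1):
        w = global_w.copy()
        for _ in range(local_epochs):
            pred = self.x @ w                    # linear scoring model h_w(x)
            grad = self.x.T @ (pred - self.y) / len(self.y)
            w -= self.lr * grad                  # local gradient descent step
        return w

def federated_training(parties, dim, rounds=50):
    w_global = np.zeros(dim)
    for _ in range(rounds):
        local_models = [p.local_update(w_global) for p in parties]
        w_global = np.mean(local_models, axis=0)  # FedAvg-style aggregation
    return w_global

# usage: three grid participants with private scoring features
rng = np.random.default_rng(0)
parties = [LocalParty(rng.normal(size=(100, 5)), rng.normal(size=100)) for _ in range(3)]
w = federated_training(parties, dim=5)
```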
A risk analysis is performed on training with the credit scoring model. In federal-learning-oriented model training or inference, many participants are usually involved, and if the federal learning agency does not detect malicious users in time, the global model can easily be polluted and privacy may even be leaked. Owing to the particularity of federal learning, the federal learning agency cannot access the collection and training phases of the participants' private data and can only obtain each participant's encrypted model update parameters. This special distributed architecture makes federal learning agencies vulnerable to attacks. Many attack defences follow an outlier-detection strategy and exclude abnormal model updates, assuming that all participants' local model updates are independently and identically distributed, so that only abnormal models differing from the majority need to be filtered to reduce the impact of an attack. However, in the federal learning scenario the participants' data are not independently and identically distributed, so there are large differences between participants' model updates; many normal participants' models are then rejected, and the final trained model cannot meet the original requirements. A brief sketch of such an outlier-detection defence is given below.
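The sketch below illustrates the outlier-detection defence discussed above; the cosine-similarity score against a coordinate-wise median reference and the fixed keep ratio are assumptions of this sketch, not a prescribed defence, and (as noted above) such filtering can reject benign non-IID participants.

```python
# Outlier filtering of client updates: score each update by cosine similarity
# to the median update and keep only the most similar ones before averaging.
import numpy as np

def filter_updates(updates, keep_ratio=0.8):
    ref = np.median(updates, axis=0)                   # robust reference update
    def cos(u, v):
        return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12))
    scores = np.array([cos(u, ref) for u in updates])
    k = max(1, int(keep_ratio * len(updates)))
    kept = np.argsort(scores)[-k:]                     # indices of the most similar updates
    return updates[kept].mean(axis=0)

rng = np.random.default_rng(1)
honest = rng.normal(0.0, 0.1, size=(8, 5))             # benign (possibly non-IID) updates
malicious = rng.normal(5.0, 0.1, size=(2, 5))          # poisoned updates
aggregated = filter_updates(np.vstack([honest, malicious]))
```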
In one possible implementation manner, the updating parameters of the energy credit rating model under the multiparty collaboration constructed based on federal learning through the gradient values in step S1 specifically includes:
the training algorithm of the credit scoring model adopts a common neural network optimization algorithm, the neural network optimization algorithm starts from a group of random parameters when optimizing parameters in the neural network, calculates the gradient of the optimized nonlinear function at each step and updates the parameters to reduce the gradient until the algorithm converges to a local optimum or reaches a maximum iteration value.
Let the parameter vector w_j be the one-dimensional vector obtained by flattening all parameters of the credit scoring model, and let E_i be the loss function; the partial derivative of the loss function with respect to each parameter in the model parameter vector, i.e. the gradient value of the model, is calculated by the back-propagation algorithm, and the parameters of the credit scoring model are updated by the gradient values according to:
w_j ← w_j − η·(∂E_i/∂w_j)
where η is the learning rate.
When training the credit scoring model, each component w_j of the parameter vector is updated independently of the specific values of the other parameters.
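As an illustrative sketch only, the following Python fragment applies the update w_j ← w_j − η·(∂E_i/∂w_j) to a single-neuron scoring model whose parameters are flattened into one vector; the sigmoid activation, squared-error loss and sample values are assumptions of this sketch.

```python
# Back-propagation on a flattened parameter vector followed by the gradient update.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(params, x):
    w, b = params[:-1], params[-1]              # flattened vector: weights, then bias
    return sigmoid(w @ x + b)                   # h_{w,b}(x)

def backprop(params, x, y_label):
    """Gradient of E = 0.5*(h_{w,b}(x) - y_label)^2 w.r.t. every parameter."""
    w, b = params[:-1], params[-1]
    h = sigmoid(w @ x + b)
    delta = (h - y_label) * h * (1.0 - h)       # dE/dz for a sigmoid unit
    return np.concatenate([delta * x, [delta]]) # [dE/dw_1 .. dE/dw_n, dE/db]

eta = 0.1                                       # learning rate
params = np.zeros(4)                            # 3 weights + 1 bias, flattened
x, y_label = np.array([0.2, -1.0, 0.5]), 1.0
for _ in range(200):
    params -= eta * backprop(params, x, y_label)  # each w_j updated by its own gradient
print(forward(params, x))                       # approaches y_label as training proceeds
```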
In one possible implementation manner, finding, in step S2, the input data gradient for which the energy credit scoring model with updated parameters has the highest accuracy through the model loss function specifically includes:
Assume that the loss function of the credit scoring model is:
E = (1/2)·(h_{w,b}(x) − y_label)², with h_{w,b}(x) = f(Σ_i w_i·x_i + b)
where f(·) is the activation function, b is the bias term of the credit scoring model, y_label is the true label value, w is the weight parameter vector, w_i is one weight parameter in the weight parameter vector, x is the input data vector, and x_i is one input datum;
A mini-batch gradient descent algorithm is performed on the loss function to find the weight parameters that minimize it: the smaller the loss function value, the smaller the difference between the trained value h_{w,b}(x) and the true label value y_label, and the higher the accuracy of the credit scoring model; the gradient associated with the kth input datum x_k is calculated as:
∂E/∂w_k = (h_{w,b}(x) − y_label)·f′(Σ_i w_i·x_i + b)·x_k
The gradient of the bias term b is calculated as:
∂E/∂b = (h_{w,b}(x) − y_label)·f′(Σ_i w_i·x_i + b)
If an attacker obtains the gradient information of the kth input value x_k and of the bias term b, the private information x_k of the participant can be learned, causing a privacy disclosure problem, through the relation:
x_k = (∂E/∂w_k) / (∂E/∂b)
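A minimal sketch of the leakage relation above: given the per-sample gradients with respect to w_k and b, their ratio reveals x_k exactly. The sigmoid model and the concrete values are assumptions of this sketch.

```python
# Demonstration that the ratio of the weight gradient to the bias gradient
# reconstructs the private input exactly.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

w = np.array([0.3, -0.7, 1.1])
b = 0.05
x = np.array([4.2, 1.5, -0.8])     # participant's private input data
y_label = 1.0

h = sigmoid(w @ x + b)
common = (h - y_label) * h * (1.0 - h)   # factor shared by all gradients
grad_w = common * x                      # dE/dw_k = common * x_k
grad_b = common                          # dE/db   = common

x_recovered = grad_w / grad_b            # attacker's reconstruction of x
assert np.allclose(x_recovered, x)       # the private inputs leak exactly
```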
in one possible implementation manner, the step S3 of performing input data gradient clipping and adding disturbance according to the minimum risk function in the model training process specifically includes:
assuming a total of n participants, the total dataset is D = {d_1, …, d_n}; for the kth participant, the minimum risk function in the corresponding training process is:
min_w F_k(w) = (1/|d_k|)·Σ_i ℓ_i^k(w, d_i^k)
where ℓ_i^k(w, d_i^k) represents the loss function of participant k in the ith round of iterative training and d_i^k represents the data of participant k in the ith round of iterative training; the gradient optimization function of participant k is:
g_t^k = ∇_w F_k(w_t)
The gradient clipping mode is:
ḡ_t^k = g_t^k / max(1, ‖g_t^k‖₂ / C)
Letting g_t = g_t^k, the gradient optimization iterative calculation expression is:
G[g²]_t ← ρ·(g_t)² + (1−ρ)·G[g²]_{t−1}
where G[g²]_{t−1} is the accumulated square used to estimate the historical gradient, ρ is the attenuation coefficient, and ε₀ is 10⁻⁸, introduced to keep the denominator non-zero;
Taking into account information such as the size of the credit scoring model and the number of participants, an approximately optimal clipping effect is achieved when the clipping threshold is close to the expected norm of the gradient, so G[g²]_{t−1} is used to predict the global gradient and to form the clipping threshold C_t^k of the current round, where β is the local clipping factor; C_t^k is calculated as:
C_t^k = β·sqrt(G[g²]_{t−1} + ε₀)
The disturbance is added as follows:
in the tth round of training, the kth participant locally clips the gradient and then adds zero-mean Gaussian noise whose scale is determined by the Gaussian disturbance value σ, the clipping threshold C and the model size s_k of the kth participant, so that the added disturbance information is known only to the corresponding participant;
where C is the gradient clipping threshold used when prior knowledge of the gradient is insufficient in the early stage of training; as the model continues to converge, the clipping threshold C gradually decreases, so the noise added to the gradient gradually decreases, which facilitates the convergence of the model in the later stage.
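For illustration, the following sketch combines the squared-gradient accumulator, an adaptive clipping threshold and Gaussian perturbation as described above; the exact noise scaling (here σ·C/s_k), the warm-up of five rounds with a fixed threshold, and the constants are assumptions of this sketch rather than the exact formulation of the method.

```python
# Per-round local gradient clipping with an adaptive threshold derived from the
# squared-gradient accumulator, followed by Gaussian perturbation.
import numpy as np

rho, eps0, beta = 0.9, 1e-8, 1.0     # attenuation coefficient, epsilon_0, local clipping factor
sigma = 0.5                          # Gaussian disturbance value
C_init = 1.0                         # fixed threshold while gradient history is scarce
rng = np.random.default_rng(0)

def clip_and_perturb(grad, G_prev, t, s_k):
    # accumulator: G[g^2]_t <- rho*(g_t)^2 + (1-rho)*G[g^2]_{t-1}
    G_t = rho * grad**2 + (1.0 - rho) * G_prev
    # adaptive clipping threshold predicted from the historical gradient
    C = C_init if t < 5 else beta * float(np.sqrt(np.mean(G_prev) + eps0))
    clipped = grad / max(1.0, np.linalg.norm(grad) / C)
    noise = rng.normal(0.0, sigma * C / s_k, size=grad.shape)   # assumed scaling
    return clipped + noise, G_t

G = np.zeros(4)
for t in range(10):
    g = rng.normal(size=4)                   # stand-in for the local gradient g_t^k
    g_tilde, G = clip_and_perturb(g, G, t, s_k=4)
```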
In one possible implementation manner, the protecting the input data of gradient clipping and adding disturbance using homomorphic encryption in step S4 specifically includes:
Homomorphic encryption is divided into a triple {K, E, D}: a key pair {pk, sk} is obtained through the key generation function K, where pk is the public key used to encrypt the plaintext and sk is the private key used for decryption; given the plaintext m and the public key pk, the encryption function E generates the ciphertext c = E_pk(m), and the plaintext is recovered through the decryption function D and the private key sk as m = D_sk(c);
The Paillier algorithm is selected to strengthen the security attributes of federal learning; it consists of three parts, key generation, encryption and decryption, as follows:
Key generation: select two large prime numbers p and q, compute n = p·q and γ = lcm(p−1, q−1), where lcm denotes the least common multiple; define the function L(x) = (x−1)/n; randomly select g ∈ Z*_{n²} such that g and n satisfy gcd(L(g^γ mod n²), n) = 1, where gcd denotes the greatest common divisor (checking this greatest common divisor ensures that the two prime numbers are of equal length); the public key pk is (n, g) and the private key sk is γ;
Encryption: given a plaintext m ∈ Z_n, randomly select r ∈ Z*_n and compute the ciphertext with the public key pk as c = E_pk(m) = g^m·r^n mod n²;
Decryption: given the ciphertext c and the private key sk, recover the plaintext as m = D_sk(c) = L(c^γ mod n²) / L(g^γ mod n²) mod n.
And the homomorphic encryption method is used for carrying out aggregation updating on the intermediate parameters when training by using the credit scoring model, so that the user privacy and the data safety of each participant in the model training process are protected.
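A toy, intentionally insecure sketch of Paillier key generation, encryption, decryption and the additive aggregation of encrypted intermediate parameters follows; the small primes, the g = n + 1 simplification and the fixed-point encoding of values are assumptions of this sketch, and a real deployment would rely on a vetted cryptographic library with large primes.

```python
# Toy Paillier scheme plus homomorphic aggregation of encrypted values.
import math, random

def keygen(p=104729, q=7919):                   # toy primes; far too small in practice
    n = p * q
    gamma = (p - 1) * (q - 1) // math.gcd(p - 1, q - 1)   # lcm(p-1, q-1), the private key
    g = n + 1                                   # common choice satisfying the gcd condition
    return (n, g), gamma

def L(x, n):
    return (x - 1) // n

def encrypt(pk, m):
    n, g = pk
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:                  # r must lie in Z_n^*
        r = random.randrange(1, n)
    return (pow(g, m, n * n) * pow(r, n, n * n)) % (n * n)

def decrypt(pk, sk, c):
    n, g = pk
    mu = pow(L(pow(g, sk, n * n), n), -1, n)    # (L(g^gamma mod n^2))^-1 mod n
    return (L(pow(c, sk, n * n), n) * mu) % n

pk, sk = keygen()
n = pk[0]

# each participant encrypts a fixed-point-encoded intermediate parameter
scale = 1000
values = [0.123, -0.045, 0.300]
ciphers = [encrypt(pk, int(round(v * scale)) % n) for v in values]

# the aggregator multiplies ciphertexts, which adds the plaintexts homomorphically
agg = 1
for c in ciphers:
    agg = (agg * c) % (n * n)

total = decrypt(pk, sk, agg)
if total > n // 2:                              # map back from Z_n to a signed value
    total -= n
print(total / scale)                            # 0.378 = 0.123 - 0.045 + 0.300
```

Because the scheme is additively homomorphic, the aggregator never sees the individual plaintext parameters; this is the property exploited when aggregating the intermediate parameters in federal learning.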
The method for protecting the privacy of the electric power data under the multiparty collaboration can solve the privacy disclosure problem under the multiparty collaboration faced by the electric power company in the work of conducting electricity inspection, credit investigation and the like based on marketing big data and intelligent algorithms.
Example 2
Referring to fig. 2, a power data privacy protection system under multiparty collaboration according to an embodiment of the present invention includes:
the parameter updating module is used for updating parameters of the energy credit rating model under multiparty cooperation constructed based on federal learning through the gradient values;
the gradient calculation module is used for finding out the input data gradient with highest accuracy of the credit rating model after parameter updating through the model loss function;
the gradient cutting and disturbance adding module is used for carrying out gradient cutting on input data according to a minimum risk function in the model training process and adding disturbance;
and the homomorphic encryption module is used for protecting the input data subjected to gradient clipping and disturbance addition by using homomorphic encryption.
In a possible implementation manner, the power data privacy protection system under multiparty collaboration in the embodiment of the invention further comprises a credit rating model building module for:
firstly, analyzing a business target of a sharing scene of credit-evaluating data, and analyzing the internal relevance between the electric energy data of the power grid of each participant and the business target;
Secondly, combing basic information such as characteristic dimensions, label data, sample scale and the like of the power grid and the partners;
thirdly, establishing corresponding scoring rules for the respective basic information of the carded power grid and the cooperators according to the inherent relevance between the power grid electric energy data of each participator and the business targets, and calculating corresponding scores through a privacy calculation method;
and finally, training an enterprise default condition scoring/enterprise payment capability scoring model based on federal learning to obtain an energy credit scoring model under multiparty collaboration.
In one possible implementation, the parameter updating module adopts a neural network optimization algorithm to train the credit scoring model, iteratively calculates the gradient of the optimized nonlinear function and updates the parameter to reduce the gradient until the algorithm converges to a local optimum or reaches a maximum iteration value;
let the parameter vector w_j be the one-dimensional vector obtained by flattening all parameters of the credit scoring model, and let E_i be the loss function; the partial derivative of the loss function with respect to each parameter in the model parameter vector, i.e. the gradient value of the model, is calculated by the back-propagation algorithm, and the parameters of the credit scoring model are updated by the gradient values according to:
w_j ← w_j − η·(∂E_i/∂w_j)
where η is the learning rate.
In one possible implementation, the gradient calculation module finds the input data gradient with the highest accuracy of the updated energy credit scoring model through the model loss function, including:
the loss function of the credit scoring model is:
E = (1/2)·(h_{w,b}(x) − y_label)², with h_{w,b}(x) = f(Σ_i w_i·x_i + b)
where f(·) is the activation function, b is the bias term of the credit scoring model, y_label is the true label value, w is the weight parameter vector, w_i is one weight parameter in the weight parameter vector, x is the input data vector, and x_i is one input datum;
the weight parameters that minimize the loss function are sought: the smaller the loss function value, the smaller the difference between the trained value h_{w,b}(x) and the true label value y_label, and the higher the accuracy of the credit scoring model; the gradient associated with the kth input datum x_k is calculated as:
∂E/∂w_k = (h_{w,b}(x) − y_label)·f′(Σ_i w_i·x_i + b)·x_k
the gradient of the bias term b is calculated as:
∂E/∂b = (h_{w,b}(x) − y_label)·f′(Σ_i w_i·x_i + b)
if the attacker obtains the gradient information of the kth input value x_k and of the bias term b, the private information x_k of the participant can be learned, causing a privacy disclosure problem, through the relation:
x_k = (∂E/∂w_k) / (∂E/∂b)
in one possible implementation manner, the step of performing input data gradient clipping and adding disturbance according to the minimum risk function in the model training process by the gradient clipping and disturbance adding module includes:
Assuming a total of n participants, the total dataset is D = {d_1, …, d_n}; for the kth participant, the minimum risk function in the corresponding training process is:
min_w F_k(w) = (1/|d_k|)·Σ_i ℓ_i^k(w, d_i^k)
where ℓ_i^k(w, d_i^k) represents the loss function of participant k in the ith round of iterative training and d_i^k represents the data of participant k in the ith round of iterative training; the gradient optimization function of participant k is:
g_t^k = ∇_w F_k(w_t)
the gradient clipping mode is:
ḡ_t^k = g_t^k / max(1, ‖g_t^k‖₂ / C)
letting g_t = g_t^k, the gradient optimization iterative calculation expression is:
G[g²]_t ← ρ·(g_t)² + (1−ρ)·G[g²]_{t−1}
where G[g²]_{t−1} is the accumulated square used to estimate the historical gradient, ρ is the attenuation coefficient, and ε₀ is 10⁻⁸, introduced to keep the denominator non-zero;
an approximately optimal clipping effect is achieved when the clipping threshold is close to the expected norm of the gradient, so G[g²]_{t−1} is used to predict the global gradient and to form the clipping threshold C_t^k of the current round, where β is the local clipping factor; C_t^k is calculated as:
C_t^k = β·sqrt(G[g²]_{t−1} + ε₀)
the disturbance is added as follows:
in the tth round of training, the kth participant locally clips the gradient and then adds zero-mean Gaussian noise whose scale is determined by the Gaussian disturbance value σ, the clipping threshold C and the model size s_k of the kth participant, so that the added disturbance information is known only to the corresponding participant;
where C is the gradient clipping threshold used when prior knowledge of the gradient is insufficient in the early stage of training; as the model continues to converge, the clipping threshold C gradually decreases, so the noise added to the gradient gradually decreases.
In one possible implementation, the step of protecting the input data of gradient clipping and adding disturbance by the homomorphic encryption module using homomorphic encryption includes:
homomorphic encryption is divided into a triple {K, E, D}: a key pair {pk, sk} is obtained through the key generation function K, where pk is the public key used to encrypt the plaintext and sk is the private key used for decryption; given the plaintext m and the public key pk, the encryption function E generates the ciphertext c = E_pk(m), and the plaintext is recovered through the decryption function D and the private key sk as m = D_sk(c);
key generation, encryption and decryption are carried out with the Paillier algorithm, specifically:
key generation: select two large prime numbers p and q, compute n = p·q and γ = lcm(p−1, q−1), where lcm denotes the least common multiple; define the function L(x) = (x−1)/n; randomly select g ∈ Z*_{n²} such that g and n satisfy gcd(L(g^γ mod n²), n) = 1, where gcd denotes the greatest common divisor (checking this greatest common divisor ensures that the two prime numbers are of equal length); the public key pk is (n, g) and the private key sk is γ;
encryption: given a plaintext m ∈ Z_n, randomly select r ∈ Z*_n and compute the ciphertext with the public key pk as c = E_pk(m) = g^m·r^n mod n²;
decryption: given the ciphertext c and the private key sk, recover the plaintext as m = D_sk(c) = L(c^γ mod n²) / L(g^γ mod n²) mod n.
And the homomorphic encryption method is used for carrying out aggregation updating on the intermediate parameters when training by using the credit scoring model, so that the user privacy and the data safety of each participant in the model training process are protected.
Example 3
An electronic device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, wherein the processor implements the steps of the method for protecting power data privacy under multiparty collaboration of embodiment 1 when the computer program is executed.
Example 4
A computer readable storage medium storing a computer program which, when executed by a processor, performs the steps of the method for protecting privacy of power data under multi-party collaboration of embodiment 1.
The computer program comprises computer program code, which may be in source code form, object code form, an executable file, or some intermediate form. The computer-readable storage medium may include: any entity or device capable of carrying the computer program code, a USB flash disk, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a read-only memory, a random access memory, an electrical carrier signal, a telecommunications signal, a software distribution medium, and the like. It should be noted that the content contained in the computer-readable medium may be adjusted according to the requirements of legislation and patent practice in the relevant jurisdiction; for example, in some jurisdictions the computer-readable medium does not include electrical carrier signals and telecommunications signals. For convenience of description, only the parts relevant to the embodiments of the present invention are shown, and specific technical details that are not disclosed may be found in the method parts of the embodiments. The computer-readable storage medium is non-transitory, can be stored in storage devices formed by various electronic devices, and can implement the execution procedures described in the methods of the embodiments of the present invention.
It will be appreciated by those skilled in the art that embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the invention. It will be understood that each flowchart and/or block of the flowchart illustrations and/or block diagrams, and combinations of flowcharts and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
Finally, it should be noted that: the above embodiments are only for illustrating the technical aspects of the present invention and not for limiting the same, and although the present invention has been described in detail with reference to the above embodiments, it should be understood by those of ordinary skill in the art that: modifications and equivalents may be made to the specific embodiments of the invention without departing from the spirit and scope of the invention, which is intended to be covered by the claims.

Claims (14)

1. A power data privacy protection method under multiparty collaboration, characterized by comprising the following steps:
updating parameters of an energy credit rating model under multiparty cooperation constructed based on federal learning through gradient values;
finding out the input data gradient with highest accuracy of the credit rating model after parameter updating through a model loss function;
performing gradient clipping on input data according to a minimum risk function in the model training process, and adding disturbance;
and protecting the input data subjected to gradient clipping and disturbance addition by using homomorphic encryption.
2. The method for protecting the privacy of electric power data under multiparty cooperation according to claim 1, wherein the step of constructing the credit rating model under multiparty cooperation based on federal learning is as follows:
analyzing a business target of a sharing scene of credit investigation data, and analyzing the inherent relevance between the electric energy data of the electric network of each participant and the business target;
carding the basic information of each power grid and each partner;
according to the inherent relevance of the electric energy data of the electric network of each participant and the business target, establishing a corresponding grading rule for the basic information of each of the carded electric network and the cooperators, and calculating corresponding grading through a privacy calculation method;
And training an enterprise default condition scoring/enterprise payment capability scoring model based on federal learning to obtain a credit rating model under multiparty collaboration.
3. The method for protecting power data privacy under multiparty collaboration according to claim 1, wherein the step of updating parameters of the credit rating model under multiparty collaboration constructed based on federal learning by using gradient values comprises:
training the credit scoring model by adopting a neural network optimization algorithm, iteratively calculating the gradient of the optimized nonlinear function and updating the parameters to reduce the gradient until the iteration termination condition is met;
calculating, through a back-propagation algorithm, the partial derivative of the loss function with respect to each parameter in the model parameter vector as the gradient value of the model, and updating the parameters of the credit scoring model through the gradient values according to the expression:
w_j ← w_j − η·(∂E_i/∂w_j)
where the parameter vector w_j is the one-dimensional vector obtained by flattening all parameters of the credit scoring model, E_i is the loss function, and η is the learning rate.
4. The method for protecting power data privacy under multiparty collaboration according to claim 1, wherein the finding out the input data gradient with highest accuracy of the updated energy credit scoring model through the model loss function comprises:
The loss function of the credit scoring model is:
E = (1/2)·(h_{w,b}(x) − y_label)², with h_{w,b}(x) = f(Σ_i w_i·x_i + b)
where f(·) is the activation function, b is the bias term of the credit scoring model, y_label is the true label value, w is the weight parameter vector, w_i is one weight parameter in the weight parameter vector, x is the input data vector, and x_i is one input datum;
the weight parameters that minimize the loss function are sought: the smaller the loss function value, the smaller the difference between the trained value h_{w,b}(x) and the true label value y_label, and the higher the accuracy of the credit scoring model; the gradient associated with the kth input datum x_k is calculated as:
∂E/∂w_k = (h_{w,b}(x) − y_label)·f′(Σ_i w_i·x_i + b)·x_k
the gradient of the bias term b is calculated as:
∂E/∂b = (h_{w,b}(x) − y_label)·f′(Σ_i w_i·x_i + b)
if the attacker obtains the gradient information of the kth input value x_k and of the bias term b, the private information x_k of the participant can be learned, causing a privacy disclosure problem, through the relation:
x_k = (∂E/∂w_k) / (∂E/∂b)
5. the method for protecting power data privacy under multiparty collaboration according to claim 1, wherein the step of performing input data gradient clipping and adding disturbance according to a minimum risk function in a model training process comprises:
assuming a total of n participants, the total dataset is D = {d_1, …, d_n}; for the kth participant, the minimum risk function in the corresponding training process is:
min_w F_k(w) = (1/|d_k|)·Σ_i ℓ_i^k(w, d_i^k)
where ℓ_i^k(w, d_i^k) represents the loss function of participant k in the ith round of iterative training and d_i^k represents the data of participant k in the ith round of iterative training; the gradient optimization function of participant k is:
g_t^k = ∇_w F_k(w_t)
the gradient clipping mode is:
ḡ_t^k = g_t^k / max(1, ‖g_t^k‖₂ / C)
letting g_t = g_t^k, the gradient optimization iterative calculation expression is:
G[g²]_t ← ρ·(g_t)² + (1−ρ)·G[g²]_{t−1}
where G[g²]_{t−1} is the accumulated square used to estimate the historical gradient, ρ is the attenuation coefficient, and ε₀ is 10⁻⁸, introduced to keep the denominator non-zero;
an approximately optimal clipping effect is achieved when the clipping threshold is close to the expected norm of the gradient, so G[g²]_{t−1} is used to predict the global gradient and to form the clipping threshold C_t^k of the current round, where β is the local clipping factor; C_t^k is calculated as:
C_t^k = β·sqrt(G[g²]_{t−1} + ε₀)
the disturbance is added as follows:
in the tth round of training, the kth participant locally clips the gradient and then adds zero-mean Gaussian noise whose scale is determined by the Gaussian disturbance value σ, the clipping threshold C and the model size s_k of the kth participant, so that the added disturbance information is known only to the corresponding participant;
where C is the gradient clipping threshold used when prior knowledge of the gradient is insufficient in the early stage of training; as the model continues to converge, the clipping threshold C gradually decreases, so the noise added to the gradient gradually decreases.
6. The method for protecting privacy of power data in multi-party collaboration according to claim 1, wherein the step of protecting input data of gradient clipping and adding disturbance using homomorphic encryption comprises:
Homomorphic encryption is divided into a triple {K, E, D}: a key pair {pk, sk} is obtained through the key generation function K, where pk is the public key used to encrypt the plaintext and sk is the private key used for decryption; given the plaintext m and the public key pk, the encryption function E generates the ciphertext c = E_pk(m), and the plaintext is recovered through the decryption function D and the private key sk as m = D_sk(c);
key generation, encryption and decryption are carried out with the Paillier algorithm, specifically:
key generation: select two large prime numbers p and q, compute n = p·q and γ = lcm(p−1, q−1), where lcm denotes the least common multiple; define the function L(x) = (x−1)/n; randomly select g ∈ Z*_{n²} such that g and n satisfy gcd(L(g^γ mod n²), n) = 1, where gcd denotes the greatest common divisor (checking this greatest common divisor ensures that the two prime numbers are of equal length); the public key pk is (n, g) and the private key sk is γ;
encryption: given a plaintext m ∈ Z_n, randomly select r ∈ Z*_n and compute the ciphertext with the public key pk as c = E_pk(m) = g^m·r^n mod n²;
decryption: given the ciphertext c and the private key sk, recover the plaintext as m = D_sk(c) = L(c^γ mod n²) / L(g^γ mod n²) mod n.
And the homomorphic encryption method is used for carrying out aggregation updating on the intermediate parameters when training by using the credit scoring model, so that the user privacy and the data safety of each participant in the model training process are protected.
7. A system for protecting power data privacy under multiparty collaboration, comprising:
a parameter updating module, used for updating, through the gradient values, the parameters of the energy credit rating model under multiparty collaboration constructed based on federated learning;
a gradient calculation module, used for finding, through the model loss function, the input data gradient at which the credit rating model achieves the highest accuracy after the parameter update;
a gradient clipping and disturbance adding module, used for performing gradient clipping on the input data according to the minimum risk function in the model training process and adding the disturbance; and
a homomorphic encryption module, used for protecting, with homomorphic encryption, the input data that has undergone gradient clipping and disturbance addition.
8. The system for protecting power data privacy under multiparty collaboration of claim 7, further comprising a credit rating model building module for:
analyzing the business target of the credit-investigation data sharing scenario, and analyzing the inherent relevance between each participant's power grid energy data and the business target;
sorting out the basic information of each power grid and each partner;
according to the inherent relevance between each participant's power grid energy data and the business target, establishing corresponding scoring rules for the sorted basic information of the power grids and the partners, and calculating the corresponding scores through a privacy computing method; and
training an enterprise default scoring / enterprise payment capability scoring model based on federated learning to obtain the credit rating model under multiparty collaboration.
9. The system according to claim 7, wherein the parameter updating module trains the credit scoring model using a neural network optimization algorithm, iteratively calculating the gradient of the optimized nonlinear function and updating the parameters so as to reduce the gradient until the iteration termination condition is met;
let the parameter vector $w_j$ be the one-dimensional vector obtained by flattening all model parameters of the credit scoring model; the partial derivative is used as the gradient value of the model, and the parameters of the credit scoring model are updated through this gradient value; the calculation expression is:
$w_j \leftarrow w_j - \eta\,\dfrac{\partial E_i}{\partial w_j}$
wherein the parameter vector $w_j$ is the one-dimensional vector obtained by flattening all model parameters of the credit scoring model, $E_i$ is the loss function, and $\eta$ is the learning rate.
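A minimal sketch of this flattened-parameter update, assuming the gradient vector has already been computed; the helper names flatten and gradient_step are illustrative:

```python
import numpy as np

def flatten(params):
    """Flatten a list of model parameter arrays into one one-dimensional vector."""
    return np.concatenate([p.ravel() for p in params])

def gradient_step(w_flat, grad_flat, eta=0.01):
    """Apply w_j <- w_j - eta * dE_i/dw_j on the flattened parameter vector."""
    return w_flat - eta * grad_flat

# Toy usage: flatten weights and biases of a small model, then take one descent step
w = flatten([np.zeros((4, 3)), np.zeros(3)])
w = gradient_step(w, np.ones_like(w), eta=0.1)
```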
10. The system of claim 7, wherein the gradient calculation module finds, through the model loss function, the input data gradient at which the updated energy credit scoring model achieves the highest accuracy, comprising:
The loss function of the credit rating model is as follows:
wherein $f(\cdot)$ is the activation function, $b$ is the bias term in the credit model, $y_{\mathrm{label}}$ is the true label value, $w$ is the weight parameter vector, $w_i$ is one weight parameter in the weight parameter vector, $x$ is the input data vector, and $x_i$ is one input datum;
the weight parameters that minimize the loss function are sought; the smaller the loss function value during training, the smaller the difference between $h_{w,b}(x)$ and the true label value $y_{\mathrm{label}}$, and the higher the accuracy of the credit scoring model; the gradient of the $k$-th input datum $x_k$ is calculated as follows:
the gradient of the bias term b is calculated as:
if an attacker obtains the gradient information of the $k$-th input value $x_k$ and of the bias term $b$, the participant's private information $x_k$ can be learned, causing a privacy disclosure problem; the relation is as follows:
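To make the leakage relation concrete, here is a hedged derivation assuming a single-neuron model $h_{w,b}(x) = f\big(\sum_i w_i x_i + b\big)$ with loss $E$ (the claim does not spell out this exact form); the gradients with respect to $w_k$ and $b$ share a common factor, so their ratio exposes the input:

```latex
\frac{\partial E}{\partial w_k}
  = \frac{\partial E}{\partial h_{w,b}(x)}\, f'\!\Big(\textstyle\sum_i w_i x_i + b\Big)\, x_k ,
\qquad
\frac{\partial E}{\partial b}
  = \frac{\partial E}{\partial h_{w,b}(x)}\, f'\!\Big(\textstyle\sum_i w_i x_i + b\Big) ,
\qquad\Longrightarrow\qquad
x_k = \frac{\partial E / \partial w_k}{\partial E / \partial b}.
```

An attacker who observes both gradients therefore recovers $x_k$ directly, which is exactly the disclosure that the clipping and perturbation steps are designed to prevent.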
11. The system for protecting power data privacy under multiparty collaboration according to claim 7, wherein the gradient clipping and perturbation adding module performs input data gradient clipping and adds perturbation according to a minimum risk function in the model training process, comprising:
assuming a total of $n$ participants, the total dataset is $D = \{d_1, \ldots, d_n\}$; for the $k$-th participant, the minimum risk function in the corresponding training process is:
wherein $\mathcal{L}_k^{(i)}$ denotes the loss function of participant $k$ during the $i$-th round of iterative training and $d_k^{(i)}$ denotes the data of participant $k$ in the $i$-th round of iterative training; the gradient optimization function of participant $k$ is:
the gradient clipping is performed as follows:
let $g_t$ denote the gradient at the $t$-th iteration; the gradient optimization iterative calculation expression is:
$G[g^2]_t \leftarrow \rho\,(g_t)^2 + (1-\rho)\,G[g^2]_{t-1}$
wherein $G[g^2]_{t-1}$ is the accumulated square of the historical gradient used for estimation, $\rho$ is the decay coefficient, and $\epsilon_0$ is $10^{-8}$, serving to keep the denominator non-zero;
when the clipping threshold is chosen appropriately, a near-optimal clipping effect can be achieved; $G[g^2]_{t-1}$ is therefore used to predict the global gradient of the current round, and this prediction serves as the clipping threshold for the current round, with $\beta$ being the local clipping factor; the calculation expression is:
the manner of adding the disturbance is as follows:
in the $t$-th training round, the process by which the $k$-th participant locally clips the gradient and adds the disturbance is as follows:
wherein $C$ is the gradient clipping threshold used when prior knowledge of the gradient is insufficient at the initial stage of training; $s_k$ is the model size of the $k$-th participant, so that the added perturbation information is known only to the corresponding participant; $\sigma$ is the Gaussian disturbance value; as the model continues to converge, the clipping threshold $C$ gradually decreases, so that the noise added to the gradient gradually decreases.
12. The system for protecting power data privacy under multiparty collaboration of claim 7, wherein the homomorphic encryption module uses homomorphic encryption to protect the input data after gradient clipping and disturbance addition, comprising:
Homomorphic encryption is described by a triple $\{K, E, D\}$: a key pair $\{pk, sk\}$ is obtained through the key generation function $K$, where $pk$ is the public key used to encrypt plaintext and $sk$ is the private key used to decrypt; given plaintext $m$ and the public key $pk$, the ciphertext $c = E_{pk}(m)$ is generated by the encryption function $E$, and the plaintext is recovered through the decryption function $D$ and the private key $sk$ as $m = D_{sk}(c)$;
The key generation, encryption and decryption are carried out through the Paillier algorithm, and the method specifically comprises the following steps:
Key generation: select two large primes $p$ and $q$, compute $n = p \times q$ and $\gamma = \mathrm{lcm}(p-1, q-1)$, where lcm denotes the least common multiple; define the function $L(x) = (x-1)/n$; randomly select $g \in \mathbb{Z}_{n^2}^{*}$ such that $\gcd\big(L(g^{\gamma} \bmod n^2),\, n\big) = 1$, where gcd denotes the greatest common divisor; this greatest-common-divisor condition ensures that the two primes are of equal length; the public key $pk$ is $(n, g)$ and the private key $sk$ is $\gamma$;
Encryption: given plaintext $m \in \mathbb{Z}_n$, randomly select $r \in \mathbb{Z}_n^{*}$ and compute the ciphertext with the public key $pk$ as $c = E_{pk}(m) = g^{m} r^{n} \bmod n^2$;
Decryption: given ciphertext $c$ and the private key $sk$, recover the plaintext as $m = D_{sk}(c) = L(c^{\gamma} \bmod n^2) / L(g^{\gamma} \bmod n^2) \bmod n$;
The homomorphic encryption method is used to aggregate and update the intermediate parameters during training with the credit scoring model, thereby protecting the user privacy and data security of every participant in the model training process.
13. An electronic device comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, characterized in that: the processor, when executing the computer program, implements the steps of the method for protecting power data privacy under multiparty collaboration according to any one of claims 1 to 6.
14. A computer-readable storage medium storing a computer program, characterized in that: the computer program, when executed by a processor, implements the steps of the method for protecting power data privacy under multiparty collaboration according to any one of claims 1 to 6.
CN202310582461.3A 2023-05-22 2023-05-22 Power data privacy protection method, system, equipment and medium under multiparty collaboration Pending CN116663052A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310582461.3A CN116663052A (en) 2023-05-22 2023-05-22 Power data privacy protection method, system, equipment and medium under multiparty collaboration

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310582461.3A CN116663052A (en) 2023-05-22 2023-05-22 Power data privacy protection method, system, equipment and medium under multiparty collaboration

Publications (1)

Publication Number Publication Date
CN116663052A true CN116663052A (en) 2023-08-29

Family

ID=87723529

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310582461.3A Pending CN116663052A (en) 2023-05-22 2023-05-22 Power data privacy protection method, system, equipment and medium under multiparty collaboration

Country Status (1)

Country Link
CN (1) CN116663052A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117113418A (en) * 2023-10-18 2023-11-24 武汉大学 Anti-image enhancement data desensitization method and system based on iterative optimization
CN117113418B (en) * 2023-10-18 2024-01-16 武汉大学 Anti-image enhancement data desensitization method and system based on iterative optimization

Similar Documents

Publication Publication Date Title
Xing et al. Mutual privacy preserving k-means clustering in social participatory sensing
Shen et al. From distributed machine learning to federated learning: In the view of data privacy and security
Priyadarshini et al. Identifying cyber insecurities in trustworthy space and energy sector for smart grids
CN112714106B (en) Block chain-based federal learning casual vehicle carrying attack defense method
Erkin et al. Privacy-preserving distributed clustering
Zhu et al. PIVODL: Privacy-preserving vertical federated learning over distributed labels
CN112532383B (en) Privacy protection calculation method based on secret sharing
CN112132577B (en) Multi-supervision transaction processing method and device based on block chain
CN116663052A (en) Power data privacy protection method, system, equipment and medium under multiparty collaboration
CN108833120A (en) A kind of CRT-RSA selection gangs up against new method and system in plain text
Zhang et al. PPO-DFK: A privacy-preserving optimization of distributed fractional knapsack with application in secure footballer configurations
Senosi et al. Classification and evaluation of privacy preserving data mining: a review
Lakshmanan et al. Efficient Auto key based Encryption and Decryption using GICK and GDCK methods
Xue et al. Secure and privacy-preserving decision tree classification with lower complexity
Liu Modeling ransomware spreading by a dynamic node-level method
Masuda et al. Model fragmentation, shuffle and aggregation to mitigate model inversion in federated learning
CN115310120A (en) Robustness federated learning aggregation method based on double trapdoors homomorphic encryption
Itokazu et al. Outlier Detection by Privacy-Preserving Ensemble Decision Tree U sing Homomorphic Encryption
Zhang et al. A Quantitative and Qualitative Analysis-based Security Risk Assessment for Multimedia Social Networks.
Landau NSA and dual EC_DRBG: deja vu all over again?
Mehnaz et al. Privacy-preserving multi-party analytics over arbitrarily partitioned data
Anikin et al. Privacy preserving data mining in terms of DBSCAN clustering algorithm in distributed systems
Zuo et al. ApaPRFL: Robust Privacy-Preserving Federated Learning Scheme Against Poisoning Adversaries for Intelligent Devices Using Edge Computing
Shaikh et al. A technique for DoS attack detection in e-commerce transactions based on ECC and Optimized Support Vector Neural Network
Sumana et al. Privacy preserving naive bayes classifier for horizontally partitioned data using secure division

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination