CN114358323A - Third-party-based efficient Pearson coefficient calculation method in federated learning environment - Google Patents


Info

Publication number
CN114358323A
CN114358323A
Authority
CN
China
Prior art keywords
party
calculating
parties
correlation coefficient
data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111639035.6A
Other languages
Chinese (zh)
Inventor
谈扬 (Tan Yang)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Qianhai Xinxin Digital Technology Co ltd
Original Assignee
Shenzhen Qianhai Xinxin Digital Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Qianhai Xinxin Digital Technology Co ltd filed Critical Shenzhen Qianhai Xinxin Digital Technology Co ltd
Priority to CN202111639035.6A priority Critical patent/CN114358323A/en
Publication of CN114358323A publication Critical patent/CN114358323A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00Machine learning
    • G06N20/20Ensemble learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F17/00Digital computing or data processing equipment or methods, specially adapted for specific functions
    • G06F17/10Complex mathematical operations
    • G06F17/18Complex mathematical operations for evaluating statistical data, e.g. average values, frequency distributions, probability functions, regression analysis
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/60Protecting data
    • G06F21/602Providing cryptographic facilities or services
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F7/00Methods or arrangements for processing data by operating upon the order or content of the data handled
    • G06F7/38Methods or arrangements for performing computations using exclusively denominational number representation, e.g. using binary, ternary, decimal representation
    • G06F7/48Methods or arrangements for performing computations using exclusively denominational number representation, e.g. using binary, ternary, decimal representation using non-contact-making devices, e.g. tube, solid state device; using unspecified devices
    • G06F7/50Adding; Subtracting


Abstract

The invention relates to a third-party-based efficient Pearson coefficient calculation method in a federated learning environment. Open-source FATE is selected as the overall computation and communication framework for calculating the Pearson coefficient; the two parties participating in feature correlation coefficient calculation are party A and party B, and a semi-honest third party is party C. In the new scheme, the semi-honest third party removes the homomorphic encryption operations without sacrificing security, so that the Beaver Triplets (a, b, c) are generated safely and the two parties each obtain additive secret shares of (a, b, c). Because the large-integer modular exponentiations of Paillier homomorphic encryption in the original scheme are replaced by tensor dot products and addition and subtraction operations, efficiency is greatly improved.

Description

Third-party-based efficient Pearson coefficient calculation method in federated learning environment
Technical Field
The invention relates to a third-party-based efficient Pearson coefficient calculation method in a federated learning environment.
Background
The following is some basic knowledge in the art:
federal machine learning: the application of the privacy calculation in the field of machine learning can fuse data of multiple parties under the condition of not revealing privacy data of all parties, and a model is trained through a machine learning algorithm to predict.
The security model for federal learning is mostly a semi-honest model.
Characteristic engineering: a data preprocessing method in machine learning engineering is used for screening characteristic data of a sample and discretizing the characteristic data so as to train a better machine learning model later.
Pearson coefficient: a method for calculating data correlation can be used in feature engineering to calculate the correlation between sample data features so as to screen out the redundant features which have little effect on overall model prediction and are irrelevant. Make things convenient for the later stage to carry out better, more efficient machine learning model training.
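For background, the plain (non-private) Pearson coefficient between two feature columns can be computed directly; a minimal NumPy sketch, separate from the patented multi-party scheme:

```python
import numpy as np

def pearson(x, y):
    """Plain (non-private) Pearson correlation between two feature vectors."""
    xc = x - x.mean()
    yc = y - y.mean()
    return float(np.dot(xc, yc) / (np.linalg.norm(xc) * np.linalg.norm(yc)))

x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([2.0, 4.0, 6.0, 8.0])   # perfectly correlated with x
print(pearson(x, y))                  # → 1.0
```

A coefficient near ±1 indicates strong linear correlation, which is exactly the redundancy signal feature engineering uses to cull features.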
Semi-honest security model: a model used when analyzing the security of a computation or communication protocol. Under this model, protocol participants strictly follow the steps and requirements of the protocol, but attackers attempt to derive, from the data obtained during the protocol, other data that they should not know.
Secure multi-party computation: a cryptographic technique and a popular research direction in present-day cryptography, belonging to the field of privacy-preserving computation. It is mainly used by two or more parties to jointly compute a function over their respective input data without disclosing their private inputs; apart from the final result, each party's input remains private.
This research direction originates from the millionaires' problem posed by Turing Award laureate Andrew Yao (Yao Qizhi). The initial solutions were inefficient and impractical; in recent years, with continued development, efficiency has improved greatly and the technique is gradually being deployed in practice.
Fully homomorphic encryption: an encryption algorithm that can perform arbitrary computation on ciphertexts (arithmetic addition and multiplication, or XOR and AND on logical bits) and, after decryption, yields the same result as the computation performed on the corresponding plaintexts.
Semi-homomorphic encryption and leveled fully homomorphic encryption: because fully homomorphic encryption is currently limited in efficiency and storage, semi-homomorphic encryption and leveled (finite-depth) fully homomorphic encryption are more widely applied. A semi-homomorphic encryption algorithm supports homomorphic operation only for ciphertext addition or only for ciphertext multiplication, while leveled fully homomorphic encryption supports homomorphic addition and a limited number of multiplications.
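As an illustration of semi-homomorphic (additively homomorphic) encryption, the following is a toy Paillier sketch with tiny, insecure parameters chosen only for readability; real deployments use large primes, which is exactly why the modular exponentiations are expensive:

```python
import math
import random

# Toy Paillier keypair with tiny primes -- for illustration only, not secure.
p, q = 101, 113
n = p * q
n2 = n * n
g = n + 1
lam = math.lcm(p - 1, q - 1)
mu = pow((pow(g, lam, n2) - 1) // n, -1, n)

def enc(m):
    """Encrypt m with a fresh random blinding factor r coprime to n."""
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def dec(c):
    """Standard Paillier decryption: L(c^lam mod n^2) * mu mod n."""
    return ((pow(c, lam, n2) - 1) // n) * mu % n

# Additive homomorphism: multiplying ciphertexts adds the plaintexts.
c = (enc(37) * enc(5)) % n2
print(dec(c))  # → 42
```

Multiplying two ciphertexts yields an encryption of the plaintext sum, which is the property SPDZ-style triple generation exploits, at the cost of many large-integer exponentiations.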
Federated learning was proposed by Google in 2017. By fusing the data of many users with a machine learning algorithm, a more accurate model is trained so as to provide better services, such as recommendations, to users. To better protect user privacy, users do not need to transmit private data to an intermediate service provider such as Google: all machine learning computation on the data is completed locally by each user, and users only transmit final results such as gradients to Google. Google aggregates these results and sends a new model back to all users, who then perform a new round of training; the process repeats until a satisfactory model is produced.
Federated learning can eliminate data silos while protecting the privacy of user data, combining the data of all parties to train a better prediction model and thus provide better services. Since the concept was proposed, enterprises in fields such as the Internet and finance have therefore moved quickly into federated learning.
The federated learning framework FATE, provided by WeBank, targets the financial field, allowing enterprises to share data securely and carry out machine learning training. Because it is simple, practical, efficient, and open source, it has attracted a large number of users, with roughly 9k stars and 1k forks on GitHub.
In the technical framework of FATE, feature engineering is a very important step: it removes irrelevant features and discretizes feature data, so that a more accurate machine learning model, better matched to the actual application scenario, can be trained. FATE (Federated AI Technology Enabler) is an open-source project initiated by the AI department of WeBank, providing a reliable secure-computation framework for the federated learning ecosystem.
The Pearson coefficients between features are used to determine the linear correlation between features, so that features with large linear correlation can be culled for better machine learning.
However, the computation of the Pearson coefficients of feature data among FATE participants follows the multi-party computation framework SPDZ, in which the generation of the Beaver Triplets involves a large number of Paillier homomorphic encryption operations. This makes feature correlation calculation on large-scale data extremely slow: with tens of millions of records, the computation cannot finish within a day, which makes FATE's data correlation calculation highly impractical.
At present, FATE uses a multi-party computation framework similar to SPDZ, performing correlation computation between participants' features through secret sharing and Paillier homomorphic encryption, while removing the ciphertext MAC verification that SPDZ uses against malicious adversaries, thereby improving computation efficiency.
However, this method still uses a large number of Paillier homomorphic public-key encryption operations to generate the Beaver Triplets, so FATE's overall Pearson correlation coefficient calculation remains inefficient and impractical for large-scale data: with tens of millions of records the task cannot finish within a day.
Disclosure of Invention
Aiming at the existing calculation method in the FATE technical framework, which generates the Beaver Triplets with a large number of Paillier homomorphic public-key encryption operations, the invention provides an efficient third-party-based Pearson coefficient calculation method in the federated learning environment.
The technical scheme adopted by the invention to achieve this purpose is as follows: open-source FATE is selected as the overall computation and communication framework for calculating the Pearson coefficient; the two parties participating in feature correlation coefficient calculation are party A and party B respectively, and a semi-honest third party is party C. The method comprises the following steps:
step S1: the two parties A and B participating in the correlation coefficient calculation additively secret-share their respective feature data tensors x and y, so that each party obtains an additive secret share of the other party's data tensor;
step S2: after obtaining the additive secret shares of the data tensors x and y of the other party, parties A and B each locally generate tensors a_i, b_i of the same scale as the feature data tensors x and y, as additive secret shares of the triple components a and b;
step S3: parties A and B send their respectively generated tensors a_i, b_i to the semi-honest third party C;
step S4: the semi-honest third party C adds the received tensors a_i, b_i to obtain a and b of the triple, and further computes c of the triple, where c is the dot product of the tensors a and b;
step S5: the semi-honest third party C additively secret-shares c, generating c_1, c_2, and sends them respectively to the two parties A and B participating in the correlation coefficient calculation;
step S6: after parties A and B obtain their c_i, each uses its existing triple shares a_i, b_i, c_i and tensor shares x_i, y_i to interactively compute z_i with the other party, where z_i is an additive secret share of the dot product z between the tensors x and y;
step S7: parties A and B exchange their z_i and add them to obtain z, which is output as the Pearson correlation coefficient tensor of the two features.
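Steps S1–S7 can be simulated in a single process; the following is a hedged NumPy sketch in which the three parties' messages are ordinary variable handoffs. The concrete share-combination formula in the last step follows the standard Beaver-triple multiplication and is an assumption about the interactive computation the steps leave unspecified:

```python
import numpy as np

rng = np.random.default_rng(0)

def share(v):
    """Additive secret sharing of a tensor into two shares."""
    s1 = rng.uniform(-1.0, 1.0, size=np.shape(v))
    return s1, v - s1

# Preprocessed feature tensors held privately by A and B.
x = np.array([0.1, -0.4, 0.3])
y = np.array([-0.2, 0.5, -0.3])

# S1: A and B secret-share x and y with each other.
x1, x2 = share(x)          # A keeps x1, sends x2 to B
y1, y2 = share(y)          # B keeps y1, sends y2 to A

# S2-S3: each party samples its triple shares and sends them to C.
a1, b1 = rng.uniform(-1, 1, 3), rng.uniform(-1, 1, 3)
a2, b2 = rng.uniform(-1, 1, 3), rng.uniform(-1, 1, 3)

# S4-S5: C reconstructs a, b, computes c = dot(a, b) and shares c back.
a, b = a1 + a2, b1 + b2
c1, c2 = share(np.array(np.dot(a, b)))

# S6: the parties open k = x + a and j = y + b (each sends its masked
# share, so only the masked values are revealed), then compute locally:
k = (x1 + a1) + (x2 + a2)
j = (y1 + b1) + (y2 + b2)
z1 = c1 - np.dot(k, b1) - np.dot(j, a1) + np.dot(k, j)  # public term held by A only
z2 = c2 - np.dot(k, b2) - np.dot(j, a2)

# S7: exchanging z1, z2 and adding them reveals only dot(x, y).
print(np.allclose(z1 + z2, np.dot(x, y)))  # → True
```

The only values ever opened are k and j, which are one-time-masked by the random triple, so neither x nor y leaks; party C sees only random tensors.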
Further, in the third-party-based efficient Pearson coefficient calculation method in the federated learning environment: before step S1, the method further comprises:
step S0: the two parties A and B participating in the correlation coefficient calculation perform sample alignment on their respective input data.
Further, in the third-party-based efficient Pearson coefficient calculation method in the federated learning environment: the sample alignment algorithm employs a secure private set intersection algorithm.
Further, in the third-party-based efficient Pearson coefficient calculation method in the federated learning environment: the sample alignment algorithm adopted is FATE's built-in RSA-based private set intersection algorithm.
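FATE's built-in RSA-based intersection is more involved than can be shown here; the following is a generic blinded-RSA private set intersection sketch of the underlying idea, with hypothetical ids and toy, insecure parameters:

```python
import hashlib
import math
import random

# Toy RSA keypair for the blind-signature PSI sketch (insecure parameters).
p, q, e = 10007, 10009, 65537
n = p * q
d = pow(e, -1, math.lcm(p - 1, q - 1))

def h1(x: str) -> int:
    """First hash: map a sample id into Z_n."""
    return int(hashlib.sha256(x.encode()).hexdigest(), 16) % n

def h2(x: int) -> str:
    """Second hash: applied to RSA-signed values before comparison."""
    return hashlib.sha256(str(x).encode()).hexdigest()

server_ids = {"u1", "u2", "u3"}   # hypothetical ids held by the key holder
client_ids = {"u2", "u3", "u4"}   # hypothetical ids held by the other party

# The key holder publishes H2(H1(s)^d mod n) for each of its ids.
server_tags = {h2(pow(h1(s), d, n)) for s in server_ids}

# The other party blinds each id, has the key holder sign it, then unblinds,
# so the key holder never learns which ids were queried.
client_tags = {}
for cid in client_ids:
    r = random.randrange(2, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(2, n)
    blinded = (h1(cid) * pow(r, e, n)) % n
    signed = pow(blinded, d, n)               # computed by the key holder
    unblinded = (signed * pow(r, -1, n)) % n  # equals h1(cid)^d mod n
    client_tags[h2(unblinded)] = cid

matches = {client_tags[t] for t in client_tags if t in server_tags}
print(sorted(matches))  # → ['u2', 'u3']
```

Both sides end up comparing hashes of the same signed values, so only ids in the intersection match; ids outside it stay hidden from the other party.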
Further, in the third-party-based efficient Pearson coefficient calculation method in the federated learning environment: step S0 further includes that the two parties A and B participating in the correlation coefficient calculation also need to preprocess their respective data sets.
Further, in the third-party-based efficient Pearson coefficient calculation method in the federated learning environment: the preprocessing of the data sets comprises:
for the data set X of party A participating in the correlation coefficient calculation, for each x_i ∈ X, first compute sum_x = Σ x_i and sum_x2 = Σ x_i², and avg_x = sum_x / n, where n is the number of elements in X.
Then α = √(sum_x2 − n · avg_x²).
Replace every x_i in X by x_i' = (x_i − avg_x) / α and output the result as a new data set, denoted x. The data set Y of party B is processed similarly and output as y.
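This preprocessing makes a plain dot product of two normalized columns equal their Pearson coefficient. A sketch assuming α = √(sum_x2 − n·avg_x²), i.e. the ℓ₂ norm of the centered column (the value consistent with dot(x, y) being the Pearson coefficient, since the original formula image is not reproduced here):

```python
import numpy as np

def normalize(v):
    """Preprocess a feature column so that the dot product of two
    normalized columns directly gives their Pearson coefficient."""
    n = len(v)
    sum_v = v.sum()
    sum_v2 = (v ** 2).sum()
    avg = sum_v / n
    alpha = np.sqrt(sum_v2 - n * avg ** 2)   # = ||v - avg||_2
    return (v - avg) / alpha

x = np.array([1.0, 2.0, 3.0, 5.0])
y = np.array([2.0, 1.0, 4.0, 6.0])
r = float(np.dot(normalize(x), normalize(y)))
print(np.isclose(r, np.corrcoef(x, y)[0, 1]))  # → True
```

Pushing the centering and scaling into local preprocessing is what lets the multi-party protocol reduce the whole Pearson computation to a single secure dot product.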
In the invention, a semi-honest third party is introduced into the multi-party computation framework for calculating the Pearson coefficient to generate the Beaver Triplets; compared with generating the Beaver Triplets using Paillier homomorphic encryption in the original FATE federated learning framework, computation efficiency is greatly improved.
The present invention will be described in detail below with reference to the accompanying drawings and specific embodiments.
Drawings
FIG. 1 is a flow chart of example 1 of the present invention.
Detailed Description
In order to make the technical solutions of the present application better understood, the technical solutions of the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described examples are only a part of the embodiments of the present application, but not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
Embodiment 1, as shown in FIG. 1, is a third-party-based efficient Pearson coefficient calculation method in a federated learning environment, in which open-source FATE is selected as the overall computation and communication framework for calculating the Pearson coefficient; the two parties participating in feature correlation coefficient calculation are party A and party B respectively, and a semi-honest third party is party C. The method specifically comprises the following steps:
(1) First, a framework for the overall computation and communication needed to calculate Pearson coefficients is selected; here open-source FATE is chosen.
(2) The two parties participating in feature correlation coefficient calculation are assumed to be party A and party B respectively, and the semi-honest third party is assumed to be party C.
(3) Parties A and B perform sample alignment on their respective input data (screening out records that share the same id but carry different features). The sample alignment algorithm can be any secure private set intersection algorithm; here FATE's built-in RSA-based private set intersection algorithm is selected.
(4) After sample alignment, A and B output data sets X and Y respectively.
(5) To compute the feature correlation in the next step, both A and B also need to preprocess their respective data sets. Specifically:
First, for party A with data set X: for each x_i ∈ X, compute sum_x = Σ x_i and sum_x2 = Σ x_i², and avg_x = sum_x / n, where n is the number of elements in X; then α = √(sum_x2 − n · avg_x²).
Replace every x_i in X by x_i' = (x_i − avg_x) / α and output the result as a new data set, denoted x.
The data set Y of party B is processed similarly and output as y.
The final desired Pearson correlation coefficient of the feature data is then dot(x, y), where dot denotes the dot product between tensors.
(6) The two parties A and B participating in the correlation coefficient calculation additively secret-share their respective feature data tensors x and y, and each party obtains an additive secret share of the other party's data tensor. Specifically: party A shares the data set x as x = x_1 + x_2 and sends x_2 to party B. Similarly, party B shares y as y = y_1 + y_2 and sends y_2 to party A.
(7) After obtaining the shares of the other party's data x and y, parties A and B each locally generate tensors (a_i, b_i) of the same scale as the feature data tensors x and y, as secret shares of (a, b) in the Beaver Triplets, and each sends its (a_i, b_i) to the semi-honest third party C. Specifically, party A locally generates a random tensor pair (a_1, b_1) of the same scale as x and y and sends it to the semi-honest third party C; party B likewise locally generates a random tensor pair (a_2, b_2) of the same scale as x and y and sends it to C.
(8) C adds the received (a_i, b_i) to obtain (a, b) of the Beaver Triplets, and further computes c = dot(a, b), where dot denotes the dot product between tensors. The semi-honest third party C then additively secret-shares the c computed in the previous step, generating c_1, c_2, and sends them to A and B respectively. Specifically, C computes a = a_1 + a_2 and b = b_1 + b_2, then c = dot(a, b), shares c additively as c = c_1 + c_2, and sends c_1 to party A and c_2 to party B.
(9) After A and B obtain their c_i, each uses its Beaver Triplet shares (a_i, b_i, c_i) and (x_i, y_i) to interactively compute, with the other party, its share z_i of dot(x, y). Specifically, the two parties hold (x_i, y_i) and (a_i, b_i, c_i); for convenience below, the additive secret shares of the data sets (x, y) and of the triple (a, b, c) may also be written [x], [y], [a], [b], [c].
(10) The two parties exchange their respective z_i and add them to obtain z, which is output as the correlation coefficient tensor of the two features. Specifically, A and B each compute their shares [x + a], [y + b] and send them to the other party; adding the share received from the other party to their own yields the complete k = x + a and j = y + b. A and B then each compute their share [z] = [c] − dot(k, [b]) − dot(j, [a]) + dot(k, j) (the public term dot(k, j) being added by only one of the two parties), i.e. z_i, send it to the other party, and add the received share to their own to obtain the complete z = dot(x, y), which is output as the final feature correlation of the two parties.
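The combination rule in step (10) can be checked numerically: with k = x + a and j = y + b opened, c − dot(k, b) − dot(j, a) + dot(k, j) reduces algebraically to dot(x, y). A short NumPy check of the combined (two-share-summed) formula; the per-party message pattern is as described above:

```python
import numpy as np

rng = np.random.default_rng(1)
x, y = rng.normal(size=5), rng.normal(size=5)   # the private tensors
a, b = rng.normal(size=5), rng.normal(size=5)   # a random Beaver triple
c = np.dot(a, b)

k = x + a   # opened masked value, as in step (10)
j = y + b   # opened masked value

# Summed over both parties' shares: z = c - dot(k, b) - dot(j, a) + dot(k, j)
z = c - np.dot(k, b) - np.dot(j, a) + np.dot(k, j)
print(np.isclose(z, np.dot(x, y)))  # → True
```

Expanding dot(k − a, j − b) term by term shows why the identity holds: the cross terms involving a and b cancel against c and the masked dot products.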
In this embodiment, with a semi-honest third party, the homomorphic encryption operation is removed without sacrificing security: the Beaver Triplets (a, b, c) are generated safely, and the two parties each obtain additive secret shares of (a, b, c).
Because the large-integer modular exponentiations of Paillier homomorphic encryption in the original scheme are replaced by tensor dot products and addition and subtraction operations, efficiency is greatly improved.

Claims (6)

1. A third-party-based efficient Pearson coefficient calculation method in a federated learning environment, wherein open-source FATE is selected as the overall computation and communication framework for calculating the Pearson coefficient, the two parties participating in feature correlation coefficient calculation are party A and party B respectively, and a semi-honest third party is party C; the method is characterized by comprising the following steps:
step S1: the two parties A and B participating in the correlation coefficient calculation additively secret-share their respective feature data tensors x and y, so that each party obtains an additive secret share of the other party's data tensor;
step S2: after obtaining the additive secret shares of the data tensors x and y of the other party, parties A and B each locally generate tensors a_i, b_i of the same scale as the feature data tensors x and y, as additive secret shares of the triple components a and b;
step S3: parties A and B send their respectively generated tensors a_i, b_i to the semi-honest third party C;
step S4: the semi-honest third party C adds the received tensors a_i, b_i to obtain a and b of the triple, and further computes c of the triple, where c is the dot product of the tensors a and b;
step S5: the semi-honest third party C additively secret-shares c, generating c_1, c_2, and sends them respectively to the two parties A and B participating in the correlation coefficient calculation;
step S6: after parties A and B obtain their c_i, each uses its existing triple shares a_i, b_i, c_i and tensor shares x_i, y_i to interactively compute z_i with the other party, where z_i is an additive secret share of the dot product z between the tensors x and y;
step S7: the two parties A and B exchange their z_i and add them to obtain z, which is output as the correlation coefficient tensor of the two features.
2. The third-party-based efficient Pearson coefficient calculation method in the federated learning environment as claimed in claim 1, characterized in that before step S1 the method further comprises:
step S0: the two parties A and B participating in the correlation coefficient calculation perform sample alignment on their respective input data.
3. The third-party-based efficient Pearson coefficient calculation method in the federated learning environment as claimed in claim 2, characterized in that the sample alignment algorithm employs a secure private set intersection algorithm.
4. The third-party-based efficient Pearson coefficient calculation method in the federated learning environment as claimed in claim 3, characterized in that the algorithm adopted for sample alignment is FATE's built-in RSA-based private set intersection algorithm.
5. The third-party-based efficient Pearson coefficient calculation method in the federated learning environment as claimed in claim 2, characterized in that step S0 further includes: the two parties A and B participating in the correlation coefficient calculation also need to preprocess their respective data sets.
6. The third-party-based efficient Pearson coefficient calculation method in the federated learning environment as claimed in claim 5, characterized in that the preprocessing of the data sets comprises:
for the data set X of party A participating in the correlation coefficient calculation, for each x_i ∈ X, first compute sum_x = Σ x_i and sum_x2 = Σ x_i², and avg_x = sum_x / n, where n is the number of elements in data set X;
then α = √(sum_x2 − n · avg_x²);
replace every x_i in X by x_i' = (x_i − avg_x) / α and output the result as a new data set, denoted x; the data set Y of party B is processed similarly and output as y.
CN202111639035.6A 2021-12-29 2021-12-29 Third-party-based efficient Pearson coefficient calculation method in federated learning environment Pending CN114358323A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111639035.6A CN114358323A (en) 2021-12-29 2021-12-29 Third-party-based efficient Pearson coefficient calculation method in federated learning environment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111639035.6A CN114358323A (en) 2021-12-29 2021-12-29 Third-party-based efficient Pearson coefficient calculation method in federated learning environment

Publications (1)

Publication Number Publication Date
CN114358323A true CN114358323A (en) 2022-04-15

Family

ID=81102584

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111639035.6A Pending CN114358323A (en) 2021-12-29 2021-12-29 Third-party-based efficient Pearson coefficient calculation method in federated learning environment

Country Status (1)

Country Link
CN (1) CN114358323A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115225264A (en) * 2022-06-17 2022-10-21 上海富数科技有限公司广州分公司 Secure multi-party computing method and device, electronic equipment and storage medium



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination