CN116738494B - Model training method and device for multiparty security calculation based on secret sharing - Google Patents

Model training method and device for multiparty security calculation based on secret sharing

Info

Publication number
CN116738494B
CN116738494B (application CN202311027867.1A)
Authority
CN
China
Prior art keywords
sub
secret
result
data
bit
Prior art date
Legal status
Active
Application number
CN202311027867.1A
Other languages
Chinese (zh)
Other versions
CN116738494A (en)
Inventor
Name withheld at the inventor's request
Current Assignee
Beijing Real AI Technology Co Ltd
Original Assignee
Beijing Real AI Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Real AI Technology Co Ltd
Priority to CN202311027867.1A
Publication of CN116738494A
Application granted
Publication of CN116738494B
Legal status: Active

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00: Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/60: Protecting data
    • G06F21/62: Protecting access to data via a platform, e.g. using keys or access control rules
    • G06F21/6218: Protecting access to data via a platform, e.g. using keys or access control rules to a system of files or objects, e.g. local or distributed file system or database
    • G06F21/6245: Protecting personal data, e.g. for financial or medical purposes
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/21: Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214: Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06G: ANALOGUE COMPUTERS
    • G06G7/00: Devices in which the computing operation is performed by varying electric or magnetic quantities
    • G06G7/12: Arrangements for performing computing operations, e.g. operational amplifiers
    • G06G7/16: Arrangements for performing computing operations, e.g. operational amplifiers for multiplication or division
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00: Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Hardware Design (AREA)
  • Bioethics (AREA)
  • Health & Medical Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Databases & Information Systems (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Power Engineering (AREA)
  • Computer Security & Cryptography (AREA)
  • Complex Calculations (AREA)

Abstract

The embodiment of the invention provides a model training method and device for multiparty security calculation based on secret sharing, relating to the technical field of data processing. The method comprises the following steps: in the process of executing the multiplication calculation of characteristic data and model parameters, each computing party executes a multiplication operator on first sub-secret data and second sub-secret data to obtain an initial sub-secret result; a bitwise inverting operation and a logical right shift operation are performed on the binary bits of the initial sub-secret result to obtain a first sub-secret result; a logical right shift operation is performed on the binary bits of the initial sub-secret result to obtain a second sub-secret result; according to whether the initial sub-secret result is positive or negative, the first sub-secret result or the second sub-secret result is determined as the final sub-secret result of that computing party; in the data demand side, the corresponding restoration operation is performed on each received final sub-secret result together with the data demand side's own final sub-secret result, so as to obtain the true value of the result of the multiplication calculation and control the training process of the model.

Description

Model training method and device for multiparty security calculation based on secret sharing
Technical Field
The invention relates to the technical field of data processing, in particular to a model training method and device for multiparty security calculation based on secret sharing.
Background
In secure multi-party computation (MPC), secret sharing is a common technique, and additive secret sharing is a simple and efficient realization of it; many MPC protocols have been proposed on this basis to help users complete the desired computation while protecting private data.
The paper "SecureNN: 3-Party Secure Computation for Neural Network Training" proposes SecureNN, an MPC protocol based on additive secret sharing.
Secret sharing is often described by a two-tuple $(t, n)$: the secret $x$ that needs protection is decomposed into $n$ parts, each of which may be referred to as a sub-secret and is handed to a different owner for storage; afterwards, any collection of at least $t$ participants holding sub-secrets can reconstruct the original secret $x$, while participants holding fewer than $t$ sub-secrets learn no information about $x$.
The decomposition method used by additive secret sharing is addition and subtraction: an integer $x$ whose true value needs protection is decomposed into a sum of several integers, and reconstruction uses addition. The following are examples of the use of this scheme:
SecureNN is a three-party protocol comprising two primary computing parties $P_0$ and $P_1$ and an auxiliary computing party $P_2$; the two primary computing parties keep the sub-secrets. More specifically, a protected integer $x$ is decomposed as $x = x_0 + x_1$, after which $x_0$ is kept by $P_0$ and $x_1$ is kept by $P_1$; to recover $x$, the user must obtain the sub-secrets held by both parties.
The basic operators of the SecureNN protocol are described below (throughout the following description, $x$, $y$, etc. denote inputs and $z$ denotes the output) to assist in understanding its principles.
In the following, the sharing of a number $x$ is written $\langle x\rangle = (\langle x\rangle_0, \langle x\rangle_1)$, i.e. $x = \langle x\rangle_0 + \langle x\rangle_1$. A notable property of the SecureNN protocol is the consistency of its operator inputs and outputs: all inputs and outputs are in shared form, which allows the user to combine the basic operators arbitrarily and complete complex computational logic. Another property is that the algorithm flow never exposes the true values of the data: from the viewpoint of a primary computing party, every operand it holds is a random number, and no further information can be obtained.
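For illustration only, this sharing notation can be mirrored in a minimal Python sketch; the helper names, the use of the $2^{64}$ ring described later in the implementation details, and the single-process simulation of the parties are assumptions of the sketch, not part of the protocol description:

```python
import secrets

MOD = 1 << 64  # the unsigned 64-bit ring used in the implementation details below

def share(x: int) -> tuple[int, int]:
    """Split x into two additive sub-secrets <x>_0, <x>_1 with x = <x>_0 + <x>_1 (mod 2^64)."""
    x0 = secrets.randbelow(MOD)   # P_0's sub-secret is uniformly random
    x1 = (x - x0) % MOD           # P_1's sub-secret completes the sum
    return x0, x1

def reconstruct(x0: int, x1: int) -> int:
    """Recover the secret from both sub-secrets."""
    return (x0 + x1) % MOD

x = 123456789
x0, x1 = share(x)
assert reconstruct(x0, x1) == x   # either sub-secret alone is just a random ring element
```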
For addition, the input is $\langle x\rangle, \langle y\rangle$ and the output is $\langle z\rangle$, where $z = x + y$ (here $x$ denotes the true value of $\langle x\rangle$, $y$ the true value of $\langle y\rangle$, $z$ the true value of $\langle z\rangle$, and so on). The computational expression of the addition operator can be noted $\langle z\rangle = \mathrm{Add}(\langle x\rangle, \langle y\rangle)$, and the flow is as follows:
1. The computing party $P_0$ owns the inputs $\langle x\rangle_0, \langle y\rangle_0$ and the computing party $P_1$ owns the inputs $\langle x\rangle_1, \langle y\rangle_1$, where $\langle x\rangle_0$ and $\langle x\rangle_1$ are the sub-secrets of $x$, and $\langle y\rangle_0$ and $\langle y\rangle_1$ are the sub-secrets of $y$;
2. The computing party $P_0$ computes $\langle z\rangle_0 = \langle x\rangle_0 + \langle y\rangle_0$, and the computing party $P_1$ computes $\langle z\rangle_1 = \langle x\rangle_1 + \langle y\rangle_1$;
3. It is easy to verify that the final result satisfies $\langle z\rangle_0 + \langle z\rangle_1 = x + y$, i.e. $\langle z\rangle$ is a sharing of $x + y$.
The auxiliary computing party $P_2$ plays no role in addition, so in the following descriptions the actions of $P_2$ are no longer emphasized for addition sub-steps.
Subtraction is analogous to addition, and the computational expression of the subtraction operator can be written $\langle z\rangle = \mathrm{Sub}(\langle x\rangle, \langle y\rangle)$, with $z = x - y$.
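As a concrete check of the addition and subtraction operators (a sketch under the same assumptions as the previous one: local simulation, assumed helper names), note that each party only combines its own sub-secrets and no communication is needed:

```python
import secrets

MOD = 1 << 64

def share(v):
    v0 = secrets.randbelow(MOD)
    return v0, (v - v0) % MOD

def reconstruct(v0, v1):
    return (v0 + v1) % MOD

def add_local(xi, yi):
    # run by each party P_i on its own sub-secrets: <z>_i = <x>_i + <y>_i
    return (xi + yi) % MOD

def sub_local(xi, yi):
    # run by each party P_i on its own sub-secrets: <z>_i = <x>_i - <y>_i
    return (xi - yi) % MOD

x, y = 100, 42
x0, x1 = share(x)
y0, y1 = share(y)
assert reconstruct(add_local(x0, y0), add_local(x1, y1)) == (x + y) % MOD  # Add(<x>, <y>)
assert reconstruct(sub_local(x0, y0), sub_local(x1, y1)) == (x - y) % MOD  # Sub(<x>, <y>)
```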
For multiplication, the input is $\langle x\rangle, \langle y\rangle$ and the output is $\langle z\rangle$, where $z = x \cdot y$. The computational expression of the multiplication operator can be noted $\langle z\rangle = \mathrm{Mul}(\langle x\rangle, \langle y\rangle)$, and the flow is as follows:
1. The auxiliary computing party $P_2$ generates random numbers $a$ and $b$, computes $c = a \cdot b$ to obtain the triple $(a, b, c)$, and sends its sharings to the computing parties $P_0$ and $P_1$;
2. The computing party $P_0$ owns $\langle x\rangle_0, \langle y\rangle_0, \langle a\rangle_0, \langle b\rangle_0, \langle c\rangle_0$ and the computing party $P_1$ owns $\langle x\rangle_1, \langle y\rangle_1, \langle a\rangle_1, \langle b\rangle_1, \langle c\rangle_1$, where $\langle a\rangle_0$ and $\langle a\rangle_1$ are the sub-secrets of $a$, $\langle b\rangle_0$ and $\langle b\rangle_1$ are the sub-secrets of $b$, and $\langle c\rangle_0$ and $\langle c\rangle_1$ are the sub-secrets of $c$;
3. The computing party $P_0$ computes $\langle e\rangle_0 = \langle x\rangle_0 - \langle a\rangle_0$ and $\langle f\rangle_0 = \langle y\rangle_0 - \langle b\rangle_0$, and the computing party $P_1$ computes $\langle e\rangle_1 = \langle x\rangle_1 - \langle a\rangle_1$ and $\langle f\rangle_1 = \langle y\rangle_1 - \langle b\rangle_1$;
4. The computing parties exchange $\langle e\rangle_i$ and $\langle f\rangle_i$ with each other, and both sides reconstruct the true values of $e = x - a$ and $f = y - b$;
5. The computing party $P_0$ computes $\langle z\rangle_0 = f \cdot \langle a\rangle_0 + e \cdot \langle b\rangle_0 + \langle c\rangle_0$, and the computing party $P_1$ computes $\langle z\rangle_1 = e \cdot f + f \cdot \langle a\rangle_1 + e \cdot \langle b\rangle_1 + \langle c\rangle_1$;
6. It can be verified that the final result satisfies $\langle z\rangle_0 + \langle z\rangle_1 = x \cdot y$, i.e. $\langle z\rangle$ is a sharing of $x \cdot y$ (a simulation sketch follows below).
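The multiplication flow above follows the familiar multiplication-triple pattern; the sketch below simulates all three parties in one process (an illustration only: the function names are assumed, and the exchange in step 4 is replaced by direct reconstruction):

```python
import secrets

MOD = 1 << 64

def share(v):
    v0 = secrets.randbelow(MOD)
    return v0, (v - v0) % MOD

def reconstruct(v0, v1):
    return (v0 + v1) % MOD

def mul_shares(x_sh, y_sh):
    # Step 1: P_2 generates random a, b, computes c = a*b and shares the triple (a, b, c).
    a, b = secrets.randbelow(MOD), secrets.randbelow(MOD)
    a_sh, b_sh, c_sh = share(a), share(b), share((a * b) % MOD)
    # Step 3: each P_i locally masks its inputs.
    e_sh = [(x_sh[i] - a_sh[i]) % MOD for i in (0, 1)]
    f_sh = [(y_sh[i] - b_sh[i]) % MOD for i in (0, 1)]
    # Step 4: the parties exchange <e>_i, <f>_i and reconstruct e = x - a and f = y - b;
    # these reveal nothing about x and y because a and b are uniformly random masks.
    e, f = reconstruct(*e_sh), reconstruct(*f_sh)
    # Step 5: local combination; only P_1 adds the public term e*f.
    z0 = (f * a_sh[0] + e * b_sh[0] + c_sh[0]) % MOD
    z1 = (e * f + f * a_sh[1] + e * b_sh[1] + c_sh[1]) % MOD
    return z0, z1

x, y = 12345, 678
z0, z1 = mul_shares(share(x), share(y))
assert reconstruct(z0, z1) == (x * y) % MOD   # Step 6: <z> is a sharing of x*y
```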
From the above three examples, the inputs and outputs of the operators are all in shared form, which means that for the two primary computing parties the result produced by one operator can be used directly as the input of the next operator, without first being restored to a true value and then re-shared. Meanwhile, the calculation process of an operator never reconstructs any value that would expose the input information, so the security of the data is guaranteed.
The core operators of the SecureNN protocol are presented below:
For the DReLU operator, the input is $\langle x\rangle$ and the output is $\langle y\rangle$, where $y = 1$ if $x \ge 0$ and $y = 0$ if $x < 0$. The computational expression of the DReLU operator can be noted $\langle y\rangle = \mathrm{DReLU}(\langle x\rangle)$.
It is the core operator of the SecureNN protocol: it provides the ability to test whether a number is positive or negative, and can therefore further support operators such as comparison operations.
The complete DReLU algorithm flow is rather long, so only the idea behind the operator is sketched here:
1. Determining whether a number is negative can be translated into determining whether the most significant bit (MSB) of its complement (two's-complement) representation is 1.
2. Record $y = 2x$, where LSB denotes the least significant bit of the complement representation.
(1) When the MSB of $x$ is 1, the unsigned value represented by $x$ is at least half of the modulus, so the doubling wraps around the (odd) modulus and $y$ is an odd number, i.e. its LSB is 1.
(2) Otherwise, $y$ is an even number, i.e. its LSB is 0.
3. For the result of an addition of shares, the LSB is obtained from the LSBs of the shares by an exclusive-or operation.
4. For operands $a$ and $b$ whose values are only 0 or 1, the exclusive-or operation satisfies $a \oplus b = a + b - 2ab$ (observations 1 and 4 are checked in the sketch below).
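The first and last observations can be checked directly; the following sketch is an assumed illustration, not the DReLU protocol itself. It confirms that negativity coincides with the most significant bit of the 64-bit complement representation, and that for 0/1 operands the exclusive-or has the arithmetic form $a \oplus b = a + b - 2ab$:

```python
N = 64
MOD = 1 << N

def msb(value: int) -> int:
    """Most significant bit of the 64-bit two's-complement representation of value."""
    return ((value % MOD) >> (N - 1)) & 1

# negativity test <-> MSB of the complement representation (observation 1)
for v in (-5, -1, 0, 1, 2**62, -2**63, 2**63 - 1):
    assert msb(v) == (1 if v < 0 else 0)

# arithmetic form of XOR on 0/1 operands (observation 4)
for a in (0, 1):
    for b in (0, 1):
        assert a + b - 2 * a * b == (a ^ b)
```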
The other core operator of the SecureNN framework is the "secret pick" operator, which provides the function of binary selection based on a conditional value that is 0 or 1.
More specifically, the input of the "secret pick" operator is $\langle \alpha\rangle$, $\langle x\rangle$ and $\langle y\rangle$, and the output is $\langle z\rangle$, where $\alpha \in \{0, 1\}$, $z = x$ when $\alpha = 0$, and $z = y$ when $\alpha = 1$. The computational expression of the "secret pick" operator is abbreviated $\langle z\rangle = \mathrm{Select}(\langle \alpha\rangle, \langle x\rangle, \langle y\rangle)$.
The operator flow is as follows:
1. The computing party $P_0$ owns $\langle \alpha\rangle_0, \langle x\rangle_0, \langle y\rangle_0$, and the computing party $P_1$ owns $\langle \alpha\rangle_1, \langle x\rangle_1, \langle y\rangle_1$;
2. Compute the subtraction $\langle w\rangle = \mathrm{Sub}(\langle y\rangle, \langle x\rangle)$; the two primary computing parties $P_0$, $P_1$ obtain $\langle w\rangle_0$ and $\langle w\rangle_1$;
3. Compute the multiplication $\langle v\rangle = \mathrm{Mul}(\langle \alpha\rangle, \langle w\rangle)$; the two primary computing parties $P_0$, $P_1$ obtain $\langle v\rangle_0$ and $\langle v\rangle_1$;
4. Compute the addition $\langle z\rangle = \mathrm{Add}(\langle x\rangle, \langle v\rangle)$; the two primary computing parties $P_0$, $P_1$ obtain $\langle z\rangle_0$ and $\langle z\rangle_1$.
From the calculation steps it can be seen that $z = x + \alpha\,(y - x)$, and it is easy to verify that this is the correct result.
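A small simulation of this composition (assumed Python; the multiplication step reuses the triple-based simulation from the earlier sketch) confirms that the output reconstructs to $x$ when $\alpha = 0$ and to $y$ when $\alpha = 1$:

```python
import secrets

MOD = 1 << 64

def share(v):
    v0 = secrets.randbelow(MOD)
    return [v0, (v - v0) % MOD]

def reconstruct(sh):
    return (sh[0] + sh[1]) % MOD

def mul_shares(x_sh, y_sh):
    # multiplication-triple simulation, as in the earlier multiplication sketch
    a, b = secrets.randbelow(MOD), secrets.randbelow(MOD)
    a_sh, b_sh, c_sh = share(a), share(b), share((a * b) % MOD)
    e = reconstruct([(x_sh[i] - a_sh[i]) % MOD for i in (0, 1)])
    f = reconstruct([(y_sh[i] - b_sh[i]) % MOD for i in (0, 1)])
    return [(f * a_sh[0] + e * b_sh[0] + c_sh[0]) % MOD,
            (e * f + f * a_sh[1] + e * b_sh[1] + c_sh[1]) % MOD]

def secret_pick(alpha_sh, x_sh, y_sh):
    # z = x + alpha * (y - x): Sub, then Mul, then Add, exactly as steps 2-4 above
    w_sh = [(y_sh[i] - x_sh[i]) % MOD for i in (0, 1)]
    v_sh = mul_shares(alpha_sh, w_sh)
    return [(x_sh[i] + v_sh[i]) % MOD for i in (0, 1)]

x_sh, y_sh = share(111), share(222)
assert reconstruct(secret_pick(share(0), x_sh, y_sh)) == 111   # alpha = 0 -> x
assert reconstruct(secret_pick(share(1), x_sh, y_sh)) == 222   # alpha = 1 -> y
```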
In an actual program implementation of secret sharing, the following details need to be processed:
1. For efficiency reasons, the numbers in most protocol implementations are represented using unsigned 64-bit integers, which means that the addition described above is taken modulo $2^{64}$, i.e. $x + y$ actually means $(x + y) \bmod 2^{64}$.
2. The way unsigned 64-bit integers are handled in a computer system is to store integers using the two's-complement representation. The most significant bit of the complement representation is the sign bit, so the complement representation of a negative number coincides with the unsigned binary representation of a large positive integer; for example, $-1$ is represented as a string of 64 ones, which in turn corresponds to the 64-bit unsigned binary representation of the positive number $2^{64} - 1$. Under the complement representation, the sign bit (i.e. the most significant bit) of every negative number is 1, and that of every non-negative number is 0. This also means that the true values of the numbers involved in the protocol lie in the interval $[-2^{63}, 2^{63} - 1]$.
3. After numbers are represented as integers, the common way to handle floating-point numbers is to use fixed-point numbers: the last $p$ of the 64 binary bits are used to represent the fractional part, i.e. a floating-point number $f$ is regarded as having the value $f \cdot 2^{p}$ and is stored as that integer (see the sketch following this list).
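As an illustration of these three conventions (a sketch assuming $p = 16$ fractional bits; the concrete value of $p$ and the helper names are assumptions, not taken from the description above):

```python
N, P = 64, 16        # 64-bit words; p = 16 fractional bits is an assumed example value
MOD = 1 << N

def encode(f: float) -> int:
    """Store the floating-point value f as the integer f * 2^p, reduced into the unsigned 64-bit ring."""
    return round(f * (1 << P)) % MOD

def decode(u: int) -> float:
    """Read a 64-bit word back as a signed (two's-complement) fixed-point value."""
    signed = u - MOD if u >= MOD // 2 else u    # negative numbers have the top bit set
    return signed / (1 << P)

assert decode(encode(3.25)) == 3.25
assert decode(encode(-1.5)) == -1.5
assert encode(-1.0) == MOD - (1 << P)           # -1 maps to a large unsigned integer
# multiplying two encodings doubles the scale factor: 2^p * 2^p = 2^(2p),
# which is why the product must later be truncated (shifted right) by p bits
assert encode(3.25) * encode(1.5) == round(3.25 * 1.5 * (1 << (2 * P)))
```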
However, the fixed-point representation brings the following problem:
when two numbers represented as fixed-point values are multiplied, the number of fractional bits doubles. More specifically, the result of multiplying $x \cdot 2^{p}$ and $y \cdot 2^{p}$ is $x \cdot y \cdot 2^{2p}$, whereas the desired result is $x \cdot y \cdot 2^{p}$. For this reason the result must be divided by $2^{p}$, which is equivalent to performing a binary arithmetic right shift by $p$ bits on the result; this operation is generally referred to as the truncate operation (truncation).
A simple idea is to perform the arithmetic right shift separately on each of the two shared sub-secrets. However, while this share-wise approach is applicable to multiplication and left-shift operations, it is not applicable to division and arithmetic right-shift operations, because the original value and the shared sub-secrets may have different sign bits.
For example, take $x = -6$ shared as $x_0 = 2^{63} - 3$ and $x_1 = 2^{63} - 3$, so that $x_0 + x_1 \equiv -6 \pmod{2^{64}}$. The value of $x$ divided by 2, i.e. arithmetically right-shifted by 1 bit, is $-3$; but each sub-secret, arithmetically right-shifted by 1 bit, equals $2^{62} - 2$, and their sum $2^{63} - 4$ is a huge positive number, i.e. the result is severely erroneous.
It is easy to verify that when the sign bit of the original true value differs from the sign bit of at least one sub-secret value, simply performing the right-shift truncation on the sub-secrets can cause a huge error in the result. To cope with this phenomenon, the existing improvement is to limit the range of the secret-shared values so that the probability of producing such a huge error is as small as possible, but this improvement also compromises some security.
Therefore, when the truncation implemented by arithmetically right-shifting the sub-secrets causes a huge error in the result, the accuracy of the multiparty security calculation result is further reduced; and when model training is performed using such multiparty security calculation results, the accuracy and effectiveness of model training are reduced, which affects the precision of the model.
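The failure mode can be reproduced directly. The sketch below uses assumed example values (not the numeric example of the original description): when the true value is negative but both sub-secrets are large positive numbers whose sum wraps around $2^{64}$, shifting each sub-secret separately is off by roughly $2^{63}$:

```python
N = 64
MOD = 1 << N

def to_signed(u: int) -> int:
    return u - MOD if u >= MOD // 2 else u

def naive_truncate_share(u: int, p: int) -> int:
    # the "simple idea": interpret the sub-secret as a signed number and arithmetic-right-shift it
    return (to_signed(u) >> p) % MOD

x = -6                          # true value; x / 2 should be -3
x0 = (1 << 63) - 3              # sub-secret of P_0 (positive as a signed value)
x1 = (x - x0) % MOD             # sub-secret of P_1, also 2^63 - 3
assert to_signed((x0 + x1) % MOD) == x

shifted_sum = (naive_truncate_share(x0, 1) + naive_truncate_share(x1, 1)) % MOD
print(to_signed(shifted_sum))   # 2^63 - 4: a huge positive number instead of -3
assert to_signed(shifted_sum) != x >> 1
```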
Disclosure of Invention
In view of the above, the embodiment of the invention provides a model training method for multiparty security calculation based on secret sharing, so as to solve the technical problem in the prior art that the trained model has low precision because the multiparty security calculation results are inaccurate. The method comprises the following steps:
in the process of performing the multiplication calculation of characteristic data and model parameters, in each computing party, receiving first sub-secret data and second sub-secret data, and performing a multiplication operator on the first sub-secret data and the second sub-secret data to obtain an initial sub-secret result of the multiplication calculation, wherein the characteristic data is decomposed into a first preset number of pieces of first sub-secret data, the model parameters are decomposed into a first preset number of pieces of second sub-secret data, the value of the first preset number is the same as the number of computing parties, and the first sub-secret data and the second sub-secret data are in $N$-bit unsigned binary form, $N$ being a positive integer that is greater than or equal to 64 and a power of 2;
in each computing party, performing inverting operation and logic right shift operation on binary bits of an initial sub-secret result obtained by the computing party to obtain a first sub-secret result; performing logic right shift operation on binary bits of the initial sub-secret result obtained by the computing party to obtain a second sub-secret result; determining the first sub-secret result or the second sub-secret result as a final sub-secret result of the computing party according to the positive and negative conditions of the initial sub-secret result obtained by the computing party;
Transmitting a final sub-secret result obtained by each of the at least two computing parties except the data requiring party to the data requiring party;
and in the data demand party, carrying out corresponding restoration operation on each received final sub-secret result and the final sub-secret result of the data demand party according to a decomposition mode to obtain a true value of a calculated result of multiplication calculation of the characteristic data and the model parameters, and controlling a training process of the model according to the true value.
The embodiment of the invention also provides a model training device for multiparty security calculation based on secret sharing, which is used to solve the technical problem in the prior art that the trained model has low precision because the multiparty security calculation results are inaccurate. The device comprises:
a data calculation module, each data calculation module being deployed in a computing party, each data calculation module being configured to receive first sub-secret data and second sub-secret data in the process of performing the multiplication calculation of feature data and model parameters, and to perform a multiplication operator on the first sub-secret data and the second sub-secret data to obtain an initial sub-secret result of the multiplication calculation, wherein the feature data is decomposed into a first preset number of pieces of first sub-secret data, the model parameters are decomposed into a first preset number of pieces of second sub-secret data, the value of the first preset number is the same as the number of computing parties, and the first sub-secret data and the second sub-secret data are in $N$-bit unsigned binary form, $N$ being a positive integer that is greater than or equal to 64 and a power of 2;
each data calculation module is further used for performing inverting operation and logic right shift operation on binary bits of the initial sub-secret result obtained by the data calculation module to obtain a first sub-secret result; performing logic right shift operation on binary bits of the initial sub-secret result obtained by the computing party to obtain a second sub-secret result; determining the first sub-secret result or the second sub-secret result as a final sub-secret result of the computing party according to the positive and negative conditions of the initial sub-secret result obtained by the computing party;
the data calculation modules in the other calculation parties except the data demand party in the at least two calculation parties are also used for sending the final sub-secret result obtained by the data calculation modules to the data demand party;
the model training module is deployed in the data demand side and is used for carrying out corresponding restoration operation on each received final sub-secret result and the final sub-secret result obtained by the data calculation module in the data demand side according to a decomposition mode to obtain a true value of the calculated result of multiplication calculation of the characteristic data and the model parameters, and controlling the training process of the model according to the true value.
The embodiment of the invention also provides a computer device, comprising a memory, a processor and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the computer program, implements any of the above model training methods for multiparty security calculation based on secret sharing, so as to solve the technical problem in the prior art that the trained model has low precision because the multiparty security calculation results are inaccurate.
The embodiment of the invention also provides a computer-readable storage medium storing a computer program for executing any of the above model training methods for multiparty security calculation, so as to solve the technical problem in the prior art that the trained model has low precision because the multiparty security calculation results are inaccurate.
Compared with the prior art, the beneficial effects achievable by at least one of the technical solutions adopted in the embodiments of this specification include at least the following. During model training, after the multiplication operator of the secret-shared characteristic data and model parameters has been executed, each computing party obtains an initial sub-secret result of the multiplication calculation. Each computing party then performs a bitwise inverting operation and a logical right shift operation on the binary bits of its initial sub-secret result to obtain a first sub-secret result, performs a logical right shift operation on the binary bits of its initial sub-secret result to obtain a second sub-secret result, and determines the first sub-secret result or the second sub-secret result as its final sub-secret result according to whether its initial sub-secret result is positive or negative; this realizes an arithmetic right shift of the initial sub-secret result and thus an accurate truncated result of the initial sub-secret result, namely the final sub-secret result. Finally, the data demand side performs the corresponding restoration operation, according to the decomposition mode, on each received final sub-secret result together with its own final sub-secret result, obtains the true value of the result of the multiplication calculation of the characteristic data and the model parameters, and controls the training process of the model according to this true value. Because each final sub-secret result is an accurate truncated result of the corresponding initial sub-secret result, the restoration operation based on the final sub-secret results of all computing parties yields an accurate true value of the multiplication result, which helps to improve the accuracy of the multiplication calculation of the characteristic data and the model parameters; at the same time, the multiplication process still guarantees the security of the data. Controlling the training process of the model based on the true value of an accurate multiplication result improves the accuracy and effectiveness of model training and thus the precision of the model.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are needed in the embodiments will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present application, and that other drawings can be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a flowchart of a model training method for multiparty security computing based on secret sharing, provided by an embodiment of the present application;
FIG. 2 is a block diagram of a computer device according to an embodiment of the present application;
fig. 3 is a block diagram of a model training device for multiparty security computing based on secret sharing according to an embodiment of the present application.
Detailed Description
Embodiments of the present application will be described in detail below with reference to the accompanying drawings.
Other advantages and effects of the present application will readily become apparent to those skilled in the art from the disclosure in this specification, which describes the embodiments of the present application with reference to specific examples. It is apparent that the described embodiments are only some, not all, of the embodiments of the application. The application may also be practiced or carried out with other, different specific embodiments, and the details in this specification may be modified or changed in various ways without departing from the spirit and scope of the present application. It should be noted that the following embodiments and the features in the embodiments may be combined with each other without conflict. All other embodiments obtained by those skilled in the art based on the embodiments of the application without inventive effort fall within the scope of protection of the application.
In an embodiment of the present invention, there is provided a model training method for secret sharing-based multiparty security computation, the method running on a multiparty security computing system, the system including at least two computing parties, one of the at least two computing parties being a data demander, as shown in fig. 1, the method comprising:
step S101: in the process of executing the multiplication calculation of characteristic data and model parameters, in each computing party, receiving first sub-secret data and second sub-secret data, and executing a multiplication operator on the first sub-secret data and the second sub-secret data to obtain an initial sub-secret result of the multiplication calculation, wherein the characteristic data is decomposed into a first preset number of pieces of first sub-secret data, the model parameters are decomposed into a first preset number of pieces of second sub-secret data, the value of the first preset number is the same as the number of computing parties, and the first sub-secret data and the second sub-secret data are in $N$-bit unsigned binary form, $N$ being a positive integer that is greater than or equal to 64 and a power of 2;
step S102: in each computing party, performing inverting operation and logic right shift operation on binary bits of an initial sub-secret result obtained by the computing party to obtain a first sub-secret result; performing logic right shift operation on binary bits of the initial sub-secret result obtained by the computing party to obtain a second sub-secret result; determining the first sub-secret result or the second sub-secret result as a final sub-secret result of the computing party according to the positive and negative conditions of the initial sub-secret result obtained by the computing party;
Step S103: transmitting a final sub-secret result obtained by each of the at least two computing parties except the data requiring party to the data requiring party;
step S104: and in the data demand party, carrying out corresponding restoration operation on each received final sub-secret result and the final sub-secret result of the data demand party according to a decomposition mode to obtain a true value of a calculated result of multiplication calculation of the characteristic data and the model parameters, and controlling a training process of the model according to the true value.
As can be seen from the flow shown in fig. 1, in the embodiment of the present invention an arithmetic right shift is performed on the initial sub-secret result so as to obtain an accurate truncated result of the initial sub-secret result, namely the final sub-secret result. Finally, the data demander performs, according to the decomposition mode, the corresponding restoration operation on each received final sub-secret result together with its own final sub-secret result, obtains the true value of the result of the multiplication calculation of the feature data and the model parameters, and controls the training process of the model according to this true value. Because each final sub-secret result is an accurate truncated result of the corresponding initial sub-secret result, the restoration operation based on the final sub-secret results of all computing parties yields an accurate true value of the multiplication result, which helps to improve the accuracy of the calculation; at the same time, the multiplication process still guarantees the security of the data, so controlling the training process of the model based on an accurate true value improves the accuracy and effectiveness of model training and thus the precision of the model.
In this embodiment, in order to improve the accuracy of the truncated result and thus control the model training process based on accurate calculation results, it is proposed that during truncation each computing party distinguishes whether its multiplication result (i.e. its initial sub-secret result) is positive or negative and performs different operations accordingly, so as to obtain an accurate truncated result of the multiplication result. Specifically, when the initial sub-secret result obtained by a computing party is positive, the second sub-secret result is determined as the final sub-secret result of that computing party; that is, when the multiplication result of the computing party is positive, the truncated result is obtained by performing a logical right shift operation on the binary bits of the initial sub-secret result. When the initial sub-secret result obtained by a computing party is negative, the first sub-secret result is determined as the final sub-secret result of that computing party; that is, when the multiplication result of the computing party is negative, the truncated result is obtained by performing a bitwise inverting operation and a logical right shift operation on the binary bits of the initial sub-secret result.
In specific implementation, in order to improve accuracy of the first sub-secret result, it is proposed to perform a negation operation and a logical right shift operation on binary bits of an initial sub-secret result obtained by the computing party to obtain the first sub-secret result by:
Respectively performing inverting operation on each binary bit of the initial sub-secret result obtained by the computing party to obtain a first intermediate sub-secret result;
performing logic right shift operation of a second preset number of binary bits on the first intermediate sub-secret result to obtain a second intermediate sub-secret result;
and respectively performing inverting operation on each binary bit of the second intermediate sub-secret result to obtain the first sub-secret result.
In specific implementation, in order to improve accuracy of the negation operation, it is proposed to perform the negation operation on each binary bit of the initial sub-secret result obtained by the computing party to obtain a first intermediate sub-secret result by: and performing subtraction operator to respectively perform inverting operation on each binary bit of the initial sub-secret result obtained by the computing party to obtain the first intermediate sub-secret result.
In specific implementation, the performing a subtraction operator to perform a negation operation on each binary bit of the initial sub-secret result obtained by the computing party to obtain the first intermediate sub-secret result includes:
the method comprises the steps of respectively carrying out inverting operation on each binary bit of an initial sub-secret result obtained by a computing party by executing the following subtracting operator to obtain a first intermediate sub-secret result:
Wherein,representing the initial sub-secret result as an input; />Is a sub-secret of (2); />Representing the first intermediate sub-secret result.
In particular implementations, the flow of the "bitwise inverting" operator may include the following steps:
The input is $\langle x\rangle$ and the output is $\langle y\rangle$, where $y = (2^{N} - 1) - x$ is the bitwise inversion of $x$, taking $N = 64$ as an example. The computational expression of the "bitwise inverting" operator can be noted $\langle y\rangle = \mathrm{Not}(\langle x\rangle)$, and the operator flow is as follows:
1. The computing party $P_0$ owns $\langle x\rangle_0$ and $\langle c\rangle_0$, and the computing party $P_1$ owns $\langle x\rangle_1$ and $\langle c\rangle_1$, where $\langle c\rangle_0$ and $\langle c\rangle_1$ are the sub-secrets of the constant $c = 2^{64} - 1$;
2. The computing party $P_0$ computes $\langle y\rangle_0 = \langle c\rangle_0 - \langle x\rangle_0$, and the computing party $P_1$ computes $\langle y\rangle_1 = \langle c\rangle_1 - \langle x\rangle_1$;
3. It can be verified that $y = (2^{64} - 1) - x$ is the bitwise inversion of $x$; when the input data is the initial sub-secret result, the output corresponds to the first intermediate sub-secret result (a simulation sketch follows).
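A minimal simulation of this "bitwise inverting" operator (assumed Python; giving the public constant $2^{64}-1$ entirely to $P_0$ is one possible sharing convention, chosen here only for the sketch):

```python
import secrets

N = 64
MOD = 1 << N

def share(v):
    v0 = secrets.randbelow(MOD)
    return [v0, (v - v0) % MOD]

def reconstruct(sh):
    return (sh[0] + sh[1]) % MOD

def bitwise_not_shares(x_sh):
    # ~x = (2^64 - 1) - x in two's complement; run the subtraction operator share-wise,
    # with the constant 2^64 - 1 held entirely by P_0 (an assumed convention)
    c_sh = [MOD - 1, 0]
    return [(c_sh[i] - x_sh[i]) % MOD for i in (0, 1)]

x = 0b1010
y = reconstruct(bitwise_not_shares(share(x)))
assert y == (~x) % MOD == (MOD - 1) ^ x   # every one of the 64 bits is flipped
```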
In specific implementation, in order to improve accuracy of the second sub-secret result, it is proposed to implement a logical right shift operation on the binary bits of the initial sub-secret result obtained by the computing party, to obtain the second sub-secret result: and performing logical right shift operation on binary bits of the initial sub-secret result obtained by the computing party by executing a subtraction operator, a DReLU operator, a multiplication operator and an addition operator to obtain the second sub-secret result.
In specific implementation, performing a logical right shift operation on binary bits of an initial sub-secret result obtained by the computing party by executing a subtraction operator, a DReLU operator, a multiplication operator and an addition operator to obtain the second sub-secret result, including:
initializing $\langle r\rangle = \langle x\rangle$ and $\langle q\rangle = \langle 0\rangle$, wherein $r$ is the remaining dividend in the vertical division, $x$ is the initial sub-secret result, and $q$ is the output result; $r$, $q$ and $x$ are all in $N$-bit unsigned binary representation, in which bit 0 is the lowest bit and bit $N-1$ is the highest bit; and performing the following steps in a loop, traversing the binary bits from the $(N-1-p)$-th bit down to the lowest bit, where $p$ is the number of binary bits of the logical right shift, until the loop ends after the lowest binary bit has been traversed, and taking $\langle q\rangle$ as the second sub-secret result:
supposing the bit currently traversed is the $i$-th bit, presupposing that the $i$-th bit of $q$ is 1, and performing the subtraction operator to compute $\langle r\rangle$ minus $\langle 2^{\,i+p}\rangle$ to obtain a third intermediate sub-secret result;
executing the DReLU operator to judge whether the third intermediate sub-secret result is positive or negative; if the output result of the DReLU operator is 1, which means the third intermediate sub-secret result is positive, determining that the $i$-th bit of $q$ is 1; if the output result of the DReLU operator is 0, which means the third intermediate sub-secret result is negative, determining that the $i$-th bit of $q$ is 0;
executing the multiplication operator to multiply the output result of the DReLU operator by $2^{\,i+p}$, and taking the product as the reduction value;
executing the multiplication operator to multiply the output result of the DReLU operator by $2^{\,i}$, and taking the product as the influence value;
performing the subtraction operator to subtract the reduction value from $\langle r\rangle$, and assigning the resulting difference to $\langle r\rangle$;
performing the addition operator to add the influence value to $\langle q\rangle$, and assigning the resulting sum to $\langle q\rangle$, wherein, when the output of the DReLU operator for the $i$-th bit is 1, the value of the $i$-th bit of $q$ is 1, and when the output is 0, the value of the $i$-th bit of $q$ is 0.
In specific implementations, the inventors found that a "logical right shift" differs from an "arithmetic right shift" in that the logical right shift always fills the highest bits with 0, while the arithmetic right shift fills the highest bits with the sign bit. If a value whose sign bit is 1 is regarded as a large positive integer instead of a negative number, then the result of logically right-shifting by $p$ bits is the same as the result of dividing by $2^{p}$ and rounding down; therefore, in the case where the highest $p$ bits of the result are known to be 0, the "logical right shift" operator can be implemented by vertical division.
For example, the input of the "logical right shift" operator is $\langle x\rangle$ and $p$, and the output is $\langle q\rangle$; the computational expression can be written $\langle q\rangle = \mathrm{ShiftR_{logic}}(\langle x\rangle, p)$. Let the binary representation of $x$ be $x_{63}x_{62}\cdots x_{1}x_{0}$; then the binary representation of $q$ is $p$ zeros followed by $x_{63}x_{62}\cdots x_{p+1}x_{p}$. The "logical right shift" operator flow is as follows:
1. The computing party $P_0$ owns $\langle x\rangle_0$, and the computing party $P_1$ owns $\langle x\rangle_1$;
2. Initialize $\langle r\rangle = \langle x\rangle$, i.e. the computing party $P_0$ initializes $\langle r\rangle_0 = \langle x\rangle_0$ and the computing party $P_1$ initializes $\langle r\rangle_1 = \langle x\rangle_1$; also initialize $\langle q\rangle = \langle 0\rangle$. Here $r$ is the remaining dividend in the vertical division;
3. $r$, $q$ and $x$ are all in 64-bit unsigned binary form (taking $N = 64$ as an example); in the 64-bit unsigned binary form, bit 0 is the lowest bit and bit 63 is the highest bit. Because the highest $p$ bits of the answer are 0, the following steps are performed in a loop, enumerating the answer bits in order from bit $63-p$ down to the lowest bit; after the lowest binary bit has been processed the loop ends, and $\langle q\rangle$ is the second sub-secret result. Suppose the bit currently being enumerated is the $i$-th bit:
(1) Presuppose that the $i$-th bit of $q$ is 1; the remaining dividend then becomes $\langle t\rangle = \langle r\rangle - \langle 2^{\,i+p}\rangle$, i.e. the third intermediate sub-secret result, with the computing parties $P_0$ and $P_1$ running the subtraction operator on their respective sub-secrets to obtain $\langle t\rangle_0$ and $\langle t\rangle_1$;
(2) Judge whether the remaining dividend is positive or negative by computing $\beta = \mathrm{DReLU}(\langle t\rangle)$, i.e. the output result of the DReLU operator; if the remaining dividend is positive (the output of the DReLU operator is 1), determine that the value of the $i$-th bit of $q$ is 1; otherwise (the remaining dividend is negative and the output of the DReLU operator is 0), determine that the value of the $i$-th bit of $q$ is 0, and update the remaining dividend correspondingly;
(3) Compute $\langle u\rangle = \mathrm{Mul}(\langle\beta\rangle, \langle 2^{\,i+p}\rangle)$, i.e. the above reduction value, which is the amount by which the remaining dividend decreases;
(4) Compute $\langle v\rangle = \mathrm{Mul}(\langle\beta\rangle, \langle 2^{\,i}\rangle)$, i.e. the above influence value, which is the contribution of the current bit to the answer;
(5) Perform the subtraction operator to compute $\langle r\rangle$ minus the reduction value $\langle u\rangle$, so as to obtain the actual value of the remaining dividend, and assign the difference to $\langle r\rangle$, i.e. $\langle r\rangle \leftarrow \langle r\rangle - \langle u\rangle$;
(6) Perform the addition operator to compute $\langle q\rangle$ plus the influence value $\langle v\rangle$, so as to obtain the answer value after the current bit is taken into account, and assign the sum to $\langle q\rangle$, i.e. $\langle q\rangle \leftarrow \langle q\rangle + \langle v\rangle$.
4. It can be verified that the final result $\langle q\rangle$ is a sharing of the logical right shift of $x$ by $p$ bits (see the sketch below).
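The loop above is ordinary vertical (long) division by $2^{p}$, with the sign test delegated to DReLU. The following sketch runs the same control flow on plain integers, replacing the DReLU test with a direct comparison; it therefore only illustrates the arithmetic of the loop, not the secure share-level execution (the function name is assumed):

```python
N = 64
MOD = 1 << N

def logical_shift_right_by_division(x: int, p: int) -> int:
    """Compute the logical right shift x >> p of an unsigned 64-bit value by enumerating
    the answer bits from bit N-1-p down to bit 0, as in the operator flow above."""
    r = x                                # remaining dividend
    q = 0                                # answer being built up bit by bit
    for i in range(N - 1 - p, -1, -1):
        t = r - (1 << (i + p))           # presuppose that answer bit i is 1
        beta = 1 if t >= 0 else 0        # in the protocol this sign test is DReLU(<t>)
        r -= beta * (1 << (i + p))       # reduction value: how much the remaining dividend shrinks
        q += beta * (1 << i)             # influence value: contribution of bit i to the answer
    return q

for x in (0, 6, (1 << 64) - 1, 0xDEADBEEFCAFEBABE):
    for p in (1, 13, 40):
        assert logical_shift_right_by_division(x, p) == x >> p
```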
In the implementation, the process of performing the logical right shift operation on the second preset number of binary bits on the first intermediate sub-secret result is similar to the process of performing the logical right shift operation on the binary bits of the initial sub-secret result, which is not repeated herein.
In specific implementations, the inventors found that, for the "arithmetic right shift" operator, if the operand is a non-negative number then the results of the "logical right shift" and the "arithmetic right shift" are the same. For the case where the operand $x$ is negative, the result of arithmetically right-shifting $x$ by $p$ bits has a binary representation with $p$ leading ones; whereas the value of $x$ after "bitwise inverting", i.e. $\lnot x$, logically right-shifted by $p$ bits, has $p$ leading zeros, and it is easy to verify that this value is exactly the bitwise inversion of the desired result.
From the above analysis, the present embodiment can hypothesize the positive and negative cases of the operand, compute the two possible result values, then use the ability of the DReLU operator to test the sign and the ability of the "secret pick" operator to select one of two values, and thus compute the correct result of the "arithmetic right shift", i.e. the correct truncated result. The "arithmetic right shift" process is exactly the process of computing the first sub-secret result and the second sub-secret result from the initial sub-secret result and determining the first sub-secret result or the second sub-secret result as the final sub-secret result according to the positive or negative case of the initial sub-secret result.
For example, the flow of the "arithmetic right shift" operator is as follows:
The input of the arithmetic right shift operator is $\langle x\rangle$ and $p$, and the output is $\langle y\rangle$; the computational expression is noted $\langle y\rangle = \mathrm{ShiftR_{arith}}(\langle x\rangle, p)$. Let the binary representation of $x$ be $x_{63}x_{62}\cdots x_{1}x_{0}$; then the binary representation of $y$ is $p$ copies of the sign bit $x_{63}$ followed by $x_{63}x_{62}\cdots x_{p+1}x_{p}$. The operator flow is as follows:
1. The party $P_0$ owns $\langle x\rangle_0$, and the party $P_1$ owns $\langle x\rangle_1$;
2. Compute the "bitwise inverting" value of $\langle x\rangle$ to obtain $\langle a\rangle$; the computational expression is $\langle a\rangle = \mathrm{Not}(\langle x\rangle)$;
3. Compute the "logical right shift" result of $\langle a\rangle$, denoted $\langle b\rangle$; the computational expression is $\langle b\rangle = \mathrm{ShiftR_{logic}}(\langle a\rangle, p)$;
4. Compute the "logical right shift" result of $\langle x\rangle$, denoted $\langle c\rangle$ (corresponding to the second sub-secret result); the computational expression is $\langle c\rangle = \mathrm{ShiftR_{logic}}(\langle x\rangle, p)$;
5. Compute the "bitwise inverting" value of $\langle b\rangle$, denoted $\langle d\rangle$ (corresponding to the first sub-secret result); the computational expression is $\langle d\rangle = \mathrm{Not}(\langle b\rangle)$;
6. Execute the DReLU operator to compute $\langle\beta\rangle$; the computational expression is $\langle\beta\rangle = \mathrm{DReLU}(\langle x\rangle)$;
7. Execute the "secret pick" operator to select, according to $\beta$, the final result $\langle y\rangle$ from $\langle d\rangle$ and $\langle c\rangle$; the computational expression is $\langle y\rangle = \mathrm{Select}(\langle\beta\rangle, \langle d\rangle, \langle c\rangle)$, so that $y = c$ when $x$ is non-negative and $y = d$ when $x$ is negative (the combined identity is checked in the sketch below).
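Putting the seven steps together, the arithmetic identity behind the operator can be checked on plain integers (a sketch under the same assumptions as the previous one: DReLU and the "secret pick" are evaluated on clear values, so only the correctness of the combination is demonstrated, not the secure execution):

```python
N = 64
MOD = 1 << N

def to_signed(u: int) -> int:
    return u - MOD if u >= MOD // 2 else u

def bitwise_not(u: int) -> int:
    return (MOD - 1 - u) % MOD              # steps 2 and 5: "bitwise inverting" as a subtraction

def arithmetic_shift_right(u: int, p: int) -> int:
    a = bitwise_not(u)                      # step 2: ~x
    b = a >> p                              # step 3: logical right shift of ~x
    c = u >> p                              # step 4: logical right shift of x  (second sub-secret result)
    d = bitwise_not(b)                      # step 5: ~((~x) >> p)              (first sub-secret result)
    beta = 1 if to_signed(u) >= 0 else 0    # step 6: DReLU(x)
    return c if beta == 1 else d            # step 7: "secret pick" between the two candidates

for v in (-6, -1, -2**62 - 7, 0, 5, 2**62 + 3):
    for p in (1, 8, 20):
        got = to_signed(arithmetic_shift_right(v % MOD, p))
        assert got == (v >> p)              # Python's >> on signed ints is an arithmetic shift
```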
In the specific implementation, in the data demand side, the corresponding restoration operation is performed, according to the decomposition mode, on each received final sub-secret result together with the final sub-secret result obtained by the data demand side itself, so as to obtain the true value of the result of the multiplication calculation of the feature data and the model parameters. The decomposition mode is the manner in which the feature data is decomposed into the first preset number of pieces of first sub-secret data and the model parameters are decomposed into the first preset number of pieces of second sub-secret data, namely addition and subtraction; correspondingly, when the feature data and the model parameters are decomposed into sums of several pieces of sub-secret data, the restoration operation sums the final sub-secret results. The specific process of controlling the model training after the true value is obtained may be implemented with reference to the process of adjusting model parameters during model training in the prior art.
In this embodiment, a computer device is provided, as shown in fig. 2, including a memory 201, a processor 202, and a computer program stored in the memory and capable of running on the processor, where the processor implements any of the above model training methods based on secret sharing multiparty security computation when executing the computer program.
In particular, the computer device may be a computer terminal, a server or similar computing means.
In this embodiment, a computer-readable storage medium is provided, in which a computer program for executing the above-described arbitrary secret sharing-based multiparty security calculation model training method is stored.
In particular, computer-readable storage media, including both permanent and non-permanent, removable and non-removable media, may be used to implement information storage by any method or technology. The information may be computer readable instructions, data structures, modules of a program, or other data. Examples of computer-readable storage media include, but are not limited to, phase-change memory (PRAM), static Random Access Memory (SRAM), dynamic Random Access Memory (DRAM), other types of Random Access Memory (RAM), read Only Memory (ROM), electrically Erasable Programmable Read Only Memory (EEPROM), flash memory or other memory technology, compact disc read only memory (CD-ROM), digital Versatile Disks (DVD) or other optical storage, magnetic cassettes, magnetic tape disk storage or other magnetic storage devices, or any other non-transmission medium, which can be used to store information that can be accessed by a computing device. Computer-readable storage media, as defined herein, does not include transitory computer-readable media (transmission media), such as modulated data signals and carrier waves.
Based on the same inventive concept, the embodiment of the invention also provides a model training device for multiparty safety calculation based on secret sharing, as described in the following embodiment. Because the principle of solving the problem of the model training device based on the secret sharing multiparty safety calculation is similar to that of the model training method based on the secret sharing multiparty safety calculation, the implementation of the model training device based on the secret sharing multiparty safety calculation can be referred to the implementation of the model training method based on the secret sharing multiparty safety calculation, and repeated parts are omitted. As used below, the term "unit" or "module" may be a combination of software and/or hardware that implements the intended function. While the means described in the following embodiments are preferably implemented in software, implementation in hardware, or a combination of software and hardware, is also possible and contemplated.
FIG. 3 is a block diagram of a model training apparatus for secret sharing based multi-party secure computing, the apparatus operating on a multi-party secure computing system, the system including at least two computing parties, one of the at least two computing parties being a data-requiring party, as shown in FIG. 3, according to an embodiment of the invention, the apparatus comprising:
A data calculation module 301, each data calculation module being deployed in a computing party, each data calculation module being configured to receive first sub-secret data and second sub-secret data in the process of performing the multiplication calculation of feature data and model parameters, and to perform a multiplication operator on the first sub-secret data and the second sub-secret data to obtain an initial sub-secret result of the multiplication calculation, wherein the feature data is decomposed into a first preset number of pieces of first sub-secret data, the model parameters are decomposed into a first preset number of pieces of second sub-secret data, the value of the first preset number is the same as the number of computing parties, and the first sub-secret data and the second sub-secret data are in $N$-bit unsigned binary form, $N$ being a positive integer that is greater than or equal to 64 and a power of 2;
Each data calculation module 301 is further configured to perform a negation operation and a logical right shift operation on the binary bit of the initial sub-secret result obtained by the data calculation module, to obtain a first sub-secret result; performing logic right shift operation on binary bits of the initial sub-secret result obtained by the computing party to obtain a second sub-secret result; determining the first sub-secret result or the second sub-secret result as a final sub-secret result of the computing party according to the positive and negative conditions of the initial sub-secret result obtained by the computing party;
The data calculation module 301 in each of the at least two calculation parties except the data requiring party is further configured to send a final sub-secret result obtained by itself to the data requiring party;
the model training module 302 is deployed in the data demand party, and is configured to perform corresponding restoration operation on each received final sub-secret result and the final sub-secret result obtained by the data calculation module in the data demand party according to a decomposition mode, obtain a true value of the calculated result of multiplication calculation of the feature data and the model parameters, and control a training process of the model according to the true value.
In one embodiment, the data calculation module 301 includes:
the first computing unit is used for respectively carrying out inverting operation on each binary bit of the initial sub-secret result obtained by the computing party to obtain a first intermediate sub-secret result; performing logic right shift operation of a second preset number of binary bits on the first intermediate sub-secret result to obtain a second intermediate sub-secret result; and respectively performing inverting operation on each binary bit of the second intermediate sub-secret result to obtain the first sub-secret result.
In one embodiment, the first computing unit is configured to perform a subtraction operation on each binary bit of the initial sub-secret result obtained by the computing party by executing a subtraction operator, so as to obtain the first intermediate sub-secret result.
In one embodiment, the first computing unit is configured to perform the inverting operation on each binary bit of the initial sub-secret result obtained by the computing party by performing the following subtraction operator, to obtain the first intermediate sub-secret result:
$\langle y\rangle = \mathrm{Sub}(\langle 2^{N}-1\rangle, \langle x\rangle)$, i.e. $y = (2^{N} - 1) - x$,
wherein $\langle x\rangle$ represents the initial sub-secret result; $\langle 2^{N}-1\rangle$ is a sharing of the constant $2^{N}-1$; and $\langle y\rangle$ represents the first intermediate sub-secret result.
In one embodiment, the data calculation module 301 includes:
and the second calculation unit is used for performing logical right shift operation on binary bits of the initial sub-secret result obtained by the calculation party by executing a subtraction operator, a DReLU operator, a multiplication operator and an addition operator to obtain the second sub-secret result.
In one embodiment, the second computing unit is configured to initialize $\langle r\rangle = \langle x\rangle$ and $\langle q\rangle = \langle 0\rangle$, wherein $r$ is the remaining dividend in the vertical division, $x$ is the initial sub-secret result, and $q$ is the output result; $r$, $q$ and $x$ are all in $N$-bit unsigned binary representation, in which bit 0 is the lowest bit and bit $N-1$ is the highest bit; and to perform the following steps in a loop, traversing the binary bits from the $(N-1-p)$-th bit down to the lowest bit, where $p$ is the number of binary bits of the logical right shift, until the loop ends after the lowest binary bit has been traversed, and to take $\langle q\rangle$ as the second sub-secret result:
supposing the bit currently traversed is the $i$-th bit, presupposing that the $i$-th bit of $q$ is 1, and performing the subtraction operator to compute $\langle r\rangle$ minus $\langle 2^{\,i+p}\rangle$ to obtain a third intermediate sub-secret result;
executing the DReLU operator to judge whether the third intermediate sub-secret result is positive or negative; if the output result of the DReLU operator is 1, which means the third intermediate sub-secret result is positive, determining that the $i$-th bit of $q$ is 1; if the output result of the DReLU operator is 0, which means the third intermediate sub-secret result is negative, determining that the $i$-th bit of $q$ is 0;
executing the multiplication operator to multiply the output result of the DReLU operator by $2^{\,i+p}$, and taking the product as the reduction value;
executing the multiplication operator to multiply the output result of the DReLU operator by $2^{\,i}$, and taking the product as the influence value;
performing the subtraction operator to subtract the reduction value from $\langle r\rangle$, and assigning the resulting difference to $\langle r\rangle$;
performing the addition operator to add the influence value to $\langle q\rangle$, and assigning the resulting sum to $\langle q\rangle$, wherein, when the output of the DReLU operator for the $i$-th bit is 1, the value of the $i$-th bit of $q$ is 1, and when the output is 0, the value of the $i$-th bit of $q$ is 0.
In one embodiment, the data calculation module 301 includes:
a selecting unit, configured to determine the second sub-secret result as a final sub-secret result of the computing party when the initial sub-secret result obtained by the computing party is positive; and when the initial sub-secret result obtained by the computing party is negative, determining the first sub-secret result as a final sub-secret result of the computing party.
The embodiment of the invention realizes the following technical effects. During model training, after the multiplication operator of the secret-shared characteristic data and model parameters has been executed, each computing party obtains an initial sub-secret result of the multiplication calculation. Each computing party then performs a bitwise inverting operation and a logical right shift operation on the binary bits of its initial sub-secret result to obtain a first sub-secret result, performs a logical right shift operation on the binary bits of its initial sub-secret result to obtain a second sub-secret result, and determines the first sub-secret result or the second sub-secret result as its final sub-secret result according to whether its initial sub-secret result is positive or negative; this realizes an arithmetic right shift of the initial sub-secret result and thus an accurate truncated result of the initial sub-secret result, namely the final sub-secret result. Finally, the data demand side performs the corresponding restoration operation, according to the decomposition mode, on each received final sub-secret result together with its own final sub-secret result, obtains the true value of the result of the multiplication calculation of the characteristic data and the model parameters, and controls the training process of the model according to this true value. Because each final sub-secret result is an accurate truncated result of the corresponding initial sub-secret result, the restoration operation based on the final sub-secret results of all computing parties yields an accurate true value of the multiplication result, which helps to improve the accuracy of the multiplication calculation of the characteristic data and the model parameters; at the same time, the multiplication process still guarantees the security of the data. Controlling the training process of the model based on the true value of an accurate multiplication result improves the accuracy and effectiveness of model training and thus the precision of the model.
It will be apparent to those skilled in the art that the modules or steps of the embodiments of the invention described above may be implemented with a general-purpose computing device; they may be concentrated on a single computing device or distributed across a network formed by multiple computing devices. Optionally, they may be implemented with program code executable by a computing device, so that they can be stored in a storage device and executed by the computing device; in some cases, the steps shown or described may be performed in an order different from that shown or described. Alternatively, they may be fabricated into individual integrated circuit modules, or multiple modules or steps among them may be fabricated into a single integrated circuit module. Thus, embodiments of the invention are not limited to any specific combination of hardware and software.
The above description is only of the preferred embodiments of the present invention and is not intended to limit the present invention, and various modifications and variations can be made to the embodiments of the present invention by those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (7)

1. A model training method for multiparty security computing based on secret sharing, the method operating on a multiparty security computing system, the system comprising at least two computing parties, one of the at least two computing parties being a data-requiring party, the method comprising:
in the process of performing multiplication calculation of feature data and model parameters, in each computing party, receiving first sub-secret data and second sub-secret data, and executing a multiplication operator on the first sub-secret data and the second sub-secret data to obtain an initial sub-secret result of the multiplication calculation, wherein the feature data is decomposed into a first preset number of pieces of first sub-secret data, the model parameters are decomposed into the first preset number of pieces of second sub-secret data, the value of the first preset number is the same as the number of computing parties, and the first sub-secret data and the second sub-secret data are both in k-bit unsigned binary form, k being a positive integer that is greater than or equal to 64 and is a power of 2;
in each computing party, performing an inverting operation and a logical right shift operation on the binary bits of the initial sub-secret result obtained by the computing party to obtain a first sub-secret result; performing a logical right shift operation on the binary bits of the initial sub-secret result obtained by the computing party to obtain a second sub-secret result; and determining the first sub-secret result or the second sub-secret result as the final sub-secret result of the computing party according to whether the initial sub-secret result obtained by the computing party is positive or negative;
transmitting the final sub-secret result obtained by each computing party of the at least two computing parties other than the data-requiring party to the data-requiring party;
in the data-requiring party, performing a corresponding restoration operation on each received final sub-secret result together with the final sub-secret result of the data-requiring party according to the decomposition mode, to obtain a true value of the result of the multiplication calculation of the feature data and the model parameters, and controlling a training process of the model according to the true value;
performing a negation operation and a logical right shift operation on binary bits of an initial sub-secret result obtained by the computing party to obtain a first sub-secret result, including:
respectively performing inverting operation on each binary bit of the initial sub-secret result obtained by the computing party to obtain a first intermediate sub-secret result;
performing logic right shift operation of a second preset number of binary bits on the first intermediate sub-secret result to obtain a second intermediate sub-secret result;
respectively performing inverting operation on each binary bit of the second intermediate sub-secret result to obtain the first sub-secret result;
performing logic right shift operation on binary bits of the initial sub-secret result obtained by the computing party to obtain a second sub-secret result, including:
Performing logical right shift operation on binary bits of the initial sub-secret result obtained by the computing party by executing a subtraction operator, a DReLU operator, a multiplication operator and an addition operator to obtain the second sub-secret result;
determining the first sub-secret result or the second sub-secret result as the final sub-secret result of the computing party according to whether the initial sub-secret result obtained by the computing party is positive or negative comprises:
when the initial sub-secret result obtained by the computing party is positive, determining the second sub-secret result as a final sub-secret result of the computing party; and when the initial sub-secret result obtained by the computing party is negative, determining the first sub-secret result as a final sub-secret result of the computing party.
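Claim 1 does not fix the decomposition mode itself. The sketch below assumes additive secret sharing over the ring of k-bit integers, one common choice consistent with the claim, purely to illustrate how data could be decomposed into a first preset number of sub-secrets and how the data-requiring party could restore the true value; the function names split and restore are hypothetical. Multiplying two shared values additionally requires an interactive multiplication operator (for example one based on Beaver triples), which is outside this sketch.

import secrets

K = 64
MOD = 1 << K

def split(value: int, parties: int) -> list:
    """Decompose a value into `parties` sub-secrets that sum to it modulo 2^K."""
    shares = [secrets.randbelow(MOD) for _ in range(parties - 1)]
    shares.append((value - sum(shares)) % MOD)
    return shares

def restore(shares: list) -> int:
    """Restoration operation matching the additive decomposition above."""
    return sum(shares) % MOD

feature_value = 123
assert restore(split(feature_value, 3)) == feature_value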
2. The model training method for multiparty security computation based on secret sharing according to claim 1, wherein performing a negation operation on each binary bit of the initial sub-secret result obtained by the computing party to obtain a first intermediate sub-secret result comprises:
and performing subtraction operator to respectively perform inverting operation on each binary bit of the initial sub-secret result obtained by the computing party to obtain the first intermediate sub-secret result.
3. The model training method for multiparty security computation based on secret sharing according to claim 2, wherein executing the subtraction operator to respectively perform the inverting operation on each binary bit of the initial sub-secret result obtained by the computing party to obtain the first intermediate sub-secret result comprises:
respectively performing the inverting operation on each binary bit of the initial sub-secret result obtained by the computing party by executing the following subtraction operator, to obtain the first intermediate sub-secret result:
y = c − x, wherein x represents the initial sub-secret result obtained by the computing party, c is a sub-secret of 2^k − 1, and y represents the first intermediate sub-secret result.
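The operator formula in claim 3 is rendered as an image on the patent page, so the reconstruction above rests on the standard identity that inverting every bit of a k-bit unsigned value equals subtracting it from 2^k − 1. The short editorial Python check below confirms that identity on a plaintext value; the names are illustrative.

K = 64
MASK = (1 << K) - 1   # 2^K - 1

x = 0xDEADBEEF
# Inverting every bit of a K-bit value equals subtracting it from 2^K - 1.
assert (~x & MASK) == MASK - x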
4. The model training method for secret sharing-based multiparty security computation of claim 1, wherein performing a logical right shift operation on binary bits of an initial sub-secret result obtained by the computing party by performing a subtraction operator, a DReLU operator, a multiplication operator, and an addition operator to obtain the second sub-secret result comprises:
initializing R = x and Q = 0, wherein R is the remaining dividend in the long division, x is the initial sub-secret result, and Q is the output result, R, x and Q all being in k-bit unsigned binary representation in which bit 0 is the lowest bit and bit k−1 is the most significant bit; traversing the binary bits from the (k−1−d)-th bit down to the lowest bit, d being the number of binary bits to be logically right shifted, and, after the lowest binary bit has been traversed and the loop is completed, taking Q as the second sub-secret result:
letting the currently traversed bit be the j-th binary bit, presetting the value of the j-th bit in Q to 1, and executing the subtraction operator to calculate R minus 2^(j+d) to obtain a third intermediate sub-secret result;
executing the DReLU operator to judge whether the third intermediate sub-secret result is positive or negative: if the output result of the DReLU operator is 1, meaning that the third intermediate sub-secret result is positive, determining that the value of the j-th bit in Q is 1; if the output result of the DReLU operator is 0, meaning that the third intermediate sub-secret result is negative, determining that the value of the j-th bit in Q is 0;
executing the multiplication operator to calculate the product of the output result of the DReLU operator and 2^(j+d), and taking the product as a reduced value;
executing the multiplication operator to calculate the product of the output result of the DReLU operator and 2^j, and taking the product as an influence value;
executing the subtraction operator to calculate R minus the reduced value, and assigning the resulting difference to R;
executing the addition operator to calculate Q plus the influence value, and assigning the resulting sum to Q, wherein, when the j-th bit of the influence value is 1, the j-th bit of Q is 1, and when the j-th bit of the influence value is 0, the j-th bit of Q is 0.
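The per-step formulas of claim 4 are likewise rendered as images, so the constants used above (2^(j+d) as the trial subtrahend and 2^j as the quotient weight) should be read as a reconstruction. Under that reading, the loop is a restoring long division by 2^d, as the editorial Python sketch below shows on plaintext, with an ordinary comparison standing in for the DReLU operator.

K = 64

def logical_shift_by_division(x: int, d: int) -> int:
    """Long-division style logical right shift of a K-bit value by d bits."""
    r, q = x, 0                          # r: remaining dividend, q: output
    for j in range(K - 1 - d, -1, -1):   # traverse from the high bit down to bit 0
        trial = r - (1 << (j + d))       # subtraction operator: trial subtraction
        keep = 1 if trial >= 0 else 0    # stands in for the DReLU output
        r -= keep * (1 << (j + d))       # the "reduced value"
        q += keep * (1 << j)             # the "influence value"
    return q

x = (1 << 63) + 12345
assert logical_shift_by_division(x, 7) == x >> 7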
5. A model training apparatus for multiparty secure computing based on secret sharing, the apparatus operating on a multiparty secure computing system, the system comprising at least two computing parties, one of the at least two computing parties being a data requiring party, the apparatus comprising:
a data calculation module, each data calculation module being deployed in a computing party and configured to, in the process of performing multiplication calculation of feature data and model parameters, receive first sub-secret data and second sub-secret data and execute a multiplication operator on the first sub-secret data and the second sub-secret data to obtain an initial sub-secret result of the multiplication calculation, wherein the feature data is decomposed into a first preset number of pieces of first sub-secret data, the model parameters are decomposed into the first preset number of pieces of second sub-secret data, the value of the first preset number is the same as the number of computing parties, and the first sub-secret data and the second sub-secret data are both in k-bit unsigned binary form, k being a positive integer that is greater than or equal to 64 and is a power of 2;
each data calculation module is further configured to perform an inverting operation and a logical right shift operation on the binary bits of the initial sub-secret result obtained by the data calculation module to obtain a first sub-secret result; to perform a logical right shift operation on the binary bits of the initial sub-secret result obtained by the data calculation module to obtain a second sub-secret result; and to determine the first sub-secret result or the second sub-secret result as the final sub-secret result of the computing party according to whether the initial sub-secret result obtained by the computing party is positive or negative;
the data calculation modules in the computing parties other than the data-requiring party among the at least two computing parties are further configured to send the final sub-secret results they obtain to the data-requiring party;
a model training module, deployed in the data-requiring party and configured to perform a corresponding restoration operation on each received final sub-secret result together with the final sub-secret result obtained by the data calculation module of the data-requiring party according to the decomposition mode, to obtain a true value of the result of the multiplication calculation of the feature data and the model parameters, and to control a training process of the model according to the true value;
The data calculation module comprises:
the first computing unit is used for respectively carrying out inverting operation on each binary bit of the initial sub-secret result obtained by the computing party to obtain a first intermediate sub-secret result; performing logic right shift operation of a second preset number of binary bits on the first intermediate sub-secret result to obtain a second intermediate sub-secret result; respectively performing inverting operation on each binary bit of the second intermediate sub-secret result to obtain the first sub-secret result;
the data calculation module comprises:
the second computing unit is used for performing logical right shift operation on binary bits of the initial sub-secret result obtained by the computing party by executing a subtraction operator, a DReLU operator, a multiplication operator and an addition operator to obtain a second sub-secret result;
the data calculation module comprises:
a selecting unit, configured to determine the second sub-secret result as the final sub-secret result of the computing party when the initial sub-secret result obtained by the computing party is positive, and to determine the first sub-secret result as the final sub-secret result of the computing party when the initial sub-secret result obtained by the computing party is negative.
6. A computer device, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the computer program, implements the model training method for multiparty security computing based on secret sharing according to any one of claims 1 to 4.
7. A computer-readable storage medium, characterized in that it stores a computer program for executing the model training method for multiparty security computing based on secret sharing according to any one of claims 1 to 4.
CN202311027867.1A 2023-08-16 2023-08-16 Model training method and device for multiparty security calculation based on secret sharing Active CN116738494B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311027867.1A CN116738494B (en) 2023-08-16 2023-08-16 Model training method and device for multiparty security calculation based on secret sharing

Publications (2)

Publication Number Publication Date
CN116738494A CN116738494A (en) 2023-09-12
CN116738494B (en) 2023-11-14

Family

ID=87903048

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311027867.1A Active CN116738494B (en) 2023-08-16 2023-08-16 Model training method and device for multiparty security calculation based on secret sharing

Country Status (1)

Country Link
CN (1) CN116738494B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112182649A (en) * 2020-09-22 2021-01-05 上海海洋大学 Data privacy protection system based on safe two-party calculation linear regression algorithm
CN112464287A (en) * 2020-12-12 2021-03-09 同济大学 Multi-party XGboost safety prediction model training method based on secret sharing and federal learning
WO2022168257A1 (en) * 2021-02-05 2022-08-11 日本電気株式会社 Federated learning system, federated learning device, federated learning method, and federated learning program
CN115632761A (en) * 2022-08-29 2023-01-20 哈尔滨工业大学(深圳) Multi-user distributed privacy protection regression method and device based on secret sharing

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant