CN113807534A - Model parameter training method and device of federated learning model and electronic device

Info

Publication number: CN113807534A (granted as CN113807534B)
Application number: CN202110251790.0A
Authority: CN (China)
Other languages: Chinese (zh)
Prior art keywords: gradient, information, generating, parameter, decryption operation
Inventors: Chen Zhong (陈忠), Chen Xiaolin (陈晓霖), Feng Zejin (冯泽瑾), Wang Hu (王虎), Huang Zhixiang (黄志翔), Peng Nanbo (彭南博)
Assignee (current and original): Jingdong Technology Holding Co Ltd
Legal status: Active (application granted)

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 20/00: Machine learning
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 21/00: Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F 21/60: Protecting data
    • G06F 21/602: Providing cryptographic facilities or services
    • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT]
    • Y02D 10/00: Energy efficient computing, e.g. low power processors, power management or thermal management


Abstract

The application provides a model parameter training method and device for a federated learning model, and an electronic device, wherein the method comprises the following steps: performing sample alignment with a data provider server; acquiring a public parameter; calculating gradient information of a current sample, and sending the gradient information to the data provider server; receiving intermediate parameters and gradient return information provided by the data provider server; generating a target split point number according to the gradient return information, and generating a ciphertext based on the service key and the intermediate parameter or the public parameter; generating a feature confusion dictionary based on the target split point number and the confusion split point number, and sending the feature confusion dictionary and the ciphertext to the data provider server; and receiving the first decryption operation value set and the second decryption operation value set sent by the data provider server, and performing node splitting according to the service key, the first decryption operation value set, and the second decryption operation value set.

Description

Model parameter training method and device of federated learning model and electronic device
Technical Field
The application relates to the technical field of data processing, and in particular to a model parameter training method and device for a federated learning model, and an electronic device.
Background
With the development of machine learning, more and more machine learning techniques are applied across industries. The quantity and quality of data often determine the upper limit of a machine learning model's effectiveness. However, as laws and regulations become more stringent and people pay more attention to data security and privacy protection, data islands have formed. In this setting, federated learning has emerged: it enables joint training on the premise that the participants do not share data, thereby solving the data island problem.
In the related art, federated learning is an encrypted distributed machine learning technology that fuses information encryption, distributed computation, machine learning, and other techniques. According to the characteristics of the data held by the participants, federated learning can be classified into horizontal federated learning, vertical federated learning, and federated transfer learning. In risk-control scenarios, vertical federated learning is the more widely applied.
Disclosure of Invention
The embodiment of the first aspect of the application provides a model parameter training method for a federated learning model, which can effectively prevent model extraction attacks and model inversion attacks, protect the security of the business side's model and training data, prevent information leakage at the data provider, and protect the data provider's data security.
The embodiment of the second aspect of the application provides a model parameter training method for a federated learning model.
The embodiment of the third aspect of the application provides a model parameter training device of a federated learning model.
The embodiment of the fourth aspect of the application provides a model parameter training device for a federated learning model.
The embodiment of the fifth aspect of the present application provides an electronic device.
A sixth aspect of the present application provides a computer-readable storage medium.
An embodiment of a first aspect of the present application provides a method for training model parameters of a federated learning model, including:
sample alignment with a data provider server;
acquiring a public parameter;
calculating gradient information of a current sample, and sending the gradient information to the data provider server;
receiving intermediate parameters and gradient return information provided by the data provider server, wherein the intermediate parameters are first key powers of the public parameters;
generating a target split point number according to the gradient return information, and generating a ciphertext based on a service key and the intermediate parameter or the public parameter;
generating a feature obfuscation dictionary based on the target split point number and the obfuscated split point number, and sending the feature obfuscation dictionary and the ciphertext to the data provider server; and
receiving a first decryption operation value set and a second decryption operation value set sent by the data provider server, and performing node splitting according to the service key, the first decryption operation value set, and the second decryption operation value set.
According to the model parameter training method of the federated learning model in the embodiment of the application, sample alignment is first performed with the data provider server and the public parameter is acquired; the gradient information of the current sample is calculated and sent to the data provider server; the intermediate parameter and the gradient return information provided by the data provider server are then received; the target split point number is generated according to the gradient return information, and the ciphertext is generated based on the service key and the intermediate parameter or the public parameter; the feature confusion dictionary is then generated based on the target split point number and the confusion split point number, and the feature confusion dictionary and the ciphertext are sent to the data provider server; finally, the first decryption operation value set and the second decryption operation value set sent by the data provider server are received, and node splitting is performed according to the service key and the two value sets. In this way, model extraction attacks and model inversion attacks can be effectively prevented, the security of the business side's model and training data is protected, information leakage at the data provider is prevented, and the data provider's data security is protected, thereby protecting the privacy and interests of both the business side and the data provider.
In addition, the model parameter training method of the federal learning model according to the above embodiment of the present application may further have the following additional technical features:
in an embodiment of the present application, the calculating gradient information of the current sample includes:
generating a first-order gradient value and a second-order gradient value of the current sample;
homomorphically encrypting the first-order gradient value and the second-order gradient value to generate the gradient information.
In an embodiment of the present application, the gradient return information includes a plurality of gradient return information, and each gradient return information corresponds to a corresponding number, where the generating a target split point number according to the gradient return information includes:
respectively generating a plurality of corresponding information gains according to the gradient return information;
and selecting the maximum information gain from the plurality of information gains, and taking the number corresponding to the maximum information gain as the target split point number.
In an embodiment of the application, the generating a ciphertext based on the service key and the intermediate parameter or the common parameter includes:
acquiring a service intermediate value, wherein the service intermediate value is 1 or 0;
when the service intermediate value is 0, generating the ciphertext based on the service key and the public parameter;
and when the service intermediate value is 1, generating the ciphertext based on the service key and the intermediate parameter.
In an embodiment of the application, the generating a feature confusion dictionary based on the target split point number and the confusion split point number includes:
and generating a feature confusion dictionary according to the service intermediate value, the target split point number, and the confusion split point number, wherein the confusion split point number is a number selected from the numbers corresponding to the gradient return information.
In an embodiment of the present application, the performing node splitting according to the service key, the first decryption operation value set, and the second decryption operation value set includes:
calculating a first exclusive-or value of the service key power of a first element in the first decryption operation value set and a second element in the first decryption operation value set;
calculating a second exclusive-or value of the service key power of a first element in the second decryption operation value set and a second element in the second decryption operation value set;
generating split space information according to the first exclusive-or value and the second exclusive-or value;
and splitting nodes according to the current sample and the splitting space information.
An embodiment of a second aspect of the present application provides a method for training model parameters of a federated learning model, including:
performing sample alignment with a service side server;
acquiring a public parameter;
receiving gradient information of a currently trained sample sent by the service side server, and acquiring gradient return information according to the gradient information;
sending an intermediate parameter and gradient return information to the service side server, wherein the intermediate parameter is a first secret key power of the public parameter;
receiving a ciphertext, generated based on the service key and the intermediate parameter or the public parameter, and a feature confusion dictionary, which are sent by the service side server; and
and generating a first decryption operation value set and a second decryption operation value set according to the intermediate parameter, the public parameter, the ciphertext and the feature confusion dictionary, and sending the first decryption operation value set and the second decryption operation value set to the service side server.
According to the model parameter training method of the federated learning model in the embodiment of the application, sample alignment is first performed with the business side server and the public parameter is acquired; the gradient information of the currently trained sample sent by the business side server is received, and the gradient return information is obtained according to the gradient information; the intermediate parameter and the gradient return information are then sent to the business side server; the ciphertext generated based on the service key and the intermediate parameter or the public parameter, together with the feature confusion dictionary, is received from the business side server; finally, the first decryption operation value set and the second decryption operation value set are generated according to the intermediate parameter, the public parameter, the ciphertext, and the feature confusion dictionary, and are sent to the business side server. In this way, model extraction attacks and model inversion attacks can be effectively prevented, the security of the business side's model and training data is protected, information leakage at the data provider is prevented, and the data provider's data security is protected, thereby protecting the privacy and interests of both the business side and the data provider.
In addition, the model parameter training method of the federal learning model according to the above embodiment of the present application may further have the following additional technical features:
in an embodiment of the present application, the obtaining gradient return information according to the gradient information includes:
splitting the sample space according to the splitting threshold value corresponding to each feature to obtain a splitting space on the designated side;
acquiring gradient summation information of the splitting space of the designated side corresponding to each feature according to the gradient information, and numbering the gradient summation information;
and generating the gradient return information by using the gradient summation information and the serial number of the gradient summation information.
In an embodiment of the present application, after the numbering the gradient summation information, the method further includes:
and generating a mapping relationship among the number, the feature corresponding to the number, the splitting threshold, and the gradient summation information corresponding to the number.
In one embodiment of the present application, the generating a first set of decryption operation values and a second set of decryption operation values from the intermediate parameter, the common parameter, the ciphertext, and the feature obfuscation dictionary includes:
generating the first decryption operation value set and the second decryption operation value set according to the intermediate parameter, the public parameter, the feature obfuscation dictionary, the ciphertext, a second key, and a third key; wherein:
the feature confusion dictionary is generated based on a service intermediate value, a target split point number, and a confusion split point number, wherein the confusion split point number is a number selected from the numbers corresponding to the gradient return information, and the target split point number is generated according to the gradient return information.
In one embodiment of the present application, the generating the first and second sets of decryption operation values according to the intermediate parameter, the common parameter, the feature obfuscation dictionary, the ciphertext, a second key, and a third key includes:
respectively acquiring a first split space corresponding to the target split point number and a second split space corresponding to the confusion split point number;
acquiring first coding information of the first split space and second coding information of the second split space;
generating a first element according to the second key and the public parameter, and generating a second element according to the ciphertext, the second key and the first coding information;
generating a third element according to the third key and the public parameter, and generating a fourth element according to the intermediate parameter, the ciphertext, the third key and the second encoding information;
generating the first set of decryption operation values from the first element and the second element;
generating the second set of decryption operation values from the third element and the fourth element.
An embodiment of a third aspect of the present application provides a device for training model parameters of a federated learning model, including:
the alignment module is used for aligning samples with the data provider server;
the acquisition module is used for acquiring the public parameters;
the calculation module is used for calculating gradient information of the current sample and sending the gradient information to the data provider server;
a receiving module, configured to receive an intermediate parameter and gradient return information provided by the data provider server, where the intermediate parameter is a first key power of the public parameter;
the first generation module is used for generating a target split point number according to the gradient return information and generating a ciphertext based on a service key and the intermediate parameter or the public parameter;
the second generation module is used for generating a feature confusion dictionary based on the target split point number and the confusion split point number and sending the feature confusion dictionary and the ciphertext to the data provider server; and
and the node splitting module is used for receiving the first decryption operation value set and the second decryption operation value set sent by the data provider server and splitting nodes according to the service key, the first decryption operation value set and the second decryption operation value set.
In the model parameter training device of the federated learning model in the embodiment of the application, the alignment module first performs sample alignment with the data provider server, and the acquisition module acquires the public parameter; the calculation module calculates the gradient information of the current sample and sends it to the data provider server; the receiving module receives the intermediate parameter and the gradient return information provided by the data provider server; the first generation module generates the target split point number according to the gradient return information and generates the ciphertext based on the service key and the intermediate parameter or the public parameter; the second generation module generates the feature confusion dictionary based on the target split point number and the confusion split point number and sends the feature confusion dictionary and the ciphertext to the data provider server; finally, the node splitting module receives the first decryption operation value set and the second decryption operation value set sent by the data provider server and performs node splitting according to the service key and the two value sets. In this way, model extraction attacks and model inversion attacks can be effectively prevented, the security of the business side's model and training data is protected, information leakage at the data provider is prevented, and the data provider's data security is protected, thereby protecting the privacy and interests of both the business side and the data provider.
In addition, the model parameter training device of the federal learning model according to the above embodiment of the present application may further have the following additional technical features:
in an embodiment of the present application, the calculation module is specifically configured to:
generating a first-order gradient value and a second-order gradient value of the current sample;
homomorphically encrypting the first-order gradient value and the second-order gradient value to generate the gradient information.
In an embodiment of the application, the gradient return information includes a plurality of gradient return information, and each gradient return information corresponds to a corresponding number, where the first generating module is specifically configured to:
respectively generating a plurality of corresponding information gains according to the gradient return information;
and selecting the maximum information gain from the plurality of information gains, and taking the number corresponding to the maximum information gain as the target split point number.
In an embodiment of the application, the first generating module is specifically configured to:
acquiring a service intermediate value, wherein the service intermediate value is 1 or 0;
when the service intermediate value is 0, generating the ciphertext based on the service key and the public parameter;
and when the service intermediate value is 1, generating the ciphertext based on the service key and the intermediate parameter.
In an embodiment of the application, the second generating module is specifically configured to:
and generating a feature confusion dictionary according to the service intermediate value, the target split point number, and the confusion split point number, wherein the confusion split point number is a number selected from the numbers corresponding to the gradient return information.
In an embodiment of the present application, the node splitting module is specifically configured to:
calculating a first exclusive-or value of the service key power of a first element in the first decryption operation value set and a second element in the first decryption operation value set;
calculating a second exclusive-or value of the service key power of a first element in the second decryption operation value set and a second element in the second decryption operation value set;
generating split space information according to the first exclusive-or value and the second exclusive-or value;
and splitting nodes according to the current sample and the splitting space information.
An embodiment of a fourth aspect of the present application provides a model parameter training device for a federated learning model, including:
the alignment module is used for aligning samples with the service side server;
the first acquisition module is used for acquiring the public parameters;
the second acquisition module is used for receiving the gradient information of the currently trained sample sent by the service side server and acquiring gradient return information according to the gradient information;
a sending module, configured to send an intermediate parameter and gradient return information to the service side server, where the intermediate parameter is a first key power of the public parameter;
the receiving module is used for receiving a ciphertext, generated based on the service key and the intermediate parameter or the public parameter, and a feature confusion dictionary, which are sent by the service side server; and
and the generating module is used for generating a first decryption operation value set and a second decryption operation value set according to the intermediate parameter, the public parameter, the ciphertext and the feature confusion dictionary and sending the first decryption operation value set and the second decryption operation value set to the service side server.
In the model parameter training device of the federated learning model in the embodiment of the application, the alignment module first performs sample alignment with the business side server, and the first acquisition module acquires the public parameter; the second acquisition module receives the gradient information of the currently trained sample sent by the business side server and obtains the gradient return information according to the gradient information; the sending module sends the intermediate parameter and the gradient return information to the business side server; the receiving module receives the ciphertext, generated based on the service key and the intermediate parameter or the public parameter, and the feature confusion dictionary sent by the business side server; finally, the generation module generates the first decryption operation value set and the second decryption operation value set according to the intermediate parameter, the public parameter, the ciphertext, and the feature confusion dictionary, and sends them to the business side server. In this way, model extraction attacks and model inversion attacks can be effectively prevented, the security of the business side's model and training data is protected, information leakage at the data provider is prevented, and the data provider's data security is protected, thereby protecting the privacy and interests of both the business side and the data provider.
In addition, the model parameter training device of the federal learning model according to the above embodiment of the present application may further have the following additional technical features:
in an embodiment of the application, the second obtaining module is specifically configured to:
splitting the sample space according to the splitting threshold value corresponding to each feature to obtain a splitting space on the designated side;
acquiring gradient summation information of the splitting space of the designated side corresponding to each feature according to the gradient information, and numbering the gradient summation information;
and generating the gradient return information by using the gradient summation information and the serial number of the gradient summation information.
In an embodiment of the application, the second obtaining module is further configured to:
after the gradient summation information is numbered, generate a mapping relationship among the number, the feature corresponding to the number, the splitting threshold, and the gradient summation information corresponding to the number.
In one embodiment of the present application, the generating module includes:
a generating unit, configured to generate the first decryption operation value set and the second decryption operation value set according to the intermediate parameter, the common parameter, the feature obfuscation dictionary, the ciphertext, a second key, and a third key; wherein:
the feature confusion dictionary is generated based on a service intermediate value, a target split point number, and a confusion split point number, wherein the confusion split point number is a number selected from the numbers corresponding to the gradient return information, and the target split point number is generated according to the gradient return information.
In an embodiment of the application, the generating unit is specifically configured to:
respectively acquiring a first split space corresponding to the target split point number and a second split space corresponding to the confusion split point number;
acquiring first coding information of the first split space and second coding information of the second split space;
generating a first element according to the second key and the public parameter, and generating a second element according to the ciphertext, the second key and the first coding information;
generating a third element according to the third key and the public parameter, and generating a fourth element according to the intermediate parameter, the ciphertext, the third key and the second encoding information;
generating the first set of decryption operation values from the first element and the second element;
generating the second set of decryption operation values from the third element and the fourth element.
An embodiment of a fifth aspect of the present application provides an electronic device, including: a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the method for training model parameters of a federated learning model as described in the foregoing embodiments of the first aspect or the second aspect when executing the program.
With the electronic device of the embodiment of the application, the processor executes the computer program stored in the memory, so that model extraction attacks and model inversion attacks can be effectively prevented, the security of the business side's model and training data is protected, information leakage at the data provider is prevented, and the data provider's data security is protected, thereby protecting the privacy and interests of both the business side and the data provider.
An embodiment of a sixth aspect of the present application provides a computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, implements a method for training model parameters of a federated learning model as described in the embodiment of the first aspect or the embodiment of the second aspect.
The computer-readable storage medium of the embodiment of the application stores the computer program, which is executed by the processor, so that model extraction attacks and model inversion attacks can be effectively prevented, the security of the business side's model and training data is protected, information leakage at the data provider is prevented, and the data provider's data security is protected, thereby protecting the privacy and interests of both the business side and the data provider.
Additional aspects and advantages of the present application will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the present application.
Drawings
The foregoing and/or additional aspects and advantages of the present application will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings of which:
FIG. 1 is a schematic flow chart diagram of a method for model parameter training of a federated learning model in accordance with one embodiment of the present application;
FIG. 2 is a schematic diagram illustrating interaction between a server at a business entity and a server at a data provider according to an embodiment of the present application;
FIG. 3 is a schematic flow chart diagram illustrating a method for model parameter training of a federated learning model in accordance with another embodiment of the present application;
FIG. 4 is a block diagram of a model parameter training apparatus for a federated learning model in accordance with another embodiment of the present application;
FIG. 5 is a block diagram of a model parameter training apparatus for a federated learning model in accordance with another embodiment of the present application; and
fig. 6 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
Reference will now be made in detail to embodiments of the present application, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar elements or elements having the same or similar function throughout. The embodiments described below with reference to the drawings are exemplary and intended to be used for explaining the present application and should not be construed as limiting the present application.
The following describes a model parameter training method, device and electronic device of the federal learning model according to an embodiment of the present application with reference to the accompanying drawings.
The method for training the model parameters of the federated learning model provided in the embodiment of the present application may be executed by an electronic device, which may be a PC (Personal Computer), a tablet computer, a server, or the like; this is not limited here.
In the embodiment of the application, the electronic device can be provided with a processing component, a storage component and a driving component. Optionally, the driver component and the processing component may be integrated, the storage component may store an operating system, an application program, or other program modules, and the processing component implements the model parameter training method of the federal learning model provided in this embodiment by executing the application program stored in the storage component.
FIG. 1 is a flow chart illustrating a method for training model parameters of a federated learning model according to one embodiment of the present application.
The method for training the model parameters of the federated learning model in the embodiments of the present application may also be implemented by the model parameter training device of the federated learning model provided in the embodiments of the present application. The device may be configured in an electronic device to perform sample alignment with the data provider server, acquire the public parameter, calculate the gradient information of the current sample and send it to the data provider server, receive the intermediate parameter and the gradient return information provided by the data provider server, generate the target split point number according to the gradient return information, generate the ciphertext based on the service key and the intermediate parameter or the public parameter, generate the feature confusion dictionary based on the target split point number and the confusion split point number, send the feature confusion dictionary and the ciphertext to the data provider server, receive the first decryption operation value set and the second decryption operation value set sent by the data provider server, and perform node splitting according to the service key, the first decryption operation value set, and the second decryption operation value set, thereby protecting the privacy and interests of both the business side and the data provider.
As a possible situation, the model parameter training method of the federal learning model in the embodiment of the present application may also be executed at a server side, where the server may be a cloud server, and the model parameter training method of the federal learning model may be executed at a cloud side.
As shown in fig. 1, the method for training model parameters of the federal learning model may include:
step 101, aligning samples with a data provider server.
In the embodiment of the present application, a business party (i.e., a business party server) may perform sample alignment with a data provider server through a preset method. The preset method can be calibrated according to actual conditions.
It should be noted that the sample alignment described in this embodiment may refer to the alignment of the sample positions between the business side server and the data provider server, so as to facilitate accurate sample transmission. In addition, during sample alignment, a communication channel between the business side server and the data provider server may be established and encrypted.
Step 102, obtaining a common parameter. For example, the common parameter may be obtained through negotiation with the data provider server.
In the embodiment of the present application, a service party (i.e., a service party server) may negotiate with a data provider (i.e., a data provider server) in advance, and a common parameter g is preset in the service party server and the data provider server.
It should be noted that the common parameter g described in this embodiment may be a generator of a finite field Z_p of order p, where p may be a large prime number. Here, a finite field is a field with a finite number of elements (finite fields are widely used in cryptographic coding); the order of a finite field is the number of elements in the field; and a generator is a special element of a space from which any other element of the space can be generated.
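As an illustration only (not part of the patent text), the following Python sketch shows one way such a public parameter could be instantiated; the safe-prime construction, the sympy library, and the bit sizes are assumptions made for the demo:

```python
# Illustrative sketch: instantiate the negotiated public parameter g as a
# generator of a prime-order subgroup of Z_p^* for a large prime p. The
# safe-prime construction and sizes are assumptions, not from the patent.
import secrets
from sympy import isprime, randprime

def make_group(bits: int = 128):
    """Return (p, q, g): p = 2q + 1 a safe prime and g a generator of the
    order-q subgroup of quadratic residues in Z_p^*."""
    while True:
        q = randprime(2 ** (bits - 1), 2 ** bits)
        p = 2 * q + 1
        if isprime(p):
            break
    while True:
        h = secrets.randbelow(p - 3) + 2   # h in [2, p-2]
        g = pow(h, 2, p)                   # squaring lands in the QR subgroup
        if g != 1:
            return p, q, g

p, subgroup_order, g = make_group()        # both parties preset (p, g)
```

In a deployment the two servers would negotiate and pin these values once, before any training round begins.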
And 103, calculating gradient information of the current sample, and sending the gradient information to the data provider server.
In one embodiment of the present application, calculating the gradient information of the current sample may include generating a first-order gradient value and a second-order gradient value of the current sample, and homomorphically encrypting the first-order gradient value and the second-order gradient value to generate the gradient information.
Specifically, referring to fig. 2, the business side server may first generate a first-order gradient value g1 and a second-order gradient value h1 of the current sample (i.e., the aligned sample) according to a preset gradient generation algorithm, homomorphically encrypt g1 and h1 to generate the gradient information (<g1>, <h1>), and send the gradient information (<g1>, <h1>) to the data provider server. The preset gradient generation algorithm can be calibrated according to actual conditions.
Further, in this embodiment there may be multiple current samples, and the business side server may generate first-order and second-order gradient values (g1, h1), ..., (gn, hn) for each sample according to the preset gradient generation algorithm, homomorphically encrypt them to obtain (<g1>, <h1>), ..., (<gn>, <hn>), and send the result to the data provider server, where n is a positive integer.
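The patent fixes neither the loss function nor the homomorphic scheme. The sketch below assumes logistic-loss gradients (an XGBoost-style choice) and the python-paillier (`phe`) library as one possible additively homomorphic encryption; both are illustrative assumptions:

```python
# Hedged sketch of step 103: compute and homomorphically encrypt per-sample
# first/second-order gradients. Logistic loss and the `phe` library are
# assumptions; the patent only requires an additively homomorphic scheme.
import math
from phe import paillier

public_key, private_key = paillier.generate_paillier_keypair(n_length=1024)

def logistic_gradients(label: int, raw_score: float):
    """First- and second-order gradients of logistic loss w.r.t. the score."""
    pred = 1.0 / (1.0 + math.exp(-raw_score))
    return pred - label, pred * (1.0 - pred)

samples = [(1, 0.3), (0, -0.1), (1, 1.2)]     # (label, current raw score)
encrypted = []                                 # [(<g_i>, <h_i>), ...]
for label, score in samples:
    g1, h1 = logistic_gradients(label, score)
    encrypted.append((public_key.encrypt(g1), public_key.encrypt(h1)))
```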
In this embodiment of the application, the data provider server may receive gradient information of a currently trained sample sent by the service provider server, obtain gradient return information according to the gradient information, generate three keys, that is, a first key, a second key, and a third key, according to a preset key generation algorithm, and calculate an intermediate parameter based on the first key and a public parameter, where the intermediate parameter may be a first key power of the public parameter, and the preset key generation algorithm may be calibrated according to an actual situation.
The obtaining of the gradient return information according to the gradient information may include splitting the sample space according to the splitting threshold corresponding to each feature to obtain the splitting space on the designated side, obtaining the gradient summation information of the designated-side splitting space corresponding to each feature according to the gradient information, numbering the gradient summation information, and generating the gradient return information from the gradient summation information and its numbering. After the gradient summation information is numbered, a mapping relationship among the number, the feature corresponding to the number, the splitting threshold, and the gradient summation information corresponding to the number may also be generated.
Specifically, referring to fig. 2, after receiving the gradient information of the currently trained sample sent by the business side server, the data provider server may first generate three keys (i.e., a first key, a second key, and a third key) according to a preset key generation algorithm, which may be denoted as the first key s, the second key r0, and the third key r1. Meanwhile, the intermediate parameter g^s can be calculated based on the first key s and the public parameter g.
Then, the data provider server may split the sample space according to the splitting threshold corresponding to each feature to obtain a splitting space on the designated side, i.e., perform binning operation, and obtain gradient summation information of the splitting space on the designated side corresponding to each feature according to the gradient information, i.e., calculate gradient summation information of samples in each bin, for example, calculate gradient summation information in the splitting space on the left side (i.e., the left space) by the following formulas (1) and (2):
$\langle G_L \rangle = \sum_{i \in I_L} \langle g_i \rangle \quad (1)$

$\langle H_L \rangle = \sum_{i \in I_L} \langle h_i \rangle \quad (2)$
wherein <G_L> may be the first-order gradient summation information of the samples, <H_L> may be the second-order gradient summation information of the samples, <g_i> may be the first-order gradient information of sample i, <h_i> may be the second-order gradient information of sample i, i may be a positive integer less than or equal to n, and I_L may be the split space on the left side (i.e., the set of samples falling into the left space).
The data provider server may then number the gradient summation information and generate gradient return information using the gradient summation information and the numbering of the gradient summation information.
Further, after numbering the gradient summation information, the data provider server may generate a mapping relationship among the number, the feature corresponding to the number, the splitting threshold, and the gradient summation information corresponding to the number, and may organize it as a table. For example, the following mapping in Table A (i.e., the number / feature / splitting threshold / gradient summation information table):
Table A (number / feature / splitting threshold / gradient summation information): each row maps a number to its feature, that feature's splitting threshold, and the corresponding gradient summation information <G_L> and <H_L>.
It should be noted that the gradient return information described in this embodiment may include the number and the gradient summation information.
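Continuing the illustrative sketch, the provider-side binning, the homomorphic aggregation of equations (1) and (2), the numbering, and the Table A mapping could look as follows; the feature values and thresholds are invented demo data, and `encrypted` comes from the previous snippet:

```python
# Hedged sketch of the data provider's side: bin each feature at its
# splitting threshold, homomorphically sum the encrypted gradients of the
# left split space (equations (1)-(2)), and number the results.
from functools import reduce
from operator import add

feature_values = {"f0": [0.5, 1.7, 0.2], "f1": [3.0, 1.1, 2.2]}
thresholds = {"f0": 1.0, "f1": 2.0}

gradient_return_info = []   # [(number, <G_L>, <H_L>)] -> business side
table_a = {}                # number -> (feature, threshold, left index set)
for number, (feat, values) in enumerate(feature_values.items()):
    left = [i for i, v in enumerate(values) if v <= thresholds[feat]]
    G_L = reduce(add, (encrypted[i][0] for i in left))   # eq. (1)
    H_L = reduce(add, (encrypted[i][1] for i in left))   # eq. (2)
    gradient_return_info.append((number, G_L, H_L))
    table_a[number] = (feat, thresholds[feat], left)
```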
Finally, the data provider server may send (synchronize) the intermediate parameters and the gradient return information to the server of the business party. Wherein the data provider server may encrypt data sent (synchronized) to the server of the business party.
And 104, receiving intermediate parameters and gradient return information provided by the data provider server, wherein the intermediate parameters are the first key power of the public parameters.
And 105, generating a target split point number according to the gradient return information, and generating a ciphertext based on the service key and the intermediate parameter or the public parameter. The service key may be generated in advance by the service side server and stored in the storage space of the service side server for subsequent use.
In an embodiment of the present application, the gradient return information may be multiple, and each gradient return information corresponds to a corresponding number, wherein generating the target split point number according to the gradient return information may include generating a plurality of corresponding information gains according to the plurality of gradient return information, respectively, and selecting a maximum information gain from the plurality of information gains, and using the number corresponding to the maximum information gain as the target split point number.
Specifically, referring to fig. 2, after receiving the intermediate parameter and the gradient return information, the service server may generate a plurality of corresponding information gains according to the plurality of gradient return information, select a maximum information gain from the plurality of information gains, and use a number corresponding to the maximum information gain as a target split point number.
For example, a plurality of information gains in the split space on the above-described left side can be calculated according to the following equations (3) and (4):
$G_{Li} = \sum_{t \in I_{L,i}} g_t \quad (3)$

$H_{Li} = \sum_{t \in I_{L,i}} h_t \quad (4)$
wherein G_{Li} may be the first-order gradient information gain of the candidate split point numbered i, H_{Li} may be its second-order gradient information gain, I_{L,i} may be the left split space of that candidate split point, and i may be a positive integer less than or equal to n. These plaintext sums may be obtained by decrypting the homomorphic sums of equations (1) and (2), since the business side holds the homomorphic key.
Then, the business side server finds the maximum information gain (combining the first-order and second-order gradient information gains) among the plurality of information gains, and looks up its number q (i.e., the target split point number) in Table A.
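A minimal sketch of this selection, continuing the running example; the concrete gain score G²/(H + λ) is an assumption made purely for illustration, since the patent's own gain formula is not reproduced here:

```python
# Hedged sketch of the business side's gain selection over the returned,
# numbered gradient sums. The gain score below is an assumed placeholder.
lam = 1.0

def gain(G: float, H: float) -> float:
    return G * G / (H + lam)

decrypted = [(num, private_key.decrypt(G_L), private_key.decrypt(H_L))
             for num, G_L, H_L in gradient_return_info]
target_number = max(decrypted, key=lambda t: gain(t[1], t[2]))[0]  # number q
```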
Further, in an embodiment of the present application, generating the ciphertext based on the service key and the intermediate parameter or the common parameter may include obtaining a service intermediate value, where the service intermediate value is 1 or 0, generating the ciphertext based on the service key and the common parameter when the service intermediate value is 0, and generating the ciphertext based on the service key and the intermediate parameter when the service intermediate value is 1.
In the embodiment of the present application, the service intermediate value may be set by a person associated with the business side (e.g., to 1 or 0) and pre-stored in the storage space of the business side server, so that the business side server can obtain it directly from its own storage space when needed.
Wherein, the ciphertext can be obtained by the following formula (5):
$U_j = \begin{cases} g^{k}, & j = 0 \\ g^{s-k}, & j = 1 \end{cases} \quad (5)$
wherein U_j may be the ciphertext, j may be the service intermediate value, g may be the public parameter, k may be the service key, s may be the first key, and g^s may be the intermediate parameter. That is, when j is 0, U_j = g^k; and when j is 1, U_j = g^(s-k).
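Continuing the sketch, the request U_j of formula (5) can be formed as follows. For the demo, the provider's first key s is generated in the same place, though in the protocol the business side only ever sees g^s; j is a demo value:

```python
# Hedged sketch of equation (5): the business side's oblivious-transfer
# request. s and g_s are set up here only so the demo is self-contained.
s = secrets.randbelow(subgroup_order - 1) + 1   # provider's first key (demo)
g_s = pow(g, s, p)                              # intermediate parameter g^s

k = secrets.randbelow(subgroup_order - 1) + 1   # business (service) key
j = 0                                           # private to the business side

# U_j = g^k when j = 0, and g^(s-k) = g_s * g^(-k) when j = 1; note the
# business side never needs s itself, only g_s.
U_j = pow(g, k, p) if j == 0 else (g_s * pow(g, -k, p)) % p
```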
And 106, generating a feature confusion dictionary based on the target split point number and the confusion split point number, and sending the feature confusion dictionary and the ciphertext to the data provider server.
In an embodiment of the application, generating the feature confusion dictionary based on the target split point number and the confusion split point number may include generating the feature confusion dictionary according to the service intermediate value, the target split point number, and the confusion split point number, where the confusion split point number is a number selected from the numbers corresponding to the gradient return information.
Specifically, referring to fig. 2, after determining the target split point number (i.e., number q), the business side server may first randomly select a feature with number w, together with its threshold (i.e., splitting threshold), to participate in the confusion, for example selecting the feature and threshold numbered w from Table A, where number w and number q are not the same. The business side server may then generate the feature confusion dictionary B based on number q and number w: if the service intermediate value j is 0, the feature confusion dictionary B is {0: q; 1: w}, that is, the target split point number (the number corresponding to the maximum information gain) is q and its index in the feature confusion dictionary is 0; if the service intermediate value j is 1, the feature confusion dictionary B is {0: w; 1: q}, that is, the target split point number is q and its index in the feature confusion dictionary is 1. See formula (6) for details:
$B = \begin{cases} \{0\colon q,\; 1\colon w\}, & j = 0 \\ \{0\colon w,\; 1\colon q\}, & j = 1 \end{cases} \quad (6)$
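A minimal sketch of formula (6), continuing the example; `table_a` and `target_number` come from the earlier snippets:

```python
# Hedged sketch of equation (6): draw the confusion number w from the
# remaining Table A numbers and order (q, w) according to the private j.
import random

w = random.choice([n for n in table_a if n != target_number])
B = {0: target_number, 1: w} if j == 0 else {0: w, 1: target_number}
# B and U_j are then sent to the data provider server.
```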
Then, the business side server may send (synchronize) the ciphertext U_j and the feature confusion dictionary B to the data provider server.
In the embodiment of the application, the data provider server receives a cipher text generated based on the service key and the intermediate parameter or the common parameter and the feature obfuscation dictionary, which are sent by the service provider server, and generates a first decryption operation value set and a second decryption operation value set according to the intermediate parameter, the common parameter, the cipher text and the feature obfuscation dictionary, and sends the first decryption operation value set and the second decryption operation value set to the service provider server.
The generating of the first decryption operation value set and the second decryption operation value set according to the intermediate parameter, the common parameter, the ciphertext and the feature obfuscation dictionary may include generating the first decryption operation value set and the second decryption operation value set according to the intermediate parameter, the common parameter, the feature obfuscation dictionary, the ciphertext, the second key and the third key. The feature confusion dictionary is generated based on the service intermediate value, the target split point number and the confusion split point number, wherein the confusion split point number is a number selected from corresponding numbers corresponding to the gradient return information, and the target split point number is generated according to the gradient return information.
Further, generating the first decryption operation value set and the second decryption operation value set according to the intermediate parameter, the public parameter, the feature obfuscation dictionary, the ciphertext, the second key, and the third key may include: respectively obtaining a first split space corresponding to the target split point number and a second split space corresponding to the confusion split point number; obtaining first coding information of the first split space and second coding information of the second split space; generating a first element according to the second key and the public parameter, and a second element according to the ciphertext, the second key, and the first coding information; generating a third element according to the third key and the public parameter, and a fourth element according to the intermediate parameter, the ciphertext, the third key, and the second coding information; and finally generating the first decryption operation value set from the first element and the second element, and the second decryption operation value set from the third element and the fourth element.
It should be noted that, upon receiving the ciphertext U_j and the feature confusion dictionary B sent by the business side server, the data provider server cannot break k even though it knows the public parameter g and g^k (i.e., the ciphertext U_j), since there is no efficient solution to the discrete logarithm problem. The data provider server therefore cannot tell whether it received g^k or g^(s-k) (i.e., which ciphertext U_j), and so cannot learn the value of the business side server's service intermediate value j.
Specifically, referring to fig. 2, after the data provider server receives the ciphertext U_j and the feature confusion dictionary B sent by the business side server, it may look up, through Table A above, the one-side split space corresponding to each feature number in the feature confusion dictionary B; for example, the split spaces agreed upon in advance are all left spaces I_L. The set of samples in a given space may be coded with 0s and 1s: for example, if sample 1 appears in the left space corresponding to a numbered feature and its threshold (i.e., splitting threshold), a 1 is recorded at that position, and a 0 otherwise, and so on for the other samples (the 0/1 coding requires that the samples of the data provider server and the business side server are aligned and that the ID ordering remains consistent). The sample space information filled with 0s and 1s in this way is denoted M, and the sample space information of each feature number may be indexed according to that number's index in the feature confusion dictionary B, for example denoted as {0: q, M_0; 1: w, M_1}, wherein M_0 may be the first coding information described above and M_1 may be the second coding information described above.
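Continuing the sketch on the provider side, one way to realize the 0/1 coding is as an integer bitmask; packing the coding into an integer is an assumption made so that the later XOR steps operate on integers directly:

```python
# Hedged sketch of the provider's 0/1 coding: bit i of the mask is 1 iff
# aligned sample i falls in the one-side split space of the feature that B
# names at that index.
def encode_space(left_indices, n_samples: int) -> int:
    mask = 0
    for i in left_indices:
        mask |= 1 << i
    return mask

n_samples = 3                                    # aligned samples in the demo
M0 = encode_space(table_a[B[0]][2], n_samples)   # coding for B's index 0
M1 = encode_space(table_a[B[1]][2], n_samples)   # coding for B's index 1
```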
The data provider server may then generate a first element according to the second key r0 and the public parameter g; generate a second element according to the ciphertext U_j, the second key r0, and the first coding information M_0; generate a third element according to the third key r1 and the public parameter g; and generate a fourth element according to the intermediate parameter g^s, the ciphertext U_j, the third key r1, and the second coding information M_1. The first decryption operation value set C_0 is then generated from the first element and the second element, and the second decryption operation value set C_1 is generated from the third element and the fourth element. The data provider server may calculate C_0 and C_1 according to the following formulas (7) and (8):
$C_0 = \left( g^{r_0},\; U_j^{\,r_0} \oplus M_0 \right) \quad (7)$

$C_1 = \left( g^{r_1},\; \left( g^{s} / U_j \right)^{r_1} \oplus M_1 \right) \quad (8)$
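The response of formulas (7) and (8), continuing the sketch. Note that the demo mirrors the patent's direct XOR of a group element with the coding; a deployed system would typically hash the group element into a mask first:

```python
# Hedged sketch of equations (7) and (8): the provider's oblivious-transfer
# response, with r0 and r1 the second and third keys.
r0 = secrets.randbelow(subgroup_order - 1) + 1
r1 = secrets.randbelow(subgroup_order - 1) + 1

C0 = (pow(g, r0, p), pow(U_j, r0, p) ^ M0)           # eq. (7)
g_s_over_U = (g_s * pow(U_j, -1, p)) % p             # g^s / U_j
C1 = (pow(g, r1, p), pow(g_s_over_U, r1, p) ^ M1)    # eq. (8)
```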
Then, the data provider server sends (synchronizes) the first decryption operation value set C_0 and the second decryption operation value set C_1 to the business side server.
And step 107, receiving the first decryption operation value set and the second decryption operation value set sent by the data provider server, and performing node splitting according to the service key, the first decryption operation value set and the second decryption operation value set.
In one embodiment of the present application, performing node splitting according to the service key, the first decryption operation value set, and the second decryption operation value set may include: calculating a first exclusive-or value between the service-key power of the first element of the first decryption operation value set and the second element of that set; calculating a second exclusive-or value between the service-key power of the first element of the second decryption operation value set and the second element of that set; generating split space information according to the first exclusive-or value and the second exclusive-or value; and performing node splitting according to the current sample and the split space information.
Specifically, referring to fig. 2, after receiving the first decryption operation value set C_0 and the second decryption operation value set C_1, the business side server may calculate the first exclusive-or value M_0' and the second exclusive-or value M_1' through the following equations (9) and (10), respectively:
$M_0' = \left( g^{r_0} \right)^{k} \oplus \left( U_j^{\,r_0} \oplus M_0 \right) \quad (9)$

$M_1' = \left( g^{r_1} \right)^{k} \oplus \left( \left( g^{s} / U_j \right)^{r_1} \oplus M_1 \right) \quad (10)$
It should be noted that the business side server holds the service intermediate value j ∈ {0, 1}. When j = 0, U_j = U_0 = g^k, and the business side server calculates M_0' and M_1'. As shown by the following formula (11), M_0' = M_0; and since the business side server cannot obtain s and r1 (the keys s and r1 are generated and held by the data provider server), M_1 cannot be resolved from M_1' (M_1' is in fact a meaningless value).
M_0′ = (g^{r_0})^k ⊕ U_0^{r_0} ⊕ M_0 = g^{k·r_0} ⊕ g^{k·r_0} ⊕ M_0 = M_0  (11)
In addition, when j = 1, U_j = U_1 = g^{s−k}, and the service side server calculates M_0′ and M_1′ accordingly. As shown by the following formula (12), M_1′ = M_1. Since s and r_0 are generated and held by the data provider server and cannot be obtained by the service side server, M_0 cannot be recovered from M_0′ (in fact, M_0′ is a meaningless value).
M_1′ = (g^{r_1})^k ⊕ (g^s / U_1)^{r_1} ⊕ M_1 = g^{k·r_1} ⊕ g^{k·r_1} ⊕ M_1 = M_1  (12)
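Continuing the same illustrative sketch, the service-side computation of formulas (9)-(12) reduces to one modular exponentiation and one XOR per value set; only the branch selected by the service intermediate value j yields a meaningful plaintext:

```python
# Sketch of formulas (9) and (10): k is the service key; p is the toy
# modulus assumed above. The j-th result equals M_j, the other is noise.
def receiver_decrypt(C0, C1, k):
    M0_prime = pow(C0[0], k, p) ^ C0[1]   # (g^r0)^k XOR (U_j^r0 XOR M0)
    M1_prime = pow(C1[0], k, p) ^ C1[1]   # (g^r1)^k XOR ((g^s/U_j)^r1 XOR M1)
    return M0_prime, M1_prime
```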
Further, when j = 0, according to the first exclusive-or value M_0′ and the second exclusive-or value M_1′, the meaningful first exclusive-or value M_0′ yields the corresponding first coding information M_0, and the service side server can then obtain the information of the first split space (i.e., the one-side split space information of the optimal feature) through the first coding information M_0.
It should be noted that, in this embodiment, the information of the first split space obtained through the first coding information M_0 may be the space information required by the service side server. The service side can only obtain the one-side split space information of the required optimal split feature, and the data provider does not know which split space of which feature the service side has recovered, so the privacy of the service side is protected. At the same time, the other result M_1′ produced by the exclusive-or operation contains no valuable information, so no other private information of the data provider is leaked. This step embodies how oblivious transfer can protect the privacy of both the service side and the data provider.
Further, the service side server may perform a difference-set operation on its own aligned sample information M and the one-side split space information of the optimal split feature (i.e., the first split space information) to obtain the split space information on the other side of the optimal split feature, thereby completing the node splitting.
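For illustration, the difference-set operation amounts to removing the recovered one-side sample set from the aligned sample set; the sketch below reuses the assumed identifiers of the earlier examples:

```python
# Sketch: the other-side split space is the aligned sample set minus the
# recovered one-side split space.
def other_side_space(aligned_ids, one_side_ids):
    one_side = set(one_side_ids)
    return [sid for sid in aligned_ids if sid not in one_side]

right_ids = other_side_space([101, 102, 103, 104], [101, 104])  # -> [102, 103]
```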
Similarly, when j = 1, according to the first exclusive-or value M_0′ and the second exclusive-or value M_1′, the meaningful second exclusive-or value M_1′ yields the corresponding second coding information M_1, and the service side server can then obtain the information of the first split space (i.e., the one-side split space information of the optimal feature) through the second coding information M_1.
It should be noted that the above steps 101 to 106 may be repeated until the model converges to complete the training of the federal learning model.
In the embodiment of the application, the model parameter training method of the federal learning model provided herein can ensure that the service side can only obtain the sample space of the optimal feature and cannot obtain additional information on the non-optimal sample space, which prevents information leakage of the data provider and protects the benefits of the data provider. It also protects the sample space of the current split node, so that the data provider cannot know whether the current split node is on the left side or the right side of the previous node; the split direction is thus hidden, and the data provider is further prevented from learning the model structure.
In summary, according to the model parameter training method of the federal learning model in the embodiment of the present application, sample alignment is first performed with the data provider server, the common parameter is acquired, and gradient information of the current sample is calculated and sent to the data provider server; the intermediate parameter and gradient return information provided by the data provider server are then received, a target split point number is generated according to the gradient return information, and a ciphertext is generated based on the service key and the intermediate parameter or the common parameter; a feature confusion dictionary is then generated based on the target split point number and the confusion split point number, and the feature confusion dictionary and the ciphertext are sent to the data provider server; finally, the first decryption operation value set and the second decryption operation value set sent by the data provider server are received, and node splitting is performed according to the service key, the first decryption operation value set and the second decryption operation value set. Therefore, model extraction attacks and model inversion attacks can be effectively prevented, the security of the model and the training data of the service side is protected, information leakage of the data provider can be prevented, and the data security of the data provider is protected, so that the privacy and benefits of both the service side and the data provider are protected.
FIG. 3 is a schematic flow chart diagram illustrating a method for training model parameters of a federated learning model according to another embodiment of the present application.
The method for training the model parameters of the federated learning model in the embodiment of the present application may also be implemented by a model parameter training device of the federated learning model provided in the embodiment of the present application. The device may be configured in an electronic device to: perform sample alignment with the service side server and acquire the common parameter; receive the gradient information of the currently trained sample sent by the service side server, and obtain gradient return information according to the gradient information; send the intermediate parameter and the gradient return information to the service side server; receive the ciphertext and the feature confusion dictionary sent by the service side server and generated based on the service key and the intermediate parameter or the common parameter; and generate a first decryption operation value set and a second decryption operation value set according to the intermediate parameter, the common parameter, the ciphertext and the feature confusion dictionary, and send them to the service side server, thereby protecting the privacy and benefits of both the service side and the data provider.
As a possible case, the model parameter training method of the federal learning model in the embodiment of the present application may also be executed on the server side; the server may be a cloud server, and the model parameter training method of the federal learning model may then be executed in the cloud.
As shown in fig. 3, the method for training model parameters of the federal learning model may include:
step 301, aligning the samples with the service server.
Step 302, common parameters are obtained.
Step 303, receiving gradient information of the currently trained sample sent by the service side server, and obtaining gradient return information according to the gradient information.
And step 304, sending the intermediate parameter and the gradient return information to the service side server, wherein the intermediate parameter is a first secret key power of the public parameter.
Step 305, receiving a cipher text generated based on the service key, the intermediate parameter or the common parameter and the feature confusion dictionary sent by the service side server.
And step 306, generating a first decryption operation value set and a second decryption operation value set according to the intermediate parameter, the public parameter, the ciphertext and the feature confusion dictionary, and sending the first decryption operation value set and the second decryption operation value set to the service side server.
In one embodiment of the present application, obtaining gradient return information from gradient information includes: splitting the sample space according to the splitting threshold value corresponding to each feature to obtain a splitting space on the designated side; acquiring gradient summation information of the splitting space of the designated side corresponding to each feature according to the gradient information, and numbering the gradient summation information; and generating gradient return information by using the gradient summation information and the number of the gradient summation information.
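For illustration, this step may be sketched as follows; the identifiers are assumptions, and the sketch sums plaintext gradients, whereas in the embodiment the received gradients are homomorphically encrypted and the sums are taken in the ciphertext domain:

```python
# Sketch: for each feature, split the sample space at its threshold, sum the
# first- and second-order gradients over the designated (left) side, and
# number the resulting gradient summation information.
def gradient_return_info(features, thresholds, X, grads, hess):
    info = {}
    for num, f in enumerate(features):
        left = [i for i in range(len(grads)) if X[f][i] < thresholds[f]]
        g_sum = sum(grads[i] for i in left)   # first-order gradient sum
        h_sum = sum(hess[i] for i in left)    # second-order gradient sum
        info[num] = (g_sum, h_sum)            # numbered summation information
    return info
```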
In an embodiment of the present application, after numbering the gradient summation information, the method further includes: and generating the number, and mapping relation among the feature corresponding to the number, the splitting threshold and the gradient summation information corresponding to the number.
In one embodiment of the present application, generating a first set of decryption operation values and a second set of decryption operation values from the intermediate parameters, the common parameters, the ciphertext, and the feature obfuscation dictionary includes: generating a first decryption operation value set and a second decryption operation value set according to the intermediate parameter, the public parameter, the feature confusion dictionary, the ciphertext, the second key and the third key; the feature confusion dictionary is generated based on the service intermediate value, the target split point number and the confusion split point number, wherein the confusion split point number is a number selected from corresponding numbers corresponding to the gradient return information, and the target split point number is generated according to the gradient return information.
In one embodiment of the present application, generating a first set of decryption operation values and a second set of decryption operation values from the intermediate parameter, the common parameter, the feature obfuscation dictionary, the ciphertext, the second key, and the third key includes: respectively acquiring a first split space corresponding to the target split point number and a second split space corresponding to the confusion split point number; acquiring first coding information of a first split space and second coding information of a second split space; generating a first element according to the second key and the public parameter, and generating a second element according to the ciphertext, the second key and the first coding information; generating a third element according to the third key and the public parameter, and generating a fourth element according to the intermediate parameter, the ciphertext, the third key and the second coding information; generating a first set of decryption operation values from the first element and the second element; and generating a second decryption operation value set according to the third element and the fourth element.
It should be noted that, for details that are not disclosed in the method for training model parameters of the federal learning model in the embodiment of the present application, please refer to details that are disclosed in the method for training model parameters of the federal learning model in the embodiment of fig. 1 of the present application, and details are not repeated here.
To sum up, according to the model parameter training method of the federal learning model in the embodiment of the present application, first, sample alignment is performed with the service side server, a public parameter is obtained, gradient information of a currently trained sample sent by the service side server is received, gradient return information is obtained according to the gradient information, then, an intermediate parameter and the gradient return information are sent to the service side server, a cipher text and a feature confusion dictionary which are sent by the service side server and generated based on a service key and the intermediate parameter or the public parameter are received, and finally, a first decryption operation value set and a second decryption operation value set are generated according to the intermediate parameter, the public parameter, the cipher text and the feature confusion dictionary and are sent to the service side server. Therefore, model extraction attack and model reverse attack can be effectively prevented, the safety of the model and the training data of the business side is protected, information leakage of the data provider can be prevented, and the data safety of the data provider is protected, so that the privacy and the benefit of the business side and the data provider are protected.
FIG. 4 is a block diagram of a model parameter training apparatus for a federated learning model in accordance with another embodiment of the present application.
The model parameter training device of the federal learning model in the embodiment of the application can be configured in electronic equipment to align samples with a data provider server, acquire common parameters, calculate gradient information of current samples, send the gradient information to the data provider server, receive intermediate parameters and gradient return information provided by the data provider server, generate a target split point number according to the gradient return information, generate a ciphertext based on a service key and the intermediate parameters or the common parameters, generate a feature obfuscating dictionary based on the target split point number and the obfuscating split point number, send the feature obfuscating dictionary and the ciphertext to the data provider server, receive a first decryption operation value set and a second decryption operation value set sent by the data provider server, and perform node splitting according to the service key, the first decryption operation value set and the second decryption operation value set, thereby protecting the privacy and benefits of the business and data providers.
As shown in fig. 4, the model parameter training apparatus 400 of the federal learning model may include: an alignment module 410, an acquisition module 420, a computation module 430, a reception module 440, a first generation module 450, a second generation module 460, and a node splitting module 470.
Wherein the alignment module 410 is configured to perform sample alignment with the data provider server.
The obtaining module 420 is configured to obtain the common parameter.
The calculating module 430 is configured to calculate gradient information of the current sample, and send the gradient information to the data provider server.
The receiving module 440 is configured to receive an intermediate parameter and gradient return information provided by the data provider server, where the intermediate parameter is a first power of a key of the public parameter.
The first generating module 450 is configured to generate a target split point number according to the gradient return information, and generate a ciphertext based on the service key and the intermediate parameter or the common parameter.
The second generation module 460 is configured to generate a feature obfuscation dictionary based on the target split point number and the obfuscated split point number, and send the feature obfuscation dictionary and the ciphertext to the data provider server.
The node splitting module 470 is configured to receive the first decryption operation value set and the second decryption operation value set sent by the data provider server, and perform node splitting according to the service key, the first decryption operation value set, and the second decryption operation value set.
In an embodiment of the present application, the calculation module 430 is specifically configured to: generate a first-order gradient value and a second-order gradient value of the current sample; and perform homomorphic encryption on the first-order gradient value and the second-order gradient value to generate the gradient information.
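For illustration, additively homomorphic encryption of the per-sample gradients could be sketched with the third-party python-paillier (phe) library; the library choice and identifiers are assumptions, since the embodiment does not prescribe a specific homomorphic scheme:

```python
# Sketch: Paillier encryption of first- and second-order gradient values.
# Ciphertexts add homomorphically: Enc(a) + Enc(b) decrypts to a + b, which
# is what lets the data provider form gradient sums without seeing labels.
from phe import paillier

public_key, private_key = paillier.generate_paillier_keypair()

def encrypt_gradients(grads, hess):
    enc_g = [public_key.encrypt(g) for g in grads]
    enc_h = [public_key.encrypt(h) for h in hess]
    return enc_g, enc_h
```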
In an embodiment of the present application, there are a plurality of pieces of gradient return information, each corresponding to a number, where the first generating module 450 is specifically configured to: generate a plurality of corresponding information gains according to the respective pieces of gradient return information; and select the maximum information gain from the plurality of information gains, taking the number corresponding to the maximum information gain as the target split point number.
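A sketch of this selection is given below; the gain formula shown is the standard second-order (XGBoost-style) gain and is an assumption, since the embodiment does not fix a particular gain expression (lam is a regularization constant):

```python
# Sketch: compute an information gain for each numbered candidate split and
# return the number with the maximum gain as the target split point number.
def target_split_point(info, G_total, H_total, lam=1.0):
    def score(g, h):
        return g * g / (h + lam)
    base = score(G_total, H_total)
    gains = {num: score(g, h) + score(G_total - g, H_total - h) - base
             for num, (g, h) in info.items()}
    return max(gains, key=gains.get)
```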
In an embodiment of the present application, the first generating module 450 is specifically configured to: acquire a service intermediate value, where the service intermediate value is 1 or 0; generate the ciphertext based on the service key and the public parameter when the service intermediate value is 0; and generate the ciphertext based on the service key and the intermediate parameter when the service intermediate value is 1.
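Sketching the ciphertext generation with the toy group assumed earlier (the forms U_0 = g^k and U_1 = g^s / g^k = g^(s−k) follow the case analysis of formulas (11) and (12)):

```python
# Sketch: service-side ciphertext U_j from the service key k, the service
# intermediate value j and the received intermediate parameter g_s = g^s.
def make_ciphertext(j, k, g_s):
    if j == 0:
        return pow(g, k, p)                       # U_0 = g^k
    return (g_s * pow(pow(g, k, p), -1, p)) % p   # U_1 = g^s / g^k = g^(s-k)
```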
In an embodiment of the present application, the second generating module 460 is specifically configured to: and generating a feature confusion dictionary according to the service intermediate value, the target split point number and the confusion split point number, wherein the confusion split point number is a number selected from corresponding numbers corresponding to the gradient return information.
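An illustrative sketch of the dictionary construction follows; placing the target number at the position selected by the service intermediate value j is an assumption consistent with the decryption behaviour described for formulas (11) and (12), under which only the target entry is recoverable:

```python
import random

# Sketch: build the feature confusion dictionary B. The target split point
# number occupies position j, a randomly drawn confusion number the other
# position, so the data provider cannot tell which entry is the real one.
def feature_confusion_dictionary(j, target_num, candidate_nums):
    confusion = random.choice([n for n in candidate_nums if n != target_num])
    entries = [None, None]
    entries[j] = target_num
    entries[1 - j] = confusion
    return {1: entries[0], 2: entries[1]}
```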
In an embodiment of the present application, the node splitting module 470 is specifically configured to: calculating a first exclusive-or value of a service key power of a first element in the first decryption operation value set and a second element in the first decryption operation value set; calculating a second exclusive-or value of the service key power of a first element in the second decryption operation value set and a second element in the second decryption operation value set; generating split space information according to the first exclusive-or value and the second exclusive-or value; and splitting the nodes according to the current sample and the splitting space information.
It should be noted that, for details that are not disclosed in the model parameter training device of the federal learning model in the embodiment of the present application, please refer to details disclosed in the model parameter training method of the federal learning model in the embodiment of fig. 1 of the present application, and details are not repeated herein.
To sum up, in the model parameter training apparatus of the federal learning model in the embodiment of the present application, sample alignment with the data provider server is performed through the alignment module; the common parameter is acquired through the acquisition module; gradient information of the current sample is calculated through the calculation module and sent to the data provider server; the intermediate parameter and gradient return information provided by the data provider server are received through the receiving module; a target split point number is generated according to the gradient return information through the first generation module, and a ciphertext is generated based on the service key and the intermediate parameter or the common parameter; a feature confusion dictionary is generated based on the target split point number and the confusion split point number through the second generation module, and the feature confusion dictionary and the ciphertext are sent to the data provider server; and the first decryption operation value set and the second decryption operation value set sent by the data provider server are received through the node splitting module, and node splitting is performed according to the service key, the first decryption operation value set and the second decryption operation value set. Therefore, model extraction attacks and model inversion attacks can be effectively prevented, the security of the model and the training data of the service side is protected, information leakage of the data provider can be prevented, and the data security of the data provider is protected, so that the privacy and benefits of both the service side and the data provider are protected.
FIG. 5 is a block diagram of a model parameter training apparatus for a federated learning model in accordance with another embodiment of the present application.
The model parameter training device of the federal learning model in the embodiment of the application can be configured in electronic equipment to achieve sample alignment with a business side server, obtain public parameters, receive gradient information of a currently trained sample sent by the business side server, obtain gradient return information according to the gradient information, send intermediate parameters and the gradient return information to the business side server, receive a cipher text and a feature confusion dictionary which are sent by the business side server and generated based on a business key and the intermediate parameters or the public parameters, generate a first decryption operation value set and a second decryption operation value set according to the intermediate parameters, the public parameters, the cipher text and the feature confusion dictionary, and send the first decryption operation value set and the second decryption operation value set to the business side server, so that privacy and benefits of a business side and a data provider are protected.
As shown in fig. 5, the model parameter training apparatus 500 of the federal learning model may include: an alignment module 510, a first acquisition module 520, a second acquisition module 530, a sending module 540, a receiving module 550, and a generating module 560.
The alignment module 510 is configured to perform sample alignment with the service server;
the first obtaining module 520 is used for obtaining the common parameters.
The second obtaining module 530 is configured to receive gradient information of a currently trained sample sent by the server at the service side, and obtain gradient return information according to the gradient information.
The sending module 540 is configured to send the intermediate parameter and the gradient return information to the service side server, where the intermediate parameter is a first key power of the public parameter.
The receiving module 550 is configured to receive a cipher text generated based on the service key, the intermediate parameter or the common parameter, and the feature obfuscation dictionary sent by the service-side server.
The generating module 560 is configured to generate a first decryption operation value set and a second decryption operation value set according to the intermediate parameter, the common parameter, the ciphertext, and the feature obfuscation dictionary, and send the first decryption operation value set and the second decryption operation value set to the service server.
In an embodiment of the present application, the second obtaining module 530 is specifically configured to: splitting the sample space according to the splitting threshold value corresponding to each feature to obtain a splitting space on the designated side; acquiring gradient summation information of the splitting space of the designated side corresponding to each feature according to the gradient information, and numbering the gradient summation information; and generating gradient return information by using the gradient summation information and the number of the gradient summation information.
In an embodiment of the present application, the second obtaining module 530 is further configured to: after the gradient summation information is numbered, the number is generated, and the mapping relation among the feature corresponding to the number, the splitting threshold and the gradient summation information corresponding to the number is generated.
In one embodiment of the present application, as shown in fig. 5, the generating module 560 includes: a generating unit 561, wherein the generating unit 561 is configured to generate a first decryption operation value set and a second decryption operation value set according to the intermediate parameter, the common parameter, the feature obfuscating dictionary, the ciphertext, the second key, and the third key; the feature confusion dictionary is generated based on the service intermediate value, the target split point number and the confusion split point number, wherein the confusion split point number is a number selected from corresponding numbers corresponding to the gradient return information, and the target split point number is generated according to the gradient return information.
In an embodiment of the present application, the generating unit 561 is specifically configured to: respectively acquiring a first split space corresponding to the target split point number and a second split space corresponding to the confusion split point number; acquiring first coding information of a first split space and second coding information of a second split space; generating a first element according to the second key and the public parameter, and generating a second element according to the ciphertext, the second key and the first coding information; generating a third element according to the third key and the public parameter, and generating a fourth element according to the intermediate parameter, the ciphertext, the third key and the second coding information; generating a first set of decryption operation values from the first element and the second element; and generating a second decryption operation value set according to the third element and the fourth element.
It should be noted that, for details that are not disclosed in the model parameter training device of the federal learning model in the embodiment of the present application, please refer to details disclosed in the model parameter training method of the federal learning model in the embodiment of fig. 1 of the present application, and details are not repeated herein.
In summary, the model parameter training apparatus of the federal learning model in the embodiment of the present application performs sample alignment with the service side server through the alignment module, acquires the common parameter through the first obtaining module, receives the gradient information of the currently trained sample sent by the service side server through the second obtaining module and obtains gradient return information according to the gradient information, sends the intermediate parameter and the gradient return information to the service side server through the sending module, receives, through the receiving module, the ciphertext and the feature confusion dictionary sent by the service side server and generated based on the service key and the intermediate parameter or the common parameter, and finally generates a first decryption operation value set and a second decryption operation value set through the generating module according to the intermediate parameter, the common parameter, the ciphertext and the feature confusion dictionary and sends them to the service side server. Therefore, model extraction attacks and model inversion attacks can be effectively prevented, the security of the model and the training data of the service side is protected, information leakage of the data provider can be prevented, and the data security of the data provider is protected, so that the privacy and benefits of both the service side and the data provider are protected.
In order to implement the foregoing embodiment, as shown in fig. 6, the present invention further provides an electronic device 600, which includes a memory 610, a processor 620, and a computer program stored in the memory 610 and executable on the processor 620, where the processor 620 executes the program to implement the method for training the model parameters of the federal learning model proposed in the foregoing embodiment of the present application.
With the electronic device, the processor executes the computer program stored in the memory, so that model extraction attacks and model inversion attacks can be effectively prevented, the security of the model and the training data of the service side is protected, information leakage of the data provider can be prevented, and the data security of the data provider is protected, thereby protecting the privacy and benefits of both the service side and the data provider.
In order to implement the foregoing embodiments, the present invention further provides a non-transitory computer-readable storage medium, on which a computer program is stored, where the computer program is executed by a processor to implement the method for training model parameters of the federal learning model proposed in the foregoing embodiments of the present application.
The computer-readable storage medium of the embodiment of the application stores a computer program which, when executed by the processor, can effectively prevent model extraction attacks and model inversion attacks, protect the security of the model and the training data of the service side, prevent information leakage of the data provider, and protect the data security of the data provider, thereby protecting the privacy and benefits of both the service side and the data provider.
In the description of the present specification, the terms "first", "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implying any number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include at least one such feature. In the description of the present application, "plurality" means at least two, e.g., two, three, etc., unless specifically limited otherwise.
In the description herein, reference to the description of the term "one embodiment," "some embodiments," "an example," "a specific example," or "some examples," etc., means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the application. In this specification, the schematic representations of the terms used above are not necessarily intended to refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, various embodiments or examples and features of different embodiments or examples described in this specification can be combined and combined by one skilled in the art without contradiction.
Although embodiments of the present application have been shown and described above, it is understood that the above embodiments are exemplary and should not be construed as limiting the present application, and that variations, modifications, substitutions and alterations may be made to the above embodiments by those of ordinary skill in the art within the scope of the present application.

Claims (24)

1. A method for training model parameters of a federated learning model is characterized by comprising the following steps:
sample alignment with a data provider server;
acquiring a public parameter;
calculating gradient information of a current sample, and sending the gradient information to the data provider server;
receiving intermediate parameters and gradient return information provided by the data provider server, wherein the intermediate parameters are first key powers of the public parameters;
generating a target split point number according to the gradient return information, and generating a ciphertext based on a service key and the intermediate parameter or the public parameter;
generating a feature obfuscation dictionary based on the target split point number and the obfuscated split point number, and sending the feature obfuscation dictionary and the ciphertext to the data provider server; and
and receiving a first decryption operation value set and a second decryption operation value set sent by the data provider server, and performing node splitting according to the service key, the first decryption operation value set and the second decryption operation value set.
2. The method for model parameter training of a federal learning model as in claim 1, wherein said calculating gradient information for a current sample comprises:
generating a first gradient value and a second gradient value of the current sample;
homomorphically encrypting the first-order gradient value and the second-order gradient value to generate the gradient information.
3. The method for training model parameters of a federal learning model as claimed in claim 1, wherein the gradient return information includes a plurality of gradient return information, and each gradient return information corresponds to a corresponding number, wherein the generating a target split point number according to the gradient return information includes:
respectively generating a plurality of corresponding information gains according to the gradient return information;
and selecting the maximum information gain from the plurality of information gains, and taking the number corresponding to the maximum information gain as the target split point number.
4. The method for model parameter training of a federated learning model as recited in claim 1, wherein the generating of the ciphertext based on the business key and the intermediate parameter or the common parameter comprises:
acquiring a service intermediate value, wherein the service intermediate value is 1 or 0;
when the service intermediate value is 0, generating the ciphertext based on the service key and the public parameter;
and when the service intermediate value is 1, generating the ciphertext based on the service key and the intermediate parameter.
5. The method for model parameter training of a federated learning model as defined in claim 4, wherein the generating a feature confusion dictionary based on the target split point number and confusion split point number comprises:
and generating a feature confusion dictionary according to the service intermediate value, the target split point number and the confusion split point number, wherein the confusion split point number is a number selected from corresponding numbers corresponding to the gradient return information.
6. The method for model parameter training of a federated learning model as recited in claim 4 or 5, wherein the performing node splitting according to the traffic key, the first set of decryption operation values, and the second set of decryption operation values comprises:
calculating a first exclusive-or value of the service key power of a first element in the first decryption operation value set and a second element in the first decryption operation value set;
calculating a second exclusive-or value of the service key power of a first element in the second decryption operation value set and a second element in the second decryption operation value set;
generating split space information according to the first exclusive-or value and the second exclusive-or value;
and splitting nodes according to the current sample and the splitting space information.
7. A method for training model parameters of a federated learning model is characterized by comprising the following steps:
performing sample alignment with a service side server;
acquiring a public parameter;
receiving gradient information of a currently trained sample sent by the service side server, and acquiring gradient return information according to the gradient information;
sending an intermediate parameter and gradient return information to the service side server, wherein the intermediate parameter is a first secret key power of the public parameter;
receiving a cipher text generated based on the service key, the intermediate parameter or the public parameter and a feature confusion dictionary which are sent by the service side server; and
and generating a first decryption operation value set and a second decryption operation value set according to the intermediate parameter, the public parameter, the ciphertext and the feature confusion dictionary, and sending the first decryption operation value set and the second decryption operation value set to the service side server.
8. The method for training model parameters of a federated learning model as recited in claim 7, wherein the obtaining gradient return information based on the gradient information comprises:
splitting the sample space according to the splitting threshold value corresponding to each feature to obtain a splitting space on the designated side;
acquiring gradient summation information of the splitting space of the designated side corresponding to each feature according to the gradient information, and numbering the gradient summation information;
and generating the gradient return information by using the gradient summation information and the serial number of the gradient summation information.
9. The method for model parameter training of a federal learning model as in claim 8, wherein said numbering said gradient sum information further comprises:
and generating the number and a mapping relation among the feature corresponding to the number, the splitting threshold and the gradient summation information corresponding to the number.
10. The method of model parameter training of a federated learning model as defined in claim 7, wherein the generating a first set of decryption operation values and a second set of decryption operation values from the intermediate parameters, the common parameters, the ciphertext, and the feature obfuscation dictionary comprises:
generating the first decryption operation value set and the second decryption operation value set according to the intermediate parameter, the public parameter, the feature obfuscation dictionary, the ciphertext, a second key and a third key; wherein,
the feature confusion dictionary is generated based on a service intermediate value, a target split point number and a confusion split point number, wherein the confusion split point number is a number selected from corresponding numbers corresponding to the gradient return information, and the target split point number is generated according to the gradient return information.
11. The method of model parameter training of a federated learning model as recited in claim 10, wherein the generating the first and second sets of decryption operation values from the intermediate parameters, the common parameters, the feature obfuscation dictionary, the ciphertext, a second key, and a third key comprises:
respectively acquiring a first split space corresponding to the target split point number and a second split space corresponding to the confusion split point number;
acquiring first coding information of the first split space and second coding information of the second split space;
generating a first element according to the second key and the public parameter, and generating a second element according to the ciphertext, the second key and the first coding information;
generating a third element according to the third key and the public parameter, and generating a fourth element according to the intermediate parameter, the ciphertext, the third key and the second encoding information;
generating the first set of decryption operation values from the first element and the second element;
generating the second set of decryption operation values from the third element and the fourth element.
12. A model parameter training apparatus for a federal learning model, characterized in that the apparatus comprises:
the alignment module is used for aligning samples with the data provider server;
the acquisition module is used for acquiring the public parameters;
the calculation module is used for calculating gradient information of the current sample and sending the gradient information to the data provider server;
a receiving module, configured to receive an intermediate parameter and gradient return information provided by the data provider server, where the intermediate parameter is a first key power of the public parameter;
the first generation module is used for generating a target split point number according to the gradient return information and generating a ciphertext based on a service key and the intermediate parameter or the public parameter;
the second generation module is used for generating a feature confusion dictionary based on the target split point number and the confusion split point number and sending the feature confusion dictionary and the ciphertext to the data provider server; and
and the node splitting module is used for receiving the first decryption operation value set and the second decryption operation value set sent by the data provider server and splitting nodes according to the service key, the first decryption operation value set and the second decryption operation value set.
13. The model parameter training apparatus of a federal learning model as in claim 12, wherein the calculation module is specifically configured to:
generating a first gradient value and a second gradient value of the current sample;
homomorphically encrypting the first-order gradient value and the second-order gradient value to generate the gradient information.
14. The model parameter training apparatus of a federal learning model as claimed in claim 12, wherein the gradient return information is a plurality of gradient return information, and each gradient return information corresponds to a corresponding number, wherein the first generating module is specifically configured to:
respectively generating a plurality of corresponding information gains according to the gradient return information;
and selecting the maximum information gain from the plurality of information gains, and taking the number corresponding to the maximum information gain as the target split point number.
15. The model parameter training apparatus of a federal learning model as in claim 12, wherein the first generating module is specifically configured to:
acquiring a service intermediate value, wherein the service intermediate value is 1 or 0;
when the service intermediate value is 0, generating the ciphertext based on the service key and the public parameter;
and when the service intermediate value is 1, generating the ciphertext based on the service key and the intermediate parameter.
16. The model parameter training apparatus of a federal learning model as in claim 15, wherein the second generating module is specifically configured to:
and generating a feature confusion dictionary according to the service intermediate value, the target split point number and the confusion split point number, wherein the confusion split point number is a number selected from corresponding numbers corresponding to the gradient return information.
17. The model parameter training apparatus of a federal learning model as in claim 15 or 16, wherein the node split module is specifically configured to:
calculating a first exclusive-or value of the service key power of a first element in the first decryption operation value set and a second element in the first decryption operation value set;
calculating a second exclusive-or value of the service key power of a first element in the second decryption operation value set and a second element in the second decryption operation value set;
generating split space information according to the first exclusive-or value and the second exclusive-or value;
and splitting nodes according to the current sample and the splitting space information.
18. A model parameter training apparatus for a federal learning model, characterized in that the apparatus comprises:
the alignment module is used for aligning samples with the service side server;
the first acquisition module is used for acquiring the public parameters;
The second acquisition module is used for receiving the gradient information of the currently trained sample sent by the service side server and acquiring gradient return information according to the gradient information;
a sending module, configured to send an intermediate parameter and gradient return information to the service side server, where the intermediate parameter is a first key power of the public parameter;
the receiving module is used for receiving a cipher text which is sent by the service side server and is generated based on the service key, the intermediate parameter or the public parameter, and a feature confusion dictionary; and
and the generating module is used for generating a first decryption operation value set and a second decryption operation value set according to the intermediate parameter, the public parameter, the ciphertext and the feature confusion dictionary and sending the first decryption operation value set and the second decryption operation value set to the service side server.
19. The model parameter training apparatus of a federal learning model as in claim 18, wherein the second obtaining module is specifically configured to:
splitting the sample space according to the splitting threshold value corresponding to each feature to obtain a splitting space on the designated side;
acquiring gradient summation information of the splitting space of the designated side corresponding to each feature according to the gradient information, and numbering the gradient summation information;
and generating the gradient return information by using the gradient summation information and the serial number of the gradient summation information.
20. The model parameter training apparatus of a federal learning model as in claim 19, wherein the second obtaining module is further configured to:
after the gradient summation information is numbered, the number is generated, and the mapping relation among the feature corresponding to the number, the splitting threshold and the gradient summation information corresponding to the number is generated.
21. The model parameter training apparatus of a federal learning model as claimed in claim 18, wherein said generation module comprises:
a generating unit, configured to generate the first decryption operation value set and the second decryption operation value set according to the intermediate parameter, the common parameter, the feature obfuscation dictionary, the ciphertext, a second key and a third key; wherein,
the feature confusion dictionary is generated based on a service intermediate value, a target split point number and a confusion split point number, wherein the confusion split point number is a number selected from corresponding numbers corresponding to the gradient return information, and the target split point number is generated according to the gradient return information.
22. The model parameter training apparatus of a federal learning model as in claim 21, wherein the generating unit is specifically configured to:
respectively acquiring a first split space corresponding to the target split point number and a second split space corresponding to the confusion split point number;
acquiring first coding information of the first split space and second coding information of the second split space;
generating a first element according to the second key and the public parameter, and generating a second element according to the ciphertext, the second key and the first coding information;
generating a third element according to the third key and the public parameter, and generating a fourth element according to the intermediate parameter, the ciphertext, the third key and the second encoding information;
generating the first set of decryption operation values from the first element and the second element;
generating the second set of decryption operation values from the third element and the fourth element.
23. An electronic device comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor, when executing the program, implements the method for model parameter training of a federated learning model as claimed in any one of claims 1-6 or claims 7-11.
24. A computer-readable storage medium on which a computer program is stored, wherein the program, when executed by a processor, implements the method for model parameter training of a federal learning model as claimed in any one of claims 1-6 or claims 7-11.
CN202110251790.0A 2021-03-08 2021-03-08 Model parameter training method and device of federal learning model and electronic equipment Active CN113807534B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110251790.0A CN113807534B (en) 2021-03-08 2021-03-08 Model parameter training method and device of federal learning model and electronic equipment

Publications (2)

Publication Number Publication Date
CN113807534A true CN113807534A (en) 2021-12-17
CN113807534B CN113807534B (en) 2023-09-01

Family

ID=78892963

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110251790.0A Active CN113807534B (en) 2021-03-08 2021-03-08 Model parameter training method and device of federal learning model and electronic equipment

Country Status (1)

Country Link
CN (1) CN113807534B (en)

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109165515A (en) * 2018-08-10 2019-01-08 深圳前海微众银行股份有限公司 Model parameter acquisition methods, system and readable storage medium storing program for executing based on federation's study
WO2020029590A1 (en) * 2018-08-10 2020-02-13 深圳前海微众银行股份有限公司 Sample prediction method and device based on federated training, and storage medium
US20210004718A1 (en) * 2019-07-03 2021-01-07 Beijing Baidu Netcom Science And Technology Co., Ltd. Method and device for training a model based on federated learning
EP3786872A1 (en) * 2019-08-26 2021-03-03 Accenture Global Solutions Limited Decentralized federated learning system
CN111340614A (en) * 2020-02-28 2020-06-26 深圳前海微众银行股份有限公司 Sample sampling method and device based on federal learning and readable storage medium
CN111368901A (en) * 2020-02-28 2020-07-03 深圳前海微众银行股份有限公司 Multi-party combined modeling method, device and medium based on federal learning
CN111401552A (en) * 2020-03-11 2020-07-10 浙江大学 Federal learning method and system based on batch size adjustment and gradient compression rate adjustment
CN111611610A (en) * 2020-04-12 2020-09-01 西安电子科技大学 Federal learning information processing method, system, storage medium, program, and terminal
CN111598186A (en) * 2020-06-05 2020-08-28 腾讯科技(深圳)有限公司 Decision model training method, prediction method and device based on longitudinal federal learning

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
周俊;方国英;吴楠;: "联邦学习安全与隐私保护研究综述", 西华大学学报(自然科学版), no. 04 *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114499866A (en) * 2022-04-08 2022-05-13 深圳致星科技有限公司 Key hierarchical management method and device for federal learning and privacy calculation
CN114499866B (en) * 2022-04-08 2022-07-26 深圳致星科技有限公司 Key hierarchical management method and device for federal learning and privacy calculation
CN117411652A (en) * 2022-07-08 2024-01-16 抖音视界有限公司 Data processing method, electronic device and computer readable storage medium

Also Published As

Publication number Publication date
CN113807534B (en) 2023-09-01

Similar Documents

Publication Publication Date Title
CN107196763B (en) SM2 algorithm collaborative signature and decryption method, device and system
CN111079128B (en) Data processing method and device, electronic equipment and storage medium
US10027654B2 (en) Method for authenticating a client device to a server using a secret element
JP4981072B2 (en) Method and system for decryptable and searchable encryption
CN104270249B (en) It is a kind of from the label decryption method without certificate environment to identity-based environment
CN104301108B (en) It is a kind of from identity-based environment to the label decryption method without certificate environment
Peng Danger of using fully homomorphic encryption: A look at Microsoft SEAL
CN110611670A (en) API request encryption method and device
CN111953479B (en) Data processing method and device
Koppanati et al. P-MEC: polynomial congruence-based multimedia encryption technique over cloud
CN102521785B (en) Homomorphism image encryption and decryption method used for image sharing based on EC-ELGamal algorithm
KR20210139344A (en) Methods and devices for performing data-driven activities
CN110784314A (en) Certificateless encrypted information processing method
CN113807534A (en) Model parameter training method and device of federal learning model and electronic equipment
JP2008042590A (en) Recipient device, sender device, encryption communication system and program
CN109962924B (en) Group chat construction method, group message sending method, group message receiving method and system
CN116167088A (en) Method, system and terminal for privacy protection in two-party federal learning
CN106453253A (en) Efficient identity-based concealed signcryption method
CN114362912A (en) Identification password generation method based on distributed key center, electronic device and medium
CN113806759A (en) Federal learning model training method and device, electronic equipment and storage medium
Singhai et al. An efficient image security mechanism based on advanced encryption standard
KR101793528B1 (en) Certificateless public key encryption system and receiving terminal
Rupa A secure information framework with ap RQ properties
EP3883178A1 (en) Encryption system and method employing permutation group-based encryption technology
CN113824677B (en) Training method and device of federal learning model, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant