CN111598254A - Federated learning modeling method, device and readable storage medium - Google Patents

Federated learning modeling method, device and readable storage medium

Info

Publication number
CN111598254A
CN111598254A (application CN202010445868.8A)
Authority
CN
China
Prior art keywords
parameter
verification
model
parameters
encryption
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010445868.8A
Other languages
Chinese (zh)
Other versions
CN111598254B (en)
Inventor
李月
蔡杭
范力欣
张天豫
吴锦和
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
WeBank Co Ltd
Original Assignee
WeBank Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by WeBank Co Ltd filed Critical WeBank Co Ltd
Priority to CN202010445868.8A priority Critical patent/CN111598254B/en
Publication of CN111598254A publication Critical patent/CN111598254A/en
Priority to PCT/CN2020/135032 priority patent/WO2021232754A1/en
Application granted granted Critical
Publication of CN111598254B publication Critical patent/CN111598254B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00Machine learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/60Protecting data
    • G06F21/602Providing cryptographic facilities or services
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N5/00Computing arrangements using knowledge-based models
    • G06N5/02Knowledge representation; Symbolic representation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2221/00Indexing scheme relating to security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F2221/21Indexing scheme relating to G06F21/00 and subgroups addressing additional information or applications relating to security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F2221/2107File encryption

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Artificial Intelligence (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Computing Systems (AREA)
  • Medical Informatics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Health & Medical Sciences (AREA)
  • Bioethics (AREA)
  • General Health & Medical Sciences (AREA)
  • Computer Hardware Design (AREA)
  • Computer Security & Cryptography (AREA)
  • Computational Linguistics (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The present application discloses a federated learning modeling method, device and readable storage medium. The federated learning modeling method includes: receiving the encryption model parameters sent by each second device and the verification parameters corresponding to the encryption model parameters; performing zero knowledge verification on each encryption model parameter based on each verification parameter to determine the false encryption model parameters among the encryption model parameters and obtain a zero knowledge verification result; and coordinating each second device to perform federated learning modeling based on the zero knowledge verification result and each encryption model parameter. The method and device solve the technical problems of low efficiency and poor accuracy in federated learning modeling.

Description

Federated learning modeling method, device and readable storage medium
Technical Field
The present application relates to the field of artificial intelligence in financial technology (Fintech), and in particular, to a method and apparatus for federated learning modeling, and a readable storage medium.
Background
With the continuous development of financial technologies, especially the combination of internet technology and finance, more and more technologies (such as distributed computing, blockchain and artificial intelligence) are applied to the financial field, but the financial industry also places higher requirements on these technologies.
With the continuous development of computer software and artificial intelligence, federated learning modeling has become increasingly mature. At present, each participant in federated learning modeling generally feeds back its own encrypted model parameters to the coordinator of the federated learning modeling, the coordinator aggregates the encrypted model parameters, and the aggregated parameters are fed back to each participant to carry out the federated learning modeling. However, if a malicious participant provides false encrypted model parameters during training, the overall quality of the federated model obtained by the federated learning modeling is directly affected, and the whole federated learning modeling process may even fail, so the efficiency and accuracy of federated learning modeling are low.
Disclosure of Invention
The main purpose of the present application is to provide a federated learning modeling method, device and readable storage medium, aiming to solve the technical problems of low efficiency and poor accuracy of federated learning modeling in the prior art.
In order to achieve the above object, the present application provides a federated learning modeling method, where the federated learning modeling method is applied to a first device, and the federated learning modeling method includes:
receiving encryption model parameters sent by each second device and verification parameters corresponding to the encryption model parameters;
based on each verification parameter, respectively carrying out zero knowledge verification on each encryption model parameter so as to determine a false encryption model parameter in each encryption model parameter and obtain a zero knowledge verification result;
and coordinating each second device to carry out federated learning modeling based on the zero knowledge verification result and each encryption model parameter.
Optionally, the step of coordinating each second device to perform federated learning modeling based on the zero knowledge verification result and each encryption model parameter includes:
based on the zero knowledge verification result, eliminating the false encryption model parameters from the encryption model parameters to obtain the credible model parameters;
and aggregating the credible model parameters to obtain aggregated parameters, and feeding the aggregated parameters back to each second device so that each second device updates its local training model until the local training model reaches a preset training end condition.
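The coordinator steps above can be sketched as follows. This is a minimal illustrative sketch, not the patent's actual implementation: plain averaging stands in for the homomorphic aggregation a real coordinator would perform, and the list-of-booleans flag format is an assumption.

```python
def coordinate(encryption_params, zk_passed):
    """Drop parameters flagged false by zero knowledge verification,
    then aggregate the remaining credible parameters."""
    credible = [p for p, ok in zip(encryption_params, zk_passed) if ok]
    # plain mean as a stand-in for homomorphic aggregation of ciphertexts
    return sum(credible) / len(credible)

# 100.0 plays the role of a false parameter from a malicious participant
agg = coordinate([5.0, 100.0, 7.0], [True, False, True])
assert agg == 6.0
```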
Optionally, the zero-knowledge verification is performed on each encryption model parameter based on each verification parameter, so as to determine a false encryption model parameter in each encryption model parameter, and the step of obtaining a zero-knowledge verification result includes:
respectively calculating a first zero knowledge proof result and a second zero knowledge proof result corresponding to each verification parameter;
and respectively verifying whether each encryption model parameter is a false encryption model parameter or not based on each first zero knowledge proof result and each second zero knowledge proof result to obtain a zero knowledge verification result.
Optionally, the verification parameters include verification model parameters and verification random parameters,
the step of calculating a first zero knowledge proof result and a second zero knowledge proof result corresponding to each verification parameter respectively includes:
carrying out validity verification on each encryption model parameter based on a preset verification challenge parameter to obtain each first zero knowledge verification result;
and encrypting each verification model parameter based on a preset coordinator public key and each verification random parameter to obtain each second zero knowledge verification result.
Optionally, the preset verification challenge parameters include a first verification challenge parameter and a second verification challenge parameter;
the step of performing validity verification on each encryption model parameter based on a preset verification challenge parameter to obtain each first zero knowledge verification result comprises:
performing exponentiation operation on the first verification challenge parameter and each encryption model parameter respectively to obtain a first exponentiation operation result corresponding to each encryption model parameter;
performing exponentiation operation on the second verification challenge parameter and each encryption model parameter respectively to obtain a second exponentiation operation result corresponding to each encryption model parameter;
generating each of the first zero knowledge verification results based on each of the first exponentiation results and each of the second exponentiation results.
Optionally, the step of verifying, based on each first zero knowledge proof result and each second zero knowledge proof result, whether each encryption model parameter is a false encryption model parameter includes:
comparing the first zero knowledge proof result and the second zero knowledge proof result corresponding to each encryption model parameter;
if the first zero knowledge proof result and the second zero knowledge proof result corresponding to an encryption model parameter are not consistent, determining that the encryption model parameter is a false encryption model parameter;
and if the first zero knowledge proof result and the second zero knowledge proof result corresponding to an encryption model parameter are consistent, determining that the encryption model parameter is not a false encryption model parameter.
In order to achieve the above object, the present application further provides a federated learning modeling method, where the federated learning modeling method is applied to a second device, and the federated learning modeling method includes:
acquiring a model training parameter and a first verification random parameter, and encrypting the model training parameter based on the first verification random parameter and a preset public key to obtain an encrypted model parameter;
generating a verification model parameter and a second verification random parameter based on the first verification random parameter, the model training parameter and a preset verification challenge parameter;
sending the encryption model parameter, the verification model parameter and the second verification random parameter to the first device, so that the first device performs zero knowledge verification to obtain a zero knowledge verification result;
and receiving an aggregation parameter fed back by the first device based on the zero knowledge verification result and the encryption model parameters, and updating the local training model corresponding to the model training parameters based on the aggregation parameter until the local training model reaches a preset training end condition.
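The participant-side steps above can be sketched with textbook Paillier encryption and the patent's example formulas n = m*x1 + m*x2 and r2 = r1^x1 * r1^x2. The toy primes and concrete parameter values are assumptions for illustration; the patent does not fix a particular homomorphic scheme.

```python
# Toy Paillier public modulus (assumed shared with the coordinator)
p, q = 293, 433
N = p * q
N2 = N * N
g = N + 1  # standard Paillier generator choice

def enc(m, r):
    # homomorphic encryption Enc(P, m, r) = g^m * r^N mod N^2
    return (pow(g, m, N2) * pow(r, N, N2)) % N2

def participant_message(m, r1, x1, x2):
    """Build the triple the second device sends to the first device."""
    h_m = enc(m, r1)                            # encryption model parameter
    n_v = m * x1 + m * x2                       # verification model parameter
    r2 = (pow(r1, x1, N) * pow(r1, x2, N)) % N  # second verification random parameter
    return h_m, n_v, r2

h_m, n_v, r2 = participant_message(m=9, r1=17, x1=3, x2=5)
assert n_v == 72  # 9*3 + 9*5
```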
Optionally, the model training parameters comprise current model parameters and auxiliary model parameters,
the step of obtaining model training parameters comprises:
performing iterative training on a local training model corresponding to the model training parameters until the local training model reaches a preset iteration threshold value, and acquiring the current model parameters of the local training model;
and acquiring the prior model parameters of the local training model, and generating the auxiliary model parameters based on the prior model parameters.
The present application further provides a federated learning modeling apparatus, where the federated learning modeling apparatus is a virtual apparatus and is applied to a first device, and the federated learning modeling apparatus includes:
the receiving module is used for receiving the encryption model parameters sent by each second device and the verification parameters corresponding to the encryption model parameters;
the zero knowledge verification module is used for respectively performing zero knowledge verification on each encryption model parameter based on each verification parameter so as to determine a false encryption model parameter in each encryption model parameter and obtain a zero knowledge verification result;
and the coordination module is used for coordinating each second device to carry out federated learning modeling based on the zero knowledge verification result and each encryption model parameter.
Optionally, the coordination module comprises:
the eliminating submodule is used for eliminating the false encryption model parameters from the encryption model parameters based on the zero-knowledge verification result to obtain the credible model parameters;
and the aggregation sub-module is used for aggregating the credible model parameters to obtain aggregation parameters, and feeding the aggregation parameters back to each second device so that each second device updates its local training model until the local training model reaches the preset training end condition.
Optionally, the zero knowledge verification module comprises:
the calculation submodule is used for respectively calculating a first zero knowledge proof result and a second zero knowledge proof result corresponding to each verification parameter;
and the zero knowledge verification submodule is used for respectively verifying whether each encryption model parameter is a false encryption model parameter or not based on each first zero knowledge proof result and each second zero knowledge proof result so as to obtain a zero knowledge verification result.
Optionally, the computation submodule includes:
the validity verification unit is used for verifying the validity of each encryption model parameter based on a preset verification challenge parameter to obtain each first zero knowledge verification result;
and the encryption unit is used for encrypting each verification model parameter based on a preset coordinator public key and each verification random parameter to obtain each second zero knowledge verification result.
Optionally, the validity verifying unit includes:
the first power operation subunit is configured to perform power operation on the first verification challenge parameter and each encryption model parameter, respectively, to obtain a first power operation result corresponding to each encryption model parameter;
a second exponentiation subunit, configured to perform exponentiation operation on the second verification challenge parameter and each encryption model parameter, respectively, to obtain a second exponentiation operation result corresponding to each encryption model parameter;
a generating subunit, configured to generate each first zero knowledge verification result based on each first power operation result and each second power operation result.
Optionally, the zero knowledge verification sub-module includes:
a comparison unit, configured to compare the first zero knowledge proof result and the second zero knowledge proof result corresponding to each encryption model parameter;
a first determining unit, configured to determine that an encryption model parameter is a false encryption model parameter if its corresponding first zero knowledge proof result and second zero knowledge proof result are not consistent;
a second determining unit, configured to determine that an encryption model parameter is not a false encryption model parameter if its corresponding first zero knowledge proof result and second zero knowledge proof result are consistent.
In order to achieve the above object, the present application further provides a federated learning modeling apparatus, where the federated learning modeling apparatus is a virtual apparatus and is applied to a second device, and the federated learning modeling apparatus includes:
the encryption module is used for acquiring a model training parameter and a first verification random parameter, and encrypting the model training parameter based on the first verification random parameter and a preset public key to acquire an encrypted model parameter;
the generation module is used for generating verification model parameters and second verification random parameters based on the first verification random parameters, the model training parameters and preset verification challenge parameters;
the sending module is used for sending the encryption model parameter, the verification model parameter and the second verification random parameter to the first device, so that the first device performs zero knowledge verification to obtain a zero knowledge verification result;
and the model updating module is used for receiving the aggregation parameters fed back by the first equipment based on the zero knowledge verification result and the encryption model parameters, and updating the local training model corresponding to the model training parameters based on the aggregation parameters until the local training model reaches a preset training ending condition.
Optionally, the encryption module includes:
the obtaining submodule is used for carrying out iterative training on a local training model corresponding to the model training parameters until the local training model reaches a preset iteration threshold value, and obtaining the current model parameters of the local training model;
and the generation submodule is used for acquiring the previous model parameters of the local training model and generating the auxiliary model parameters based on the previous model parameters.
The present application further provides a federated learning modeling device, where the federated learning modeling device is a physical device and includes: a memory, a processor, and a program of the federated learning modeling method stored in the memory and executable on the processor, where the program, when executed by the processor, implements the steps of the federated learning modeling method described above.
The present application also provides a readable storage medium having stored thereon a program implementing the federated learning modeling method, where the program, when executed by a processor, implements the steps of the federated learning modeling method described above.
According to the present application, the encryption model parameters sent by each second device and the verification parameters corresponding to the encryption model parameters are received; zero knowledge verification is performed on each encryption model parameter based on each verification parameter, so that the false encryption model parameters among the encryption model parameters are determined and a zero knowledge verification result is obtained; and each second device is then coordinated to perform federated learning modeling based on the zero knowledge verification result and each encryption model parameter. That is, after receiving the encryption model parameters and the corresponding verification parameters sent by each second device, the present application performs zero knowledge verification on each encryption model parameter based on each verification parameter to determine the false encryption model parameters and obtain a zero knowledge verification result, and the false encryption model parameters can then be removed from the encryption model parameters based on the zero knowledge verification result so as to coordinate each second device in performing federated learning modeling.
That is, the present application provides a method for determining false encryption model parameters in each local model based on zero knowledge proof. When a malicious participant provides false encryption model parameters during training, those parameters can be accurately identified and eliminated, which avoids performing federated learning modeling on encryption model parameters mixed with false ones. This improves the overall quality of the federated model obtained through federated learning modeling, improves the efficiency and accuracy of federated learning modeling, and thus solves the technical problems of low federated learning modeling efficiency and poor accuracy.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present application and together with the description, serve to explain the principles of the application.
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly described below; it is obvious that those skilled in the art can obtain other drawings from these drawings without inventive effort.
FIG. 1 is a schematic flow chart of a first embodiment of the federated learning modeling method of the present application;
FIG. 2 is a schematic flow chart of a second embodiment of the federated learning modeling method of the present application;
FIG. 3 is a schematic flow chart of a third embodiment of the federated learning modeling method of the present application;
fig. 4 is a schematic device structure diagram of a hardware operating environment according to an embodiment of the present application.
The objectives, features, and advantages of the present application will be further described with reference to the accompanying drawings.
Detailed Description
It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
In a first embodiment of the federated learning modeling method of the present application, the federated learning modeling method is applied to a first device. Referring to fig. 1, the federated learning modeling method includes:
step S10, receiving the encryption model parameters sent by each second device and the verification parameters corresponding to the encryption model parameters;
in this embodiment, it should be noted that, before performing federal learning modeling, the first device performs negotiation interaction with each of the second devices to determine standard verification challenge parameters, where the number of the standard verification challenge parameters may be determined during the negotiation interaction. The first device is a coordinator performing federated learning modeling and is used for coordinating the second devices to perform federated learning modeling, the second devices are participants performing federated learning modeling, the encryption model parameters are model training parameters encrypted based on a homomorphic encryption algorithm, for example, assuming that a public key of the participants held by the second devices is P, and a first verification random parameter used for homomorphic encryption is r1Encrypting model parameter h after encrypting model training parameter m based on homomorphic encryption algorithmm=Enc(P,m,r1) Where Enc is a homomorphic cryptographic symbol, and further, the model training parameter is a model parameter of a local training model of the second device, for example, assuming that the local training model is a linear model and the expression is Y- β01x12x2+…+βnxnThen the model parameters are vectors (β)012+…+βn)。
Additionally, it should be noted that the verification parameters are parameters for zero knowledge proof, where the verification parameters include a second verification random parameter and a verification model parameter. The verification model parameter is generated by the second device based on the model training parameter and the verification challenge parameters. For example, assuming that the verification challenge parameters include parameters x1 and x2 and the model training parameter is m, the verification model parameter is n = m*x1 + m*x2. Further, the second verification random parameter is generated by the second device based on the first verification random parameter and the verification challenge parameters. For example, assuming that the first verification random parameter is r1 and the verification challenge parameters include parameters x1 and x2, the second verification random parameter is r2 = r1^x1 * r1^x2.
Additionally, it should be noted that a malicious party among the participants may modify the encryption parameters used in the homomorphic encryption of the model training parameters in order to produce false encryption model parameters. When the coordinator performs federated modeling, the encryption model parameters sent by each second device are usually aggregated directly to obtain the aggregated parameters, and if false encryption model parameters exist among the encryption model parameters, the efficiency and accuracy of federated model training will be affected. For example, suppose the encryption model parameter sent by second device A is 5a. If second device B is a malicious party and sends the false encryption model parameter 100b, and the aggregation process is a weighted average, the first aggregated parameter obtained after aggregation is (5a + 100b)/2; if second device B is not a malicious party and sends the encryption model parameter 5b, the second aggregated parameter obtained after aggregation is (5a + 5b)/2. Therefore, if a malicious party exists among the second devices, the difference between the first aggregated parameter and the second aggregated parameter obtained when no malicious party exists is extremely large, and the efficiency and accuracy of federated model training are greatly affected.
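The distortion described above can be checked numerically. A minimal sketch, taking a = b = 1 as hypothetical values for the symbolic scale factors:

```python
def weighted_average(params):
    # the aggregation rule assumed in the example above
    return sum(params) / len(params)

a = b = 1.0  # stand-ins for the symbolic factors a and b
first_agg = weighted_average([5 * a, 100 * b])  # device B malicious
second_agg = weighted_average([5 * a, 5 * b])   # device B honest
assert first_agg == 52.5
assert second_agg == 5.0
```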
Additionally, it should be noted that, in this embodiment, the first device and the second device both perform encryption based on a homomorphic encryption algorithm, where, in one implementable scheme, the homomorphic encryption algorithm satisfies the following property:
C = Enc(PK, m, r), and for C1 = Enc(PK, m1, r1) and C2 = Enc(PK, m2, r2), the following holds:
C1 * C2 = Enc(PK, m1 + m2, r1 * r2)
where C, C1 and C2 are the encrypted ciphertexts, PK is the encryption key, m, m1 and m2 are the parameters to be encrypted, and r, r1 and r2 are the random numbers required for encryption.
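The additively homomorphic property above can be illustrated with textbook Paillier encryption. This is a sketch with insecure toy key sizes chosen for readability, and the patent does not name Paillier specifically; it is simply one well-known scheme satisfying Enc(m1, r1) * Enc(m2, r2) = Enc(m1 + m2, r1 * r2).

```python
import math

def keygen(p=293, q=433):
    # Toy primes; real deployments use moduli of 2048 bits or more.
    n = p * q
    lam = math.lcm(p - 1, q - 1)
    g = n + 1                 # standard Paillier generator choice
    mu = pow(lam, -1, n)      # L(g^lam mod n^2) = lam when g = n + 1
    return (n, g), (lam, mu)

def enc(pk, m, r):
    # Enc(PK, m, r) = g^m * r^n mod n^2
    n, g = pk
    n2 = n * n
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def dec(pk, sk, c):
    n, _ = pk
    lam, mu = sk
    n2 = n * n
    return ((pow(c, lam, n2) - 1) // n) * mu % n

pk, sk = keygen()
n = pk[0]
c1 = enc(pk, 5, 17)    # m1 = 5, random number r1 = 17
c2 = enc(pk, 7, 101)   # m2 = 7, random number r2 = 101
c_sum = (c1 * c2) % (n * n)
assert dec(pk, sk, c_sum) == 12              # decrypts to m1 + m2
assert c_sum == enc(pk, 12, (17 * 101) % n)  # random numbers multiply
```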
Step S20, based on each verification parameter, respectively performing zero-knowledge verification on each encryption model parameter to determine a false encryption model parameter in each encryption model parameter and obtain a zero-knowledge verification result;
in this embodiment, based on each verification parameter, performing zero-knowledge verification on each encryption model parameter, respectively, to determine a dummy encryption model parameter in each encryption model parameter, and obtain a zero-knowledge verification result, specifically, based on each verification parameter, calculating a first zero-knowledge proof result and a second zero-knowledge proof result corresponding to each encryption model parameter, respectively verifying whether each encryption model parameter is a dummy encryption model parameter, and determining a dummy encryption model parameter in each encryption model parameter, and obtaining a zero-knowledge verification result, based on a first zero-knowledge proof result and a second zero-knowledge proof result corresponding to each encryption model parameter, where the zero-knowledge verification result is a verification result of whether each encryption model parameter is a dummy encryption model parameter, and the dummy encryption model parameter is a model training parameter in which a malicious party maliciously encrypts the model training parameter, and the malicious encryption is achieved, for example, by changing the encryption parameters when homomorphic encryption is performed.
Wherein, the zero knowledge verification is respectively carried out on each encryption model parameter based on each verification parameter so as to determine a false encryption model parameter in each encryption model parameter, and the step of obtaining the zero knowledge verification result comprises the following steps:
step S21, respectively calculating a first zero knowledge proof result and a second zero knowledge proof result corresponding to each of the verification parameters;
in this embodiment, it should be noted that the verification parameters include a verification model parameter and a second verification random parameter.
Respectively calculating a first zero knowledge proof result and a second zero knowledge proof result corresponding to each verification parameter, specifically, for each verification parameter, executing the following steps:
performing a homomorphic operation on the encryption model parameter based on the preset verification challenge parameters to obtain the first zero knowledge proof result, and performing a homomorphic encryption operation on the verification model parameter based on a preset coordinator public key and the second verification random parameter to obtain the second zero knowledge proof result. For example, assume that the preset verification challenge parameters are x1 and x2, and the encryption model parameter is h_m = Enc(P1, m, r1), where P1 is the participant public key, r1 is the first verification random parameter, and m is the model training parameter; the verification model parameter is n = m*x1 + m*x2, and the second verification random parameter is r2 = r1^x1 * r1^x2. Then the first zero knowledge verification result is
h_m^x1 * h_m^x2 = Enc(P1, m*x1 + m*x2, r1^x1 * r1^x2) = Enc(P1, n, r2),
and the second zero knowledge verification result is
Enc(P2, n, r2),
where P2 is the coordinator public key. If the second device has made no malicious modification, the participant public key and the coordinator public key are identical, so the two results should be consistent.
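The consistency check described above can be sketched with toy Paillier parameters. The names (m, r1, r2, x1, x2) follow the patent's example; the concrete scheme and values are assumptions for illustration, with P1 = P2 (one shared modulus) as in the honest setup.

```python
# Toy Paillier public parameters shared by participant and coordinator
p, q = 293, 433
N = p * q
N2 = N * N
g = N + 1

def enc(m, r):
    # Enc(P, m, r) = g^m * r^N mod N^2
    return (pow(g, m, N2) * pow(r, N, N2)) % N2

# --- participant side ---
m = 9                                        # model training parameter
r1 = 17                                      # first verification random parameter
x1, x2 = 3, 5                                # preset verification challenge parameters
h_m = enc(m, r1)                             # encryption model parameter
n_v = m * x1 + m * x2                        # verification model parameter n
r2 = (pow(r1, x1, N) * pow(r1, x2, N)) % N   # second verification random parameter

# --- coordinator side ---
first_proof = (pow(h_m, x1, N2) * pow(h_m, x2, N2)) % N2  # from ciphertext only
second_proof = enc(n_v, r2)                               # from verification params
assert first_proof == second_proof           # honest participant passes

# A false encryption model parameter (different plaintext) fails the check
h_forged = enc(100, r1)
forged_proof = (pow(h_forged, x1, N2) * pow(h_forged, x2, N2)) % N2
assert forged_proof != second_proof
```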
Step S22, based on each of the first zero knowledge proof results and each of the second zero knowledge proof results, respectively verifying whether each of the encryption model parameters is a dummy encryption model parameter, to obtain a zero knowledge verification result.
In this embodiment, whether each encryption model parameter is a false encryption model parameter is verified based on the corresponding first and second zero-knowledge proof results, and the zero-knowledge verification result is obtained. Specifically, for the first and second zero-knowledge proof results corresponding to each verification parameter, the following steps are executed:

The first zero-knowledge proof result is compared with the second zero-knowledge proof result to determine whether the two are consistent. If they are consistent, it is determined that the second device made no malicious modification when homomorphically encrypting the model training parameter, that is, the encryption model parameter is not a false encryption model parameter; if they are inconsistent, it is determined that the second device made a malicious modification when homomorphically encrypting the model training parameter, that is, the encryption model parameter is a false encryption model parameter.
Wherein the step of verifying whether each of the cryptographic model parameters is a dummy cryptographic model parameter based on each of the first zero knowledge proof results and each of the second zero knowledge proof results, respectively, comprises:
step S221, comparing the first zero knowledge proof result and the second zero knowledge proof result corresponding to each encryption model parameter respectively;
in this embodiment, the first zero-knowledge proof result and the second zero-knowledge proof result corresponding to each encryption model parameter are compared; specifically, the difference between the first and second zero-knowledge proof results corresponding to each encryption model parameter is calculated.
Step S222, if the first zero knowledge proof result and the second zero knowledge proof result corresponding to the cryptographic model parameter are not consistent, determining that the cryptographic model parameter is the false cryptographic model parameter;
in this embodiment, if the first and second zero-knowledge proof results corresponding to an encryption model parameter are inconsistent, the encryption model parameter is determined to be a false encryption model parameter. Specifically, if the difference is not 0, it is determined that the second device corresponding to the encryption model parameter maliciously encrypted the model training parameter; the encryption model parameter is therefore determined to be a false encryption model parameter and is given a false identifier marking it as such.
Step S223, if the first zero knowledge proof result and the second zero knowledge proof result corresponding to the encryption model parameter are consistent, determining that the encryption model parameter is not the false encryption model parameter.
In this embodiment, if the first and second zero-knowledge proof results corresponding to an encryption model parameter are consistent, the encryption model parameter is determined not to be a false encryption model parameter. Specifically, if the difference is 0, it is determined that the second device corresponding to the encryption model parameter did not maliciously encrypt the model training parameter; the encryption model parameter is therefore determined not to be a false encryption model parameter and is given a trusted identifier marking it as a trusted model parameter.
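Steps S221 to S223 amount to a difference test per parameter. A small illustrative helper (the function name and label strings are assumptions, not from the patent) could look like:

```python
def label_parameters(proof_pairs):
    """For each (first, second) zero-knowledge proof pair, mark the
    corresponding encryption model parameter trusted (difference 0)
    or false (difference nonzero)."""
    return ["trusted" if first - second == 0 else "false"
            for first, second in proof_pairs]

# Two honest parameters and one tampered one.
labels = label_parameters([(41, 41), (97, 97), (88, 13)])
assert labels == ["trusted", "trusted", "false"]
```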
And step S30, coordinating each second device to perform federated learning modeling based on the zero knowledge verification result and each encryption model parameter.
In this embodiment, it should be noted that the zero-knowledge verification result includes a calibration identifier corresponding to each encryption model parameter, where the calibration identifier is an identifier for identifying whether the encryption model parameter is a false encryption model parameter.
Each second device is coordinated to perform federated learning modeling based on the zero-knowledge verification result and each encryption model parameter. Specifically, based on each calibration identifier, the false encryption model parameters are eliminated from the encryption model parameters to obtain the trusted model parameters; the trusted model parameters are then aggregated to obtain an aggregation parameter, and each second device is coordinated to perform federated learning modeling based on the aggregation parameter.
The step of coordinating each second device to perform federated learning modeling based on the zero-knowledge verification result and each encryption model parameter includes:
step S31, based on the zero-knowledge verification result, eliminating the false encryption model parameters from the encryption model parameters to obtain the credible model parameters;
in this embodiment, it should be noted that after finding a false encryption model parameter, the coordinator may penalize the malicious party corresponding to it according to a preset incentive mechanism, or revoke that party's qualification to participate in subsequent federated learning modeling.
Step S32, performing aggregation processing on each of the trusted model parameters to obtain aggregated parameters, and feeding back the aggregated parameters to each of the second devices, so that each of the second devices updates its own local training model until the local training model reaches a preset training end condition.
In this embodiment, each trusted model parameter is aggregated to obtain an aggregation parameter, and the aggregation parameter is fed back to each second device so that each second device updates its local training model until the local training model reaches a preset training end condition. Specifically, the trusted model parameters are aggregated based on a preset aggregation processing rule, where the preset aggregation processing rule includes weighted averaging, summation, and the like, to obtain the aggregation parameter, which is then sent to each second device. Each second device decrypts the aggregation parameter based on the participant private key corresponding to the participant public key to obtain a decrypted aggregation parameter, and updates the local training model held by its own party based on the decrypted aggregation parameter to obtain an updated local training model. Whether the updated local training model reaches the preset training end condition is then judged: if it does, the federated learning modeling task is complete; if it does not, the local training model is iteratively trained again, and when it reaches the preset iteration threshold, the model training parameters of the local training model are re-obtained, re-encrypted, and sent to the coordinator for a new round of federated training, until the local training model reaches the preset training end condition. The training end condition includes reaching a preset maximum number of iterations, convergence of the loss function corresponding to the local training model, and the like.
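The aggregation-by-summation case can be sketched under the same toy Paillier assumptions (multiplying ciphertexts adds plaintexts, so the coordinator can aggregate without decrypting; the primes and parameter values are illustrative):

```python
import math

p, q = 47, 59                      # toy primes (illustration only)
N = p * q                          # Paillier public key
N2 = N * N
LAM = math.lcm(p - 1, q - 1)
MU = pow(LAM, -1, N)               # valid because the generator is N + 1

def enc(m, r):
    """Additively homomorphic encryption: (N+1)^m * r^N mod N^2."""
    return pow(N + 1, m, N2) * pow(r, N, N2) % N2

def dec(c):
    """Standard Paillier decryption with g = N + 1."""
    return (pow(c, LAM, N2) - 1) // N * MU % N

# Trusted model parameters from three honest second devices
# (false encryption model parameters already eliminated).
params = [40, 55, 62]
ciphertexts = [enc(m, r) for m, r in zip(params, [2, 3, 5])]

# Coordinator aggregates: the ciphertext product decrypts to the plaintext sum.
agg = 1
for c in ciphertexts:
    agg = agg * c % N2
assert dec(agg) == sum(params)
```

For weighted averaging, each ciphertext would instead be raised to an integer weight before multiplication, with the division by the total weight performed after decryption.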
Further, the local training model includes a risk control model, where the risk control model is a machine learning model for evaluating a user's loan risk. When a malicious party exists among the participants, aggregating the false encryption model parameters sent by the malicious party with the encryption model parameters sent by the normal participants yields an erroneous aggregation parameter that differs greatly from the accurate aggregation parameter, and each second device updating the risk control model based on the erroneous aggregation parameter reduces the accuracy with which the risk control model evaluates the user's loan risk. Based on the federated learning modeling method in the present application, the false encryption model parameters can be screened out and eliminated from the encryption model parameters sent by the participants to obtain trusted encryption model parameters, so that throughout the federated learning modeling process the risk control model is always updated based on aggregation parameters obtained by aggregating the trusted encryption model parameters. The risk control model's assessment of the user's loan risk therefore becomes more accurate; that is, the loan risk assessment accuracy of the risk control model is improved.
In this embodiment, the encryption model parameters sent by each second device and the verification parameters corresponding to them are received; zero-knowledge verification is performed on each encryption model parameter based on each verification parameter to determine the false encryption model parameters and obtain a zero-knowledge verification result; and each second device is then coordinated to perform federated learning modeling based on the zero-knowledge verification result. That is, after the encryption model parameters and their corresponding verification parameters are received from each second device, zero-knowledge verification based on the verification parameters determines the false encryption model parameters and yields the zero-knowledge verification result, whereupon the false encryption model parameters can be removed from the encryption model parameters and each second device coordinated to perform federated learning modeling.
That is, this embodiment provides a method for determining the false encryption model parameters among the local model parameters based on zero-knowledge proof. When a malicious participant provides false encryption model parameters during training, they can be accurately identified and eliminated, which avoids performing federated learning modeling on encryption model parameters mixed with false ones. This improves the overall quality of the federated model obtained through federated learning modeling, improves the efficiency and accuracy of federated learning modeling, and thereby solves the technical problems of low efficiency and poor accuracy in federated learning modeling.
Further, referring to fig. 2, based on the first embodiment in the present application, in another embodiment of the present application, the verification parameters include a verification model parameter and a verification random parameter,
the step of calculating a first zero knowledge proof result and a second zero knowledge proof result corresponding to each verification parameter respectively includes:
step S211, based on preset verification challenge parameters, performing validity verification on each encryption model parameter to obtain each first zero knowledge verification result;
in this embodiment, it should be noted that the preset verification challenge parameters include a first verification challenge parameter and a second verification challenge parameter, and the encryption model parameters include a first encryption model parameter and a second encryption model parameter. The first encryption model parameter is the encryption parameter obtained after the second device homomorphically encrypts the previous model parameter, and the second encryption model parameter is the encryption parameter obtained after the second device homomorphically encrypts the current model parameter. The current model parameter is the model parameter extracted when the local training model reaches the preset training iteration threshold in the current round of federation, and the previous model parameter is based on the rounds of federation preceding the current round; for example, the historical model parameters of the previous three rounds of federation are taken and weighted-averaged to obtain the previous model parameter. For example, assume the current model parameter is m and the previous model parameter is m0. Then the first encryption model parameter is h0 = Enc(P, m0, r1), where P is the participant public key and r1 is the first verification random parameter, and the second encryption model parameter is hm = Enc(P, m, r2), where r2 is the second verification random parameter.
Based on preset verification challenge parameters, performing validity verification on each encryption model parameter to obtain each first zero knowledge verification result, and specifically, executing the following steps for each encryption model parameter:
based on the first verification challenge parameter and the second verification challenge parameter, respectively performing power operations on the first encryption model parameter and the second encryption model parameter and multiplying the results to obtain the first zero-knowledge verification result. For example, assume the first encryption model parameter h0 = Enc(P, m0, r1), the second encryption model parameter hm = Enc(P, m, r2), the first verification challenge parameter x1, and the second verification challenge parameter x2. Then the first zero-knowledge proof result is

h0^x1 * hm^x2.

Based on the properties of the homomorphic encryption algorithm, this gives

h0^x1 * hm^x2 = Enc(P, m0*x1 + m*x2, r1^x1 * r2^x2),

where P is the participant public key, x1 is the first verification challenge parameter, x2 is the second verification challenge parameter, r1 is the first verification random parameter, r2 is the second verification random parameter, m is the current model parameter, and m0 is the previous model parameter.
The preset verification challenge parameters comprise a first verification challenge parameter and a second verification challenge parameter;
the step of performing validity verification on each encryption model parameter based on a preset verification challenge parameter to obtain each first zero knowledge verification result comprises:
step A10, performing exponentiation operation on the first verification challenge parameter and each encryption model parameter respectively to obtain a first exponentiation operation result corresponding to each encryption model parameter;
in this embodiment, power operations are performed with the first verification challenge parameter on each encryption model parameter to obtain the first power operation result corresponding to each encryption model parameter. Specifically, for each encryption model parameter, the following step is executed: perform a power operation on the first encryption model parameter based on the first verification challenge parameter to obtain the first power operation result. For example, assuming the first verification challenge parameter is x and the first encryption model parameter is h, the first power operation result is h^x.
Step A20, performing exponentiation operation on the second verification challenge parameter and each encryption model parameter respectively to obtain a second exponentiation operation result corresponding to each encryption model parameter;
in this embodiment, the second verification challenge parameter and each encryption model parameter are respectively subjected to exponentiation operation to obtain a second exponentiation operation result corresponding to each encryption model parameter, and specifically, for each encryption model parameter, the following steps are performed: and performing power operation on the second encryption model parameter based on the second verification challenge parameter to obtain a second power operation result.
Step a30, generating each of the first zero knowledge verification results based on each of the first exponentiation results and each of the second exponentiation results.
In this embodiment, each of the first zero knowledge verification results is generated based on each of the first exponentiation results and each of the second exponentiation results, specifically, a product of the first exponentiation result and the second exponentiation result is found, and the product is taken as the first zero knowledge verification result.
Step S212, based on a preset coordinator public key and each verification random parameter, performing encryption processing on each verification model parameter to obtain each second zero knowledge verification result.
In this embodiment, it should be noted that if each participant is a legitimate participant, the participant public key is consistent with the preset coordinator public key. The verification random parameters include a third verification random parameter, which is calculated from the first verification random parameter, the second verification random parameter, the first verification challenge parameter, and the second verification challenge parameter. For example, assuming the first verification random parameter is r1, the second verification random parameter is r2, the first verification challenge parameter is x1, and the second verification challenge parameter is x2, the third verification random parameter is

r3 = r1^x1 * r2^x2.
Additionally, the verification model parameter is calculated from the first verification challenge parameter, the second verification challenge parameter, the current model parameter, and the previous model parameter. For example, assuming the first verification challenge parameter is x1, the second verification challenge parameter is x2, the current model parameter is m, and the previous model parameter is m0, the verification model parameter is n = m0*x1 + m*x2.
Based on a preset coordinator public key and each verification random parameter, performing encryption processing on each verification model parameter to obtain each second zero knowledge verification result, and specifically, executing the following steps for each verification model parameter:
homomorphic encryption is performed on the verification model parameter based on the preset coordinator public key and the third verification random parameter to obtain the second zero-knowledge verification result. For example, assume the third verification random parameter is

r3 = r1^x1 * r2^x2,

where the first verification random parameter is r1, the second verification random parameter is r2, the first verification challenge parameter is x1, the second verification challenge parameter is x2, and the verification model parameter is n = m0*x1 + m*x2. If the coordinator public key is P, then the second zero-knowledge verification result is

Enc(P, n, r3).
Further, if a participant has not maliciously encrypted its encryption model parameters (for example, by tampering with the encryption algorithm or with the encrypted parameters), the first zero-knowledge proof result is identical to the second zero-knowledge proof result, and the encryption model parameters provided by that participant are trusted model parameters. If a participant has maliciously encrypted its encryption model parameters, the first zero-knowledge proof result differs from the second zero-knowledge proof result, and the encryption model parameters provided by that participant are false encryption model parameters.
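Under the same toy Paillier assumptions as before, the whole second-embodiment check — first result h0^x1 * hm^x2 against second result Enc(P, n, r3) — can be sketched as follows (all numeric values are illustrative):

```python
P_KEY = 47 * 59                    # toy Paillier public key (illustration only)
N2 = P_KEY * P_KEY

def enc(m, r):
    """Additively homomorphic encryption: (n+1)^m * r^n mod n^2."""
    return pow(P_KEY + 1, m, N2) * pow(r, P_KEY, N2) % N2

m0, m = 100, 123                   # previous and current model parameters
r1, r2 = 2, 3                      # first and second verification random parameters
x1, x2 = 5, 7                      # first and second verification challenge parameters

h0 = enc(m0, r1)                   # first encryption model parameter
h_m = enc(m, r2)                   # second encryption model parameter

# First zero-knowledge verification result: h0^x1 * hm^x2.
first = pow(h0, x1, N2) * pow(h_m, x2, N2) % N2

# Verification material supplied by the participant.
n_param = m0 * x1 + m * x2                      # verification model parameter n
r3 = pow(r1, x1, N2) * pow(r2, x2, N2) % N2     # third verification random parameter

# Second zero-knowledge verification result; honest parties make them coincide.
second = enc(n_param, r3)
assert first == second

# Tampered randomness (malicious encryption) breaks the equality.
assert enc(n_param, r3 + 1) != first
```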
In this embodiment, validity verification is performed on each encryption model parameter based on the preset verification challenge parameters to obtain each first zero-knowledge verification result, and encryption processing is then performed on each verification model parameter based on the preset coordinator public key and each verification random parameter to obtain each second zero-knowledge verification result. That is, this embodiment provides a method for calculating the first and second zero-knowledge proof results; once they have been calculated, they need only be compared to determine whether an encryption model parameter is a false encryption model parameter. This lays the foundation for determining the false encryption model parameters among the encryption model parameters, and hence for solving the technical problems of low efficiency and poor accuracy in federated learning modeling.
Further, referring to fig. 3, based on the first embodiment and the second embodiment in the present application, in another embodiment of the present application, the federal learning modeling method is applied to a second device, and the federal learning modeling method includes:
step B10, obtaining a model training parameter and a first verification random parameter, and encrypting the model training parameter based on the first verification random parameter and a preset public key to obtain an encrypted model parameter;
in this embodiment, the federated learning modeling includes at least one round of federation. In each round, the second device iteratively trains the local training model until a preset iteration threshold is reached, sends the model parameters of the local training model to the first device, receives the aggregation parameters fed back by the first device based on those model parameters, updates the local training model based on the aggregation parameters, and uses the local training model as the initial model of the next round, until the local training model reaches a preset training end condition. The preset training end condition includes reaching a maximum number of iterations, convergence of the loss function, and the like.
A model training parameter and a first verification random parameter are obtained, and the model training parameter is encrypted based on the first verification random parameter and a preset public key to obtain an encryption model parameter. Specifically, when the local training model reaches the preset iteration threshold, the model parameter of the local training model is extracted as the model training parameter and the first verification random parameter is obtained; homomorphic encryption processing is then performed on the model training parameter based on the first verification random parameter and the preset public key to obtain the encryption model parameter. For example, assuming the model training parameter is m, the first verification random parameter is r1, and the preset public key is P, the encryption model parameter is hm = Enc(P, m, r1), where Enc denotes the homomorphic encryption operation.
Wherein the model training parameters comprise current model parameters and auxiliary model parameters,
the step of obtaining model training parameters comprises:
step B11, performing iterative training on the local training model corresponding to the model training parameters until the local training model reaches a preset iteration threshold value, and acquiring the current model parameters of the local training model;
in this embodiment, it should be noted that the current model parameter is the iteration model parameter of the local training model when it reaches the preset iteration threshold in the current round of federation.
And step B12, acquiring the prior model parameters of the local training model, and generating the auxiliary model parameters based on the prior model parameters.
In this embodiment, a previous model parameter of the local training model is obtained, and the auxiliary model parameter is generated based on the previous model parameter. Specifically, the previous iteration model parameters of the rounds of federation preceding the current round are obtained and weighted-averaged to obtain the auxiliary model parameter. For example, if the previous iteration model parameters are a, b, and c, with a weight of 20% for a, 30% for b, and 50% for c, the auxiliary model parameter is m0 = a*20% + b*30% + c*50%.
Step B20, generating verification model parameters and second verification random parameters based on the first verification random parameters, the model training parameters and preset verification challenge parameters;
in this embodiment, it should be noted that, in a possible implementation, the preset verification challenge parameters may be calculated by the coordinator from the previous-round encryption model parameters sent by each participant together with a preset hash function. For example, if there are 10 participants, the corresponding 10 previous encryption model parameters are freely combined, and the n results of the free combinations are input into the preset hash function to obtain the verification challenge parameters x1, x2, …, xn. The specific generation manner and number of the preset verification challenge parameters x1, x2, …, xn are not limited.
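One concrete way to realize such challenge generation (the SHA-256 choice, the counter-based domain separation, and the modulus below are assumptions; the patent leaves the generation manner and number open):

```python
import hashlib

def derive_challenges(prev_ciphertexts, count, modulus=2**16):
    """Derive verification challenge parameters x1..x_count by hashing the
    previous round's encryption model parameters together with a counter."""
    challenges = []
    for i in range(count):
        h = hashlib.sha256(str(i).encode())          # domain-separate each xi
        for c in prev_ciphertexts:
            h.update(c.to_bytes((c.bit_length() + 7) // 8 or 1, "big"))
        # Reduce the digest to a small nonzero challenge.
        challenges.append(int.from_bytes(h.digest(), "big") % modulus or 1)
    return challenges

prev = [69984, 123456, 777]        # previous-round ciphertexts (toy values)
x = derive_challenges(prev, 2)
# Deterministic: every party recomputes the same challenges from public data.
assert x == derive_challenges(prev, 2)
assert all(1 <= xi < 2**16 for xi in x)
```

Because the challenges are a deterministic function of already-published ciphertexts, no party can bias them after committing to its encrypted parameters.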
The verification model parameter and the second verification random parameter are generated based on the first verification random parameter, the model training parameter, and the preset verification challenge parameters. Specifically, a power operation is performed on the first verification random parameter with the preset verification challenge parameters to obtain the second verification random parameter, and the verification model parameter is generated based on the model training parameter and the preset verification challenge parameters. For example, assuming the first verification random parameter is r1, the model training parameter is m, and the preset verification challenge parameters are x1 and x2, the verification model parameter is n = m*x1 + m*x2 and the second verification random parameter is

r2 = r1^(x1+x2).
Step B30, sending the encryption model parameter, the verification model parameter and the second verification random parameter to a first device for the first device to perform zero knowledge verification to obtain a zero knowledge verification result;
in this embodiment, the encryption model parameter, the verification model parameter, and the second verification random parameter are sent to the first device for zero-knowledge verification. Specifically, they are sent to the first device associated with the second device, and the first device calculates the first and second zero-knowledge proof results from them, determines whether the encryption model parameter is a false encryption model parameter based on those results, obtains a determination result, and records it in the zero-knowledge verification result; the zero-knowledge verification result thus contains a determination result corresponding to each second device.
And step B40, receiving the aggregation parameters fed back by the first equipment based on the zero knowledge verification result and the encryption model parameters, and updating the local training model corresponding to the model training parameters based on the aggregation parameters until the local training model reaches the preset training end condition.
In this embodiment, the aggregation parameter fed back by the first device based on the zero-knowledge verification result and the encryption model parameters is received, and the local training model corresponding to the model training parameters is updated based on the aggregation parameter until the local training model reaches the preset training end condition. Specifically, after obtaining the zero-knowledge proof result, the first device removes the false encryption model parameters from the encryption model parameters sent by the second devices to obtain the trusted model parameters, performs aggregation processing (including summation, weighted averaging, and the like) on the trusted model parameters to obtain the aggregation parameter, and feeds the aggregation parameter back to each second device. After receiving the aggregation parameter, the second device decrypts it based on the preset private key corresponding to the preset public key to obtain the decrypted aggregation parameter, updates the local training model based on the decrypted aggregation parameter, and uses the updated local training model as the initial model of the next round of federation, until the local training model reaches the preset training end condition. The training end condition includes reaching the maximum number of iterations, convergence of the loss function, and the like.
In this embodiment, the model training parameter and the first verification random parameter are obtained, and the model training parameter is encrypted based on the first verification random parameter and the preset public key to obtain the encryption model parameter. The verification model parameter and the second verification random parameter are generated based on the first verification random parameter, the model training parameter, and the preset verification challenge parameters. The encryption model parameter, the verification model parameter, and the second verification random parameter are then sent to the first device for zero-knowledge verification to obtain the zero-knowledge verification result. Finally, the aggregation parameter fed back by the first device based on the zero-knowledge verification result and the encryption model parameters is received, and the local training model corresponding to the model training parameters is updated based on the aggregation parameter until the local training model reaches the preset training end condition.
That is, this embodiment provides a federated learning modeling method based on zero-knowledge proof: when a model training parameter is encrypted into an encryption model parameter, a verification model parameter and a second verification random parameter are generated at the same time, and the three are sent to the first device so that it can perform zero-knowledge verification and obtain a zero-knowledge verification result. The first device thereby determines and eliminates the false encryption model parameters among the second devices, so the aggregation parameter received by a second device is aggregated by the first device from trusted encryption model parameters only, and the local training model is updated based on that aggregation parameter to complete federated learning modeling. This avoids updating the local training model with an aggregation parameter contaminated by false encryption model parameters, a situation in which the local training model struggles to reach the preset training end condition and its accuracy is degraded. The efficiency and accuracy of federated learning modeling are thereby improved, solving the technical problems of low efficiency and poor accuracy in federated learning modeling.
Referring to fig. 4, fig. 4 is a schematic device structure diagram of a hardware operating environment according to an embodiment of the present application.
As shown in fig. 4, the federal learning modeling apparatus may include: a processor 1001, such as a CPU, a memory 1005, and a communication bus 1002. The communication bus 1002 is used for realizing connection communication between the processor 1001 and the memory 1005. The memory 1005 may be a high-speed RAM memory or a non-volatile memory (e.g., a magnetic disk memory). The memory 1005 may alternatively be a memory device separate from the processor 1001 described above.
Optionally, the federal learning modeling apparatus may further include a user interface, a network interface, a camera, RF (Radio Frequency) circuits, a sensor, an audio circuit, a WiFi module, and the like. The user interface may comprise a display screen (Display) and an input sub-module such as a keyboard (Keyboard), and may optionally also comprise a standard wired interface and a wireless interface. The network interface may optionally include a standard wired interface and a wireless interface (e.g., a WI-FI interface).
Those skilled in the art will appreciate that the federated learning modeling apparatus architecture shown in FIG. 4 does not constitute a limitation of the federated learning modeling apparatus, and may include more or fewer components than shown, or some components in combination, or a different arrangement of components.
As shown in fig. 4, the memory 1005, as a kind of computer storage medium, may include an operating system, a network communication module, and a federal learning modeling method program. The operating system is a program that manages and controls the hardware and software resources of the federal learning modeling device, and supports the operation of the federal learning modeling method program as well as other software and/or programs. The network communication module is used for realizing communication among the components within the federal learning modeling device and communication with other hardware and software in the federal learning modeling system.
In the federal learning modeling apparatus shown in fig. 4, the processor 1001 is configured to execute the program of the federal learning modeling method stored in the memory 1005, and implement the steps of any one of the above-mentioned federal learning modeling methods.
The specific implementation of the federal learning modeling device of the application is basically the same as that of each embodiment of the federal learning modeling method, and is not described herein again.
The embodiment of the present application further provides a federal learning modeling device, which is applied to the first device, and the federal learning modeling device includes:
the receiving module is used for receiving the encryption model parameters sent by each second device and the verification parameters corresponding to the encryption model parameters;
the zero knowledge verification module is used for respectively performing zero knowledge verification on each encryption model parameter based on each verification parameter so as to determine a false encryption model parameter in each encryption model parameter and obtain a zero knowledge verification result;
and the coordination module is used for coordinating each second device to carry out federated learning modeling based on the zero knowledge verification result and each encryption model parameter.
Optionally, the coordination module comprises:
the eliminating submodule is used for eliminating the false encryption model parameters from the encryption model parameters based on the zero-knowledge verification result to obtain the credible model parameters;
and the aggregation sub-module is used for aggregating the credible model parameters to obtain aggregation parameters, and feeding the aggregation parameters back to the second equipment respectively so that the second equipment can update the local training models thereof until the local training models reach the preset training end conditions.
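A minimal sketch of these two sub-modules, assuming (purely for illustration) that parameters arrive as plain numeric vectors keyed by device id and that aggregation is a FedAvg-style component-wise average; the patent does not fix the aggregation rule:

```python
# Sketch of the coordination step: drop parameters flagged false by the
# zero-knowledge verification, then aggregate the trusted remainder.
# Dictionary bookkeeping and plain averaging are assumptions for illustration.

def coordinate(encrypted_params, zk_results):
    """encrypted_params: {device_id: parameter vector}
    zk_results: {device_id: True if trusted, False if a false parameter}."""
    trusted = {d: vec for d, vec in encrypted_params.items() if zk_results[d]}
    if not trusted:
        raise ValueError("no trusted encryption model parameters to aggregate")
    n = len(trusted)
    dim = len(next(iter(trusted.values())))
    # Component-wise average of the trusted parameter vectors.
    aggregation = [sum(vec[i] for vec in trusted.values()) / n for i in range(dim)]
    # Feed the same aggregation parameter back to every trusted second device.
    return {d: aggregation for d in trusted}
```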
Optionally, the zero knowledge verification module comprises:
the calculation submodule is used for respectively calculating a first zero knowledge proof result and a second zero knowledge proof result corresponding to each verification parameter;
and the zero knowledge verification submodule is used for respectively verifying whether each encryption model parameter is a false encryption model parameter or not based on each first zero knowledge proof result and each second zero knowledge proof result so as to obtain a zero knowledge verification result.
Optionally, the computation submodule includes:
the validity verification unit is used for verifying the validity of each encryption model parameter based on a preset verification challenge parameter to obtain each first zero knowledge verification result;
and the encryption unit is used for encrypting each verification model parameter based on a preset coordinator public key and each verification random parameter to obtain each second zero knowledge verification result.
Optionally, the validity verifying unit includes:
the first power operation subunit is configured to perform power operation on the first verification challenge parameter and each encryption model parameter, respectively, to obtain a first power operation result corresponding to each encryption model parameter;
a second exponentiation subunit, configured to perform exponentiation operation on the second verification challenge parameter and each encryption model parameter, respectively, to obtain a second exponentiation operation result corresponding to each encryption model parameter;
a generating subunit, configured to generate each first zero knowledge verification result based on each first power operation result and each second power operation result.
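The three subunits above can be sketched as follows, assuming the generating subunit combines the two exponentiation results by modular multiplication (the combination rule and the modulus are assumptions; the claims leave both open):

```python
# Sketch of the validity-verification unit: two exponentiation subunits and a
# generating subunit. The modulus below is an assumed demonstration value.

P = 2 ** 255 - 19  # assumed prime modulus for the demonstration

def first_power(enc, c1):
    """First power operation subunit: raise enc to the first challenge parameter."""
    return pow(enc, c1, P)

def second_power(enc, c2):
    """Second power operation subunit: raise enc to the second challenge parameter."""
    return pow(enc, c2, P)

def first_zk_result(enc, c1, c2):
    """Generating subunit: combine both exponentiation results into the
    first zero knowledge verification result."""
    return first_power(enc, c1) * second_power(enc, c2) % P
```

Since `pow(enc, c1, P) * pow(enc, c2, P) % P` equals `enc**(c1 + c2) % P`, the combined result can later be compared against a re-encryption of the verification model parameter.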
Optionally, the zero knowledge verification sub-module includes:
a comparison unit, configured to compare the first zero knowledge proof result and the second zero knowledge proof result corresponding to each encryption model parameter respectively;
a first determining unit, configured to determine that an encryption model parameter is a false encryption model parameter if the first zero knowledge proof result and the second zero knowledge proof result corresponding to that encryption model parameter are inconsistent;
a second determining unit, configured to determine that an encryption model parameter is not a false encryption model parameter if the first zero knowledge proof result and the second zero knowledge proof result corresponding to that encryption model parameter are consistent.
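These three units amount to a per-device equality check; a sketch, with dictionary-based bookkeeping assumed for illustration:

```python
# Sketch of the zero-knowledge verification sub-module: compare the first and
# second proof results per device and flag mismatches as false parameters.

def classify(first_proofs, second_proofs):
    """Both arguments: {device_id: proof result}.
    Returns {device_id: True if the parameter is judged false}."""
    return {d: first_proofs[d] != second_proofs[d] for d in first_proofs}
```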
The specific implementation of the federal learning modeling apparatus of the application is basically the same as that of each embodiment of the federal learning modeling method, and is not described herein again.
In order to achieve the above object, this embodiment further provides a federal learning modeling apparatus, where the federal learning modeling apparatus is applied to a second device, and the federal learning modeling apparatus includes:
the encryption module is used for acquiring a model training parameter and a first verification random parameter, and encrypting the model training parameter based on the first verification random parameter and a preset public key to acquire an encrypted model parameter;
the generation module is used for generating verification model parameters and second verification random parameters based on the first verification random parameters, the model training parameters and preset verification challenge parameters;
the sending module is used for sending the encryption model parameter, the verification model parameter and the second verification random parameter to a first device, so that the first device performs zero knowledge verification to obtain a zero knowledge verification result;
and the model updating module is used for receiving the aggregation parameters fed back by the first equipment based on the zero knowledge verification result and the encryption model parameters, and updating the local training model corresponding to the model training parameters based on the aggregation parameters until the local training model reaches a preset training ending condition.
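The model updating module's loop can be sketched as below; the stopping rule (parameter movement below a tolerance, standing in for the "preset training end condition") and the names are assumptions:

```python
# Sketch of the second-device update loop: replace local parameters with the
# aggregation parameter each round until the preset end condition is met.
# The convergence test below is an assumed instance of that end condition.

def update_until_converged(local_params, rounds, tol=1e-6):
    """rounds: iterable of aggregation parameter vectors fed back by the first device."""
    for aggregation in rounds:
        moved = max(abs(a - b) for a, b in zip(aggregation, local_params))
        local_params = list(aggregation)   # update the local training model
        if moved < tol:                    # preset training end condition reached
            break
    return local_params
```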
Optionally, the encryption module includes:
the obtaining submodule is used for carrying out iterative training on a local training model corresponding to the model training parameters until the local training model reaches a preset iteration threshold value, and obtaining the current model parameters of the local training model;
and the generation submodule is used for acquiring the previous model parameters of the local training model and generating the auxiliary model parameters based on the previous model parameters.
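The text does not specify how the auxiliary model parameter is derived from the prior model parameter; one plausible instantiation, shown only as an assumption, takes it as the per-component change between the previous and current snapshots:

```python
# Assumed instantiation of the generation submodule: the auxiliary model
# parameter taken as the delta from the previous to the current parameters.

def generate_auxiliary(previous_params, current_params):
    return [c - prev for prev, c in zip(previous_params, current_params)]
```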
The specific implementation of the federal learning modeling apparatus of the application is basically the same as that of each embodiment of the federal learning modeling method, and is not described herein again.
The above description is only a preferred embodiment of the present application, and not intended to limit the scope of the present application, and all modifications of equivalent structures and equivalent processes, which are made by the contents of the specification and the drawings, or which are directly or indirectly applied to other related technical fields, are included in the scope of the present application.

Claims (10)

1. A federated learning modeling method, applied to a first device, wherein the federated learning modeling method comprises the following steps:
receiving encryption model parameters sent by each second device and verification parameters corresponding to the encryption model parameters;
based on each verification parameter, respectively carrying out zero knowledge verification on each encryption model parameter so as to determine a false encryption model parameter in each encryption model parameter and obtain a zero knowledge verification result;
and coordinating each second device to carry out federated learning modeling based on the zero knowledge verification result and each encryption model parameter.
2. The federal learning modeling method as claimed in claim 1, wherein the step of coordinating each second device to carry out federated learning modeling based on the zero knowledge verification result and each encryption model parameter comprises:
based on the zero-knowledge verification result, eliminating the false encryption model parameters from the encryption model parameters to obtain the credible model parameters;
and aggregating the credible model parameters to obtain aggregated parameters, and feeding the aggregated parameters back to the second equipment respectively so that the second equipment can update the local training models thereof until the local training models reach the preset training ending condition.
3. The federal learning modeling method as claimed in claim 1, wherein the step of performing zero knowledge verification on each encryption model parameter based on each verification parameter, respectively, to determine a false encryption model parameter in each encryption model parameter and obtain a zero knowledge verification result comprises:
respectively calculating a first zero knowledge proof result and a second zero knowledge proof result corresponding to each verification parameter;
and respectively verifying whether each encryption model parameter is a false encryption model parameter or not based on each first zero knowledge proof result and each second zero knowledge proof result to obtain a zero knowledge verification result.
4. The federated learning modeling method of claim 3, wherein the verification parameters include verification model parameters and verification random parameters,
the step of calculating a first zero knowledge proof result and a second zero knowledge proof result corresponding to each verification parameter respectively includes:
carrying out validity verification on each encryption model parameter based on a preset verification challenge parameter to obtain each first zero knowledge verification result;
and encrypting each verification model parameter based on a preset coordinator public key and each verification random parameter to obtain each second zero knowledge verification result.
5. The federal learning modeling method of claim 4, wherein the preset verification challenge parameters include a first verification challenge parameter and a second verification challenge parameter;
the step of performing validity verification on each encryption model parameter based on a preset verification challenge parameter to obtain each first zero knowledge verification result comprises:
performing exponentiation operation on the first verification challenge parameter and each encryption model parameter respectively to obtain a first exponentiation operation result corresponding to each encryption model parameter;
performing exponentiation operation on the second verification challenge parameter and each encryption model parameter respectively to obtain a second exponentiation operation result corresponding to each encryption model parameter;
generating each of the first zero knowledge verification results based on each of the first exponentiation results and each of the second exponentiation results.
6. The federal learning modeling method as claimed in claim 3, wherein the step of verifying whether each encryption model parameter is a false encryption model parameter based on each first zero knowledge proof result and each second zero knowledge proof result, respectively, comprises:
comparing the first zero knowledge proof result and the second zero knowledge proof result corresponding to each encryption model parameter respectively;
if the first zero knowledge proof result and the second zero knowledge proof result corresponding to the encryption model parameter are not consistent, determining that the encryption model parameter is the false encryption model parameter;
and if the first zero knowledge proof result and the second zero knowledge proof result corresponding to the encryption model parameter are consistent, judging that the encryption model parameter is not the false encryption model parameter.
7. A federated learning modeling method, applied to a second device, wherein the federated learning modeling method comprises the following steps:
acquiring a model training parameter and a first verification random parameter, and encrypting the model training parameter based on the first verification random parameter and a preset public key to obtain an encrypted model parameter;
generating a verification model parameter and a second verification random parameter based on the first verification random parameter, the model training parameter and a preset verification challenge parameter;
sending the encryption model parameter, the verification model parameter and the second verification random parameter to a first device, so that the first device performs zero knowledge verification to obtain a zero knowledge verification result;
and receiving an aggregation parameter fed back by the first equipment based on the zero knowledge verification result and the encryption model parameter, and updating a local training model corresponding to the model training parameter based on the aggregation parameter until the local training model reaches a preset training end condition.
8. The federated learning modeling method of claim 7, wherein the model training parameters include current model parameters and auxiliary model parameters,
the step of obtaining model training parameters comprises:
performing iterative training on a local training model corresponding to the model training parameters until the local training model reaches a preset iteration threshold value, and acquiring the current model parameters of the local training model;
and acquiring the prior model parameters of the local training model, and generating the auxiliary model parameters based on the prior model parameters.
9. The federal learning modeling apparatus is characterized in that the federal learning modeling apparatus includes: a memory, a processor, and a program stored on the memory for implementing the federated learning modeling method,
the memory is used for storing a program for realizing the federal learning modeling method;
the processor is configured to execute a program implementing the federal learning modeling method to implement the steps of the federal learning modeling method as claimed in any of claims 1 to 6 or 7 to 8.
10. A readable storage medium having stored thereon a program for implementing a federal learning modeling method, the program being executed by a processor to implement the steps of the federal learning modeling method as claimed in any of claims 1 to 6 or 7 to 8.
CN202010445868.8A 2020-05-22 2020-05-22 Federal learning modeling method, device and readable storage medium Active CN111598254B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202010445868.8A CN111598254B (en) 2020-05-22 2020-05-22 Federal learning modeling method, device and readable storage medium
PCT/CN2020/135032 WO2021232754A1 (en) 2020-05-22 2020-12-09 Federated learning modeling method and device, and computer-readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010445868.8A CN111598254B (en) 2020-05-22 2020-05-22 Federal learning modeling method, device and readable storage medium

Publications (2)

Publication Number Publication Date
CN111598254A true CN111598254A (en) 2020-08-28
CN111598254B CN111598254B (en) 2021-10-08

Family

ID=72189770

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010445868.8A Active CN111598254B (en) 2020-05-22 2020-05-22 Federal learning modeling method, device and readable storage medium

Country Status (2)

Country Link
CN (1) CN111598254B (en)
WO (1) WO2021232754A1 (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114239857B (en) * 2021-12-29 2022-11-22 湖南工商大学 Data right determining method, device, equipment and medium based on federal learning
CN114800545B (en) * 2022-01-18 2023-10-27 泉州华中科技大学智能制造研究院 Robot control method based on federal learning
CN114466358B (en) * 2022-01-30 2023-10-31 全球能源互联网研究院有限公司 User identity continuous authentication method and device based on zero trust
CN114760023A (en) * 2022-04-19 2022-07-15 光大科技有限公司 Model training method and device based on federal learning and storage medium
CN115174046B (en) * 2022-06-10 2024-04-30 湖北工业大学 Federal learning bidirectional verifiable privacy protection method and system in vector space
CN115292738B (en) * 2022-10-08 2023-01-17 豪符密码检测技术(成都)有限责任公司 Method for detecting security and correctness of federated learning model and data
CN117972802A (en) * 2024-03-29 2024-05-03 苏州元脑智能科技有限公司 Field programmable gate array chip, aggregation method, device, equipment and medium

Citations (12)

Publication number Priority date Publication date Assignee Title
CN109635462A (en) * 2018-12-17 2019-04-16 深圳前海微众银行股份有限公司 Model parameter training method, device, equipment and medium based on federation's study
KR20190103090A (en) * 2019-08-15 2019-09-04 엘지전자 주식회사 Method and apparatus for learning a model to generate poi data using federated learning
CN110378487A (en) * 2019-07-18 2019-10-25 深圳前海微众银行股份有限公司 Laterally model parameter verification method, device, equipment and medium in federal study
CN110443375A (en) * 2019-08-16 2019-11-12 深圳前海微众银行股份有限公司 A kind of federation's learning method and device
CN110490335A (en) * 2019-08-07 2019-11-22 深圳前海微众银行股份有限公司 A kind of method and device calculating participant's contribution rate
US10490066B2 (en) * 2016-12-29 2019-11-26 X Development Llc Dynamic traffic control
WO2020029589A1 (en) * 2018-08-10 2020-02-13 深圳前海微众银行股份有限公司 Model parameter acquisition method and system based on federated learning, and readable storage medium
CN110874484A (en) * 2019-10-16 2020-03-10 众安信息技术服务有限公司 Data processing method and system based on neural network and federal learning
US20200092288A1 (en) * 2018-09-18 2020-03-19 Cyral Inc. Federated identity management for data repositories
CN110908893A (en) * 2019-10-08 2020-03-24 深圳逻辑汇科技有限公司 Sandbox mechanism for federal learning
CN110991655A (en) * 2019-12-17 2020-04-10 支付宝(杭州)信息技术有限公司 Method and device for processing model data by combining multiple parties
CN111178524A (en) * 2019-12-24 2020-05-19 中国平安人寿保险股份有限公司 Data processing method, device, equipment and medium based on federal learning

Family Cites Families (7)

Publication number Priority date Publication date Assignee Title
CN110263936B (en) * 2019-06-14 2023-04-07 深圳前海微众银行股份有限公司 Horizontal federal learning method, device, equipment and computer storage medium
CN110503207A (en) * 2019-08-28 2019-11-26 深圳前海微众银行股份有限公司 Federation's study credit management method, device, equipment and readable storage medium storing program for executing
CN110572253B (en) * 2019-09-16 2023-03-24 济南大学 Method and system for enhancing privacy of federated learning training data
CN110797124B (en) * 2019-10-30 2024-04-12 腾讯科技(深圳)有限公司 Model multiterminal collaborative training method, medical risk prediction method and device
CN110955907B (en) * 2019-12-13 2022-03-25 支付宝(杭州)信息技术有限公司 Model training method based on federal learning
CN110912713B (en) * 2019-12-20 2023-06-23 支付宝(杭州)信息技术有限公司 Method and device for processing model data by multi-party combination
CN111598254B (en) * 2020-05-22 2021-10-08 深圳前海微众银行股份有限公司 Federal learning modeling method, device and readable storage medium

Non-Patent Citations (3)

Title
ARJUN NITIN BHAGOJI ET AL.: "Analyzing Federated Learning through an Adversarial Lens", 《arXiv》 *
HE Yingzhe et al.: "机器学习***的隐私和安全问题综述" [Survey of privacy and security issues of machine learning ***], 《计算机研究与发展》 (Journal of Computer Research and Development) *
YAN Chunhui et al.: "基于区块链的安全投票***设计与实现" [Design and implementation of a blockchain-based secure voting ***], 《通信技术》 (Communications Technology) *

Cited By (27)

Publication number Priority date Publication date Assignee Title
WO2021232754A1 (en) * 2020-05-22 2021-11-25 深圳前海微众银行股份有限公司 Federated learning modeling method and device, and computer-readable storage medium
CN112132277A (en) * 2020-09-21 2020-12-25 平安科技(深圳)有限公司 Federal learning model training method and device, terminal equipment and storage medium
WO2021159753A1 (en) * 2020-09-21 2021-08-19 平安科技(深圳)有限公司 Federated learning model training method and apparatus, terminal device, and storage medium
CN112381000A (en) * 2020-11-16 2021-02-19 深圳前海微众银行股份有限公司 Face recognition method, device, equipment and storage medium based on federal learning
CN112434818A (en) * 2020-11-19 2021-03-02 脸萌有限公司 Model construction method, device, medium and electronic equipment
CN112434818B (en) * 2020-11-19 2023-09-26 脸萌有限公司 Model construction method, device, medium and electronic equipment
WO2022108529A1 (en) * 2020-11-19 2022-05-27 脸萌有限公司 Model construction method and apparatus, and medium and electronic device
CN112446025A (en) * 2020-11-23 2021-03-05 平安科技(深圳)有限公司 Federal learning defense method and device, electronic equipment and storage medium
CN112434620A (en) * 2020-11-26 2021-03-02 新智数字科技有限公司 Scene character recognition method, device, equipment and computer readable medium
CN112434619A (en) * 2020-11-26 2021-03-02 新智数字科技有限公司 Case information extraction method, case information extraction device, case information extraction equipment and computer readable medium
CN112434619B (en) * 2020-11-26 2024-03-26 新奥新智科技有限公司 Case information extraction method, apparatus, device and computer readable medium
CN112434620B (en) * 2020-11-26 2024-03-01 新奥新智科技有限公司 Scene text recognition method, device, equipment and computer readable medium
CN112632636A (en) * 2020-12-23 2021-04-09 深圳前海微众银行股份有限公司 Method and device for proving and verifying ciphertext data comparison result
CN112632636B (en) * 2020-12-23 2024-06-04 深圳前海微众银行股份有限公司 Ciphertext data comparison result proving and verifying method and device
CN112860800A (en) * 2021-02-22 2021-05-28 深圳市星网储区块链有限公司 Trusted network application method and device based on block chain and federal learning
CN113111124A (en) * 2021-03-24 2021-07-13 广州大学 Block chain-based federal learning data auditing system and method
CN112949760B (en) * 2021-03-30 2024-05-10 平安科技(深圳)有限公司 Model precision control method, device and storage medium based on federal learning
CN112949760A (en) * 2021-03-30 2021-06-11 平安科技(深圳)有限公司 Model precision control method and device based on federal learning and storage medium
CN113420886A (en) * 2021-06-21 2021-09-21 平安科技(深圳)有限公司 Training method, device, equipment and storage medium for longitudinal federated learning model
CN113420886B (en) * 2021-06-21 2024-05-10 平安科技(深圳)有限公司 Training method, device, equipment and storage medium for longitudinal federal learning model
CN113435121A (en) * 2021-06-30 2021-09-24 平安科技(深圳)有限公司 Model training verification method, device, equipment and medium based on federal learning
CN113435121B (en) * 2021-06-30 2023-08-22 平安科技(深圳)有限公司 Model training verification method, device, equipment and medium based on federal learning
CN113849805A (en) * 2021-09-23 2021-12-28 国网山东省电力公司济宁供电公司 Mobile user credibility authentication method and device, electronic equipment and storage medium
CN115277197A (en) * 2022-07-27 2022-11-01 深圳前海微众银行股份有限公司 Model ownership verification method, electronic device, medium, and program product
CN115277197B (en) * 2022-07-27 2024-01-16 深圳前海微众银行股份有限公司 Model ownership verification method, electronic device, medium and program product
CN117575291A (en) * 2024-01-15 2024-02-20 湖南科技大学 Federal learning data collaborative management method based on edge parameter entropy
CN117575291B (en) * 2024-01-15 2024-05-10 湖南科技大学 Federal learning data collaborative management method based on edge parameter entropy

Also Published As

Publication number Publication date
CN111598254B (en) 2021-10-08
WO2021232754A1 (en) 2021-11-25

Similar Documents

Publication Publication Date Title
CN111598254B (en) Federal learning modeling method, device and readable storage medium
WO2020177392A1 (en) Federated learning-based model parameter training method, apparatus and device, and medium
WO2020015478A1 (en) Model-based prediction method and device
US11349648B2 (en) Pre-calculation device, method, computer-readable recording medium, vector multiplication device, and method
CN112749392B (en) Method and system for detecting abnormal nodes in federated learning
WO2013031414A1 (en) Signature verification device, signature verification method, program, and recording medium
CN108292341A (en) Method for the execution integrality for verifying the application in destination apparatus
US8260721B2 (en) Network resource access control methods and systems using transactional artifacts
CN111614679B (en) Federal learning qualification recovery method, device and readable storage medium
Yu et al. Identity‐Based Proxy Signcryption Protocol with Universal Composability
US20100306543A1 (en) Method of efficient secure function evaluation using resettable tamper-resistant hardware tokens
CN111027981A (en) Method and device for multi-party joint training of risk assessment model for IoT (Internet of things) machine
CN113965331B (en) Secret state prediction verification method, device, equipment and storage medium
CN115277010A (en) Identity authentication method, system, computer device and storage medium
Hu et al. Privacy-preserving combinatorial auction without an auctioneer
CN117134945A (en) Data processing method, system, device, computer equipment and storage medium
CN114221753B (en) Key data processing method and electronic equipment
CN115361196A (en) Service interaction method based on block chain network
CN113420886B (en) Training method, device, equipment and storage medium for longitudinal federal learning model
US20090300352A1 (en) Secure session identifiers
EP3917076A1 (en) A zero knowledge proof method for content engagement
Ren et al. SM9-based traceable and accountable access control for secure multi-user cloud storage
Eigner et al. Achieving optimal utility for distributed differential privacy using secure multiparty computation
CN109218016B (en) Data transmission method and device, server, computer equipment and storage medium
CN112769766B (en) Safe aggregation method and system for data of power edge internet of things based on federal learning

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant