WO2021228110A1 - Federated modeling method, device, equipment, and computer-readable storage medium - Google Patents

Federated modeling method, device, equipment, and computer-readable storage medium

Info

Publication number
WO2021228110A1
Authority
WO
WIPO (PCT)
Prior art keywords
model
trained
public
private
federated modeling
Prior art date
Application number
PCT/CN2021/093153
Other languages
French (fr)
Chinese (zh)
Inventor
张天豫
范力欣
吴锦和
Original Assignee
深圳前海微众银行股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳前海微众银行股份有限公司 filed Critical 深圳前海微众银行股份有限公司
Publication of WO2021228110A1 publication Critical patent/WO2021228110A1/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00Machine learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting

Definitions

  • This application relates to the field of federated learning, in particular to a federated modeling method, device, equipment and computer-readable storage medium.
  • At present, federated learning is mainly trained in the form of gradient sharing: models with the same structure share their local gradients to jointly train a global model. If a model's gradient is leaked or eavesdropped during gradient propagation, the chain rule and the leaked gradient can be used to recover the input data.
  • To prevent the gradient from being leaked or eavesdropped, the gradient is often protected by differential privacy, gradient quantization, or gradient clipping.
  • Differential privacy protects the gradient by adding a certain amount of random noise to the gradient to be propagated;
  • gradient quantization approximates the gradient with integer values, such as (0, 1) or (-1, 0, 1);
  • gradient clipping sets the gradient values at certain positions to 0. All three approaches perturb the gradient: if the perturbation is too large, the global model may fail to converge or lose accuracy, and if it is insufficient, the gradient is not effectively protected. The sketch below illustrates these three background techniques.
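  • For illustration only, the following sketch (not part of the original disclosure) shows these three background techniques applied to a gradient represented as a NumPy array; the function names and parameter values are assumptions chosen for the example.

```python
import numpy as np

def add_dp_noise(grad, sigma=0.01):
    # Differential-privacy-style protection: add random noise to the gradient
    # before it is shared; too much noise hurts convergence, too little
    # leaves the gradient exposed.
    return grad + np.random.normal(0.0, sigma, size=grad.shape)

def quantize(grad):
    # Gradient quantization: approximate each entry with an integer value
    # such as -1, 0, or 1 (here, its sign).
    return np.sign(grad)

def clip_to_zero(grad, clip_ratio=0.5):
    # Gradient clipping as described above: set the smallest-magnitude
    # entries (a fraction clip_ratio of all entries) to 0.
    flat = np.abs(grad).ravel()
    k = int(len(flat) * clip_ratio)
    if k >= len(flat):
        return np.zeros_like(grad)
    threshold = np.partition(flat, k)[k]
    return np.where(np.abs(grad) >= threshold, grad, 0.0)
```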
  • The main purpose of this application is to provide a federated modeling method, device, equipment, and computer-readable storage medium, aiming to solve the technical problem that existing federated learning struggles to balance gradient protection against model convergence and model accuracy.
  • the federated modeling method includes the following steps:
  • the first participant inputs the data to be trained into the public model in the model to be trained for model training, to obtain the first public model parameters corresponding to the public model, and inputs the data to be trained into the private model in the model to be trained for model training, to obtain the private model parameters corresponding to the private model;
  • the first public model parameters are sent to a coordinator, so that the coordinator determines and feeds back global model parameters based on the first public model parameters; and a target model is determined based on the private model parameters, the global model parameters, and the model to be trained.
  • the step of sending the first public model parameters to a coordinator, so that the coordinator determines and feeds back global model parameters based on the first public model parameters, includes:
  • sending the first public model parameters to the coordinator, where the coordinator obtains second public model parameters sent by a plurality of second participants, determines the global model parameters based on each second public model parameter and the first public model parameters, and feeds the global model parameters back to the first participant.
  • the first public model parameter includes a public gradient
  • the private model parameter includes a private gradient
  • the step in which the first participant inputs the data to be trained into the public model in the model to be trained for model training to obtain the first public model parameters corresponding to the public model, and inputs the data to be trained into the private model in the model to be trained for model training to obtain the private model parameters corresponding to the private model, includes:
  • the first participant inputs the data to be trained into the public model in the model to be trained for model training, to obtain the public loss function value corresponding to the data to be trained, and inputs the data to be trained into the private model in the model to be trained for model training, to obtain the private loss function value corresponding to the data to be trained;
  • the public gradient is determined based on the public loss function value, and the private gradient is determined based on the private loss function value.
  • the step of determining a target model based on the private model parameters, the global model parameters, and the model to be trained includes:
  • the private model in the model to be trained is updated based on the private model parameters, and the public model in the model to be trained is updated based on the global model parameters, to obtain an updated model to be trained; the target model is determined based on the updated model to be trained.
  • the step of updating the public model in the model to be trained based on the global model parameters includes:
  • acquiring a first weight corresponding to the first public model parameters and a second weight corresponding to the global model parameters; determining target model parameters based on the first weight, the second weight, the first public model parameters, and the global model parameters; and updating the public model in the model to be trained based on the target model parameters.
  • the step of determining the target model based on the updated model to be trained includes:
  • determining whether the updated model to be trained converges; if it converges, using the updated model to be trained as the target model;
  • if the updated model to be trained does not converge, using the updated model to be trained as the model to be trained, and returning to the step in which the first participant inputs the data to be trained into the public model in the model to be trained for model training to obtain the first public model parameters corresponding to the public model, and inputs the data to be trained into the private model in the model to be trained for model training to obtain the private model parameters corresponding to the private model.
  • alternatively, the step of determining the target model based on the updated model to be trained includes:
  • updating the number of updates corresponding to the model to be trained; if the number of updates reaches a preset number, using the updated model to be trained as the target model;
  • if the number of updates does not reach the preset number, using the updated model to be trained as the model to be trained, and returning to the training step described above.
  • the present application also provides a federated modeling device, the federated modeling device including:
  • the training module is used to input the data to be trained into the public model in the model to be trained for model training, to obtain the first public model parameters corresponding to the public model, and to input the data to be trained into the private model in the model to be trained for model training, to obtain the private model parameters corresponding to the private model;
  • a sending module configured to send the first public model parameters to a coordinator, so that the coordinator can determine and feed back global model parameters based on the first public model parameters;
  • the determining module is configured to determine a target model based on the private model parameters, the global model parameters, and the model to be trained.
  • this application also provides a federated modeling device, which includes a memory, a processor, and a federated modeling program stored in the memory and runnable on the processor; when the federated modeling program is executed by the processor, the steps of the aforementioned federated modeling method are implemented.
  • this application also provides a computer-readable storage medium on which a federated modeling program is stored; when the federated modeling program is executed by a processor, the steps of the aforementioned federated modeling method are implemented.
  • In this application, the first participant inputs the data to be trained into the public model in the model to be trained for model training, to obtain the first public model parameters corresponding to the public model, and inputs the data to be trained into the private model in the model to be trained for model training, to obtain the private model parameters corresponding to the private model; the first public model parameters are then sent to the coordinator, so that the coordinator determines and feeds back global model parameters based on them; the target model is then determined based on the private model parameters, the global model parameters, and the model to be trained. By modeling with a public model and a private model within the model to be trained, there is no need to modify model parameters such as gradients during transmission, which avoids both the privacy leakage caused by insufficient noise and the loss of training accuracy caused by excessive noise, and achieves a balance between privacy protection of model parameters such as gradients and model convergence or model accuracy. Because the private model parameters do not participate in the training of the shared federated model, an attacker can only obtain the first public model parameters and therefore cannot steal the complete model parameters and recover the input data through the chain rule.
  • FIG. 1 is a schematic diagram of the structure of a federated modeling device in a hardware operating environment involved in a solution of an embodiment of the present application;
  • FIG. 2 is a schematic flowchart of the first embodiment of the federated modeling method of this application;
  • FIG. 3 is a schematic diagram of the model structure in an embodiment of the federated modeling method of this application;
  • FIG. 4 is a schematic diagram of the functional modules of an embodiment of the federated modeling device of this application.
  • Fig. 1 is a schematic diagram of the structure of a federated modeling device in the hardware operating environment involved in the solution of the embodiment of the present application.
  • the federated modeling device in the embodiments of this application may be a PC, or a portable terminal device with a display function such as a smartphone, a tablet computer, an e-book reader, an MP3 (Moving Picture Experts Group Audio Layer III) player, an MP4 (Moving Picture Experts Group Audio Layer IV) player, or a portable computer.
  • the federated modeling device may include: a processor 1001 (such as a CPU), a network interface 1004, a user interface 1003, a memory 1005, and a communication bus 1002.
  • the communication bus 1002 is used to implement connection and communication between these components.
  • the user interface 1003 may include a display screen (Display) and an input unit such as a keyboard (Keyboard), and the optional user interface 1003 may also include a standard wired interface and a wireless interface.
  • the network interface 1004 may optionally include a standard wired interface and a wireless interface (such as a WI-FI interface).
  • the memory 1005 may be a high-speed RAM memory, or a stable memory (non-volatile memory), such as a magnetic disk memory.
  • the memory 1005 may also be a storage device independent of the aforementioned processor 1001.
  • optionally, the federated modeling device may also include a camera, RF (Radio Frequency) circuits, sensors, audio circuits, a WiFi module, and so on.
  • the sensors include, for example, light sensors, motion sensors, and other sensors.
  • of course, the federated modeling device can also be equipped with other sensors such as gyroscopes, barometers, hygrometers, thermometers, and infrared sensors, which will not be detailed here.
  • those skilled in the art can understand that the structure shown in FIG. 1 does not constitute a limitation on the federated modeling device, which may include more or fewer components than shown, combine certain components, or use a different arrangement of components.
  • the memory 1005, as a computer storage medium, may include an operating system, a network communication module, a user interface module, and a federated modeling program.
  • the network interface 1004 is mainly used to connect to a back-end server and communicate with it; the user interface 1003 is mainly used to connect to a client (user side) and communicate with it;
  • and the processor 1001 can be used to call the federated modeling program stored in the memory 1005.
  • the federated modeling device includes a memory 1005, a processor 1001, and a federated modeling program stored in the memory 1005 and runnable on the processor 1001;
  • the processor 1001 calls the federated modeling program stored in the memory 1005 and executes the steps of the federated modeling method in the following embodiments.
  • This application also provides a federated modeling method.
  • FIG. 2 is a schematic flowchart of the first embodiment of the federated modeling method of this application.
  • the federated modeling method includes the following steps:
  • Step S100, the first participant inputs the data to be trained into the public model in the model to be trained for model training, to obtain the first public model parameters corresponding to the public model, and inputs the data to be trained into the private model in the model to be trained for model training, to obtain the private model parameters corresponding to the private model;
  • the first participant is any participant in the federated learning system,
  • the coordinator is the coordinating party in the federated learning system,
  • and the second participants are the participants in the federated learning system other than the first participant.
  • the data to be trained is the private or local training data of the first participant;
  • of course, each second participant also stores its own private or local training data.
  • the model to be trained includes a public model and a private model.
  • in this embodiment, the first participant inputs the data to be trained into the public model in the model to be trained for model training and obtains the first public model parameters from the trained public model.
  • the first public model parameters include a gradient or the change in the public model's parameters before and after training. Specifically, if the first public model parameters are a gradient, the first participant determines the first public model parameters, i.e. the public gradient, from the training result; if the first public model parameters are a parameter change, the first participant determines the first public model parameters, i.e. the amount by which the public model's parameters have changed, from the public model before training and the public model after training on the data to be trained.
  • at the same time, the data to be trained is input into the private model in the model to be trained for model training, to obtain the private model parameters corresponding to the private model.
  • the private model parameters include a gradient or the change in the private model's parameters before and after training. Specifically, if the private model parameters are a gradient, the first participant determines the private model parameters, i.e. the private gradient, from the training result; if the private model parameters are a parameter change, the first participant determines the private model parameters, i.e. the amount by which the private model's parameters have changed, from the private model before training and the private model after training on the data to be trained. The parameter-change variant is sketched below.
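  • As a minimal sketch (assuming a PyTorch model and an arbitrary local optimisation routine, both placeholders rather than part of the original disclosure), the parameter-change variant of the public or private model parameters could be computed as follows:

```python
import copy
import torch

def parameter_change(model, local_train_step, data):
    # Record the model before training, run one local training pass, and
    # return the change of each parameter tensor; the same routine applies
    # to both the public model and the private model.
    before = copy.deepcopy(model.state_dict())
    local_train_step(model, data)  # placeholder local optimisation step
    after = model.state_dict()
    return {name: (after[name] - before[name]).detach() for name in before}
```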
  • Step S200 sending the first public model parameters to a coordinator, so that the coordinator can determine and feed back global model parameters based on the first public model parameters;
  • the coordinator is the coordinating party in federated learning and can receive the model parameters sent by each participant, including the first participant.
  • after obtaining the first public model parameters and the private model parameters, the first participant sends the first public model parameters to the coordinator, so that the coordinator determines and feeds back the global model parameters based on the first public model parameters.
  • specifically, the coordinator receives or obtains the model parameters of each participant in the federated learning other than the first participant, determines the global model parameters based on those parameters and the first public model parameters, and feeds the global model parameters back to the first participant.
  • optionally, the coordinator feeds back the global model parameters to the other participants at the same time.
  • Step S300 Determine a target model based on the private model parameters, the global model parameters, and the model to be trained.
  • in this embodiment, the target model is determined based on the first public model parameters, the private model parameters, the global model parameters, and the model to be trained. Specifically, the public model in the model to be trained is updated based on the first public model parameters and the global model parameters, and the private model in the model to be trained is updated based on the private model parameters, to obtain an updated model to be trained; the target model is then determined according to the updated model to be trained.
  • step S200 includes:
  • the first public model parameters are sent to the coordinator, where the coordinator obtains second public model parameters sent by a plurality of second participants, determines the global model parameters based on each second public model parameter and the first public model parameters, and feeds the global model parameters back to the first participant.
  • in this embodiment, the coordinator receives the first public model parameters sent by the first participant and, at the same time, acquires or receives the second public model parameters sent by multiple second participants, where the second participants are the participants in the federated learning system other than the first participant, and the second public model parameters are the model parameters obtained by each second participant inputting its own training data into the public model of the model to be trained for training. The model to be trained is the same for all participants in the federated learning system, and each participant's data to be trained is its own private or local data.
  • then, the coordinator determines the global model parameters based on each second public model parameter and the first public model parameters, and feeds the global model parameters back to the first participant. At the same time, the coordinator sends the global model parameters to each second participant, or updates the coordinator's global model based on the global model parameters.
  • if the second public model parameters and the first public model parameters are both gradients, the second public model parameters and the first public model parameters are added (vector addition) to obtain the global model parameters; if the second public model parameters and the first public model parameters are both parameter changes, the parameter average of each second public model parameter and the first public model parameters is calculated and used as the global model parameters, as in the sketch below.
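  • A minimal coordinator-side sketch of this combination rule, with each participant's public model parameters flattened into a NumPy vector (an illustrative representation, not prescribed by the description):

```python
import numpy as np

def combine_public_parameters(first_public_param, second_public_params, are_gradients=True):
    # Combine the first participant's public model parameters with the second
    # public model parameters received from the other participants.
    all_params = [np.asarray(first_public_param)] + [np.asarray(p) for p in second_public_params]
    if are_gradients:
        # Gradients are combined by (vector) addition.
        return np.sum(all_params, axis=0)
    # Parameter changes are combined by averaging.
    return np.mean(all_params, axis=0)
```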
  • in Figure 3, the private training data is the data to be trained, the participant's local model is the first participant's model to be trained, and the collector is the coordinator; the participant's local model includes a public model and a private model.
  • the public model is used for model training to obtain the first public model parameters,
  • and the private model is used for model training to obtain the private model parameters; that is, the output of the participant's local model includes the private model parameters and the first public model parameters. Then the private gradient is calculated from the private model parameters, and the private model is updated according to the private gradient.
  • the public gradient is calculated from the first public model parameters, and the public gradient is uploaded to the collector.
  • the collector calculates the global gradient from the public gradient and the gradients uploaded by the other participants, and feeds the global gradient back to the first participant.
  • the first participant updates the public model in the participant's local model according to the global gradient to obtain the updated local model, and determines the target model based on the updated local model. This flow is summarised in the sketch below.
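  • The Figure 3 flow, seen from the first participant's side, can be summarised by the following sketch, in which model parameters are flat NumPy vectors, `local_gradients` stands for the local training step described above, and `collector` is assumed to expose an `aggregate` call; all of these names, and the learning rate, are illustrative assumptions.

```python
import numpy as np

def participant_round(public_params, private_params, local_gradients, collector, lr=0.01):
    # One round of the Figure 3 flow, seen from the first participant.
    public_grad, private_grad = local_gradients(public_params, private_params)

    # The private gradient is applied locally and never leaves the participant.
    private_params = private_params - lr * private_grad

    # Only the public gradient is uploaded; the collector combines it with the
    # gradients uploaded by the other participants and returns the global gradient.
    global_grad = collector.aggregate(public_grad)

    # The public model is updated with the global gradient fed back by the collector.
    public_params = public_params - lr * global_grad
    return public_params, private_params
```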
  • the federated modeling method proposed in this embodiment has the first participant input the data to be trained into the public model in the model to be trained for model training, to obtain the first public model parameters corresponding to the public model, and input the data to be trained into the private model in the model to be trained for model training, to obtain the private model parameters corresponding to the private model; the first public model parameters are then sent to the coordinator, so that the coordinator determines and feeds back the global model parameters based on the first public model parameters;
  • the target model is then determined based on the private model parameters, the global model parameters, and the model to be trained. By modeling with a public model and a private model within the model to be trained, there is no need to modify model parameters such as gradients during transmission, which avoids both the privacy leakage caused by insufficient noise and the loss of training accuracy caused by excessive noise, and achieves a balance between privacy protection of model parameters such as gradients and model convergence or model accuracy.
  • Because the private model parameters do not participate in the training of the shared federated model, an attacker can only obtain the first public model parameters and therefore cannot steal the complete model parameters and recover the input data through the chain rule; nor is there any case in which part of the information is recovered because the gradient clipping ratio or noise ratio is too low, so information leakage is completely prevented and the security of data in federated learning is improved.
  • Step S100 includes:
  • Step S110, the first participant inputs the data to be trained into the public model in the model to be trained for model training, to obtain the public loss function value corresponding to the data to be trained, and inputs the data to be trained into the private model in the model to be trained for model training, to obtain the private loss function value corresponding to the data to be trained;
  • Step S120 Determine the public gradient based on the public loss function value, and determine the private gradient based on the private loss function value.
  • in this embodiment, the first participant inputs the data to be trained into the public model in the model to be trained for model training and obtains the public loss function value.
  • at the same time, the first participant inputs the data to be trained into the private model in the model to be trained for model training and obtains the private loss function value.
  • here, the public loss function can be a mean-squared-error loss function, a cross-entropy loss function, or a polarization loss function, and the private loss function can likewise be a mean-squared-error loss function, a cross-entropy loss function, or a polarization loss function.
  • then the public gradient is determined based on the public loss function value,
  • and the private gradient is determined based on the private loss function value, as in the sketch below.
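  • A minimal sketch of this step in PyTorch, assuming two small `torch.nn` models and using cross-entropy for both losses (a mean-squared-error or polarization loss could be substituted); the function name is illustrative and not part of the original disclosure.

```python
import torch
import torch.nn.functional as F

def public_and_private_gradients(public_model, private_model, x, y):
    # The public gradient is derived from the public loss value, and the
    # private gradient from the private loss value.
    public_loss = F.cross_entropy(public_model(x), y)
    public_grad = torch.autograd.grad(public_loss, tuple(public_model.parameters()))

    private_loss = F.cross_entropy(private_model(x), y)
    private_grad = torch.autograd.grad(private_loss, tuple(private_model.parameters()))
    return public_grad, private_grad
```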
  • the federated modeling method proposed in this embodiment has the first participant input the data to be trained into the public model in the model to be trained for model training, to obtain the public loss function value corresponding to the data to be trained, and input the data to be trained into the private model in the model to be trained for model training, to obtain the private loss function value corresponding to the data to be trained, so that the public gradient and the private gradient can be accurately obtained through model training.
  • the public gradient and the private gradient are obtained separately; since the private gradient does not participate in the training of the shared federated model, an attacker can only obtain the public gradient and cannot steal the complete model parameters to recover the input data through the chain rule, which further improves the efficiency and security of model training.
  • step S300 includes:
  • Step S310 update the private model in the to-be-trained model based on the private model parameters, and update the public model in the to-be-trained model based on the global model parameters to obtain an updated model to be trained;
  • Step S320 Determine the target model based on the updated model to be trained.
  • in this embodiment, the private model in the model to be trained is updated based on the private model parameters,
  • and the public model in the model to be trained is updated based on the global model parameters, to obtain the updated model to be trained;
  • the target model is then determined based on the updated model to be trained.
  • it should be noted that if the global model parameters and the private model parameters are both gradients, a target model parameter is first calculated according to the private model parameters and the global model parameters, and the public model is updated according to the target model parameter; if the global model parameters and the private model parameters are both parameter changes, the public model is updated directly based on the global model parameters.
  • Step S310 includes:
  • Step S311 acquiring a first weight corresponding to the first public model parameter and a second weight corresponding to the global model parameter;
  • Step S312 determining target model parameters based on the first weight, the second weight, the first public model parameter, and the global model parameter;
  • Step S313 based on the target model parameters, update the public model in the model to be trained.
  • in this embodiment, the model parameters involved are all gradients.
  • first, the first weight corresponding to the first public model parameters and the second weight corresponding to the global model parameters are acquired, and the target model parameters are determined based on the first weight, the second weight, the first public model parameters, and the global model parameters.
  • specifically, the target model parameter = the first weight * the first public model parameter + the second weight * the global model parameter. Then, based on the target model parameters, the public model in the model to be trained is updated.
  • it should be noted that the first participant may also fuse the first public model parameters with the global model parameters in other ways to obtain the target model parameters, for example by directly combining the first public model parameters with the global model parameters. The weighted fusion is sketched below.
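  • For illustration, the weighted fusion can be written as follows; the weight values are placeholders, not values prescribed by the description.

```python
import numpy as np

def target_model_parameter(first_public_param, global_param, first_weight=0.5, second_weight=0.5):
    # Target model parameter = first weight * first public model parameter
    #                        + second weight * global model parameter.
    return (first_weight * np.asarray(first_public_param)
            + second_weight * np.asarray(global_param))
```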
  • step S320 includes:
  • Step S321 Determine whether the updated model to be trained converges
  • step S322 if the updated model to be trained converges, the updated model to be trained is used as the target model;
  • Step S323, if the updated model to be trained does not converge, the updated model to be trained is used as the model to be trained, and execution returns to the step in which the first participant inputs the data to be trained into the public model in the model to be trained for model training to obtain the first public model parameters corresponding to the public model, and inputs the data to be trained into the private model in the model to be trained for model training to obtain the private model parameters corresponding to the private model.
  • in this embodiment, after the updated model to be trained is obtained, it is determined whether the updated model to be trained converges. Specifically, it is determined whether the loss function of the public model in the updated model to be trained is less than a first preset value and whether the loss function of the private model in the updated model to be trained is less than a second preset value. If the loss function of the public model is less than the first preset value and the loss function of the private model is less than the second preset value, it is determined that the updated model to be trained converges, and the updated model to be trained is used as the target model; otherwise, the updated model to be trained is used as the model to be trained and execution returns to step S100, so that a converged target model is obtained. A minimal sketch of this loop is given below.
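  • A minimal sketch of this convergence check, where `train_one_round` is assumed to run the training and update steps above and return the current public and private loss values; the function name, thresholds, and the safety cap on rounds are all illustrative assumptions.

```python
def train_until_converged(train_one_round, first_preset_value=0.01,
                          second_preset_value=0.01, max_rounds=1000):
    # Repeat the training round until the public model's loss is below the
    # first preset value and the private model's loss is below the second
    # preset value; max_rounds is only a safety guard for this sketch.
    for _ in range(max_rounds):
        public_loss, private_loss = train_one_round()
        if public_loss < first_preset_value and private_loss < second_preset_value:
            break
```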
  • step S320 includes:
  • Step S324 update the update times corresponding to the model to be trained
  • Step S325 if the number of updates reaches the preset number of times, use the updated model to be trained as the target model;
  • Step S326, if the number of updates does not reach the preset number, the updated model to be trained is used as the model to be trained, and execution returns to the step in which the first participant inputs the data to be trained into the public model in the model to be trained for model training to obtain the first public model parameters corresponding to the public model, and inputs the data to be trained into the private model in the model to be trained for model training to obtain the private model parameters corresponding to the private model.
  • in this embodiment, after the updated model to be trained is obtained, the number of updates corresponding to the model to be trained is updated, and it is then determined whether the number of updates reaches a preset number. If it does, the updated model to be trained is used as the target model and the number of updates is reset; otherwise, the updated model to be trained is used as the model to be trained and execution returns to step S100, so that the model to be trained is updated a preset number of times.
  • it should be noted that the preset number can be set reasonably, and the initial value of the number of updates can be set to 0;
  • each time the model is updated, the current number of updates is increased by 1 to obtain the new number of updates.
  • the federated modeling method proposed in this embodiment updates the private model in the model to be trained based on the private model parameters, and updates the public model in the model to be trained based on the first public model parameters and the global model parameters; because the private model parameters do not participate in the training of the shared federated model, an attacker can only obtain the first public model parameters and therefore cannot steal the complete model parameters and recover the input data through the chain rule, which further improves the security of data in federated learning.
  • An embodiment of the present application also provides a federated modeling device.
  • the federated modeling device includes:
  • the training module 100 is used to input the data to be trained into the public model in the model to be trained for model training, to obtain the first public model parameters corresponding to the public model, and to input the data to be trained into the private model in the model to be trained for model training, to obtain the private model parameters corresponding to the private model;
  • the sending module 200 is configured to send the first public model parameters to a coordinator, so that the coordinator can determine and feed back global model parameters based on the first public model parameters;
  • the determining module 300 is configured to determine a target model based on the private model parameters, the global model parameters, and the model to be trained.
  • sending module 200 is also used for:
  • the first public model parameter is sent to the coordinator, where the coordinator obtains the second public model parameters sent by a plurality of second participants, and determines based on each second public model parameter and the first public model parameter Global model parameters, and feed back the global model parameters to the first participant.
  • training module 100 is also used for:
  • the first participant inputs the data to be trained into the public model in the model to be trained for model training, to obtain the public loss function value corresponding to the data to be trained, and inputs the data to be trained into the private model in the model to be trained for model training, to obtain the private loss function value corresponding to the data to be trained;
  • the public gradient is determined based on the public loss function value, and the private gradient is determined based on the private loss function value.
  • determining module 300 is also used for:
  • updating the private model in the model to be trained based on the private model parameters, and updating the public model in the model to be trained based on the global model parameters, to obtain an updated model to be trained; the target model is determined based on the updated model to be trained.
  • determining module 300 is also used for:
  • acquiring the first weight corresponding to the first public model parameters and the second weight corresponding to the global model parameters, determining the target model parameters based on the first weight, the second weight, the first public model parameters, and the global model parameters, and updating the public model in the model to be trained based on the target model parameters.
  • determining module 300 is also used for:
  • if the updated model to be trained does not converge, using the updated model to be trained as the model to be trained, and returning to the step in which the first participant inputs the data to be trained into the public model in the model to be trained for model training to obtain the first public model parameters corresponding to the public model, and inputs the data to be trained into the private model in the model to be trained for model training to obtain the private model parameters corresponding to the private model.
  • determining module 300 is also used for:
  • using the updated model to be trained as the target model;
  • or using the updated model to be trained as the model to be trained, and returning to the step in which the first participant inputs the data to be trained into the public model in the model to be trained for model training.
  • in addition, an embodiment of the present application also proposes a computer-readable storage medium on which a federated modeling program is stored; when the federated modeling program is executed by a processor, the steps of the federated modeling method described above are implemented.
  • the methods of the above embodiments can be implemented by means of software plus a necessary general-purpose hardware platform; of course, they can also be implemented by hardware, but in many cases the former is the better implementation.
  • based on this understanding, the technical solution of this application, in essence or in the part that contributes to the prior art, can be embodied in the form of a software product. The computer software product is stored in a storage medium as described above (such as a ROM/RAM, a magnetic disk, or an optical disk) and includes several instructions that cause a terminal device (which may be a mobile phone, a computer, a server, an air conditioner, a network device, or the like) to execute the methods described in the embodiments of this application.


Abstract

A federated modeling method, comprising: a first participant inputs data to be trained into a public model in a model to be trained for model training to acquire a first public model parameter, and inputs said data into a private model in the model to be trained for model training to acquire a private model parameter (S100); the first public model parameter is transmitted to a coordinator so that the coordinator determines and feeds back a global model parameter on the basis of the first public model parameter (S200); and a target model is determined on the basis of the private model parameter, the global model parameter, and the model to be trained (S300).

Description

Federated modeling method, device, equipment, and computer-readable storage medium
This application claims priority to Chinese patent application No. 202010409900.7, filed on May 14, 2020 and titled "Federated modeling method, device, equipment, and computer-readable storage medium", which is hereby incorporated by reference in its entirety.
Technical field
This application relates to the field of federated learning, and in particular to a federated modeling method, device, equipment, and computer-readable storage medium.
Background
At present, federated learning is mainly trained in the form of gradient sharing: models with the same structure share their local gradients to jointly train a global model. If a model's gradient is leaked or eavesdropped during gradient propagation, the chain rule and the leaked gradient can be used to recover the input data.
At present, to prevent the gradient from being leaked or eavesdropped, the gradient is often protected by differential privacy, gradient quantization, or gradient clipping. Differential privacy protects the gradient by adding a certain amount of random noise to the gradient to be propagated; gradient quantization approximates the gradient with integer values, such as (0, 1) or (-1, 0, 1); gradient clipping sets the gradient values at certain positions to 0.
However, differential privacy, gradient quantization, and gradient clipping all perturb the gradient. If the perturbation is too large, it affects the convergence of the global model, so that the model cannot converge or the final global model has low accuracy; if the perturbation is insufficient, the gradient cannot be effectively protected.
The above content is only intended to assist in understanding the technical solution of this application and does not constitute an admission that it is prior art.
Technical problem
The main purpose of this application is to provide a federated modeling method, device, equipment, and computer-readable storage medium, aiming to solve the technical problem that existing federated learning struggles to balance gradient protection against model convergence and model accuracy.
Technical solution
To achieve the above objective, this application provides a federated modeling method, which includes the following steps:
the first participant inputs data to be trained into the public model in the model to be trained for model training, to obtain the first public model parameters corresponding to the public model, and inputs the data to be trained into the private model in the model to be trained for model training, to obtain the private model parameters corresponding to the private model;
the first public model parameters are sent to a coordinator, so that the coordinator determines and feeds back global model parameters based on the first public model parameters;
a target model is determined based on the private model parameters, the global model parameters, and the model to be trained.
Further, the step of sending the first public model parameters to a coordinator so that the coordinator determines and feeds back global model parameters based on the first public model parameters includes:
sending the first public model parameters to the coordinator, where the coordinator obtains second public model parameters sent by a plurality of second participants, determines the global model parameters based on each second public model parameter and the first public model parameters, and feeds the global model parameters back to the first participant.
Further, the first public model parameters include a public gradient and the private model parameters include a private gradient, and the step in which the first participant inputs the data to be trained into the public model in the model to be trained for model training to obtain the first public model parameters corresponding to the public model, and inputs the data to be trained into the private model in the model to be trained for model training to obtain the private model parameters corresponding to the private model, includes:
the first participant inputs the data to be trained into the public model in the model to be trained for model training, to obtain the public loss function value corresponding to the data to be trained, and inputs the data to be trained into the private model in the model to be trained for model training, to obtain the private loss function value corresponding to the data to be trained;
the public gradient is determined based on the public loss function value, and the private gradient is determined based on the private loss function value.
Further, the step of determining a target model based on the private model parameters, the global model parameters, and the model to be trained includes:
updating the private model in the model to be trained based on the private model parameters, and updating the public model in the model to be trained based on the global model parameters, to obtain an updated model to be trained;
determining the target model based on the updated model to be trained.
Further, the step of updating the public model in the model to be trained based on the global model parameters includes:
acquiring a first weight corresponding to the first public model parameters and a second weight corresponding to the global model parameters;
determining target model parameters based on the first weight, the second weight, the first public model parameters, and the global model parameters;
updating the public model in the model to be trained based on the target model parameters.
Further, the step of determining the target model based on the updated model to be trained includes:
determining whether the updated model to be trained converges;
if the updated model to be trained converges, using the updated model to be trained as the target model;
if the updated model to be trained does not converge, using the updated model to be trained as the model to be trained, and returning to the step in which the first participant inputs the data to be trained into the public model in the model to be trained for model training to obtain the first public model parameters corresponding to the public model, and inputs the data to be trained into the private model in the model to be trained for model training to obtain the private model parameters corresponding to the private model.
Further, the step of determining the target model based on the updated model to be trained includes:
updating the number of updates corresponding to the model to be trained;
if the number of updates reaches a preset number, using the updated model to be trained as the target model;
if the number of updates does not reach the preset number, using the updated model to be trained as the model to be trained, and returning to the step in which the first participant inputs the data to be trained into the public model in the model to be trained for model training to obtain the first public model parameters corresponding to the public model, and inputs the data to be trained into the private model in the model to be trained for model training to obtain the private model parameters corresponding to the private model.
In addition, to achieve the above objective, this application also provides a federated modeling device, which includes:
a training module, configured to input the data to be trained into the public model in the model to be trained for model training, to obtain the first public model parameters corresponding to the public model, and to input the data to be trained into the private model in the model to be trained for model training, to obtain the private model parameters corresponding to the private model;
a sending module, configured to send the first public model parameters to a coordinator, so that the coordinator determines and feeds back global model parameters based on the first public model parameters;
a determining module, configured to determine a target model based on the private model parameters, the global model parameters, and the model to be trained.
In addition, to achieve the above objective, this application also provides a federated modeling equipment, which includes a memory, a processor, and a federated modeling program stored in the memory and runnable on the processor; when the federated modeling program is executed by the processor, the steps of the aforementioned federated modeling method are implemented.
In addition, to achieve the above objective, this application also provides a computer-readable storage medium on which a federated modeling program is stored; when the federated modeling program is executed by a processor, the steps of the aforementioned federated modeling method are implemented.
Beneficial effects
In this application, the first participant inputs the data to be trained into the public model in the model to be trained for model training, to obtain the first public model parameters corresponding to the public model, and inputs the data to be trained into the private model in the model to be trained for model training, to obtain the private model parameters corresponding to the private model; the first public model parameters are then sent to the coordinator, so that the coordinator determines and feeds back global model parameters based on the first public model parameters; the target model is then determined based on the private model parameters, the global model parameters, and the model to be trained. By modeling with a public model and a private model within the model to be trained, there is no need to modify model parameters such as gradients during transmission, which avoids both the privacy leakage caused by insufficient noise and the loss of training accuracy caused by excessive noise, and achieves a balance between privacy protection of model parameters such as gradients and model convergence or model accuracy. Because the private model parameters do not participate in the training of the shared federated model, an attacker can only obtain the first public model parameters and therefore cannot steal the complete model parameters and recover the input data through the chain rule; nor is there any case in which part of the information is recovered because the gradient clipping ratio or noise ratio is too low, so information leakage is completely prevented and the security of data in federated learning is improved.
Description of the drawings
FIG. 1 is a schematic structural diagram of a federated modeling equipment in the hardware operating environment involved in the solutions of the embodiments of this application;
FIG. 2 is a schematic flowchart of the first embodiment of the federated modeling method of this application;
FIG. 3 is a schematic diagram of the model structure in an embodiment of the federated modeling method of this application;
FIG. 4 is a schematic diagram of the functional modules of an embodiment of the federated modeling device of this application.
The realization of the objectives, functional characteristics, and advantages of this application will be further described with reference to the accompanying drawings in conjunction with the embodiments.
Embodiments of the present invention
It should be understood that the specific embodiments described here are only used to explain this application and are not intended to limit it.
As shown in FIG. 1, FIG. 1 is a schematic structural diagram of a federated modeling equipment in the hardware operating environment involved in the solutions of the embodiments of this application.
The federated modeling equipment in the embodiments of this application may be a PC, or a portable terminal device with a display function such as a smartphone, a tablet computer, an e-book reader, an MP3 (Moving Picture Experts Group Audio Layer III) player, an MP4 (Moving Picture Experts Group Audio Layer IV) player, or a portable computer.
As shown in FIG. 1, the federated modeling equipment may include a processor 1001 (for example a CPU), a network interface 1004, a user interface 1003, a memory 1005, and a communication bus 1002. The communication bus 1002 is used to implement connection and communication between these components. The user interface 1003 may include a display (Display) and an input unit such as a keyboard (Keyboard), and optionally may also include a standard wired interface and a wireless interface. The network interface 1004 may optionally include a standard wired interface and a wireless interface (such as a WI-FI interface). The memory 1005 may be a high-speed RAM memory, or a stable non-volatile memory, such as a magnetic disk memory. Optionally, the memory 1005 may also be a storage device independent of the aforementioned processor 1001.
Optionally, the federated modeling equipment may also include a camera, RF (Radio Frequency) circuits, sensors, audio circuits, a WiFi module, and so on. The sensors include, for example, light sensors, motion sensors, and other sensors. Of course, the federated modeling equipment may also be equipped with other sensors such as gyroscopes, barometers, hygrometers, thermometers, and infrared sensors, which will not be detailed here.
Those skilled in the art can understand that the structure of the federated modeling equipment shown in FIG. 1 does not constitute a limitation on the federated modeling equipment, which may include more or fewer components than shown, combine certain components, or use a different arrangement of components.
As shown in FIG. 1, the memory 1005, as a computer storage medium, may include an operating system, a network communication module, a user interface module, and a federated modeling program.
In the federated modeling equipment shown in FIG. 1, the network interface 1004 is mainly used to connect to a back-end server and communicate with it; the user interface 1003 is mainly used to connect to a client (user side) and communicate with it; and the processor 1001 may be used to call the federated modeling program stored in the memory 1005.
In this embodiment, the federated modeling equipment includes a memory 1005, a processor 1001, and a federated modeling program stored in the memory 1005 and runnable on the processor 1001; the processor 1001 calls the federated modeling program stored in the memory 1005 and executes the steps of the federated modeling method in the following embodiments.
This application also provides a federated modeling method. Referring to FIG. 2, FIG. 2 is a schematic flowchart of a first embodiment of the federated modeling method of this application.
In this embodiment, the federated modeling method includes the following steps:
Step S100: the first participant inputs the data to be trained into the public model in the model to be trained for model training to obtain the first public model parameters corresponding to the public model, and inputs the data to be trained into the private model in the model to be trained for model training to obtain the private model parameters corresponding to the private model;
Here, the first participant is any participant (party) in the federated learning system, the coordinator is the coordinating party in the federated learning system, and the second participants are the participants in the federated learning system other than the first participant. The data to be trained is the private training data or local training data of the first participant; of course, each second participant also stores its own private training data or local training data. The model to be trained includes a public model and a private model.
In this embodiment, the first participant inputs the data to be trained into the public model in the model to be trained for model training, and obtains the first public model parameters from the trained public model. The first public model parameters include a gradient or the amount of change of the model parameters of the public model before and after training. Specifically, if the first public model parameter is a gradient, the first participant determines the first public model parameter, that is, the public gradient, according to the training result; if the first public model parameter is a model parameter change, the first participant determines the first public model parameter, that is, the amount of change of the model parameters that have changed in the public model, according to the public model before training and the public model after being trained with the data to be trained.
At the same time, the data to be trained is input into the private model in the model to be trained for model training to obtain the private model parameters corresponding to the private model. The private model parameters likewise include a gradient or the amount of change of the model parameters of the private model before and after training. Specifically, if the private model parameter is a gradient, the first participant determines the private model parameter, that is, the private gradient, according to the training result; if the private model parameter is a model parameter change, the first participant determines the private model parameter, that is, the amount of change of the model parameters that have changed in the private model, according to the private model before training and the private model after being trained with the data to be trained.
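Purely as an illustration of the two parameter representations described above (a gradient, or the change of the model weights before and after local training), the following sketch shows how a participant might derive either form. The function name extract_update and the toy single-layer weights are assumptions introduced only for this example and are not part of the claimed method.

    import numpy as np

    def extract_update(weights_before, weights_after, grads=None, mode="delta"):
        """Return the shareable parameter representation for one sub-model.

        mode="gradient": share the gradient computed in this training round.
        mode="delta":    share the change of the weights before vs. after training.
        """
        if mode == "gradient":
            return [g.copy() for g in grads]
        return [after - before for before, after in zip(weights_before, weights_after)]

    # Toy usage: a single weight matrix trained for one local SGD step (lr = 0.5).
    w_before = [np.zeros((2, 2))]
    grad = [np.array([[0.1, -0.2], [0.0, 0.3]])]
    w_after = [w_before[0] - 0.5 * grad[0]]

    public_parameters_as_gradient = extract_update(w_before, w_after, grad, mode="gradient")
    public_parameters_as_delta = extract_update(w_before, w_after, mode="delta")

The same routine applies unchanged to the private model; the only difference is that the private result never leaves the participant.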
Step S200: sending the first public model parameters to the coordinator, so that the coordinator determines and feeds back global model parameters based on the first public model parameters;
It should be noted that the coordinator is the coordinating party in federated learning, and the coordinator can receive the model parameters sent by each participant, including the first participant.
In this embodiment, after obtaining the first public model parameters and the private model parameters, the first participant sends the first public model parameters to the coordinator, so that the coordinator determines and feeds back the global model parameters based on the first public model parameters. When the coordinator obtains the first public model parameters, it receives or obtains the model parameters of the participants in the federated learning other than the first participant, determines the global model parameters based on the model parameters of those other participants and the first public model parameters, and feeds back the global model parameter values to the first participant. Of course, the coordinator also feeds back the global model parameters to the other participants at the same time.
Step S300: determining a target model based on the private model parameters, the global model parameters, and the model to be trained.
In this embodiment, when the global model parameters fed back by the coordinator are obtained, the target model is determined based on the first public model parameters, the private model parameters, the global model parameters, and the model to be trained. Specifically, the public model in the model to be trained is updated based on the first public model parameters and the global model parameters, the private model in the model to be trained is updated based on the private model parameters to obtain an updated model to be trained, and the target model is determined according to the updated model to be trained.
Further, in an embodiment, step S200 includes:
sending the first public model parameters to the coordinator, wherein the coordinator obtains second public model parameters sent by a plurality of second participants, determines the global model parameters based on each of the second public model parameters and the first public model parameters, and feeds back the global model parameters to the first participant.
In this embodiment, the coordinator receives the first public model parameters sent by the first participant and, at the same time, obtains or receives the second public model parameters sent by the plurality of second participants, where the second participants are the participants in the federated learning system other than the first participant, and the second public model parameters are the model parameters obtained by each second participant inputting its own data to be trained into the public model in the model to be trained for training. The model to be trained is the same for every participant in the federated learning system, and the data to be trained of each participant is its own private or local data.
After obtaining the second public model parameters, the coordinator determines the global model parameters based on each of the second public model parameters and the first public model parameters and feeds back the global model parameters to the first participant; at the same time, the coordinator feeds back the global model parameters to each second participant, or updates the coordinator's global model based on the global model parameters. Specifically, if the second public model parameters and the first public model parameters are all gradients, the second public model parameters and the first public model parameters are added together (vector addition) to obtain the global model parameters; if the second public model parameters and the first public model parameters are all changes of model parameters, the parameter mean of the second public model parameters and the first public model parameters is calculated, and this mean is used as the global model parameters.
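A minimal sketch of this coordinator-side aggregation rule is given below, assuming every upload is a list of NumPy arrays with matching shapes; the function name aggregate and the toy values are assumptions made only for illustration.

    import numpy as np

    def aggregate(uploads, mode="gradient"):
        """Combine the public-model parameters uploaded by all participants.

        mode="gradient": element-wise sum (vector addition) of the uploaded gradients.
        mode="delta":    element-wise mean of the uploaded parameter changes.
        """
        stacked = [np.stack(layers) for layers in zip(*uploads)]
        if mode == "gradient":
            return [layer.sum(axis=0) for layer in stacked]
        return [layer.mean(axis=0) for layer in stacked]

    # Toy usage: three participants, each uploading one 2x2 parameter block.
    uploads = [[np.full((2, 2), v)] for v in (0.1, 0.2, 0.3)]
    global_gradient = aggregate(uploads, mode="gradient")  # every entry equals 0.6
    global_delta = aggregate(uploads, mode="delta")        # every entry equals 0.2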
Referring to FIG. 3, the private training data in FIG. 3 is the data to be trained, the participant's local model is the first participant's model to be trained, and the collector is the coordinator. The participant's local model includes a public model and a private model. After the first participant inputs the private training data into the participant's local model, it performs model training through the public model to obtain the first public model parameters and performs model training through the private model to obtain the private model parameters; that is, the output of the participant's local model includes the private model parameters and the first public model parameters. Then, the private gradient is calculated from the private model parameters and the private model is updated according to the private gradient; at the same time, the public gradient is calculated from the first public model parameters and uploaded to the collector. The collector calculates a global gradient based on the public gradient and the gradients uploaded by the other participants and feeds back the global gradient to the first participant. The first participant updates the public model in the participant's local model according to the global gradient to obtain an updated participant's local model, and determines the target model according to the updated participant's local model.
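The round just described can be summarized, on the participant side, by the following sketch; local_round, sgd_step, and the stand-in collector are assumed names, and plain gradient descent is only one possible choice of local optimizer, not a requirement of the method.

    import numpy as np

    def sgd_step(weights, grads, lr=0.1):
        # One gradient-descent update; the learning rate is an assumed hyperparameter.
        return [w - lr * g for w, g in zip(weights, grads)]

    def local_round(public_w, private_w, public_grad_fn, private_grad_fn,
                    batch, send_to_collector, lr=0.1):
        """One training round mirroring FIG. 3: only the public gradient is uploaded;
        the private gradient is applied locally and never leaves the participant."""
        public_grad = public_grad_fn(public_w, batch)
        private_grad = private_grad_fn(private_w, batch)

        # Private branch: update locally, nothing is transmitted.
        private_w = sgd_step(private_w, private_grad, lr)

        # Public branch: upload the public gradient, receive the global gradient back.
        global_grad = send_to_collector(public_grad)
        public_w = sgd_step(public_w, global_grad, lr)
        return public_w, private_w

    # Toy usage with a single scalar weight per branch and an echoing collector.
    echo_collector = lambda g: g
    grad_fn = lambda w, _batch: [np.array(2.0 * w[0])]   # gradient of w^2
    pub_w, priv_w = local_round([np.array(1.0)], [np.array(1.0)],
                                grad_fn, grad_fn, None, echo_collector)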
In the federated modeling method proposed in this embodiment, the first participant inputs the data to be trained into the public model in the model to be trained for model training to obtain the first public model parameters corresponding to the public model, and inputs the data to be trained into the private model in the model to be trained for model training to obtain the private model parameters corresponding to the private model; the first public model parameters are then sent to the coordinator, so that the coordinator determines and feeds back the global model parameters based on the first public model parameters; and the target model is then determined based on the private model parameters, the global model parameters, and the model to be trained. By modeling according to the public model and the private model in the model to be trained, there is no need to modify model parameters such as gradients during transmission, which avoids both the privacy leakage caused by insufficient noise and the low model training accuracy caused by excessive noise, and achieves a balance between the privacy protection of model parameters such as gradients and model convergence or model accuracy. Since the private model parameters do not participate in the training of the shared federated model, an attacker can only obtain the first public model parameters and therefore cannot steal the complete model parameters and use the chain rule to recover the input data; and there is no situation in which part of the information is recovered because the gradient clipping ratio or the noise ratio is too low, so that information leakage can be completely prevented and the security of data in federated learning is improved.
Based on the first embodiment, a second embodiment of the federated modeling method of this application is proposed. In this embodiment, the first public model parameters include a public gradient, the private model parameters include a private gradient, and step S100 includes:
Step S110: the first participant inputs the data to be trained into the public model in the model to be trained for model training to obtain the public loss function value corresponding to the data to be trained, and inputs the data to be trained into the private model in the model to be trained for model training to obtain the private loss function value corresponding to the data to be trained;
Step S120: determining the public gradient based on the public loss function value, and determining the private gradient based on the private loss function value.
In this embodiment, the first participant inputs the data to be trained into the public model in the model to be trained for model training to obtain the public loss function value, and at the same time inputs the data to be trained into the private model in the model to be trained for model training to obtain the private loss function value. The public loss function may be a mean squared error loss function, a cross-entropy loss function, or a polarization loss function, and the private loss function may likewise be a mean squared error loss function, a cross-entropy loss function, or a polarization loss function. Then, the public gradient is determined based on the public loss function value, and the private gradient is determined based on the private loss function value.
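As one concrete, hedged example of going from a loss to a gradient, the sketch below uses a linear model under a mean squared error loss. The embodiment itself leaves the loss functions open (MSE, cross-entropy, polarization loss), so this particular model and loss are only assumed stand-ins.

    import numpy as np

    def mse_loss_and_grad(w, x, y):
        """Mean squared error of a linear model y_hat = x @ w, and its gradient
        with respect to w: grad = (2 / n) * x^T (x @ w - y)."""
        err = x @ w - y
        loss = float(np.mean(err ** 2))
        grad = 2.0 * x.T @ err / len(y)
        return loss, grad

    x = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
    y = np.array([1.0, 2.0, 3.0])
    w_public, w_private = np.zeros(2), np.zeros(2)

    public_loss, public_gradient = mse_loss_and_grad(w_public, x, y)
    private_loss, private_gradient = mse_loss_and_grad(w_private, x, y)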
In the federated modeling method proposed in this embodiment, the first participant inputs the data to be trained into the public model in the model to be trained for model training to obtain the public loss function value corresponding to the data to be trained, and inputs the data to be trained into the private model in the model to be trained for model training to obtain the private loss function value corresponding to the data to be trained; the public gradient is then determined based on the public loss function value, and the private gradient is determined based on the private loss function value, so that the public gradient and the private gradient can be obtained accurately through model training. By obtaining the public gradient and the private gradient separately, and because the private gradient does not participate in the training of the shared federated model, an attacker can only obtain the public gradient and therefore cannot steal the complete model parameters and use the chain rule to recover the input data, which further improves the efficiency and security of model training.
Based on each of the foregoing embodiments, a third embodiment of the federated modeling method of this application is proposed. In this embodiment, step S300 includes:
Step S310: updating the private model in the model to be trained based on the private model parameters, and updating the public model in the model to be trained based on the global model parameters, to obtain an updated model to be trained;
Step S320: determining the target model based on the updated model to be trained.
In this embodiment, when the global model parameters fed back by the coordinator are obtained, the private model in the model to be trained is updated based on the private model parameters, and the public model in the model to be trained is updated based on the global model parameters to obtain the updated model to be trained; the target model is then determined based on the updated model to be trained.
Specifically, if the global model parameters and the private model parameters are both gradients, a target model parameter is first calculated according to the private model parameters and the global model parameters, and the public model is updated according to the target model parameter. If the global model parameters and the private model parameters are both changes of model parameters, the public model is updated directly based on the global model parameters; for example, the model parameters to be updated of the public model in the model to be trained corresponding to the global model parameters (that is, the public model before being trained with the data to be trained) are determined first, new model parameters are determined based on the model parameters to be updated and the global model parameters, and the model parameters to be updated are set to the new model parameters to obtain the updated public model, where new model parameters = model parameters to be updated + global model parameters; alternatively, the new model parameters are calculated according to the weight corresponding to the model parameters to be updated, the weight corresponding to the global model parameters, the model parameters to be updated, and the global model parameters.
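A brief sketch of the parameter-change case follows. With default weights of 1 it reproduces the plain rule new model parameters = model parameters to be updated + global model parameters; other weights give the weighted variant mentioned above. The function name update_public_model and the numeric values are assumptions for illustration only.

    import numpy as np

    def update_public_model(params_to_update, global_delta, w_params=1.0, w_delta=1.0):
        """Apply the global parameter change to the public model.

        With w_params = w_delta = 1 this is the plain additive rule; other weights
        give the weighted variant."""
        return [w_params * p + w_delta * d for p, d in zip(params_to_update, global_delta)]

    params_to_update = [np.array([1.0, -1.0])]
    global_delta = [np.array([0.05, 0.10])]
    new_params_plain = update_public_model(params_to_update, global_delta)
    new_params_weighted = update_public_model(params_to_update, global_delta, 0.9, 0.5)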
Further, in an embodiment, the first public model parameters include a public gradient, the global model parameters include a third gradient, and step S310 includes:
Step S311: obtaining a first weight corresponding to the first public model parameters and a second weight corresponding to the global model parameters;
Step S312: determining target model parameters based on the first weight, the second weight, the first public model parameters, and the global model parameters;
Step S313: updating the public model in the model to be trained based on the target model parameters.
In this embodiment, the model parameters are all gradients. When the global model parameters are obtained, the first weight corresponding to the first public model parameters and the second weight corresponding to the global model parameters are obtained, and the target model parameters are determined based on the first weight, the second weight, the first public model parameters, and the global model parameters; specifically, target model parameters = first weight × first public model parameters + second weight × global model parameters. Then, the public model in the model to be trained is updated based on the target model parameters.
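The sketch below illustrates this weighted combination and one possible way of applying the resulting target gradient to the public model. The gradient-descent step and its learning rate are assumptions; the embodiment only fixes how the two gradients are mixed.

    import numpy as np

    def weighted_target_gradient(first_public_grad, global_grad, first_weight, second_weight):
        # target model parameters = first weight * first public model parameters
        #                           + second weight * global model parameters
        return [first_weight * p + second_weight * g
                for p, g in zip(first_public_grad, global_grad)]

    def apply_gradient(weights, grad, lr=0.1):
        # One assumed gradient-descent update of the public model.
        return [w - lr * g for w, g in zip(weights, grad)]

    first_public_grad = [np.array([0.2, -0.1])]
    global_grad = [np.array([0.6, 0.3])]
    target = weighted_target_gradient(first_public_grad, global_grad, 0.3, 0.7)
    public_weights = apply_gradient([np.zeros(2)], target)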
It should be noted that, in other embodiments, the first participant may also fuse the first public model parameters with the global model parameters in other ways to obtain the model parameters, for example, by directly combining the first public model parameters and the global model parameters to obtain the model parameters.
Further, in an embodiment, step S320 includes:
Step S321: determining whether the updated model to be trained converges;
Step S322: if the updated model to be trained converges, using the updated model to be trained as the target model;
Step S323: if the updated model to be trained does not converge, using the updated model to be trained as the model to be trained, and returning to the step in which the first participant inputs the data to be trained into the public model in the model to be trained for model training to obtain the first public model parameters corresponding to the public model, and inputs the data to be trained into the private model in the model to be trained for model training to obtain the private model parameters corresponding to the private model.
In this embodiment, after the updated model to be trained is obtained, it is determined whether the updated model to be trained converges. Specifically, it is determined whether the loss function of the public model in the updated model to be trained is less than a first preset value and whether the loss function of the private model in the updated model to be trained is less than a second preset value. If the loss function of the public model in the updated model to be trained is less than the first preset value and the loss function of the private model in the updated model to be trained is less than the second preset value, it is determined that the updated model to be trained converges, and the updated model to be trained is then used as the target model; otherwise, the updated model to be trained is used as the model to be trained and the process returns to step S100, so as to obtain a converged target model.
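A compact sketch of this two-threshold convergence test is given below; the threshold values and the function name has_converged are purely illustrative assumptions.

    def has_converged(public_loss, private_loss, first_preset=0.01, second_preset=0.01):
        """The updated model to be trained converges only when the public-model loss is
        below the first preset value and the private-model loss is below the second."""
        return public_loss < first_preset and private_loss < second_preset

    # Toy usage: both losses are below their presets, so the updated model is the target model.
    decision = ("use as target model" if has_converged(0.004, 0.008)
                else "return to step S100")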
Further, in another embodiment, step S320 includes:
Step S324: updating the number of updates corresponding to the model to be trained;
Step S325: if the number of updates reaches a preset number, using the updated model to be trained as the target model;
Step S326: if the number of updates does not reach the preset number, using the updated model to be trained as the model to be trained, and returning to the step in which the first participant inputs the data to be trained into the public model in the model to be trained for model training to obtain the first public model parameters corresponding to the public model, and inputs the data to be trained into the private model in the model to be trained for model training to obtain the private model parameters corresponding to the private model.
In this embodiment, after the model to be trained is updated, the number of updates corresponding to the model to be trained is updated, and it is then determined whether the number of updates has reached the preset number. If the number of updates has reached the preset number, the updated model to be trained is used as the target model and the number of updates is reset; otherwise, the updated model to be trained is used as the model to be trained and the process returns to step S100, so that the model to be trained is updated the preset number of times.
The preset number can be set as appropriate, and the initial value of the number of updates can be set to 0; each time the number of updates is updated, the current number of updates is increased by 1 to obtain the new number of updates.
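The counting logic of steps S324 to S326 can be sketched as the following driver loop; train_one_round stands for one full cycle of local training, aggregation, and model update, and both its name and the preset number used here are assumptions.

    def train_for_preset_number(train_one_round, preset_number=10):
        """Repeat step S100 until the number of updates reaches the preset number;
        the model from the final round is then used as the target model."""
        number_of_updates = 0          # initial value of the number of updates
        model = None
        while number_of_updates < preset_number:
            model = train_one_round()
            number_of_updates += 1     # current number of updates + 1
        return model

    # Toy usage with a stand-in training round.
    target_model = train_for_preset_number(lambda: "updated model to be trained", preset_number=3)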
The federated modeling method proposed in this embodiment updates the private model in the model to be trained based on the private model parameters, and updates the public model in the model to be trained based on the first public model parameters and the global model parameters, to obtain the updated model to be trained; the target model is then determined based on the updated model to be trained. By updating the public model and the private model in the model to be trained separately, modeling according to the model to be trained is achieved. Since the private model parameters do not participate in the training of the shared federated model, an attacker can only obtain the first public model parameters and therefore cannot steal the complete model parameters and use the chain rule to recover the input data, which further improves the security of data in federated learning.
An embodiment of this application also provides a federated modeling device. Referring to FIG. 4, the federated modeling device includes:
a training module 100, configured to input the data to be trained into the public model in the model to be trained for model training to obtain the first public model parameters corresponding to the public model, and to input the data to be trained into the private model in the model to be trained for model training to obtain the private model parameters corresponding to the private model;
a sending module 200, configured to send the first public model parameters to the coordinator, so that the coordinator determines and feeds back the global model parameters based on the first public model parameters; and
a determining module 300, configured to determine the target model based on the private model parameters, the global model parameters, and the model to be trained.
Further, the sending module 200 is also configured to:
send the first public model parameters to the coordinator, wherein the coordinator obtains the second public model parameters sent by a plurality of second participants, determines the global model parameters based on each of the second public model parameters and the first public model parameters, and feeds back the global model parameters to the first participant.
Further, the training module 100 is also configured to:
input, by the first participant, the data to be trained into the public model in the model to be trained for model training to obtain the public loss function value corresponding to the data to be trained, and input the data to be trained into the private model in the model to be trained for model training to obtain the private loss function value corresponding to the data to be trained; and
determine the public gradient based on the public loss function value, and determine the private gradient based on the private loss function value.
Further, the determining module 300 is also configured to:
update the private model in the model to be trained based on the private model parameters, and update the public model in the model to be trained based on the global model parameters, to obtain the updated model to be trained; and
determine the target model based on the updated model to be trained.
Further, the determining module 300 is also configured to:
obtain the first weight corresponding to the first public model parameters and the second weight corresponding to the global model parameters;
determine the target model parameters based on the first weight, the second weight, the first public model parameters, and the global model parameters; and
update the public model in the model to be trained based on the target model parameters.
Further, the determining module 300 is also configured to:
determine whether the updated model to be trained converges;
if the updated model to be trained converges, use the updated model to be trained as the target model; and
if the updated model to be trained does not converge, use the updated model to be trained as the model to be trained, and return to the step in which the first participant inputs the data to be trained into the public model in the model to be trained for model training to obtain the first public model parameters corresponding to the public model, and inputs the data to be trained into the private model in the model to be trained for model training to obtain the private model parameters corresponding to the private model.
Further, the determining module 300 is also configured to:
update the number of updates corresponding to the model to be trained;
if the number of updates reaches the preset number, use the updated model to be trained as the target model; and
if the number of updates does not reach the preset number, use the updated model to be trained as the model to be trained, and return to the step in which the first participant inputs the data to be trained into the public model in the model to be trained for model training to obtain the first public model parameters corresponding to the public model, and inputs the data to be trained into the private model in the model to be trained for model training to obtain the private model parameters corresponding to the private model.
In addition, an embodiment of this application also proposes a computer-readable storage medium on which a federated modeling program is stored; when the federated modeling program is executed by a processor, the steps of the federated modeling method described above are implemented.
For the method implemented when the federated modeling program running on the processor is executed, reference may be made to the embodiments of the federated modeling method of this application, which will not be repeated here.
It should be noted that, in this document, the terms "include", "comprise", or any other variants thereof are intended to cover non-exclusive inclusion, so that a process, method, article, or system that includes a series of elements includes not only those elements but also other elements that are not explicitly listed, or also includes elements inherent to such a process, method, article, or system. Without further restrictions, an element defined by the phrase "including a ..." does not exclude the existence of other identical elements in the process, method, article, or system that includes that element.
The serial numbers of the foregoing embodiments of this application are for description only and do not represent the superiority or inferiority of the embodiments.
Through the description of the foregoing embodiments, those skilled in the art can clearly understand that the methods of the foregoing embodiments can be implemented by means of software plus the necessary general hardware platform, and of course also by hardware, but in many cases the former is the better implementation. Based on this understanding, the technical solution of this application, in essence or in the part contributing to the prior art, can be embodied in the form of a software product. The computer software product is stored in a storage medium as described above (such as a ROM/RAM, a magnetic disk, or an optical disc) and includes several instructions to cause a terminal device (which may be a mobile phone, a computer, a server, an air conditioner, a network device, or the like) to execute the methods described in the embodiments of this application.
The above are only preferred embodiments of this application and are not intended to limit the patent scope of this application. Any equivalent structural or process transformation made using the content of the specification and drawings of this application, or any direct or indirect application in other related technical fields, is likewise included within the patent protection scope of this application.

Claims (20)

  1. A federated modeling method, wherein the federated modeling method comprises the following steps:
    a first participant inputting data to be trained into a public model in a model to be trained for model training to obtain first public model parameters corresponding to the public model, and inputting the data to be trained into a private model in the model to be trained for model training to obtain private model parameters corresponding to the private model;
    sending the first public model parameters to a coordinator, so that the coordinator determines and feeds back global model parameters based on the first public model parameters; and
    determining a target model based on the private model parameters, the global model parameters, and the model to be trained.
  2. The federated modeling method according to claim 1, wherein the step of sending the first public model parameters to a coordinator, so that the coordinator determines and feeds back global model parameters based on the first public model parameters, comprises:
    sending the first public model parameters to the coordinator, wherein the coordinator obtains second public model parameters sent by a plurality of second participants, determines the global model parameters based on each of the second public model parameters and the first public model parameters, and feeds back the global model parameters to the first participant.
  3. The federated modeling method according to claim 1, wherein the first public model parameters include a public gradient, the private model parameters include a private gradient, and the step of the first participant inputting the data to be trained into the public model in the model to be trained for model training to obtain the first public model parameters corresponding to the public model, and inputting the data to be trained into the private model in the model to be trained for model training to obtain the private model parameters corresponding to the private model, comprises:
    the first participant inputting the data to be trained into the public model in the model to be trained for model training to obtain a public loss function value corresponding to the data to be trained, and inputting the data to be trained into the private model in the model to be trained for model training to obtain a private loss function value corresponding to the data to be trained; and
    determining the public gradient based on the public loss function value, and determining the private gradient based on the private loss function value.
  4. The federated modeling method according to any one of claims 1 to 3, wherein the step of determining a target model based on the private model parameters, the global model parameters, and the model to be trained comprises:
    updating the private model in the model to be trained based on the private model parameters, and updating the public model in the model to be trained based on the global model parameters, to obtain an updated model to be trained; and
    determining the target model based on the updated model to be trained.
  5. The federated modeling method according to claim 4, wherein the step of updating the public model in the model to be trained based on the global model parameters comprises:
    obtaining a first weight corresponding to the first public model parameters and a second weight corresponding to the global model parameters;
    determining target model parameters based on the first weight, the second weight, the first public model parameters, and the global model parameters; and
    updating the public model in the model to be trained based on the target model parameters.
  6. The federated modeling method according to claim 4, wherein the step of determining the target model based on the updated model to be trained comprises:
    determining whether the updated model to be trained converges;
    if the updated model to be trained converges, using the updated model to be trained as the target model; and
    if the updated model to be trained does not converge, using the updated model to be trained as the model to be trained, and returning to the step in which the first participant inputs the data to be trained into the public model in the model to be trained for model training to obtain the first public model parameters corresponding to the public model, and inputs the data to be trained into the private model in the model to be trained for model training to obtain the private model parameters corresponding to the private model.
  7. The federated modeling method according to claim 4, wherein the step of determining the target model based on the updated model to be trained comprises:
    updating the number of updates corresponding to the model to be trained;
    if the number of updates reaches a preset number, using the updated model to be trained as the target model; and
    if the number of updates does not reach the preset number, using the updated model to be trained as the model to be trained, and returning to the step in which the first participant inputs the data to be trained into the public model in the model to be trained for model training to obtain the first public model parameters corresponding to the public model, and inputs the data to be trained into the private model in the model to be trained for model training to obtain the private model parameters corresponding to the private model.
  8. A federated modeling device, wherein the federated modeling device comprises:
    a training module, configured to input data to be trained into a public model in a model to be trained for model training to obtain first public model parameters corresponding to the public model, and to input the data to be trained into a private model in the model to be trained for model training to obtain private model parameters corresponding to the private model;
    a sending module, configured to send the first public model parameters to a coordinator, so that the coordinator determines and feeds back global model parameters based on the first public model parameters; and
    a determining module, configured to determine a target model based on the private model parameters, the global model parameters, and the model to be trained.
  9. Federated modeling equipment, wherein the federated modeling equipment comprises: a memory, a processor, and a federated modeling program stored on the memory and executable on the processor, and the federated modeling program, when executed by the processor, implements the steps of the federated modeling method according to claim 1.
  10. Federated modeling equipment, wherein the federated modeling equipment comprises: a memory, a processor, and a federated modeling program stored on the memory and executable on the processor, and the federated modeling program, when executed by the processor, implements the steps of the federated modeling method according to claim 2.
  11. Federated modeling equipment, wherein the federated modeling equipment comprises: a memory, a processor, and a federated modeling program stored on the memory and executable on the processor, and the federated modeling program, when executed by the processor, implements the steps of the federated modeling method according to claim 3.
  12. Federated modeling equipment, wherein the federated modeling equipment comprises: a memory, a processor, and a federated modeling program stored on the memory and executable on the processor, and the federated modeling program, when executed by the processor, implements the steps of the federated modeling method according to claim 4.
  13. Federated modeling equipment, wherein the federated modeling equipment comprises: a memory, a processor, and a federated modeling program stored on the memory and executable on the processor, and the federated modeling program, when executed by the processor, implements the steps of the federated modeling method according to claim 5.
  14. Federated modeling equipment, wherein the federated modeling equipment comprises: a memory, a processor, and a federated modeling program stored on the memory and executable on the processor, and the federated modeling program, when executed by the processor, implements the steps of the federated modeling method according to claim 6.
  15. A computer-readable storage medium, wherein a federated modeling program is stored on the computer-readable storage medium, and the federated modeling program, when executed by a processor, implements the steps of the federated modeling method according to claim 1.
  16. A computer-readable storage medium, wherein a federated modeling program is stored on the computer-readable storage medium, and the federated modeling program, when executed by a processor, implements the steps of the federated modeling method according to claim 2.
  17. A computer-readable storage medium, wherein a federated modeling program is stored on the computer-readable storage medium, and the federated modeling program, when executed by a processor, implements the steps of the federated modeling method according to claim 3.
  18. A computer-readable storage medium, wherein a federated modeling program is stored on the computer-readable storage medium, and the federated modeling program, when executed by a processor, implements the steps of the federated modeling method according to claim 4.
  19. A computer-readable storage medium, wherein a federated modeling program is stored on the computer-readable storage medium, and the federated modeling program, when executed by a processor, implements the steps of the federated modeling method according to claim 5.
  20. A computer-readable storage medium, wherein a federated modeling program is stored on the computer-readable storage medium, and the federated modeling program, when executed by a processor, implements the steps of the federated modeling method according to claim 6.
PCT/CN2021/093153 2020-05-14 2021-05-11 Federated modeling method, device, equipment, and computer-readable storage medium WO2021228110A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010409900.7A CN111582504A (en) 2020-05-14 2020-05-14 Federal modeling method, device, equipment and computer readable storage medium
CN202010409900.7 2020-05-14

Publications (1)

Publication Number Publication Date
WO2021228110A1 true WO2021228110A1 (en) 2021-11-18

Family

ID=72121064

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/093153 WO2021228110A1 (en) 2020-05-14 2021-05-11 Federated modeling method, device, equipment, and computer-readable storage medium

Country Status (2)

Country Link
CN (1) CN111582504A (en)
WO (1) WO2021228110A1 (en)


Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111582504A (en) * 2020-05-14 2020-08-25 深圳前海微众银行股份有限公司 Federal modeling method, device, equipment and computer readable storage medium
CN112235062A (en) * 2020-10-10 2021-01-15 中国科学技术大学 Federal learning method and system for resisting communication noise
CN112288097B (en) * 2020-10-29 2024-04-02 平安科技(深圳)有限公司 Federal learning data processing method, federal learning data processing device, computer equipment and storage medium
CN112651511B (en) * 2020-12-04 2023-10-03 华为技术有限公司 Model training method, data processing method and device
CN115081640A (en) * 2020-12-06 2022-09-20 支付宝(杭州)信息技术有限公司 Federal learning method and device based on differential privacy and electronic equipment
CN113850396B (en) * 2021-09-28 2022-04-19 北京邮电大学 Privacy enhanced federal decision method, device, system and storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110263936A (en) * 2019-06-14 2019-09-20 深圳前海微众银行股份有限公司 Laterally federation's learning method, device, equipment and computer storage medium
CN110399742A (en) * 2019-07-29 2019-11-01 深圳前海微众银行股份有限公司 A kind of training, prediction technique and the device of federation's transfer learning model
US20190370490A1 (en) * 2018-06-05 2019-12-05 Medical Informatics Corporation Rapid research using distributed machine learning
CN110572253A (en) * 2019-09-16 2019-12-13 济南大学 Method and system for enhancing privacy of federated learning training data
CN111582504A (en) * 2020-05-14 2020-08-25 深圳前海微众银行股份有限公司 Federal modeling method, device, equipment and computer readable storage medium


Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114707606A (en) * 2022-04-11 2022-07-05 中国电信股份有限公司 Data processing method and device based on federal learning, equipment and storage medium
CN114707606B (en) * 2022-04-11 2023-12-22 中国电信股份有限公司 Data processing method and device based on federal learning, equipment and storage medium
CN116029367A (en) * 2022-12-26 2023-04-28 东北林业大学 Fault diagnosis model optimization method based on personalized federal learning

Also Published As

Publication number Publication date
CN111582504A (en) 2020-08-25

Similar Documents

Publication Publication Date Title
WO2021228110A1 (en) Federated modeling method, device, equipment, and computer-readable storage medium
JP6878700B2 (en) Cross-blockchain authentication methods, devices, and electronic devices
JP6929497B2 (en) Cross-blockchain interaction methods, devices, systems, and electronic devices
JP6874224B2 (en) Cross blockchain authentication method and equipment
US9979497B2 (en) Audio playing method and apparatus based on Bluetooth connection
US20190184283A1 (en) Method of controlling information processing device, information processing device and non-transitory computer-readable recording medium storing program for information processing
US20140297823A1 (en) Cloud based virtual mobile device
WO2017041531A1 (en) Timeout wait duration update method and device
WO2022048195A1 (en) Longitudinal federation modeling method, apparatus, and device, and computer readable storage medium
CN106992953A (en) System information acquisition method and device
CN107433040A (en) Game data changes method and system
US20230353555A1 (en) Iot device and method for onboarding iot device to server
WO2019214706A1 (en) Access control method, message broadcast method, and related device
US20170161928A1 (en) Method and Electronic Device for Displaying Virtual Device Image
CN109635422A (en) Joint modeling method, device, equipment and computer readable storage medium
CN111431841A (en) Internet of things security sensing system and Internet of things data security transmission method
WO2019076002A1 (en) Right control method and apparatus for terminal device
CN109766705B (en) Circuit-based data verification method and device and electronic equipment
WO2023124909A1 (en) Group information processing method, apparatus, device, and medium
CN109451011B (en) Information storage method based on block chain and mobile terminal
CN106576106A (en) Method, apparatus and system for exchanging sensor information with middleware
CN115549889A (en) Decryption method, related device and storage medium
WO2018000621A1 (en) Communication data synchronization method and electronic device
CN111292224B (en) Image processing method and electronic equipment
US9536199B1 (en) Recommendations based on device usage

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21804232

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21804232

Country of ref document: EP

Kind code of ref document: A1