CN116628755A - Personalized federated learning method based on privacy protection - Google Patents

Personalized federated learning method based on privacy protection

Info

Publication number
CN116628755A
CN116628755A CN202310773918.9A CN202310773918A
Authority
CN
China
Prior art keywords
model
privacy
user
noise
value
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310773918.9A
Other languages
Chinese (zh)
Inventor
许嘉杰
李林襁
陈超
范锦茜
代盈盈
杨子毅
柴欣
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chengdu University of Information Technology
Original Assignee
Chengdu University of Information Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chengdu University of Information Technology filed Critical Chengdu University of Information Technology
Priority to CN202310773918.9A
Publication of CN116628755A
Legal status: Pending (current)

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00 - Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/60 - Protecting data
    • G06F21/62 - Protecting access to data via a platform, e.g. using keys or access control rules
    • G06F21/6218 - Protecting access to data via a platform, e.g. using keys or access control rules, to a system of files or objects, e.g. local or distributed file system or database
    • G06F21/6245 - Protecting personal data, e.g. for financial or medical purposes
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00 - Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/30 - Authentication, i.e. establishing the identity or authorisation of security principals
    • G06F21/31 - User authentication
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00 - Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/60 - Protecting data
    • G06F21/602 - Providing cryptographic facilities or services
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00 - Machine learning
    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T - CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00 - Road transport of goods or passengers
    • Y02T10/10 - Internal combustion engine [ICE] based vehicles
    • Y02T10/40 - Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • Computer Security & Cryptography (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Bioethics (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Hardware Design (AREA)
  • Medical Informatics (AREA)
  • Databases & Information Systems (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Storage Device Security (AREA)

Abstract

The invention relates to the technical field of user privacy protection, and discloses a personalized federated learning method based on privacy protection, which comprises the following steps: S1, establishing a model; S2, selecting a model; S3, encrypting the model; S4, establishing a privacy algorithm; S5, privacy risk analysis; S6, establishing user identity re-verification; S7, training and optimizing the model; and S8, deploying the model. According to the personalized federated learning method based on privacy protection, a global model or a global model set is established, a noise algorithm for user privacy protection is added to the global model or global model set, noise is added to the user data in the global model or global model set, risk analysis and verification are carried out on the user data after the noise is added, the verified global model or global model set is deployed, and the identity of the user is re-verified before deployment. This ensures the authenticity of the authenticated user, prevents the user's privacy from being infringed when user information is uploaded, and thereby prevents the user information from being leaked in the process of uploading the model.

Description

Personalized federated learning method based on privacy protection
Technical Field
The invention relates to the technical field of user privacy protection, and in particular to a personalized federated learning method based on privacy protection.
Background
With the advancement of artificial intelligence technology and the increasing popularity of smart devices, federated learning (Federated Learning) is emerging as a new machine learning approach, and deep learning is receiving great attention in both academia and industry. Because its performance greatly exceeds that of traditional algorithms, deep learning is widely applied in various fields such as machine translation, image recognition, autonomous driving and natural language processing, and it is changing our way of life. Its success depends on powerful computers and the availability of large amounts of data. However, a learning system that requires all data to be fed into a learning model running on a central server raises serious privacy problems. With the rise of the internet of things and edge computing, big data is often not concentrated in a single location but distributed across many sites, and how to safely and effectively update and share models among multiple sites is a new challenge for such computing methods. Out of concern for data privacy and security, data owners cannot directly share data to jointly train a deep learning model. To solve the problems of data islands and user privacy, federated learning has been developed as a very promising solution: it transfers the model training process from a centralized server to the user's device, performs model training locally on the device, and then uploads the training results to the server for model updating.
Although the training data is stored on the user device, the privacy of the user can still be violated during the process of uploading the model update, resulting in leakage of user information while the model is uploaded.
Disclosure of Invention
(I) Technical problem to be solved
Aiming at the defects of the prior art, the invention provides a personalized federated learning method based on privacy protection. The method establishes a global model or a global model set, adds a noise algorithm for user privacy protection to the global model or global model set, adds noise to the user data in the global model or global model set, performs risk analysis and verification after the noise is added, deploys the verified global model or global model set, and re-verifies the identity of the user before deployment, thereby ensuring the authenticity of the authenticated user, preventing the user's privacy from being infringed when user information is uploaded, and avoiding problems such as the leakage of user information in the process of uploading the model.
(II) Technical solution
In order to achieve the above purpose, the present invention provides the following technical solution: a personalized federated learning method based on privacy protection, comprising the following steps:
S1, establishing a model;
S2, selecting a model;
S3, encrypting the model;
S4, establishing a privacy algorithm;
S5, privacy risk analysis;
S6, establishing user identity re-verification;
S7, training and optimizing the model;
S8, deploying the model;
wherein establishing a model further comprises data collection and feature establishment;
selecting a model, wherein a model suited to the environment in which the device is used is selected from the established models;
encrypting the model, wherein personal privacy is protected by adding noise to the selected model;
establishing a privacy algorithm, wherein a suitable algorithm is established according to the model and the model or model set participating in uploading is encrypted;
privacy risk analysis, wherein, before the model or model set is uploaded, the algorithm encrypting the model or model set is analyzed so that the data in the model or model set cannot be reconstructed;
establishing user identity re-verification, wherein the participants and the identities used in the federated learning process are re-verified to ensure that the current user has operation rights;
training and optimizing the model, wherein the model or model set is repeatedly tested, and the test results are evaluated and optimized;
deploying the model, wherein the trained model or model set is applied to the actual scene.
Preferably, establishing a model comprises data collection and data preprocessing: in data collection, user information that can be used for modeling is collected when an authorized user logs in; in data preprocessing, the collected user information used for modeling is preprocessed, feature extraction and data conversion are carried out on the user information, and a plurality of models are established according to the features of the user information.
Preferably, in selecting a model, one or more of the established models are selected by the target user to form a model set, and a noise algorithm for protecting user privacy is added to the selected model or model set.
Preferably, the noise algorithm calculation formula is:
Zs = f(Yh) + N(Jz, σ²)
where Zs is the final value, after noise is added, of the function added to the model or model set; N(Jz, σ²) denotes the distributed noise with mean Jz and variance σ²; Jz takes a value of 0 or close to 0 and is selected according to the characteristics of the acquired user data set; σ is the standard deviation of the noise; and f(Yh) is the original value of the function on the user data set Yh.
Preferably, the noise algorithm calculation formula further comprises:
Zs1 = f(Yh) + X·(Δf/ε)
where Δf is the sensitivity of the user data set, ε is the privacy budget, X is the random variable coefficient, Δf/ε is set as the noise parameter of the distributed noise, f(Yh) is the original value of the function on the user data set Yh, and Zs1 is the final value, after noise is added, of the function added to the model or model set.
Preferably, the privacy risk analysis is to test the influence of the final value of the noise on the protection of the user privacy information under the condition of simulating privacy disclosure, and the simulation verification calculation formula is as follows:
where ΔYz represents the privacy loss value and α² is the sensitivity coefficient parameter; the sensitivity is combined with the final noise value to obtain the influence of the variance of the final noise value on the privacy loss value.
Preferably, the privacy risk analysis is to test the influence of the final value of the noise on the protection of the user privacy information under the condition of simulating privacy disclosure, and the other simulation verification calculation formula is as follows:
where y is the privacy loss value, f(Mn) is the assumed simulated leakage data set, Δf is the maximum difference in output values, and f(Zs1) is the parameter of the final noise value, from which the strength of the privacy protection is obtained.
Preferably, establishing user identity re-verification further comprises dynamic password verification, and the dynamic password verification process is as follows:
Kl = f(Sec, Time); where Kl is the dynamic password, f is the dynamic password function, Sec is the secret information shared among the application, the device and the user, and Time is the current time cut-off point; verification succeeds when the computed Kl value is consistent with the dynamic password value provided by the user.
Preferably, training and optimizing the model further comprises adjusting parameters: the privacy disclosure situation is simulated a plurality of times, the simulated values are substituted into the verification calculation formula used by the model or model set to obtain privacy loss values, the privacy loss values from the multiple simulations are combined by weighted averaging, the model is trained, and the parameters are adjusted according to the calculation result.
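By way of illustration only, the following Python sketch shows one way such a weighted-average parameter adjustment could look; the loss function, the weighting scheme and the names tune_noise_scale and target_loss are assumptions made for the example and are not taken from the method itself.

```python
import numpy as np

def tune_noise_scale(loss_fn, sigmas, n_simulations=10, target_loss=0.5, rng=None):
    """For each candidate noise scale, simulate the privacy-disclosure scenario several
    times, take a weighted average of the resulting privacy-loss values (later runs
    weighted more heavily, an assumed choice), and keep the smallest scale that meets
    the target."""
    rng = rng or np.random.default_rng(0)
    weights = np.linspace(1.0, 2.0, n_simulations)   # assumed weighting scheme
    avg_loss = float("inf")
    for sigma in sorted(sigmas):
        losses = np.array([loss_fn(sigma, rng) for _ in range(n_simulations)])
        avg_loss = float(np.average(losses, weights=weights))
        if avg_loss <= target_loss:
            return sigma, avg_loss
    return max(sigmas), avg_loss

# Illustrative loss: larger noise scales yield smaller simulated privacy loss.
loss = lambda sigma, rng: abs(rng.normal()) / sigma
best_sigma, achieved_loss = tune_noise_scale(loss, sigmas=[0.1, 0.5, 1.0, 2.0])
```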
Preferably, deploying the model comprises selecting a suitable computing platform and deployment environment for the trained and optimized model or model set, optimizing according to the chosen computing platform or deployment environment, and monitoring and maintaining in real time, i.e. monitoring the running state and performance of the deployed model or model set in real time.
Compared with the prior art, the invention provides a personalized federated learning method based on privacy protection, which has the following beneficial effects:
1. According to the invention, a global model or global model set is established, and a noise algorithm for protecting user privacy is added to the global model or global model set, so that noise is added to the user data in the global model or global model set; after the noise is added, risk analysis and verification are carried out on the user data, the verified global model or global model set is deployed, and the identity of the user is re-verified before deployment. This ensures the authenticity of the authenticated user, prevents the user's privacy from being infringed when user information is uploaded, and prevents the user information from being leaked in the process of uploading the model.
Drawings
FIG. 1 is a schematic flow chart of the present invention.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
Referring to fig. 1, a personalized federated learning method based on privacy protection comprises the following steps:
S1, establishing a model;
S2, selecting a model;
S3, encrypting the model;
S4, establishing a privacy algorithm;
S5, privacy risk analysis;
S6, establishing user identity re-verification;
S7, training and optimizing the model;
S8, deploying the model;
wherein establishing a model further comprises data collection and feature establishment;
selecting a model, wherein a model suited to the environment in which the device is used is selected from the established models;
encrypting the model, wherein personal privacy is protected by adding noise to the selected model;
establishing a privacy algorithm, wherein a suitable algorithm is established according to the model and the model or model set participating in uploading is encrypted;
privacy risk analysis, wherein, before the model or model set is uploaded, the algorithm encrypting the model or model set is analyzed so that the data in the model or model set cannot be reconstructed;
establishing user identity re-verification, wherein the participants and the identities used in the federated learning process are re-verified to ensure that the current user has operation rights;
training and optimizing the model, wherein the model or model set is repeatedly tested, and the test results are evaluated and optimized;
deploying the model, wherein the trained model or model set is applied to the actual scene.
Further, establishing a model comprises data collection and data preprocessing. In data collection, user information that can be used for modeling is collected when an authorized user logs in; in data preprocessing, the collected user information used for modeling is preprocessed, feature extraction and data conversion are carried out on the user information, and a plurality of models are established according to the features of the user information. By collecting user information and preprocessing the local user information and data, irrelevant information is cleaned out, feature extraction is carried out on the user's relevant information, and a local model is established from the local user information; the plurality of local models are then combined through a model aggregation method to generate a plurality of global models. In selecting a model, the target user selects one or more of the established global models to form a model set, or selects a single global model, and a noise algorithm for protecting user privacy is added to the selected global model or global model set, so that the single global model or model set uploaded by a participant is protected.
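As a rough illustration of the local-training and model-aggregation step described above, the following Python fragment trains simple local models and averages them into a global model; the least-squares model, the function names train_local and federated_average, and the toy data are all assumptions made for this sketch.

```python
import numpy as np

def train_local(global_weights, local_data, lr=0.01, epochs=1):
    """Hypothetical local update: one gradient step per epoch on the user's own data.
    Here the 'model' is just a weight vector and the loss is least squares."""
    w = global_weights.copy()
    X, y = local_data
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)   # least-squares gradient
        w -= lr * grad
    return w

def federated_average(local_weights, sample_counts):
    """Aggregate local models into a global model, weighting by sample count (FedAvg-style)."""
    total = sum(sample_counts)
    return sum(w * (n / total) for w, n in zip(local_weights, sample_counts))

# Toy usage: three users, a 5-dimensional linear model.
rng = np.random.default_rng(0)
global_w = np.zeros(5)
user_datasets = [(rng.normal(size=(20, 5)), rng.normal(size=20)) for _ in range(3)]
for _round in range(3):
    local_ws = [train_local(global_w, d) for d in user_datasets]
    global_w = federated_average(local_ws, [len(d[1]) for d in user_datasets])
```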
Embodiment 1
The noise algorithm calculation formula is:
Zs = f(Yh) + N(Jz, σ²)
where Zs is the final value, after noise is added, of the function added to the model or model set; N(Jz, σ²) denotes the distributed noise with mean Jz and variance σ²; Jz takes a value of 0 or close to 0 and is selected according to the characteristics of the acquired user data set; σ is the standard deviation of the noise; and f(Yh) is the original value of the function on the user data set Yh. The privacy risk analysis tests the influence of the final noise value on the protection of the user privacy information under a simulated privacy disclosure condition, and the simulation verification calculation formula is as follows:
where ΔYz represents the privacy loss value and α² is the sensitivity coefficient parameter used to obtain the influence of the variance of the final noise value on the privacy loss value. The original user data and the noise are combined according to the calculation formula to obtain the final noise value, and the final noise value is brought into the selected global model or global model set, so that the user information is further encrypted and the privacy of the user information is enhanced, preventing the user information from being leaked during the uploading of the user data. At the same time, before uploading, the final noise value is tested under a simulated privacy disclosure condition to check how well it protects the user information when privacy is disclosed, so that the final noise value can be corrected in time and the user information can still be protected in the event of a privacy disclosure.
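A minimal Python sketch of Embodiment 1 as interpreted here: additive noise with mean Jz and standard deviation σ is applied to a model update, and a simulated privacy-loss value is computed. The exact formulas in the patent are not reproduced, so the Gaussian draw, the form of simulated_privacy_loss (α²/σ²) and the function names are assumptions.

```python
import numpy as np

def add_distributed_noise(f_value, sigma, jz=0.0, rng=None):
    """Zs = f(Yh) + noise, with the noise drawn from a distribution whose mean is Jz
    (0 or close to 0) and whose standard deviation is sigma (Gaussian assumed here)."""
    rng = rng or np.random.default_rng(0)
    return f_value + rng.normal(loc=jz, scale=sigma, size=np.shape(f_value))

def simulated_privacy_loss(sigma, alpha):
    """Assumed stand-in for the simulation check: a privacy-loss value that shrinks as
    the variance of the final noise value grows, scaled by the sensitivity parameter alpha."""
    return (alpha ** 2) / (sigma ** 2)

model_update = np.array([0.12, -0.07, 0.33])            # f(Yh): original model values
noisy_update = add_distributed_noise(model_update, sigma=0.5)
loss_estimate = simulated_privacy_loss(sigma=0.5, alpha=1.0)
```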
Embodiment 2
The noise algorithm calculation formula further comprises:
Zs1 = f(Yh) + X·(Δf/ε)
where Δf is the sensitivity of the user data set, ε is the privacy budget, X is the random variable coefficient, Δf/ε is set as the noise parameter of the distributed noise, f(Yh) is the original value of the function on the user data set Yh, and Zs1 is the final value, after noise is added, of the function added to the model or model set. The privacy risk analysis tests the influence of the final noise value on the protection of the user privacy information under a simulated privacy disclosure condition, and another simulation verification calculation formula is as follows:
where y is the privacy loss value, f(Mn) is the assumed simulated leakage data set, Δf is the maximum difference in output values, and f(Zs1) is the parameter of the final noise value, from which the strength of the privacy protection is obtained. The original user data and the noise are combined according to the calculation formula to obtain the final noise value, and the final noise value is brought into the selected global model or global model set, so that the user information is encrypted and the privacy of the user information is enhanced, preventing the user information from being leaked in transmission during the uploading of the user data. At the same time, before uploading, the final noise value is tested under a simulated privacy disclosure condition to check how strongly it protects the user information when privacy is disclosed, so that the final noise value can be corrected in time and the user information can still be protected in the event of a privacy disclosure.
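A corresponding sketch of Embodiment 2 as interpreted here: noise with parameter Δf/ε (sensitivity over privacy budget) is added, and a log-ratio is used as a stand-in for the second simulation formula. The Laplace draw, the exact loss expression and the function names are assumptions, not the patent's literal formulas.

```python
import numpy as np

def add_budgeted_noise(f_value, sensitivity, epsilon, rng=None):
    """Zs1 = f(Yh) + noise with noise parameter delta_f / epsilon (Laplace draw assumed)."""
    rng = rng or np.random.default_rng(0)
    scale = sensitivity / epsilon            # the noise parameter named in the description
    return f_value + rng.laplace(loc=0.0, scale=scale, size=np.shape(f_value))

def simulated_privacy_loss(f_mn, f_zs1, delta_f):
    """Assumed stand-in for the second simulation formula: log-ratio of the simulated
    leakage value f(Mn) to the noised value f(Zs1), normalised by the sensitivity."""
    return float(np.log(abs(f_mn) / abs(f_zs1)) / delta_f)

update = np.array([0.12, -0.07, 0.33])                   # f(Yh)
noisy = add_budgeted_noise(update, sensitivity=1.0, epsilon=0.5)
y = simulated_privacy_loss(f_mn=0.8, f_zs1=abs(noisy[0]), delta_f=1.0)
```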
Further, after the global model or global model set has been selected and processed with the noise algorithm calculation formula of Embodiment 1 or Embodiment 2, establishing user identity re-verification comprises dynamic password verification, and the dynamic password verification process is as follows:
Kl = f(Sec, Time); where Kl is the dynamic password, f is the dynamic password function, Sec is the secret information shared among the application, the device and the user, and Time is the current time cut-off point; verification succeeds when the computed Kl value is consistent with the dynamic password value provided by the user. By establishing this user identity re-verification process, the logged-in user's information is re-verified before the global model or global model set is uploaded, which strengthens the authenticity of the authenticated user's identity, reduces malicious attacks or fraud, and further reduces the risk of user information leakage.
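The dynamic-password step Kl = f(Sec, Time) can be illustrated with a standard time-based one-time-password construction; the patent does not specify the function f, so the HMAC-SHA1 derivation and the 30-second window below are assumptions (an RFC 6238-style scheme), not the method's actual function.

```python
import hmac, hashlib, struct, time

def dynamic_password(secret: bytes, time_step: int = 30, digits: int = 6) -> str:
    """Kl = f(Sec, Time): derive a short one-time code from the shared secret and the
    current time window (TOTP-style construction, assumed here)."""
    counter = int(time.time()) // time_step
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

def verify(secret: bytes, provided: str) -> bool:
    """Server-side check: the computed Kl must match the value supplied by the user."""
    return hmac.compare_digest(dynamic_password(secret), provided)

shared_secret = b"example-shared-secret"   # Sec: secret shared among application, device and user
print(verify(shared_secret, dynamic_password(shared_secret)))   # True within the same time window
```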
Further, deploying the model comprises selecting a suitable computing platform and deployment environment for the trained and optimized model or model set, optimizing according to the chosen computing platform or deployment environment, and monitoring and maintaining in real time, i.e. monitoring the running state and performance of the deployed model or model set in real time. After the global model or global model set is deployed, its performance and running state are monitored and maintained in real time, and the stability and availability of the deployed machine-learning global model or global model set in the production environment are tracked and monitored in real time by means of log recording, performance monitoring and error reporting.
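A small sketch of the real-time monitoring described above, wrapping a deployed model call with latency and output logging; the metric names, the latency budget and the function monitored_predict are illustrative choices rather than part of the method.

```python
import logging, time
import numpy as np

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("deployed-model")

def monitored_predict(model_fn, features, latency_budget_s=0.1):
    """Wrap the deployed global model: record latency and output statistics so the running
    state and performance can be tracked in the production environment."""
    start = time.perf_counter()
    prediction = model_fn(features)
    latency = time.perf_counter() - start
    log.info("latency=%.4fs output_mean=%.4f", latency, float(np.mean(prediction)))
    if latency > latency_budget_s:
        log.warning("latency budget exceeded (%.4fs > %.4fs)", latency, latency_budget_s)
    return prediction

# Illustrative deployed model: a fixed linear scoring function.
weights = np.array([0.2, -0.1, 0.4])
monitored_predict(lambda x: x @ weights, np.array([1.0, 2.0, 3.0]))
```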
Although embodiments of the present invention have been shown and described, it will be understood by those skilled in the art that various changes, modifications, substitutions and alterations can be made therein without departing from the principles and spirit of the invention, the scope of which is defined in the appended claims and their equivalents.

Claims (10)

1. A personalized federated learning method based on privacy protection, characterized in that the method comprises the following steps:
S1, establishing a model;
S2, selecting a model;
S3, encrypting the model;
S4, establishing a privacy algorithm;
S5, privacy risk analysis;
S6, establishing user identity re-verification;
S7, training and optimizing the model;
S8, deploying the model;
wherein establishing a model further comprises data collection and feature establishment;
selecting a model, wherein a model suited to the environment in which the device is used is selected from the established models;
encrypting the model, wherein personal privacy is protected by adding noise to the selected model;
establishing a privacy algorithm, wherein a suitable algorithm is established according to the model and the model or model set participating in uploading is encrypted;
privacy risk analysis, wherein, before the model or model set is uploaded, the algorithm encrypting the model or model set is analyzed so that the data in the model or model set cannot be reconstructed;
establishing user identity re-verification, wherein the participants and the identities used in the federated learning process are re-verified to ensure that the current user has operation rights;
training and optimizing the model, wherein the model or model set is repeatedly tested, and the test results are evaluated and optimized;
deploying the model, wherein the trained model or model set is applied to the actual scene.
2. The personalized federated learning method based on privacy protection according to claim 1, wherein establishing a model comprises data collection and data preprocessing: in data collection, user information that can be used for modeling is collected when an authorized user logs in; in data preprocessing, the collected user information used for modeling is preprocessed, feature extraction and data conversion are carried out on the user information, and a plurality of models are established according to the features of the user information.
3. The personalized federated learning method based on privacy protection according to claim 2, wherein, in selecting a model, one or more of the established models are selected by the target user to form a model set, and a noise algorithm for protecting user privacy is added to the selected model or model set.
4. The personalized federated learning method based on privacy protection according to claim 3, wherein the noise algorithm calculation formula is:
Zs = f(Yh) + N(Jz, σ²)
where Zs is the final value, after noise is added, of the function added to the model or model set; N(Jz, σ²) denotes the distributed noise with mean Jz and variance σ²; Jz takes a value of 0 or close to 0 and is selected according to the characteristics of the acquired user data set; σ is the standard deviation of the noise; and f(Yh) is the original value of the function on the user data set Yh.
5. The personalized federated learning method based on privacy protection according to claim 3, wherein the noise algorithm calculation formula further comprises:
Zs1 = f(Yh) + X·(Δf/ε)
where Δf is the sensitivity of the user data set, ε is the privacy budget, X is the random variable coefficient, Δf/ε is set as the noise parameter of the distributed noise, f(Yh) is the original value of the function on the user data set Yh, and Zs1 is the final value, after noise is added, of the function added to the model or model set.
6. The personalized federated learning method based on privacy protection according to claim 4, wherein the privacy risk analysis tests the influence of the final noise value on the protection of the user privacy information under a simulated privacy disclosure condition, and in the simulation verification calculation formula ΔYz represents the privacy loss value and α² is the sensitivity coefficient parameter; the sensitivity is combined with the final noise value to obtain the influence of the variance of the final noise value on the privacy loss value.
7. The personalized federated learning method based on privacy protection according to claim 5, wherein the privacy risk analysis tests the influence of the final noise value on the protection of the user privacy information under a simulated privacy disclosure condition, and in the other simulation verification calculation formula y is the privacy loss value, f(Mn) is the assumed simulated leakage data set, Δf is the maximum difference in output values, and f(Zs1) is the parameter of the final noise value, from which the strength of the privacy protection is obtained.
8. The personalized federated learning method based on privacy protection according to claim 2, wherein establishing user identity re-verification comprises dynamic password verification, and the dynamic password verification process is as follows:
Kl = f(Sec, Time); where Kl is the dynamic password, f is the dynamic password function, Sec is the secret information shared among the application, the device and the user, and Time is the current time cut-off point; verification succeeds when the computed Kl value is consistent with the dynamic password value provided by the user.
9. The personalized federated learning method based on privacy protection according to claim 6 or 7, wherein training and optimizing the model further comprises adjusting parameters: the privacy disclosure situation is simulated a plurality of times, the simulated values are substituted into the verification calculation formula used by the model or model set to obtain privacy loss values, the privacy loss values from the multiple simulations are combined by weighted averaging, the model is trained, and the parameters are adjusted according to the calculation result.
10. The personalized federated learning method based on privacy protection according to claim 9, wherein deploying the model comprises selecting a suitable computing platform and deployment environment for the trained and optimized model or model set, optimizing according to the chosen computing platform or deployment environment, and monitoring and maintaining in real time, i.e. monitoring the running state and performance of the deployed model or model set in real time.
CN202310773918.9A 2023-06-28 2023-06-28 Personalized federated learning method based on privacy protection Pending CN116628755A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310773918.9A CN116628755A (en) 2023-06-28 2023-06-28 Personalized federated learning method based on privacy protection

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310773918.9A CN116628755A (en) 2023-06-28 2023-06-28 Personalized federated learning method based on privacy protection

Publications (1)

Publication Number Publication Date
CN116628755A true CN116628755A (en) 2023-08-22

Family

ID=87610035

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310773918.9A Pending CN116628755A (en) 2023-06-28 2023-06-28 Personalized federated learning method based on privacy protection

Country Status (1)

Country Link
CN (1) CN116628755A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117113418A (en) * 2023-10-18 2023-11-24 武汉大学 Anti-image enhancement data desensitization method and system based on iterative optimization
CN117113418B (en) * 2023-10-18 2024-01-16 武汉大学 Anti-image enhancement data desensitization method and system based on iterative optimization

Similar Documents

Publication Publication Date Title
CN111340008B (en) Method and system for generation of counterpatch, training of detection model and defense of counterpatch
CN105868678B (en) The training method and device of human face recognition model
CN112185395B (en) Federal voiceprint recognition method based on differential privacy
Viet et al. Using deep learning model for network scanning detection
CN112052761A (en) Method and device for generating confrontation face image
CN114363043B (en) Asynchronous federal learning method based on verifiable aggregation and differential privacy in peer-to-peer network
CN116628755A (en) Personalized federal learning method based on privacy protection
AU2019100349A4 (en) Face - Password Certification Based on Convolutional Neural Network
Bitton et al. Evaluating the information security awareness of smartphone users
Polakis et al. Faces in the distorting mirror: Revisiting photo-based social authentication
CN104881606A (en) Formalized modeling based software security requirement acquisition method
JP5196013B2 (en) Biometric authentication device, biometric authentication method, and biometric authentication program
He et al. Finger vein image deblurring using neighbors-based binary-GAN (NB-GAN)
CN116776386A (en) Cloud service data information security management method and system
Buriro et al. SWIPEGAN: swiping data augmentation using generative adversarial networks for smartphone user authentication
CN115238172A (en) Federal recommendation method based on generation of countermeasure network and social graph attention network
CN113807258A (en) Encrypted face recognition method based on neural network and DCT (discrete cosine transformation)
CN116896452B (en) Computer network information security management method and system based on data processing
CN117131490A (en) Power distribution network wireless terminal equipment identity authentication method based on equipment hardware fingerprint
CN109856979B (en) Environment adjusting method, system, terminal and medium
CN116227547A (en) Federal learning model optimization method and device based on self-adaptive differential privacy
CN112272195B (en) Dynamic detection authentication system and method thereof
Ryu et al. Continuous multibiometric authentication for online exam with machine learning
CN109818755A (en) A kind of transparent two-factor authentication system and method
Zhong et al. Steganographer detection via multi-scale embedding probability estimation

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination