CN117668874B - Data privacy protection method based on deep learning training process - Google Patents

Data privacy protection method based on deep learning training process Download PDF

Info

Publication number
CN117668874B
CN117668874B CN202311670970.8A
Authority
CN
China
Prior art keywords
data
deep learning
data set
server
type server
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202311670970.8A
Other languages
Chinese (zh)
Other versions
CN117668874A (en)
Inventor
向涛
肖宏飞
陈泌文
张巧
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chongqing University
Original Assignee
Chongqing University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chongqing University filed Critical Chongqing University
Priority to CN202311670970.8A priority Critical patent/CN117668874B/en
Publication of CN117668874A publication Critical patent/CN117668874A/en
Application granted granted Critical
Publication of CN117668874B publication Critical patent/CN117668874B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Landscapes

  • Storage Device Security (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The invention relates to the field of deep learning, and in particular to a data privacy protection method based on a deep learning training process, which comprises the following steps: generating a data request in the first type server according to the training task of the deep learning model, and sending the data request to the second type server; verifying the data request in the second type server, and acquiring a first data set according to the verification result; encrypting the first data set and transmitting the encrypted first data set to a third type server; generating, in the third type server, a second data set with the same data distribution as the first data set from the encrypted first data set, and sending the second data set to the first type server; training a deep learning model in the first type server using the second data set, and identifying sample data labels using the trained deep learning model. The invention aims to provide a safer deep learning training environment and an effective guarantee for privacy-sensitive data processing in cloud computing.

Description

Data privacy protection method based on deep learning training process
Technical Field
The invention relates to the field of deep learning, in particular to a data privacy protection method based on a deep learning training process.
Background
In the current digital age, the generation and processing of large amounts of sensitive data pose serious challenges for data privacy protection. The training process of deep learning models in particular requires large amounts of raw data, which may contain personal privacy information. Traditional data privacy protection methods degrade the performance of the deep learning model while protecting the data, creating a tension between privacy and utility.
Because deep learning models depend heavily on the original data, training a model directly on raw data may leak private information, especially for data containing personal identities and other sensitive information. To protect privacy, traditional methods often rely on means such as encryption, but the distribution of the encrypted data may differ from that of the original data, so that the deep learning model performs poorly in practical applications. Meanwhile, in some cloud server operating environments, the transmission and storage of data may be threatened, and additional measures are needed to ensure data security.
Disclosure of Invention
Aiming at the demands of practical application, the invention provides a data privacy protection method based on a deep learning training process, aiming at solving the problem of data privacy protection based on the deep learning training process.
In a first aspect, the present invention provides a data privacy protection method based on a deep learning training process, where the data privacy protection method based on the deep learning training process includes the following steps: generating a data solicitation request according to a training task of a deep learning model in a first type server, and sending the data solicitation request to a second type server; in the second type server, verifying the data request, acquiring a first data set according to a verification result, encrypting the first data set, and transmitting the encrypted first data set to a third type server; generating a second data set with the same data distribution as the first data set by using the encrypted first data set in the third type server, and sending the second data set to the first type server; and training the deep learning model by using the second data set in the first type server, and identifying a sample data tag by using the trained deep learning model.
According to the invention, a multi-server interaction mechanism effectively addresses the problem of data privacy leakage in the deep learning training process. First, the second type server verifies the data request issued by the deep learning model on the first type server, then acquires and encrypts the first data set according to the verification result; the third type server then uses the encrypted data to generate a second data set with the same distribution as the original data. Finally, the data that has undergone this multiple privacy processing is transmitted back to the first type server for training the deep learning model. The method provided by the invention has wide application potential in cloud computing environments, provides a feasible privacy protection scheme for training deep learning models, and ensures the security of the data during training while maintaining the training effect.
Optionally, the memory of the first type server stores first program instructions of one or more deep learning models, second program instructions for training or executing any one of the deep learning models, and third program instructions for generating a data request for any one of the deep learning models. The first type server provided by this option has multi-model management and execution capability, can efficiently train and execute multiple models on the same platform, and at the same time effectively protects data privacy.
Optionally, the first type server includes a plurality of first type sub-servers, and the memory of any one first type sub-server stores first program instructions of a deep learning model, second program instructions for training or executing the corresponding deep learning model, and third program instructions for generating a data request for that deep learning model. This option partitions the architecture of the first type server, improving its scalability and parallelism, so that multiple models can be trained and executed independently at the same time, improving overall performance. The distributed first type server provided by this option can effectively optimize the efficiency and flexibility of the deep learning training process while protecting data privacy, and provides powerful support for large-scale deep learning applications.
Optionally, verifying the data request in the second type server, obtaining a first data set according to the verification result, encrypting the first data set, and sending the encrypted first data set to a third type server includes the following steps: verifying the validity of the data request through the data retrieval identity information carried in the request; acquiring a first data set meeting the data requirement specification in the data request; and encrypting each sample datum in the first data set and sending the encrypted first data set to the third type server. This option combines identity verification, data customization, and enhanced encryption to comprehensively ensure the privacy and security of the data required for deep learning training.
Optionally, encrypting any sample data in the first data set includes the following steps: expanding the sample data into one or more channel vectors according to the channel information of the sample data, and constructing a characterization vector of the sample data based on the channel vectors, wherein the characterization vector satisfies the following characterization model: L_{1×(α(m×n))} = [L^1_{1×(m×n)}, L^2_{1×(m×n)}, L^3_{1×(m×n)}, …, L^α_{1×(m×n)}], where L_{1×(α(m×n))} represents the characterization vector of sample data with α channels and size m×n, L^1_{1×(m×n)} represents the first channel vector of the sample data, L^2_{1×(m×n)} the second channel vector, L^3_{1×(m×n)} the third channel vector, and L^α_{1×(m×n)} the α-th channel vector; constructing a random deformation matrix based on the channel information and the data size of the sample data, wherein the random deformation matrix satisfies the following characterization model: Θ_{(α(m×n))×(α(m×n))}, where α(m×n) represents the number of rows or columns of the random deformation matrix; and encrypting the sample data by combining the random deformation matrix with the characterization vector, wherein the encrypted sample data satisfies the following characterization model: Φ = L_{1×(α(m×n))} · Θ_{(α(m×n))×(α(m×n))}, where Φ represents the matrix of encrypted sample data and L_{1×(α(m×n))} represents the characterization vector of the sample data. This option expands the sample data into channel vectors according to its channel information and constructs the corresponding characterization vector, so that the dimension and structure of the data are not damaged; a random deformation matrix with α(m×n) rows and columns is created based on the channel information and data size, and is combined with the characterization vector to perform the encryption operation on the sample data.

The encryption operation provided by this option effectively enhances the security of the data and prevents potential privacy disclosure risks.
Optionally, the random deformation matrix satisfies the following data distribution: Θ_{(x,y)} = θ_{x−Nε, y−Nε}, with ε = (α(m×n))/ω, where θ_{ε×ε} represents the initial random deformation matrix, Θ represents the random deformation matrix, ε represents the number of rows or columns of the initial random deformation matrix, α represents the number of channels of the sample data, m represents the number of rows of the sample data, n represents the number of columns of the sample data, ω ∈ N⁺ with ω ≪ α(m×n), N⁺ represents the positive integers, Θ_{(x,y)} represents the data of the random deformation matrix in the x-th row and y-th column, θ_{x−Nε, y−Nε} represents the data of the initial random deformation matrix θ_{ε×ε} in the (x−Nε)-th row and (y−Nε)-th column, and N represents an integer with N < ε.
Optionally, the third type server is a cloud server. This option ensures efficient computing resources, flexible storage capacity, and strong network connectivity, and can adapt to tasks of different scales and changing demands.
Optionally, the third type server includes one or more data generation models, any of which is used to generate a second data set identical in data distribution to the first data set. Using multiple data generation models improves the flexibility and adaptability of the method, allows different types of data to be processed at the same time, increases the diversity of the generated data, and provides a more comprehensive solution for privacy protection.
Optionally, the data generation model includes a generative adversarial network model or a conditional generative adversarial network model. This option allows different types of data generation models to be selected as required, to suit various complex application scenarios.
Optionally, training the deep learning model with the second data set in the first type server and identifying sample data labels with the trained deep learning model further includes the following steps: transmitting the first convolution layer of the trained deep learning model to the second type server; in the second type server, enhancing the first convolution layer using the random deformation matrix, and sending the enhanced first convolution layer to the first type server; and in the first type server, obtaining an updated deep learning model by replacing the original first convolution layer with the enhanced first convolution layer, and identifying sample data labels with the updated deep learning model. This option enhances the first convolution layer of the trained deep learning model and improves the model's adaptability to encrypted data, so that sample data labels can be identified more accurately while data privacy is protected.
Drawings
Fig. 1 is a flowchart of a data privacy protection method based on a deep learning training process according to an embodiment of the present invention;
Fig. 2 is a schematic distribution diagram of a first type of server, a second type of server, and a third type of server according to an embodiment of the present invention.
Detailed Description
Specific embodiments of the invention will be described in detail below, it being noted that the embodiments described herein are for illustration only and are not intended to limit the invention. In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present invention. However, it will be apparent to one of ordinary skill in the art that: no such specific details are necessary to practice the invention. In other instances, well-known circuits, software, or methods have not been described in detail in order not to obscure the invention.
Throughout the specification, references to "one embodiment," "an embodiment," "one example," or "an example" mean: a particular feature, structure, or characteristic described in connection with the embodiment or example is included within at least one embodiment of the invention. Thus, the appearances of the phrases "in one embodiment," "in an embodiment," "one example," or "an example" in various places throughout this specification are not necessarily all referring to the same embodiment or example. Furthermore, the particular features, structures, or characteristics may be combined in any suitable combination and/or sub-combination in one or more embodiments or examples. Moreover, those of ordinary skill in the art will appreciate that the illustrations provided herein are for illustrative purposes and that the illustrations are not necessarily drawn to scale.
In one embodiment, please refer to fig. 1, fig. 1 is a flowchart of a data privacy protection method based on a deep learning training process according to an embodiment of the present invention. As shown in fig. 1, the data privacy protection method based on the deep learning training process includes the following steps: and S01, in the first type of server, generating a data request according to a training task of the deep learning model, and sending the data request to the second type of server.
In this embodiment, the memory of the first type server stores first program instructions of one or more deep learning models, second program instructions for training or executing any one of the deep learning models, and third program instructions for generating a data request for any one of the deep learning models.
The first program instructions are for defining one or more deep learning models. Specifically, this includes defining the structure of the deep learning model, setting network parameters, and configuring the loss function and optimizer. The first program instructions define the network hierarchy and connection mode of the deep learning model, which can be based on a convolutional neural network; on the hierarchical structure and connections of a recurrent neural network; or on those of another typical neural network, or a network built according to requirements.
The second program instructions are for training or executing a deep learning model. Specifically, the second program instructions execute the training steps of the model, including weight updating, gradient computation, and saving of model training results. Where there are multiple deep learning models, the second program instructions include multiple second subroutine instructions, any one of which is used to train or execute the corresponding deep learning model.
The third program instructions are for generating a data request for the deep learning model. Specifically, the data request includes data retrieval identity information and a data requirement specification. The data retrieval identity information includes the IP of the corresponding first type server or another identity mark, and is used to verify the validity of the data request and to ensure that only an authorized server can initiate it. The data requirement specification contains a detailed specification of the required training data, such as the data type, quantity, and characteristics; it helps the third type server understand and generate a data set meeting the requirements, so that the generated data meets the specific needs of deep learning training. Likewise, where there are multiple deep learning models, the third program instructions include multiple third subroutine instructions, any one of which generates a data request for the corresponding deep learning model.
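As an illustrative sketch only, such a data request can be modelled as a small record combining the data retrieval identity information and the data requirement specification; all field names (server_ip, signature, data_type, quantity, features) are assumptions for illustration and are not specified by the method above.

```python
from dataclasses import dataclass, field

@dataclass
class DataRequest:
    # Data retrieval identity information (illustrative fields).
    server_ip: str   # IP of the requesting first type server
    signature: str   # identity mark used for validity verification
    # Data requirement specification (illustrative fields).
    data_type: str   # e.g. "image", "text", "video"
    quantity: int    # number of samples required
    features: list = field(default_factory=list)  # required characteristics

# A hypothetical request for 1000 three-channel 32x32 image samples.
req = DataRequest(server_ip="10.0.0.1", signature="sig-abc",
                  data_type="image", quantity=1000,
                  features=["3-channel", "32x32"])
```

The record is what the first type server would serialize and send to the second type server for verification.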
In an alternative embodiment, please refer to fig. 2, fig. 2 is a schematic diagram illustrating a distribution of a first type of server, a second type of server, and a third type of server according to an embodiment of the present invention. As shown in fig. 2, the first type server in step S01 includes a plurality of first type sub-servers, and a first program instruction of a deep learning model is stored in a memory of any one of the first type sub-servers, a second program instruction for training or executing the deep learning model is stored, and a third program instruction for generating a data request of the deep learning model is also stored.
In this embodiment, the first program instructions, second program instructions, and third program instructions stored in any one first type sub-server are the same as those stored in every other first type sub-server; in one or more other embodiments, the first, second, and third program instructions stored in one first type sub-server differ from those stored in the other first type sub-servers.
In yet another embodiment, please refer to fig. 1, fig. 1 is a flowchart of a data privacy protection method based on a deep learning training process according to an embodiment of the present invention. As shown in fig. 1, the data privacy protection method based on the deep learning training process further includes the following steps: and S02, verifying the data request in the second type server, acquiring a first data set according to a verification result, encrypting the first data set, and transmitting the encrypted first data set to a third type server.
Based on the explanation of the first type server in step S01 of the present invention in the above embodiment, it is easy to understand that the first type server generates a corresponding data request as a demander of training data; further, the second type server is a provider of training data, and the provider can obtain a corresponding original training data set, namely the first data set according to specific content in the data request.
Further, in this embodiment, in the second type server, the step S02 of verifying the data request, obtaining a first data set according to the verification result, encrypting the first data set, and sending the encrypted first data set to a third type server, including the following steps:
S021, verifying the validity of the data request through the data retrieval identity information in the data request.
Further, an identity verification mechanism can be provided in the second type server; combined with the data retrieval identity information, it verifies the identity of the first type server, ensuring that the data request comes from a legitimate source and preventing unauthorized access and potential security risks. In particular, the verification mechanism includes comparing against an identity whitelist, verifying a digital signature, or applying other security protocols.
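A minimal sketch of such a verification mechanism, assuming a whitelist plus an HMAC-based digital signature with a shared key; the whitelist contents and the key are illustrative assumptions, not part of the method.

```python
import hmac
import hashlib

# Illustrative whitelist of authorized first type servers and shared key.
WHITELIST = {"10.0.0.1", "10.0.0.2"}
SHARED_KEY = b"demo-key"

def sign(server_ip: str) -> str:
    """HMAC-SHA256 signature over the server's identity mark."""
    return hmac.new(SHARED_KEY, server_ip.encode(), hashlib.sha256).hexdigest()

def verify_request(server_ip: str, signature: str) -> bool:
    # Step 1: reject servers outside the identity whitelist.
    if server_ip not in WHITELIST:
        return False
    # Step 2: verify the digital signature in constant time.
    return hmac.compare_digest(signature, sign(server_ip))

# A whitelisted server with a valid signature passes; others are rejected.
ok = verify_request("10.0.0.1", sign("10.0.0.1"))
bad = verify_request("10.9.9.9", sign("10.9.9.9"))
```

Only after this check succeeds would the second type server proceed to assemble the first data set.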
S022, acquiring a first data set meeting the data requirement specification according to the data requirement specification in the data request.
After the data request passes the verification of the second type server, the second type server acquires corresponding sample data according to the data requirement specification in the data request. Further, the types of sample data include text, images, video, and the like; the specific type of sample data depends on the requirements of the deep learning task and model.
S023, encrypting any sample data in the first data set, and sending the encrypted first data set to a third type server.
In this embodiment, the sample data is a single-channel or multi-channel image sample. Further, the encrypting any sample data in the first data set in step S023 includes the following steps:
S0231, expanding the sample data into one or more channel vectors according to the channel information of the sample data, and constructing a characterization vector of the sample data based on the channel vectors.
For a single-channel image sample of size m×n, expanding along its only channel yields the corresponding channel vector L_{1×(m×n)}; since the single-channel image sample has only one channel vector, the characterization vector of the single-channel sample data is L = L_{1×(m×n)}.
For a three-channel image sample of size m×n, expanding along the first, second, and third channels respectively yields the first channel vector L^1_{1×(m×n)}, the second channel vector L^2_{1×(m×n)}, and the third channel vector L^3_{1×(m×n)}, where (m×n) represents the number of pixels in any channel vector of the three-channel image sample. Further, for sample data with multiple channel vectors, the characterization vector is L_{1×(3(m×n))} = [L^1_{1×(m×n)}, L^2_{1×(m×n)}, L^3_{1×(m×n)}].
Similarly, for an image sample of size m×n with α channels, expanding along the first, second, third, …, α-th channels respectively yields the first channel vector L^1_{1×(m×n)}, the second channel vector L^2_{1×(m×n)}, the third channel vector L^3_{1×(m×n)}, …, and the α-th channel vector L^α_{1×(m×n)}, where (m×n) represents the number of pixels in any channel vector of the α-channel image sample. Further, for sample data with multiple channel vectors, the characterization vector is L_{1×(α(m×n))} = [L^1_{1×(m×n)}, L^2_{1×(m×n)}, L^3_{1×(m×n)}, …, L^α_{1×(m×n)}].
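The channel expansion above can be sketched as follows; plain Python lists stand in for image tensors, and the toy 3-channel 2×2 sample is illustrative.

```python
def characterization_vector(sample):
    """Flatten an alpha-channel m×n sample into the 1×(alpha(m×n)) vector.

    sample: list of alpha channels, each an m×n nested list.
    """
    vec = []
    for channel in sample:    # expand channel by channel
        for row in channel:   # row-major flattening of each channel
            vec.extend(row)
    return vec

# A 3-channel 2×2 sample (alpha=3, m=n=2) yields a vector of
# length alpha(m×n) = 3*(2*2) = 12.
sample = [[[1, 2], [3, 4]],
          [[5, 6], [7, 8]],
          [[9, 10], [11, 12]]]
L = characterization_vector(sample)
```

The per-channel vectors are simply concatenated in channel order, matching the characterization model above.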
S0232, constructing a random deformation matrix based on the channel information and the data size of the sample data.
Specifically, constructing the random deformation matrix based on the channel information and the data size of the sample data in step S0232 includes the following steps: constructing an initial random deformation matrix based on the channel information and the data size of the sample data, and expanding the initial random deformation matrix into the random deformation matrix.
The size of the initial random deformation matrix can be adjusted randomly according to the channel information and data size of the sample data; specifically, the initial random deformation matrix satisfies the following characterization model: θ_{ε×ε}, where ε = (α(m×n))/ω, ε represents the number of rows or columns of the initial random deformation matrix, α represents the number of channels of the sample data, m represents the number of rows of the sample data, n represents the number of columns of the sample data, and ω ∈ N⁺ with ω ≪ α(m×n).
Further, the random deformation matrix Θ_{(α(m×n))×(α(m×n))} expanded from the initial random deformation matrix satisfies the following data distribution model: Θ_{(x,y)} = θ_{x−Nε, y−Nε}, where Θ_{(x,y)} represents the data of the random deformation matrix Θ_{(α(m×n))×(α(m×n))} in the x-th row and y-th column, θ_{x−Nε, y−Nε} represents the data of the initial random deformation matrix θ_{ε×ε} in the (x−Nε)-th row and (y−Nε)-th column, and N < ε, where N represents an integer.
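The expansion of the initial matrix θ_{ε×ε} into Θ can be sketched as follows, assuming the integer N is chosen per entry so that both shifted indices fall inside the initial matrix, which is equivalent to taking indices modulo ε; the parameter values (α=3, m=n=2, ω=3) are illustrative assumptions.

```python
import random

def make_deformation_matrix(alpha, m, n, omega, seed=0):
    """Build the initial eps×eps matrix and its periodic expansion."""
    size = alpha * m * n          # alpha(m×n): rows/columns of Theta
    eps = size // omega           # eps = (alpha(m×n)) / omega
    rng = random.Random(seed)
    theta = [[rng.random() for _ in range(eps)] for _ in range(eps)]
    # Periodic expansion: subtracting N*eps from an index until it lands
    # in [0, eps) is the same as taking the index modulo eps.
    Theta = [[theta[x % eps][y % eps] for y in range(size)]
             for x in range(size)]
    return theta, Theta

theta, Theta = make_deformation_matrix(alpha=3, m=2, n=2, omega=3)
# Theta is 12×12 and repeats the 4×4 initial matrix block-wise.
```

Tiling the small random matrix keeps the construction cheap while still producing a full-rank-sized deformation matrix of the required dimensions.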
S0233, encrypting the sample data by combining the random deformation matrix and the characterization vector.
In this embodiment, the encrypted sample data satisfies the following characterization model: Φ = L_{1×(α(m×n))} · Θ_{(α(m×n))×(α(m×n))}.
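The encryption step reduces to a vector–matrix product, sketched below in plain Python; choosing the identity matrix as Θ serves only as a sanity check that the product is computed correctly, not as a realistic deformation matrix.

```python
def encrypt(L, Theta):
    """Compute Phi = L · Theta for a 1×k vector L and k×k matrix Theta."""
    size = len(L)
    return [sum(L[k] * Theta[k][j] for k in range(size))
            for j in range(size)]

# Sanity check: with the identity as Theta the sample is unchanged.
L = [1.0, 2.0, 3.0]
identity = [[1.0 if i == j else 0.0 for j in range(3)] for i in range(3)]
phi = encrypt(L, identity)
```

With a genuinely random Θ, each entry of Φ mixes all pixels of all channels, which is what obscures the original sample.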
In yet another embodiment, please refer to fig. 1, fig. 1 is a flowchart of a data privacy protection method based on a deep learning training process according to an embodiment of the present invention. As shown in fig. 1, the data privacy protection method based on the deep learning training process further includes the following steps: s03, in the third type server, generating a second data set with the same data distribution as the first data set by utilizing the encrypted first data set, and sending the second data set to the first type server.
In order to better realize data transmission between the third type server and the first and second type servers, the third type server is a cloud server. It is easy to understand that a cloud server can provide powerful computing resources and flexible storage capacity, meeting the computational demands of generating the large data sets required for deep learning model training; meanwhile, cloud servers generally have high-speed network bandwidth and excellent connectivity, which helps transmit data quickly and stably; and cloud servers are typically distributed around the world and can better accommodate the needs of distributed teams and global cooperation.
In this embodiment, the third type server includes one or more data generation models, any of which is used to generate a second data set identical in data distribution to the first data set. Further, the data generation model may be a generative adversarial network model or a conditional generative adversarial network model.
The generative adversarial network model (GAN, Generative Adversarial Network) aims to generate new data similar to but not identical to the training data. It consists of a generator (Generator) and a discriminator (Discriminator), which co-evolve through adversarial training; the generator in a GAN does not consider specific conditions or labels when generating data. The conditional generative adversarial network model (CGAN, Conditional Generative Adversarial Network) allows conditions to be introduced during generation: in addition to random noise, the generator receives additional information, such as category labels or other condition information. In this embodiment, so that the generated data carries the corresponding labels needed for training the subsequent deep learning model, the conditional generative adversarial network model is selected as the data generation model.
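A full conditional GAN is beyond a short sketch. Purely as a hedged stand-in for the data generation model, the following fits a per-label Gaussian to a toy labelled data set and samples new labelled data whose per-label mean and spread match; it illustrates "same distribution, with labels carried along", not the CGAN itself, and all names and toy data are assumptions.

```python
import random
import statistics

def fit_per_label(dataset):
    """dataset: list of (value, label) pairs -> {label: (mean, stdev)}."""
    by_label = {}
    for value, label in dataset:
        by_label.setdefault(label, []).append(value)
    return {lab: (statistics.mean(v), statistics.pstdev(v))
            for lab, v in by_label.items()}

def generate(params, label, count, seed=0):
    """Sample labelled synthetic data matching the fitted distribution."""
    rng = random.Random(seed)
    mu, sigma = params[label]
    return [(rng.gauss(mu, sigma), label) for _ in range(count)]

# Toy "first data set" with two labels; the second data set mimics it.
first_set = [(0.9, "cat"), (1.1, "cat"), (4.8, "dog"), (5.2, "dog")]
params = fit_per_label(first_set)
second_set = generate(params, "cat", 5)  # five labelled synthetic samples
```

The key property mirrored here is the one the patent needs: the generated samples follow the source distribution and arrive already labelled for supervised training.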
In yet another embodiment, please refer to fig. 1, fig. 1 is a flowchart of a data privacy protection method based on a deep learning training process according to an embodiment of the present invention. As shown in fig. 1, the data privacy protection method based on the deep learning training process further includes the following steps: s04, training the deep learning model by using the second data set in the first type server, and identifying a sample data label by using the trained deep learning model.
It is easy to understand that the training set used to train the deep learning model in step S04 is the second data set generated from the encrypted original data; therefore, the recognition capability of a deep learning model trained with the second data set differs from that of a deep learning model trained directly on the original data. To compensate for the loss caused by the training data, in this embodiment, training the deep learning model with the second data set in the first type server described in step S04 and identifying sample data labels with the trained deep learning model further includes the following steps:
S041, the first convolution layer in the trained deep learning model is sent to a second type server.
It is easy to understand that the deep learning model includes a plurality of convolution layers; further, the first convolution layer in the present invention refers to the first convolution layer connected to the input layer of the deep learning model.
S042, in the second type server, the first convolution layer is enhanced by utilizing the random deformation matrix, and the enhanced first convolution layer is sent to the first type server.
In this embodiment, the enhanced first convolution layer satisfies the following model: C* = Θ · C₀, where C* represents the enhanced first convolution layer, Θ represents the random deformation matrix, and C₀ represents the original first convolution layer. In the second type server, in accordance with its encryption operation, the first convolution layer of the trained deep learning model is enhanced to compensate for the loss caused by the training data.
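The enhancement step is again a matrix product over the (flattened) first-layer weights; the sketch below uses a toy 2×2 deformation matrix (a row swap) and toy first-layer weights, both of which are illustrative assumptions.

```python
def matmul(A, B):
    """Plain-Python matrix product of A (r×k) and B (k×c)."""
    rows, inner, cols = len(A), len(B), len(B[0])
    return [[sum(A[i][k] * B[k][j] for k in range(inner))
             for j in range(cols)] for i in range(rows)]

Theta = [[0.0, 1.0],
         [1.0, 0.0]]        # toy deformation matrix: swaps the two rows
C0 = [[1.0, 2.0],
      [3.0, 4.0]]           # toy original first-layer weights
C_star = matmul(Theta, C0)  # enhanced layer: rows of C0 swapped
```

Because the same matrix scrambled the training samples, folding it into the first layer lets the updated model undo that scrambling when it sees encrypted inputs.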
S043, in the first type server, the original first convolution layer is replaced with the enhanced first convolution layer to obtain an updated deep learning model, and the sample data labels are identified using the updated deep learning model.
The updated deep learning model considers the extra randomness introduced by the random deformation matrix, and improves the adaptability of the model to encrypted data.
Finally, it should be noted that: the above embodiments are only for illustrating the technical solution of the present invention, and not for limiting the same; although the invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some or all of the technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit of the invention, and are intended to be included within the scope of the appended claims and description.

Claims (7)

1. The data privacy protection method based on the deep learning training process is characterized by comprising the following steps of:
Generating a data solicitation request according to a training task of a deep learning model in a first type server, and sending the data solicitation request to a second type server;
in the second type server, verifying the data request, acquiring a first data set according to a verification result, encrypting the first data set, and transmitting the encrypted first data set to a third type server;
generating a second data set with the same data distribution as the first data set by using the encrypted first data set in the third type server, and sending the second data set to the first type server;
Training the deep learning model by using the second data set in the first type server, and identifying a sample data tag by using the trained deep learning model;
in the second type server, verifying the data request, acquiring a first data set according to a verification result, encrypting the first data set, and transmitting the encrypted first data set to a third type server, wherein the method comprises the following steps:
verifying the validity of the data acquisition request through the data acquisition identity information in the data acquisition request;
Acquiring a first data set meeting the data demand specification according to the data demand specification in the data request;
encrypting any sample data in the first data set, and transmitting the encrypted first data set to a third type server;
Encrypting any sample data in the first data set, comprising the steps of:
According to channel information of the sample data, expanding the sample data into one or more channel vectors, and constructing a characterization vector of the sample data based on the channel vectors, wherein the characterization vector satisfies the following characterization model:
V = [v1, v2, v3, ..., vc],
wherein V represents the characterization vector of sample data with channel number c and size h×w, v1 represents the first channel vector of the sample data, v2 represents the second channel vector of the sample data, v3 represents the third channel vector of the sample data, and vc represents the c-th channel vector of the sample data;
Constructing an initial random deformation matrix based on the channel information and the data size of the sample data, and expanding the initial random deformation matrix into a random deformation matrix Φ ∈ R^(n×n), wherein n represents the number of rows or columns of the random deformation matrix;
The random deformation matrix satisfies the following data distribution:
Φ0 ∈ R^(n0×n0), Φ(x, y) = Φ0(i, j), i = x mod n0, j = y mod n0,
wherein Φ0 represents the initial random deformation matrix, Φ represents the random deformation matrix, n0 represents the number of rows or columns of the initial random deformation matrix, h represents the number of rows of the sample data, w represents the number of columns of the sample data, x, y and n represent positive integers, Φ(x, y) represents the data at row x and column y of the random deformation matrix, Φ0(i, j) represents the data at row i and column j of the initial random deformation matrix, and i and j represent integers;
And encrypting the sample data by combining the random deformation matrix and the characterization vector, wherein the encrypted sample data satisfies the following characterization model: X' = Φ·V, wherein X' represents the matrix of the sample data after encryption, Φ represents the random deformation matrix, and V represents the characterization vector of the sample data.
2. The method for protecting data privacy based on deep learning training process according to claim 1, wherein the memory of the first type server stores first program instructions of one or more deep learning models, second program instructions for training or executing any one of the deep learning models, and third program instructions for generating a data acquisition request of any one of the deep learning models.
3. The method for protecting data privacy based on deep learning training process according to claim 2, wherein the first type server comprises a plurality of first type sub-servers, and the memory of any one of the first type sub-servers stores first program instructions of a deep learning model, second program instructions for training or executing the corresponding deep learning model, and third program instructions for generating a data acquisition request of the corresponding deep learning model.
4. The method for protecting data privacy based on deep learning training process of claim 1, wherein the third type of server is a cloud server.
5. The deep learning training process based data privacy protection method of claim 4, wherein the third class of servers includes one or more data generation models, any of which is used to generate a second data set whose data distribution is identical to that of the first data set.
6. The deep learning training process based data privacy protection method of claim 5, wherein the data generation model comprises a generative adversarial network model or a conditional generative adversarial network model.
7. The method for protecting data privacy based on deep learning training process of claim 4, wherein in the first type of server, training the deep learning model using the second data set and identifying sample data tags using the trained deep learning model, further comprising the steps of:
Transmitting the first convolution layer in the trained deep learning model to a second type server;
In the second type server, the first convolution layer is enhanced by utilizing the random deformation matrix, and the enhanced first convolution layer is sent to the first type server;
in the first type of server, an updated deep learning model is obtained by replacing the original first convolution layer with the enhanced first convolution layer, and the sample data tag is identified by using the updated deep learning model.
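The encryption steps of claim 1 can be sketched as follows. This is a minimal illustration under stated assumptions, not the patented implementation: each channel of a sample is flattened into a channel vector and the vectors are stacked row-wise into V; the expansion of the initial matrix Φ0 is read as periodic tiling (the exact expansion formula is not reproduced in the source text); and the deformation matrix is taken to act across channels, so X' = Φ·V:

```python
import numpy as np

rng = np.random.default_rng(42)

def characterization_matrix(x):
    # Stack each channel of sample x with shape (c, h, w) into a row
    # vector: V = [v1; v2; ...; vc] with shape (c, h*w).
    c, h, w = x.shape
    return x.reshape(c, h * w)

def expand_deformation(phi0, n):
    # Expand the initial n0 x n0 matrix to n x n by periodic tiling
    # (one possible reading of the patent's expansion rule).
    n0 = phi0.shape[0]
    idx = np.arange(n) % n0
    return phi0[np.ix_(idx, idx)]

def encrypt(x, phi0):
    V = characterization_matrix(x)              # (c, h*w)
    phi = expand_deformation(phi0, V.shape[0])  # (c, c), acts across channels
    return phi @ V                              # encrypted sample X' = phi . V

x = rng.standard_normal((3, 4, 4))   # hypothetical 3-channel 4x4 sample
phi0 = rng.standard_normal((2, 2))   # hypothetical initial deformation matrix
enc = encrypt(x, phi0)
assert enc.shape == (3, 16)          # encryption preserves the data layout
```

Because the first convolution layer is enhanced with the same random deformation matrix, the updated deep learning model of claim 7 is intended to adapt to data encrypted in this way.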

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311670970.8A CN117668874B (en) 2023-12-07 2023-12-07 Data privacy protection method based on deep learning training process


Publications (2)

Publication Number Publication Date
CN117668874A CN117668874A (en) 2024-03-08
CN117668874B true CN117668874B (en) 2024-06-07

Family

ID=90067717


Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109684855A (en) * 2018-12-17 2019-04-26 电子科技大学 A kind of combined depth learning training method based on secret protection technology
CN110572253A (en) * 2019-09-16 2019-12-13 济南大学 Method and system for enhancing privacy of federated learning training data
CN112632620A (en) * 2020-12-30 2021-04-09 支付宝(杭州)信息技术有限公司 Federal learning method and system for enhancing privacy protection
CN114780999A (en) * 2022-06-21 2022-07-22 广州中平智能科技有限公司 Deep learning data privacy protection method, system, equipment and medium
CN115292728A (en) * 2022-07-15 2022-11-04 浙江大学 Image data privacy protection method based on generation countermeasure network




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant