WO2021238992A1 - Neural network training method and apparatus, electronic device, and readable storage medium - Google Patents
Neural network training method and apparatus, electronic device, and readable storage medium
- Publication number
- WO2021238992A1 (application PCT/CN2021/096109, CN2021096109W)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- neural network
- type
- trained
- training data
- layer
- Prior art date
Links
- 238000013528 artificial neural network Methods 0.000 title claims abstract description 158
- 238000000034 method Methods 0.000 title claims abstract description 52
- 230000008569 process Effects 0.000 claims description 21
- 238000007906 compression Methods 0.000 claims description 17
- 230000006835 compression Effects 0.000 claims description 16
- 230000006837 decompression Effects 0.000 claims description 8
- 239000010410 layer Substances 0.000 description 144
- 238000003062 neural network model Methods 0.000 description 16
- 238000010586 diagram Methods 0.000 description 9
- 238000011176 pooling Methods 0.000 description 5
- 230000000694 effects Effects 0.000 description 2
- 230000003287 optical effect Effects 0.000 description 2
- 238000013139 quantization Methods 0.000 description 2
- 230000004913 activation Effects 0.000 description 1
- 230000009286 beneficial effect Effects 0.000 description 1
- 238000004364 calculation method Methods 0.000 description 1
- 238000013500 data storage Methods 0.000 description 1
- 238000013135 deep learning Methods 0.000 description 1
- 238000000605 extraction Methods 0.000 description 1
- 230000006872 improvement Effects 0.000 description 1
- 238000005184 irreversible process Methods 0.000 description 1
- 230000004048 modification Effects 0.000 description 1
- 238000012986 modification Methods 0.000 description 1
- 230000000717 retained effect Effects 0.000 description 1
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F21/00—Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
- G06F21/60—Protecting data
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F21/00—Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
- G06F21/60—Protecting data
- G06F21/602—Providing cryptographic facilities or services
Definitions
- This application relates to deep learning technology, in particular to a neural network training method, device, electronic equipment and readable storage medium.
- Online learning is a learning method that uses online unsupervised data for model training, thereby further improving the generalization performance of the model in the actual deployment environment.
- In online learning, it is usually necessary to use some or all of the original supervised data to assist training in order to guarantee model performance. However, because of the privacy and confidentiality of the data involved, the original supervised data cannot be stored directly on the deployment side of the online learning system.
- The usual scheme of encrypting files for storage and decrypting them before training carries the risk of key leakage and of plaintext data residing in memory. In this case, encrypted training is an effective way to ensure data security.
- In encrypted training, the data does not need to be decrypted; it participates in training directly in ciphertext form.
- Existing encrypted training schemes include symmetric encryption schemes, noise-addition schemes that add noise to the training data, and autoencoder schemes.
- The symmetric encryption scheme guarantees that the model trained on encrypted data is consistent with the model trained on the original data, so model performance is preserved; however, the original data can be recovered once the secret key leaks, which is a data security risk, and the scheme applies only to models without nonlinear operations, such as single-layer perceptrons, so it cannot be used for deep neural networks.
- The noise-addition scheme encrypts the original data by adding noise to it.
- Because the noise changes the patterns in the original data, model performance degrades severely if the noise is too strong, while the confidentiality of the original data is insufficient if the noise is too weak.
- The autoencoder scheme trains an autoencoder to extract features of the original data and uses the hidden-layer features, which capture the patterns of the original data, as the encrypted data.
- If the decoder parameters leak, the original data can still be reconstructed from the hidden-layer features and the decoder, which poses a data security risk.
- Moreover, when the original data are complex (pictures, videos, etc.) and large in scale, it is difficult for the autoencoder to learn hidden-layer features that represent all patterns of the original data, so the performance of a model trained on such encrypted data is also greatly affected.
- The present application provides a neural network training method and apparatus, an electronic device, and a readable storage medium.
- A neural network training method is provided, including:
- encrypting the first type of training data using the fixed layer of the neural network to be trained to obtain encrypted features of the first type of training data, wherein the first type of training data is original supervised data, the fixed layer is the first N layers of the neural network to be trained, the fixed layer includes at least one nonlinear layer, and N is a positive integer;
- training, based on the encrypted features and a second type of training data, the trainable layer of the neural network to be trained until the neural network to be trained converges, wherein the second type of training data is training data obtained online.
- A neural network training apparatus is provided, including:
- a data processing unit, configured to encrypt the first type of training data using the fixed layer of the neural network to be trained to obtain encrypted features of the first type of training data, wherein the first type of training data is original supervised data, the fixed layer is the first N layers of the neural network to be trained, the fixed layer includes at least one nonlinear layer, and N is a positive integer;
- a training unit, configured to train the trainable layer of the neural network to be trained based on the encrypted features and a second type of training data until the neural network to be trained converges, wherein the second type of training data is training data obtained online.
- An electronic device is provided, including a processor and a memory, the memory storing machine-executable instructions executable by the processor; when executing the machine-executable instructions, the processor is caused to:
- encrypt the first type of training data using the fixed layer of the neural network to be trained to obtain encrypted features of the first type of training data, wherein the first type of training data is original supervised data, the fixed layer is the first N layers of the neural network to be trained, the fixed layer includes at least one nonlinear layer, and N is a positive integer; and
- train, based on the encrypted features and a second type of training data, the trainable layer of the neural network to be trained until the neural network to be trained converges, wherein the second type of training data is training data obtained online.
- A machine-readable storage medium is provided, storing machine-executable instructions which, when executed by a processor, implement the aforementioned neural network training method.
- In the above solutions, the first type of training data is processed by the fixed layer of the neural network to be trained to obtain encrypted features, and the trainable layer of the neural network to be trained is trained based on the encrypted features and the second type of training data until the neural network to be trained converges, which improves the performance of the neural network model while ensuring the security of the first type of training data.
- Fig. 1 is a schematic flowchart of a neural network training method shown in an exemplary embodiment of the present application
- FIG. 2 is a schematic diagram of a process of training a trainable layer of a neural network to be trained based on encryption features and a second type of training data according to an exemplary embodiment of the present application;
- FIG. 3 is a schematic diagram of a process of training a trainable layer of a neural network to be trained based on encryption features and a second type of training data according to an exemplary embodiment of the present application;
- Fig. 4A is a schematic diagram of a process for obtaining encryption features according to an exemplary embodiment of the present application
- FIG. 4B is a schematic flowchart of a neural network training method shown in an exemplary embodiment of the present application.
- Fig. 5A is a schematic diagram of a neural network shown in an exemplary embodiment of the present application.
- FIG. 5B is a schematic flowchart of a data encryption process shown in an exemplary embodiment of the present application.
- FIG. 5C is a schematic flowchart of an online training process shown in an exemplary embodiment of the present application.
- Fig. 6 is a schematic structural diagram of a neural network training device shown in an exemplary embodiment of the present application.
- Fig. 7 is a schematic diagram showing the hardware structure of an electronic device according to an exemplary embodiment of the present application.
- FIG. 1 is a schematic flowchart of a neural network training method provided by an embodiment of this application.
- the neural network training method may include the following steps.
- The neural network to be trained refers to a neural network that has completed pre-training; the pre-training process is not described in detail in the embodiments of the present application.
- Step S100: Encrypt the first type of training data using the fixed layer of the neural network to be trained to obtain encrypted features of the first type of training data, where the fixed layer is the first N layers of the neural network to be trained, the fixed layer includes at least one nonlinear layer, and N is a positive integer.
- Layers of a neural network such as convolutional layers and pooling layers correspond to a lossy feature-extraction process: even if the intermediate features output by these layers and the convolutional-layer parameters are known, the original data cannot be recovered. Therefore, in this application, the first type of training data can be encrypted by the convolutional and pooling layers of the neural network, which effectively protects data privacy and security.
- In addition, when fine-tuning a pre-trained neural network model, keeping the shallow-layer parameters fixed has little effect on model performance; that is, leaving the shallow parameters of the pre-trained model unchanged during training has only a small impact on the performance of the neural network model.
- Therefore, a preset number of the first layers of the neural network to be trained can be used as the fixed layer (whose parameters do not take part in training), and the fixed layer can be used to encrypt the first type of training data, thereby obtaining the encrypted features corresponding to the first type of training data.
- the first type of training data is original supervised data.
- the fixed layer used to encrypt the first type of training data needs to include at least one non-linear layer (such as a pooling layer, an activation layer, etc.).
- the layers in the first 1-2 blocks of the neural network can be determined as the fixed layers of the neural network.
- The encryption of the first type of training data by the fixed layer of the neural network to be trained in step S100 can be performed offline; that is, the first type of training data is encrypted offline, before the neural network is trained online.
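- As a rough, non-authoritative sketch of this offline encryption step (assuming a PyTorch model; the names pretrained_net, n, and raw_batch are illustrative placeholders, not names from this application), the fixed layer can be split off a pre-trained network, frozen, and used to produce the encrypted features as follows:

```python
import torch
import torch.nn as nn

def build_fixed_layer(pretrained_net: nn.Module, n: int) -> nn.Sequential:
    """Take the first N layers of the pre-trained network as the fixed (encryption)
    layer and freeze them so their parameters do not take part in training."""
    fixed = nn.Sequential(*list(pretrained_net.children())[:n])
    for p in fixed.parameters():
        p.requires_grad = False
    return fixed

@torch.no_grad()
def encrypt_batch(fixed: nn.Sequential, raw_batch: torch.Tensor) -> torch.Tensor:
    """Forward-propagate a batch of original supervised data through the fixed layer;
    the resulting intermediate feature maps serve as the encrypted features."""
    fixed.eval()
    return fixed(raw_batch)
```

- Because the fixed layer contains at least one nonlinear, lossy operation (e.g., pooling), the original inputs cannot be reconstructed from these feature maps even when the fixed-layer parameters are known, which is the property the scheme relies on.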
- Step S110: Train the trainable layer of the neural network to be trained based on the encrypted features and the second type of training data until the neural network to be trained converges.
- The trainable layer of the neural network to be trained can be trained based on the obtained encrypted features and the second type of training data until the neural network to be trained converges.
- The trainable layer of the neural network to be trained includes the remaining layers other than the fixed layer, typically the higher-level convolutional layers and the fully connected layers of the neural network to be trained.
- The parameters of the trainable layer are trained during online training of the neural network.
- the second type of training data is training data obtained online, such as online unsupervised data.
- In this way, the first N layers of the neural network to be trained, including at least one nonlinear layer, are set as the fixed layer, and the first type of training data is processed by this fixed layer to obtain encrypted features; based on the encrypted features and the second type of training data, the trainable layer of the neural network to be trained is trained until the neural network to be trained converges, which improves the performance of the neural network model while ensuring the security of the first type of training data.
- In some embodiments, after the first type of training data is encrypted using the fixed layer of the neural network to be trained in step S100, the method may further include: performing specified processing on the encrypted features.
- Accordingly, training the trainable layer of the neural network to be trained based on the encrypted features and the second type of training data may include:
- training the trainable layer of the neural network to be trained based on the processed encrypted features and the second type of training data.
- In other words, after the first type of training data is encrypted by the fixed layer of the neural network to be trained to obtain the encrypted features, specified processing may additionally be applied to the encrypted features.
- the specified processing may include, but is not limited to, one or more of quantization, cropping, and compression.
- the aforementioned compression is lossy compression.
- the trainable layer of the neural network to be trained can be trained based on the processed encrypted features and the second type of training data.
- the training of the trainable layer of the neural network to be trained based on the processed encrypted features and the second type of training data may include the following steps:
- Step S200: When the specified processing includes compression, decompress the processed encrypted features.
- Step S210: Train the trainable layer of the neural network to be trained based on the decompressed encrypted features; and process the second type of training data using the fixed layer of the neural network to be trained and train the trainable layer of the neural network to be trained based on the processed second type of training data.
- When performing online training on the neural network to be trained, if the encrypted features have been compressed, they need to be decompressed before being used to train the trainable layer, yielding the decompressed encrypted features.
- On the one hand, the trainable layer of the neural network to be trained can be trained based on the decompressed encrypted features; on the other hand, it can be trained based on the second type of training data.
- That is, the decompressed encrypted features and the second type of training data can together be regarded as one large data set for training the trainable layer of the neural network to be trained.
- Because the encrypted features are already features processed by the fixed layer of the neural network to be trained, the fixed layer does not process them again; the encrypted features are used directly to train the trainable layer of the neural network to be trained.
- The second type of training data, in contrast, must first be processed by the fixed layer of the neural network to be trained, and the trainable layer is then trained based on the processed second type of training data.
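- For illustration only, one training step over the combined data could be sketched as below (PyTorch assumed; fixed, trainable, optimizer, loss_fn and the batch variables are placeholders, and the loss term on the online data would be whatever supervised or unsupervised objective the online-learning scheme actually uses):

```python
import torch

def training_step(fixed, trainable, optimizer, loss_fn,
                  encrypted_feats, targets_a, online_batch, targets_b):
    """One optimization step treating both data sources as a single combined data set."""
    # Encrypted features are already fixed-layer outputs, so they bypass the fixed
    # layer and are fed directly into the trainable layers.
    out_a = trainable(encrypted_feats)

    # Second-type (online) data must first pass through the frozen fixed layer.
    with torch.no_grad():
        feats_b = fixed(online_batch)
    out_b = trainable(feats_b)

    loss = loss_fn(out_a, targets_a) + loss_fn(out_b, targets_b)
    optimizer.zero_grad()
    loss.backward()          # gradients flow only into the trainable layers
    optimizer.step()
    return loss.item()
```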
- In some embodiments, step S110 of training the trainable layer of the neural network to be trained based on the encrypted features and the second type of training data may include the following steps:
- Step S111: Perform feature enhancement on the encrypted features.
- Step S112: Train the trainable layer of the neural network to be trained based on the feature-enhanced encrypted features and the second type of training data.
- To enrich the training data, the encrypted features can be enhanced, that is, information can be added to the encrypted features or the data changed by certain means, for example by adding Gaussian noise or salt-and-pepper noise; the trainable layer of the neural network to be trained is then trained based on the feature-enhanced encrypted features and the second type of training data.
- If the encrypted features used to train the trainable layer of the neural network to be trained are compressed encrypted features, they are first decompressed, and feature enhancement is then applied to the decompressed encrypted features.
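- A minimal sketch of the feature-enhancement step, assuming PyTorch tensors; the noise parameters sigma and sp_prob are illustrative assumptions rather than values specified by this application:

```python
import torch

def enhance_features(feats: torch.Tensor, sigma: float = 0.01, sp_prob: float = 0.0) -> torch.Tensor:
    """Add Gaussian noise and, optionally, salt-and-pepper noise to the
    (decompressed) encrypted feature maps."""
    enhanced = feats + sigma * torch.randn_like(feats)                        # Gaussian noise
    if sp_prob > 0:                                                           # salt-and-pepper noise
        mask = torch.rand_like(feats)
        enhanced = torch.where(mask < sp_prob / 2, feats.min(), enhanced)       # "pepper"
        enhanced = torch.where(mask > 1 - sp_prob / 2, feats.max(), enhanced)   # "salt"
    return enhanced
```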
- The neural network training system may include two parts: an offline encryption subsystem and an online training subsystem, where:
- the offline encryption subsystem uses the shallow layers of the neural network model to be trained (that is, the above-mentioned first N layers, which include at least one nonlinear layer) as the encryption layer and processes the first type of training data to obtain encrypted features.
- The implementation flowchart can be as shown in FIG. 4A.
- Specifically, the first type of training data is forward-propagated through the fixed layer of the model to obtain feature maps; the feature maps are then cropped and quantized to reduce their size, and further compressed using an image storage compression algorithm, which includes but is not limited to run-length coding, JPEG compression, and the like.
- The features obtained after this series of processing constitute the encrypted data of the first type of training data.
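- A minimal sketch of this crop/quantize/compress chain and its inverse is given below; zlib is used only as a stand-in for an image storage compression algorithm (run-length coding, JPEG, and the like), and the crop margin and 8-bit quantization are illustrative assumptions:

```python
import zlib
import numpy as np

def compress_feature_map(fmap: np.ndarray, crop: int = 0):
    """Crop the feature map, quantize it to 8 bits, then compress the bytes.
    The quantization step makes the overall chain lossy."""
    if crop > 0:
        fmap = fmap[..., crop:-crop, crop:-crop]                             # crop spatial borders
    lo, hi = float(fmap.min()), float(fmap.max())
    q = np.round((fmap - lo) / (hi - lo + 1e-8) * 255).astype(np.uint8)      # 8-bit quantization
    return zlib.compress(q.tobytes()), q.shape, lo, hi

def decompress_feature_map(blob: bytes, shape, lo: float, hi: float) -> np.ndarray:
    """Inverse of the storage step: decompress and de-quantize back to float features."""
    q = np.frombuffer(zlib.decompress(blob), dtype=np.uint8).reshape(shape)
    return q.astype(np.float32) / 255.0 * (hi - lo) + lo
```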
- the obtained encrypted data can effectively protect the security of the first type of training data.
- the encrypted data is used as the middle layer feature of the model and can be input to the subsequent layers for training, thus ensuring the performance of the model.
- The online training subsystem uses the encrypted features corresponding to the first type of training data together with the second type of training data to train the parameters of the non-fixed layers (i.e., the above-mentioned trainable layer) of the neural network model to be trained, so as to further improve the performance of the model in the actual deployment environment.
- The implementation flowchart can be as shown in FIG. 4B.
- To enrich the data and improve the performance of the neural network model, the encrypted features can be enhanced; the enhanced encrypted features and the features obtained by passing the second type of training data through the fixed layer of the network to be trained are then combined, and these two parts of features together are used to train the parameters of the trainable layer of the neural network to be trained, thereby improving the performance of the neural network model.
- FIG. 5A is a schematic diagram of a neural network provided in an embodiment of this application.
- the neural network includes a convolutional layer and a fully connected layer.
- a pooling layer may also be included between the convolutional layers, which is not shown in the figure.
- the convolutional layer includes a fixed convolutional layer at the bottom layer (that is, the above-mentioned fixed layer) and a trainable convolutional layer at a high level.
- the fixed convolutional layer is used as an encryption layer for encrypting the first type of training data, and its parameters are not involved in training; the parameters of the trainable convolutional layer and the fully connected layer (that is, the above-mentioned trainable layer) are trained in the online training process.
- FIG. 5B is a schematic flowchart of a data encryption process provided by an embodiment of this application.
- Any picture in the first type of training data set is forward-propagated through the fixed convolutional layers to obtain feature maps of many channels. These feature maps hide the appearance of the original picture but retain the data features relevant to the training task; the feature maps are then quantized, cropped, and compressed to obtain the final encrypted features.
- FIG. 5C is a schematic flowchart of an online training process provided by an embodiment of the application.
- the encrypted feature is decompressed to obtain the corresponding lossy feature map (left column), and the second type of training data is subjected to the forward calculation of the fixed convolution layer to obtain the corresponding feature map (right column).
- These feature maps are fed together into the subsequent trainable convolutional layers and fully connected layers, and the parameters of these trainable layers are trained. Since the first type of training data is encrypted by passing it through the fixed layer of the neural network to be trained, the encrypted features are intermediate-layer features of that network; therefore, using the encrypted features in the training of the trainable layer can improve the performance of the neural network model while ensuring the security of the first type of training data.
- In addition, the encrypted features can be compressed and stored using a lossy compression algorithm and decompressed before use during neural network training. Because the information lost by lossy compression has little impact on the data being compressed (i.e., the encrypted features), while the compression ratio is significantly higher than that of lossless compression, this further improves the security of the first type of training data while maintaining performance and significantly reduces the storage space occupied by the encrypted features.
- In summary, the first type of training data is processed by the fixed layer of the neural network to be trained to obtain the encrypted features, and the trainable layer of the neural network to be trained is trained based on the encrypted features and the second type of training data, which improves the performance of the neural network model while ensuring the security of the first type of training data.
- FIG. 6 is a schematic structural diagram of a neural network training device provided by an embodiment of this application.
- the neural network training device may include:
- the data processing unit 610 is configured to encrypt the first type of training data by using the fixed layer of the neural network to be trained to obtain the encrypted features of the first type of training data, wherein the first type of training data is original supervised data, the fixed layer is the first N layers of the neural network to be trained, the fixed layer includes at least one nonlinear layer, and N is a positive integer;
- the training unit 620 is configured to train the trainable layer of the neural network to be trained based on the encrypted features and the second type of training data until the neural network to be trained converges, where the second type of training data is training data obtained online.
- In some embodiments, after encrypting the first type of training data using the fixed layer of the neural network to be trained, the data processing unit 610 further performs specified processing on the encrypted features, where the type of the specified processing includes processing for improving the security of the encrypted features and/or processing for reducing the storage space occupied by the encrypted features;
- accordingly, the training unit 620 trains the trainable layer of the neural network to be trained based on the processed encrypted features and the second type of training data.
- The specified processing includes one or more of the following: quantization, cropping, and compression.
- When the specified processing includes compression, the training unit 620 decompresses the processed encrypted features, trains the trainable layer of the neural network to be trained based on the decompressed encrypted features, and also processes the second type of training data using the fixed layer of the neural network to be trained and trains the trainable layer based on the processed second type of training data.
- In some embodiments, the training unit 620 performs feature enhancement on the encrypted features and trains the trainable layer of the neural network to be trained based on the feature-enhanced encrypted features and the second type of training data.
- FIG. 7 is a schematic diagram of the hardware structure of an electronic device provided by an embodiment of this application.
- the electronic device may include a processor 701 and a memory 702 storing machine-executable instructions.
- the processor 701 and the memory 702 can communicate via a system bus 703. Moreover, by reading and executing the machine executable instructions corresponding to the encoding control logic in the memory 702, the processor 701 can execute the neural network training method described above.
- the memory 702 mentioned herein may be any electronic, magnetic, optical, or other physical storage device, and may contain or store information, such as executable instructions, data, and so on.
- The machine-readable storage medium can be: RAM (Random Access Memory), volatile memory, non-volatile memory, flash memory, a storage drive (such as a hard drive), a solid-state drive, any type of storage disc (such as a CD or DVD), a similar storage medium, or a combination thereof.
- a machine-readable storage medium is also provided, such as the memory 702 in FIG. 7.
- the machine-readable storage medium stores machine-executable instructions.
- the machine-readable storage medium may be ROM, RAM, CD-ROM, magnetic tape, floppy disk, optical data storage device, etc.
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Health & Medical Sciences (AREA)
- Health & Medical Sciences (AREA)
- Software Systems (AREA)
- General Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- Biomedical Technology (AREA)
- Mathematical Physics (AREA)
- Data Mining & Analysis (AREA)
- Molecular Biology (AREA)
- Computing Systems (AREA)
- Computational Linguistics (AREA)
- Biophysics (AREA)
- Evolutionary Computation (AREA)
- Artificial Intelligence (AREA)
- Life Sciences & Earth Sciences (AREA)
- Bioethics (AREA)
- Computer Hardware Design (AREA)
- Computer Security & Cryptography (AREA)
- Image Analysis (AREA)
- Management, Administration, Business Operations System, And Electronic Commerce (AREA)
Claims (16)
- 1. A neural network training method, characterized in that it comprises: encrypting a first type of training data using a fixed layer of a neural network to be trained to obtain encrypted features of the first type of training data, wherein the first type of training data is original supervised data, the fixed layer is the first N layers of the neural network to be trained, the fixed layer comprises at least one nonlinear layer, and N is a positive integer; and training a trainable layer of the neural network to be trained based on the encrypted features and a second type of training data until the neural network to be trained converges, wherein the second type of training data is training data obtained online.
- 2. The method according to claim 1, characterized in that, after the first type of training data is encrypted using the fixed layer of the neural network to be trained, the method further comprises: performing specified processing on the encrypted features, wherein the type of the specified processing comprises processing for improving the security of the encrypted features and/or processing for reducing the storage space occupied by the encrypted features; and the training the trainable layer of the neural network to be trained based on the encrypted features and the second type of training data comprises: training the trainable layer of the neural network to be trained based on the processed encrypted features and the second type of training data.
- 3. The method according to claim 2, characterized in that the specified processing comprises one or more of the following: quantization, cropping, and compression.
- 4. The method according to claim 3, characterized in that the training the trainable layer of the neural network to be trained based on the processed encrypted features and the second type of training data comprises: when the specified processing comprises compression, decompressing the processed encrypted features; and training the trainable layer of the neural network to be trained based on the decompressed encrypted features, and processing the second type of training data using the fixed layer of the neural network to be trained and training the trainable layer of the neural network to be trained based on the processed second type of training data.
- 5. The method according to claim 1, characterized in that the training the trainable layer of the neural network to be trained based on the encrypted features and the second type of training data comprises: performing feature enhancement on the encrypted features; and training the trainable layer of the neural network to be trained based on the feature-enhanced encrypted features and the second type of training data.
- 6. A neural network training apparatus, characterized in that it comprises: a data processing unit, configured to encrypt a first type of training data using a fixed layer of a neural network to be trained to obtain encrypted features of the first type of training data, wherein the first type of training data is original supervised data, the fixed layer is the first N layers of the neural network to be trained, the fixed layer comprises at least one nonlinear layer, and N is a positive integer; and a training unit, configured to train a trainable layer of the neural network to be trained based on the encrypted features and a second type of training data until the neural network to be trained converges, wherein the second type of training data is training data obtained online.
- 7. The apparatus according to claim 6, characterized in that, after encrypting the first type of training data using the fixed layer of the neural network to be trained, the data processing unit further performs specified processing on the encrypted features, wherein the type of the specified processing comprises processing for improving the security of the encrypted features and/or processing for reducing the storage space occupied by the encrypted features; and the training unit training the trainable layer of the neural network to be trained based on the encrypted features and the second type of training data comprises: training the trainable layer of the neural network to be trained based on the processed encrypted features and the second type of training data.
- 8. The apparatus according to claim 7, characterized in that the specified processing comprises one or more of the following: quantization, cropping, and compression.
- 9. The apparatus according to claim 8, characterized in that the training unit training the trainable layer of the neural network to be trained based on the processed encrypted features and the second type of training data comprises: when the specified processing comprises compression, decompressing the processed encrypted features; and training the trainable layer of the neural network to be trained based on the decompressed encrypted features, and processing the second type of training data using the fixed layer of the neural network to be trained and training the trainable layer of the neural network to be trained based on the processed second type of training data.
- 10. The apparatus according to claim 6, characterized in that the training unit training the trainable layer of the neural network to be trained based on the encrypted features and the second type of training data comprises: performing feature enhancement on the encrypted features; and training the trainable layer of the neural network to be trained based on the feature-enhanced encrypted features and the second type of training data.
- 11. An electronic device, characterized in that it comprises a processor and a memory, the memory storing machine-executable instructions executable by the processor, wherein, when executing the machine-executable instructions, the processor is caused to: encrypt a first type of training data using a fixed layer of a neural network to be trained to obtain encrypted features of the first type of training data, wherein the first type of training data is original supervised data, the fixed layer is the first N layers of the neural network to be trained, the fixed layer comprises at least one nonlinear layer, and N is a positive integer; and train a trainable layer of the neural network to be trained based on the encrypted features and a second type of training data until the neural network to be trained converges, wherein the second type of training data is training data obtained online.
- 12. The device according to claim 11, characterized in that, after the first type of training data is encrypted using the fixed layer of the neural network to be trained, the processor is further caused to: perform specified processing on the encrypted features, wherein the type of the specified processing comprises processing for improving the security of the encrypted features and/or processing for reducing the storage space occupied by the encrypted features; and the training the trainable layer of the neural network to be trained based on the encrypted features and the second type of training data comprises: training the trainable layer of the neural network to be trained based on the processed encrypted features and the second type of training data.
- 13. The device according to claim 12, characterized in that the specified processing comprises one or more of the following: quantization, cropping, and compression.
- 14. The device according to claim 13, characterized in that, when training the trainable layer of the neural network to be trained based on the processed encrypted features and the second type of training data, the processor is caused to: when the specified processing comprises compression, decompress the processed encrypted features; and train the trainable layer of the neural network to be trained based on the decompressed encrypted features, and process the second type of training data using the fixed layer of the neural network to be trained and train the trainable layer of the neural network to be trained based on the processed second type of training data.
- 15. The device according to claim 11, characterized in that, when training the trainable layer of the neural network to be trained based on the encrypted features and the second type of training data, the processor is caused to: perform feature enhancement on the encrypted features; and train the trainable layer of the neural network to be trained based on the feature-enhanced encrypted features and the second type of training data.
- 16. A machine-readable storage medium, characterized in that machine-executable instructions are stored in the machine-readable storage medium, and when the machine-executable instructions are executed by a processor, the method according to any one of claims 1 to 5 is implemented.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010456574.5A CN113723604B (en) | 2020-05-26 | 2020-05-26 | Neural network training method and device, electronic equipment and readable storage medium |
CN202010456574.5 | 2020-05-26 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2021238992A1 true WO2021238992A1 (en) | 2021-12-02 |
Family
ID=78672063
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2021/096109 WO2021238992A1 (en) | 2020-05-26 | 2021-05-26 | Neural network training method and apparatus, electronic device, and readable storage medium |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN113723604B (en) |
WO (1) | WO2021238992A1 (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117874794B (en) * | 2024-03-12 | 2024-07-05 | 北方健康医疗大数据科技有限公司 | Training method, system and device for large language model and readable storage medium |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9436835B1 (en) * | 2012-01-05 | 2016-09-06 | Gokay Saldamli | Homomorphic encryption in computing systems and environments |
CN108564587A (en) * | 2018-03-07 | 2018-09-21 | 浙江大学 | A kind of a wide range of remote sensing image semantic segmentation method based on full convolutional neural networks |
CN110830515A (en) * | 2019-12-13 | 2020-02-21 | 支付宝(杭州)信息技术有限公司 | Flow detection method and device and electronic equipment |
CN111027632A (en) * | 2019-12-13 | 2020-04-17 | 支付宝(杭州)信息技术有限公司 | Model training method, device and equipment |
Family Cites Families (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9946970B2 (en) * | 2014-11-07 | 2018-04-17 | Microsoft Technology Licensing, Llc | Neural networks for encrypted data |
JP6746139B2 (en) * | 2016-09-08 | 2020-08-26 | 公立大学法人会津大学 | Detection agent system using mobile terminal, machine learning method in detection agent system, and program for implementing the same |
FR3057090B1 (en) * | 2016-09-30 | 2018-10-19 | Safran Identity & Security | METHODS FOR SECURELY LEARNING PARAMETERS FROM A CONVOLVED NEURON NETWORK AND SECURED CLASSIFICATION OF INPUT DATA |
CN109214193B (en) * | 2017-07-05 | 2022-03-22 | 创新先进技术有限公司 | Data encryption and machine learning model training method and device and electronic equipment |
CN108876864B (en) * | 2017-11-03 | 2022-03-08 | 北京旷视科技有限公司 | Image encoding method, image decoding method, image encoding device, image decoding device, electronic equipment and computer readable medium |
CN108921282B (en) * | 2018-05-16 | 2022-05-31 | 深圳大学 | Construction method and device of deep neural network model |
CN108776790A (en) * | 2018-06-06 | 2018-11-09 | 海南大学 | Face encryption recognition methods based on neural network under cloud environment |
US11575500B2 (en) * | 2018-07-25 | 2023-02-07 | Sap Se | Encrypted protection system for a trained neural network |
CN109325584B (en) * | 2018-08-10 | 2021-06-25 | 深圳前海微众银行股份有限公司 | Federal modeling method and device based on neural network and readable storage medium |
CN110674941B (en) * | 2019-09-25 | 2023-04-18 | 南开大学 | Data encryption transmission method and system based on neural network |
-
2020
- 2020-05-26 CN CN202010456574.5A patent/CN113723604B/en active Active
-
2021
- 2021-05-26 WO PCT/CN2021/096109 patent/WO2021238992A1/en active Application Filing
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9436835B1 (en) * | 2012-01-05 | 2016-09-06 | Gokay Saldamli | Homomorphic encryption in computing systems and environments |
CN108564587A (en) * | 2018-03-07 | 2018-09-21 | 浙江大学 | A kind of a wide range of remote sensing image semantic segmentation method based on full convolutional neural networks |
CN110830515A (en) * | 2019-12-13 | 2020-02-21 | 支付宝(杭州)信息技术有限公司 | Flow detection method and device and electronic equipment |
CN111027632A (en) * | 2019-12-13 | 2020-04-17 | 支付宝(杭州)信息技术有限公司 | Model training method, device and equipment |
Also Published As
Publication number | Publication date |
---|---|
CN113723604B (en) | 2024-03-26 |
CN113723604A (en) | 2021-11-30 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Xiong et al. | An integer wavelet transform based scheme for reversible data hiding in encrypted images | |
Chang et al. | Privacy-preserving reversible information hiding based on arithmetic of quadratic residues | |
Yin et al. | Separable and Error‐Free Reversible Data Hiding in Encrypted Image with High Payload | |
Manohar et al. | Data encryption & decryption using steganography | |
CN105634732A (en) | Ciphertext domain multi-bit reversible information hiding method | |
CN110110535B (en) | Low-distortion steganography method based on pixel matrix | |
US12033233B2 (en) | Image steganography utilizing adversarial perturbations | |
Wu et al. | Separable reversible data hiding in encrypted images based on scalable blocks | |
El-Bendary | FEC merged with double security approach based on encrypted image steganography for different purpose in the presence of noise and different attacks | |
WO2021238992A1 (en) | Neural network training method and apparatus, electronic device, and readable storage medium | |
US20210019443A1 (en) | Image processing method and image processing system for deep learning | |
Sadhya et al. | Design of a cancelable biometric template protection scheme for fingerprints based on cryptographic hash functions | |
Yang et al. | Efficient color image encryption by color-grayscale conversion based on steganography | |
Yu et al. | Reversible data hiding in encrypted images for coding channel based on adaptive steganography | |
CN112529974B (en) | Color visual password sharing method and device for binary image | |
Roselinkiruba et al. | Dynamic optimal pixel block selection data hiding approach using bit plane and image encryption | |
Chai et al. | TPE-ADE: Thumbnail-Preserving Encryption Based on Adaptive Deviation Embedding for JPEG Images | |
Chen et al. | Reversible data hiding in encrypted images based on reversible integer transformation and quadtree-based partition | |
CN111275603B (en) | Security image steganography method based on style conversion and electronic device | |
Kaur et al. | Image steganography using hybrid edge detection and first component alteration technique | |
CN111598765B (en) | Three-dimensional model robust watermarking method based on homomorphic encryption domain | |
Hasan et al. | A Novel Compressed Domain Technique of Reversible Steganography | |
Ishikawa et al. | Learnable Cube-based Video Encryption for Privacy-Preserving Action Recognition | |
Asif et al. | High-Capacity Reversible Data Hiding using Deep Learning | |
Panchikkil et al. | A Machine Learning based Reversible Data Hiding Scheme in Encrypted Images using Fibonacci Transform |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 21812204 Country of ref document: EP Kind code of ref document: A1 |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 21812204 Country of ref document: EP Kind code of ref document: A1 |
|
32PN | Ep: public notification in the ep bulletin as address of the adressee cannot be established |
Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 060723) |
|