WO2021238992A1 - Neural network training method and apparatus, electronic device, and readable storage medium - Google Patents

Neural network training method and apparatus, electronic device, and readable storage medium

Info

Publication number
WO2021238992A1
Authority
WO
WIPO (PCT)
Prior art keywords
neural network
type
trained
training data
layer
Prior art date
Application number
PCT/CN2021/096109
Other languages
French (fr)
Chinese (zh)
Inventor
浦世亮
徐习明
黄博
Original Assignee
Hangzhou Hikvision Digital Technology Co., Ltd. (杭州海康威视数字技术股份有限公司)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Hikvision Digital Technology Co., Ltd.
Publication of WO2021238992A1

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/08 - Learning methods
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00 - Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/60 - Protecting data
    • G06F21/602 - Providing cryptographic facilities or services

Definitions

  • FIG. 6 is a schematic structural diagram of a neural network training device provided by an embodiment of this application. As shown in FIG. 6, the neural network training device may include:
  • a data processing unit 610, configured to encrypt the first type of training data using the fixed layer of the neural network to be trained to obtain encrypted features of the first type of training data, where the first type of training data is original supervised data, the fixed layer is the first N layers of the neural network to be trained, the fixed layer includes at least one nonlinear layer, and N is a positive integer;
  • a training unit 620, configured to train the trainable layer of the neural network to be trained based on the encrypted features and the second type of training data until the neural network to be trained converges, where the second type of training data is training data obtained online.
  • In an embodiment, after encrypting the first type of training data using the fixed layer of the neural network to be trained, the data processing unit 610 further performs specified processing on the encrypted features, where the specified processing includes processing for improving the security of the encrypted features, or/and processing for reducing the storage space they occupy; the training unit 620 then trains the trainable layer of the neural network to be trained based on the processed encrypted features and the second type of training data.
  • In an embodiment, the specified processing includes one or more of quantization, cropping, and compression.
  • In an embodiment, when the specified processing includes compression, the training unit 620 decompresses the processed encrypted features, trains the trainable layer based on the decompressed encrypted features, and processes the second type of training data using the fixed layer of the neural network to be trained so as to train the trainable layer based on the processed second type of training data.
  • In an embodiment, the training unit 620 performs feature enhancement on the encrypted features and trains the trainable layer of the neural network to be trained based on the feature-enhanced encrypted features and the second type of training data.
  • FIG. 7 is a schematic diagram of the hardware structure of an electronic device provided by an embodiment of this application. The electronic device may include a processor 701 and a memory 702 storing machine-executable instructions. The processor 701 and the memory 702 can communicate via a system bus 703, and by reading and executing the machine-executable instructions corresponding to the encoding control logic in the memory 702, the processor 701 can execute the neural network training method described above.
  • The memory 702 mentioned herein may be any electronic, magnetic, optical, or other physical storage device, and may contain or store information such as executable instructions and data. For example, the machine-readable storage medium may be RAM (Random Access Memory), volatile memory, non-volatile memory, flash memory, a storage drive (such as a hard drive), a solid-state drive, any type of storage disk (such as a CD or DVD), a similar storage medium, or a combination thereof.
  • An embodiment also provides a machine-readable storage medium, such as the memory 702 in FIG. 7, which stores machine-executable instructions that, when executed by a processor, implement the neural network training method described above. Exemplarily, the machine-readable storage medium may be a ROM, RAM, CD-ROM, magnetic tape, floppy disk, optical data storage device, and the like.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Software Systems (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Biomedical Technology (AREA)
  • Mathematical Physics (AREA)
  • Data Mining & Analysis (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Bioethics (AREA)
  • Computer Hardware Design (AREA)
  • Computer Security & Cryptography (AREA)
  • Image Analysis (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The present application provides a neural network training method and apparatus, an electronic device, and a readable storage medium. The neural network training method comprises: processing a first type of training data by using fixed layers of a neural network to be trained to obtain an encrypted feature; and on the basis of the encrypted feature and a second type of training data, training a trainable layer of the neural network to be trained until the neural network to be trained converges.

Description

Neural network training method, device, electronic equipment, and readable storage medium
Cross-reference to related applications
This patent application claims priority to Chinese patent application No. 202010456574.5, filed on May 26, 2020 and entitled "Neural network training method, device, electronic equipment and readable storage medium", the entire contents of which are incorporated herein by reference.
Technical field
This application relates to deep learning technology, and in particular to a neural network training method, device, electronic equipment, and readable storage medium.
Background
Online learning is a learning method that uses online unsupervised data to train a model, thereby further improving the generalization performance of the model in the actual environment where it is deployed. In an online learning system, it is usually necessary to use some or all of the original supervised data to assist training and guarantee the performance of the model. Because of the privacy and confidentiality of the data involved, the original supervised data cannot be stored directly on the deployment side of the online learning system. The usual scheme of storing the files encrypted and decrypting them before they participate in training carries the risks of key leakage and of the data being insecure in memory. In this situation, encrypted training is an effective solution for guaranteeing data security.
In encrypted training, the data does not need to be decrypted; it participates in training directly in ciphertext form. Existing encrypted training schemes include symmetric encryption schemes, training-data-plus-noise encryption schemes, and autoencoder encryption schemes.
A symmetric encryption scheme guarantees that the model trained on encrypted data is consistent with the model trained on the original data, and therefore preserves model performance. However, the original data can be recovered once the key is leaked, which is a data security risk; moreover, symmetric encryption schemes can only be applied to models that contain no nonlinear operations, such as single-layer perceptrons, and cannot be applied to deep neural networks.
A training-data-plus-noise scheme encrypts the original data by adding noise to it. However, because the noise changes the patterns of the original data, model performance degrades severely if the noise is too strong, while the confidentiality of the original data is insufficient if the noise is too weak.
An autoencoder scheme trains an autoencoder to extract features from the original data, using the hidden-layer features to capture the patterns of the original data and serve as the encrypted data. However, if the decoder parameters are leaked, the original data can still be recovered from the hidden-layer features and the decoder, which poses a certain data security risk. In addition, when the original data patterns are complex (pictures, videos, and so on) and the data scale is large, it is difficult for the autoencoder to learn hidden-layer features good enough to represent all the patterns of the original data, so the performance of the model trained on the encrypted data is also greatly affected in this case.
Summary of the invention
In view of this, the present application provides a neural network training method, device, electronic equipment, and readable storage medium.
Specifically, this application is implemented through the following technical solutions.
According to a first aspect of the embodiments of the present application, a neural network training method is provided, including:
processing a first type of training data using a fixed layer of a neural network to be trained to obtain encrypted features of the first type of training data, where the first type of training data is original supervised data, the fixed layer is the first N layers of the neural network to be trained, the fixed layer includes at least one nonlinear layer, and N is a positive integer; and
training a trainable layer of the neural network to be trained based on the encrypted features and a second type of training data until the neural network to be trained converges, where the second type of training data is training data obtained online.
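Purely as an illustration of the two steps above, and not as a definitive implementation of the claimed method, the split between a fixed "encryption" prefix and a trainable suffix can be sketched as follows; PyTorch, the toy network, the split point N, and the supervised loss are all assumptions of this sketch:

```python
# Minimal sketch of the claimed two-phase scheme, assuming PyTorch.
# The network, split point N, and data are illustrative assumptions.
import torch
import torch.nn as nn

net = nn.Sequential(                                   # toy network to be trained
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 10),
)
N = 3                                                  # first N layers form the fixed layer
fixed, trainable = net[:N], net[N:]                    # ReLU/max-pool make the prefix nonlinear
for p in fixed.parameters():
    p.requires_grad_(False)                            # fixed-layer parameters never train

@torch.no_grad()
def encrypt(x: torch.Tensor) -> torch.Tensor:
    """Step 1 (offline): lossy, irreversible feature extraction as 'encryption'."""
    return fixed(x)

def train_step(encrypted_feat, labels, optimizer, loss_fn):
    """Step 2 (online): only the trainable suffix is updated."""
    loss = loss_fn(trainable(encrypted_feat), labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

An optimizer constructed over `trainable.parameters()` only is assumed here, so gradient updates never touch the fixed (encryption) layers.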
According to a second aspect of the embodiments of the present application, a neural network training device is provided, including:
a data processing unit, configured to encrypt the first type of training data using the fixed layer of the neural network to be trained to obtain encrypted features of the first type of training data, where the first type of training data is original supervised data, the fixed layer is the first N layers of the neural network to be trained, the fixed layer includes at least one nonlinear layer, and N is a positive integer; and
a training unit, configured to train the trainable layer of the neural network to be trained based on the encrypted features and the second type of training data until the neural network to be trained converges, where the second type of training data is training data obtained online.
According to a third aspect of the embodiments of the present application, an electronic device is provided, including a processor and a memory storing machine-executable instructions that can be executed by the processor. When executing the machine-executable instructions, the processor is caused to: encrypt the first type of training data using the fixed layer of the neural network to be trained to obtain encrypted features of the first type of training data, where the first type of training data is original supervised data, the fixed layer is the first N layers of the neural network to be trained, the fixed layer includes at least one nonlinear layer, and N is a positive integer; and train the trainable layer of the neural network to be trained based on the encrypted features and the second type of training data until the neural network to be trained converges, where the second type of training data is training data obtained online.
According to a fourth aspect of the embodiments of the present application, a machine-readable storage medium is provided, which stores machine-executable instructions that, when executed by a processor, implement the above neural network training method.
The technical solution provided by this application can bring at least the following beneficial effects:
by processing the first type of training data with the fixed layer of the neural network to be trained to obtain encrypted features, and training the trainable layer of the neural network to be trained based on the encrypted features and the second type of training data until the neural network to be trained converges, the performance of the neural network model is improved while the security of the first type of training data is guaranteed.
Description of the drawings
Fig. 1 is a schematic flowchart of a neural network training method shown in an exemplary embodiment of the present application;
Fig. 2 is a schematic flowchart of training the trainable layer of a neural network to be trained based on encrypted features and a second type of training data, shown in an exemplary embodiment of the present application;
Fig. 3 is a schematic flowchart of training the trainable layer of a neural network to be trained based on encrypted features and a second type of training data, shown in an exemplary embodiment of the present application;
Fig. 4A is a schematic flowchart of obtaining encrypted features, shown in an exemplary embodiment of the present application;
Fig. 4B is a schematic flowchart of a neural network training method shown in an exemplary embodiment of the present application;
Fig. 5A is a schematic diagram of a neural network shown in an exemplary embodiment of the present application;
Fig. 5B is a schematic flowchart of a data encryption process shown in an exemplary embodiment of the present application;
Fig. 5C is a schematic flowchart of an online training process shown in an exemplary embodiment of the present application;
Fig. 6 is a schematic structural diagram of a neural network training device shown in an exemplary embodiment of the present application;
Fig. 7 is a schematic diagram of the hardware structure of an electronic device shown in an exemplary embodiment of the present application.
Detailed description
Exemplary embodiments will be described in detail here, examples of which are shown in the accompanying drawings. When the following description refers to the drawings, unless otherwise indicated, the same numbers in different drawings indicate the same or similar elements. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the present application; rather, they are merely examples of devices and methods consistent with some aspects of the application as detailed in the appended claims.
The terms used in this application are only for the purpose of describing specific embodiments and are not intended to limit the application. The singular forms "a", "said", and "the" used in this application and the appended claims are also intended to include the plural forms, unless the context clearly indicates otherwise.
In order to enable those skilled in the art to better understand the technical solutions provided in the embodiments of this application, and to make the above objectives, features, and advantages of the embodiments more obvious and easier to understand, the technical solutions in the embodiments of this application are described in further detail below with reference to the accompanying drawings.
Referring to Fig. 1, which is a schematic flowchart of a neural network training method provided by an embodiment of this application, the neural network training method may include the following steps.
It should be noted that, in the embodiments of the present application, unless otherwise specified, the neural network to be trained refers to a neural network for which pre-training has been completed; this will not be repeated below.
Step S100: Encrypt the first type of training data using the fixed layer of the neural network to be trained to obtain encrypted features of the first type of training data, where the fixed layer is the first N layers of the neural network to be trained, the fixed layer includes at least one nonlinear layer, and N is a positive integer.
Because layers of a neural network such as convolutional layers and pooling layers inherently correspond to a lossy feature extraction process, the original data cannot be recovered even when the intermediate features output by these layers and the convolutional layer parameters are known. Therefore, in the embodiments of this application, encrypting the first type of training data through the convolutional and pooling layers of the neural network can effectively protect data privacy and security.
In addition, because fine-tuning the parameters of the fixed shallow layers of a pre-trained neural network model has little effect on model performance, keeping those fixed shallow-layer parameters unchanged during training has little effect on the performance of the neural network model.
Based on this, to preserve the performance of the neural network model while guaranteeing the security of the first type of training data, the first preset number of layers of the neural network to be trained can be used as the fixed layer (whose parameters do not participate in the training of the neural network), and this fixed layer can be used to encrypt the first type of training data, thereby obtaining the encrypted features corresponding to the first type of training data.
Exemplarily, the first type of training data is original supervised data.
Exemplarily, to guarantee the security of the first type of training data, the fixed layer used to encrypt it needs to include at least one nonlinear layer (such as a pooling layer or an activation layer).
It should be noted that, because the parameters of the fixed layer do not participate in training, the more fixed layers there are, the greater the impact on the performance of the neural network model; at the same time, the more fixed layers there are, the higher the security of the data processed by them. Therefore, when setting the fixed layer of the neural network, the performance of the model and the security of the data processed by the fixed layer need to be balanced: too many fixed layers lead to poor model performance, while too few lead to poor security of the processed data.
Exemplarily, the layers in the first 1-2 blocks of the neural network can be determined as the fixed layers of the neural network.
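As a sketch of this choice, assuming a torchvision ResNet-18 purely for illustration (the application does not name a specific architecture), the first block group can serve as the fixed layer:

```python
# Sketch: freezing the first blocks of a pretrained network as the fixed
# (encryption) layer. ResNet-18 and the split point are assumptions; the
# application only requires the prefix to contain at least one nonlinear layer.
import torch.nn as nn
from torchvision.models import resnet18

backbone = resnet18(weights="IMAGENET1K_V1")           # an assumed pre-trained model
fixed = nn.Sequential(                                 # conv1 .. layer1: the first block
    backbone.conv1, backbone.bn1, backbone.relu,       # group, including nonlinear ReLU
    backbone.maxpool, backbone.layer1,                 # and max-pooling layers
)
for p in fixed.parameters():
    p.requires_grad_(False)                            # fixed layer does not train
fixed.eval()                                           # also freeze batch-norm statistics
```

Keeping the prefix in `eval()` mode is a design choice of the sketch: it freezes the batch-norm running statistics as well, so the "encryption" mapping stays strictly fixed.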
In addition, in the embodiments of the present application, the encryption of the first type of training data with the fixed layer in step S100 can be performed offline; that is, the first type of training data is encrypted offline, while the neural network is trained online.
Step S110: Based on the encrypted features and the second type of training data, train the trainable layer of the neural network to be trained until the neural network to be trained converges.
In the embodiments of this application, after the encrypted features are obtained in the manner described in step S100, the trainable layer of the neural network to be trained can be trained based on the obtained encrypted features and the second type of training data until the neural network to be trained converges.
Exemplarily, the trainable layer of the neural network to be trained includes the remaining layers other than the fixed layer, which usually include the convolutional layers at the higher levels of the network and the fully connected layers; the parameters of the trainable layer are trained during the online training of the neural network.
Exemplarily, the second type of training data is training data obtained online, such as online unsupervised data.
It can be seen that, in the method flow shown in Fig. 1, by setting the first N layers of the neural network to be trained, including at least one nonlinear layer, as the fixed layer, using the fixed layer to process the first type of training data to obtain encrypted features, and training the trainable layer based on the encrypted features and the second type of training data until the network converges, the performance of the neural network model is improved while the security of the first type of training data is guaranteed.
In an embodiment, after the first type of training data is encrypted using the fixed layer of the neural network to be trained in step S100, the method may further include:
performing specified processing on the encrypted features to improve their security, or/and to reduce the storage space they occupy;
and in step S110, training the trainable layer of the neural network to be trained based on the encrypted features and the second type of training data may include:
training the trainable layer of the neural network to be trained based on the processed encrypted features and the second type of training data.
Exemplarily, to further improve the security of the first type of training data, or/and to reduce the storage space occupied by the encrypted features, specified processing may also be performed on the encrypted features after the first type of training data has been encrypted with the fixed layer of the neural network to be trained.
In an example, the specified processing may include, but is not limited to, one or more of quantization, cropping, and compression.
Exemplarily, the aforementioned compression is lossy compression.
Correspondingly, after the processed encrypted features are obtained, during online training the trainable layer of the neural network to be trained can be trained based on the processed encrypted features and the second type of training data.
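A minimal sketch of such specified processing, assuming NumPy feature maps, clipping to an assumed [0, 6] value range as the cropping, 8-bit quantization, and zlib as a stand-in compressor (the embodiments below instead mention run-length coding and JPEG):

```python
# Sketch of the specified processing applied to an encrypted feature map:
# crop (clip), quantize, and compress. Value range, bit width, and the zlib
# compressor are illustrative assumptions.
import zlib
import numpy as np

def process_feature(feat: np.ndarray, lo: float = 0.0, hi: float = 6.0) -> bytes:
    clipped = np.clip(feat, lo, hi)                     # cropping to a fixed range
    q = np.round((clipped - lo) / (hi - lo) * 255)      # 8-bit quantization
    return zlib.compress(q.astype(np.uint8).tobytes())  # compressed for storage

def unprocess_feature(blob: bytes, shape, lo: float = 0.0, hi: float = 6.0) -> np.ndarray:
    q = np.frombuffer(zlib.decompress(blob), dtype=np.uint8).reshape(shape)
    return q.astype(np.float32) / 255 * (hi - lo) + lo  # lossy reconstruction
```

Note that zlib itself is lossless; in this sketch the information loss that makes the stored features irreversible comes from the clipping and quantization steps.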
In an example, as shown in Fig. 2, training the trainable layer of the neural network to be trained based on the processed encrypted features and the second type of training data may include the following steps.
Step S200: When the specified processing includes compression, decompress the processed encrypted features.
Step S210: Train the trainable layer of the neural network to be trained based on the decompressed encrypted features; and process the second type of training data using the fixed layer of the neural network to be trained, and train the trainable layer based on the processed second type of training data.
Exemplarily, when the neural network to be trained is trained online, if the encrypted features have been compressed, the compressed encrypted features need to be decompressed first, yielding the decompressed encrypted features, before the trainable layer can be trained on them.
During the online training of the neural network, on the one hand, the trainable layer of the neural network to be trained can be trained based on the decompressed encrypted features; on the other hand, it can be trained based on the second type of training data. Here, the decompressed encrypted features and the second type of training data can together be regarded as one large data set for training the trainable layer of the neural network to be trained.
Because the encrypted features are features that have already been processed by the fixed layer of the neural network to be trained, when the encrypted features are input to the network, the fixed layer does not process them again; instead, the encrypted features are used directly to train the trainable layer.
When the second type of training data is input to the neural network to be trained, it needs to be processed by the fixed layer of the network first, and the trainable layer is then trained based on the processed second type of training data.
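This routing can be sketched as follows, reusing `fixed`, `trainable`, and `unprocess_feature` from the earlier sketches; the supervised and unsupervised loss terms are left abstract because the application does not prescribe them:

```python
# Sketch of one online training step. Encrypted features bypass the fixed
# layer (they are already mid-layer features); second-type online data is
# first forwarded through the fixed layer. Losses are assumed placeholders.
import torch

def online_training_step(enc_records, online_batch, optimizer,
                         supervised_loss, unsupervised_loss):
    total = 0.0
    for blob, shape, label in enc_records:              # stored encrypted features
        feat = torch.from_numpy(unprocess_feature(blob, shape))
        total = total + supervised_loss(trainable(feat), label)
    with torch.no_grad():                               # fixed layer: no gradients needed
        online_feat = fixed(online_batch)               # second-type data through fixed layer
    total = total + unsupervised_loss(trainable(online_feat))
    optimizer.zero_grad()
    total.backward()
    optimizer.step()
    return float(total)
```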
In an embodiment, as shown in Fig. 3, training the trainable layer of the neural network to be trained based on the encrypted features and the second type of training data in step S110 may include the following steps.
Step S111: Perform feature enhancement on the encrypted features.
Step S112: Train the trainable layer of the neural network to be trained based on the feature-enhanced encrypted features and the second type of training data.
Exemplarily, to enrich the data and improve the performance of the neural network model, when the trainable layer is trained based on the encrypted features, the encrypted features can be enhanced, that is, some information or variation can be added to them by certain means, for example by adding Gaussian noise or salt-and-pepper noise; the trainable layer of the neural network to be trained is then trained based on the feature-enhanced encrypted features and the second type of training data.
It should be noted that, in this embodiment, if the encrypted features used to train the trainable layer are compressed encrypted features, the compressed encrypted features need to be decompressed before feature enhancement, and the feature enhancement is then applied to the decompressed encrypted features.
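A small sketch of such feature enhancement on a decompressed feature map, with the Gaussian standard deviation and the salt-and-pepper corruption ratio as assumed values:

```python
# Sketch of feature enhancement on a (decompressed) encrypted feature map,
# adding Gaussian noise and salt-and-pepper noise. Noise levels are assumptions.
import numpy as np

def enhance(feat: np.ndarray, sigma: float = 0.05, sp_ratio: float = 0.01,
            rng=None) -> np.ndarray:
    rng = rng or np.random.default_rng()
    out = feat + rng.normal(0.0, sigma, feat.shape)     # Gaussian noise
    mask = rng.random(feat.shape) < sp_ratio            # salt-and-pepper locations
    out[mask] = rng.choice([feat.min(), feat.max()], size=int(mask.sum()))
    return out.astype(feat.dtype)
```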
In order to enable those skilled in the art to better understand the technical solutions provided by the embodiments of the present application, these technical solutions are described below with reference to specific examples.
In this embodiment, the neural network training system may include two parts: the first part is an offline encryption subsystem, and the second part is an online training subsystem, where:
the offline encryption subsystem uses the shallow layers of the neural network model to be trained (that is, the above first N layers, including at least one nonlinear layer) as the encryption layer and processes the first type of training data to obtain encrypted features; the flow can be as shown in Fig. 4A. The first type of training data is forward-propagated through the fixed layers of the model to obtain feature maps; the feature maps are then cropped and quantized to reduce their size; after that, they are further compressed and stored using an image-storage compression algorithm, including but not limited to run-length coding, JPEG (an image format) compression, and the like. The features finally obtained by applying this series of processing to the feature maps are the encrypted data of the first type of training data.
Because the first type of training data has undergone a series of irreversible operations such as convolution, pooling, quantization, cropping, and compression, the resulting encrypted data effectively protects the security of the first type of training data. In addition, the encrypted data, being mid-layer features of the model, can be input to the subsequent layers for training, so the performance of the model is guaranteed.
The online training subsystem uses the encrypted features corresponding to the first type of training data together with the second type of training data to train the parameters of the non-fixed layers (that is, the above trainable layer) of the neural network model to be trained, further improving the performance of the model in the actual environment where it is deployed; the implementation flow can be as shown in Fig. 4B.
Exemplarily, to enrich the data and improve the performance of the neural network model, the encrypted features can be enhanced; the enhanced encrypted features and the second type of training data processed by the fixed layer of the network to be trained are then combined to train the parameters of the trainable layer of the neural network to be trained, thereby improving the performance of the neural network model.
For example, FIG. 5A is a schematic diagram of a neural network provided by an embodiment of this application; the network includes convolutional layers and a fully connected layer.
Exemplarily, pooling layers may also be interposed between the convolutional layers; these are not shown in the figure.
In this example, the convolutional layers include fixed convolutional layers at the bottom (i.e., the fixed layers described above) and trainable convolutional layers at the top. The fixed convolutional layers serve as the encryption layer for encrypting the first type of training data, and their parameters do not participate in training; the parameters of the trainable convolutional layers and the fully connected layer (i.e., the trainable layers described above) are trained during the online training process.
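A sketch of this fixed/trainable split, with hypothetical layer sizes, might look as follows; only the trainable parameters are handed to the optimizer.

```python
import torch.nn as nn
import torch.optim as optim

# Bottom (fixed) convolutional layers double as the encryption layer.
fixed = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2))
for p in fixed.parameters():
    p.requires_grad = False          # parameters do not participate in training

# Top trainable convolutional layers plus the fully connected layer.
trainable = nn.Sequential(
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 10),
)
optimizer = optim.SGD(trainable.parameters(), lr=0.01)   # updates trainable layers only
```

Freezing the bottom layers both fixes the "encryption" mapping and ensures those parameters receive no gradient updates during online training.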
FIG. 5B is a schematic flowchart of data encryption provided by an embodiment of this application. As shown in FIG. 5B, any image in the first type of training data (or data set) is forward-computed through the fixed convolutional layers to obtain feature maps of many channels; these feature maps hide the features of the original image while retaining the data features relevant to the training task. The feature maps are then quantized, cropped, and compressed to obtain the final encrypted features.
FIG. 5C is a schematic flowchart of an online training process provided by an embodiment of this application. As shown in FIG. 5C, the encrypted features are decompressed to obtain the corresponding lossy feature maps (left column), while the second type of training data is forward-computed through the fixed convolutional layers to likewise obtain corresponding feature maps (right column); these feature maps are input together into the subsequent trainable convolutional layers and the fully connected layer, and the parameters of these trainable layers are trained. Because the encryption of the first type of training data is achieved by passing it through the fixed layers of the neural network to be trained, the encrypted features are mid-layer features of that network; therefore, using the encrypted features in the training of the trainable layers improves the performance of the neural network model while ensuring the security of the first type of training data. In addition, after the encrypted features are obtained, they are compressed and stored using a lossy compression algorithm and decompressed for use during neural network training. The information lost in lossy compression has little impact on the data being compressed (i.e., the encrypted features), while the compression ratio is significantly higher than that of lossless compression; therefore, the security of the first type of training data can be further improved, and the storage space occupied by the encrypted features can be significantly reduced, without sacrificing performance.
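The online training step of FIG. 5C might then be sketched as follows, reusing the hypothetical `fixed`, `trainable`, and `optimizer` objects from the previous sketch; the de-quantization metadata (stored shape and value range) and the joint-batch layout are our assumptions for illustration.

```python
import zlib

import torch
import torch.nn.functional as F

def decrypt_features(blob: bytes, shape, lo: float, hi: float) -> torch.Tensor:
    """Decompress and de-quantize a stored encrypted feature (lossy overall)."""
    q = torch.frombuffer(bytearray(zlib.decompress(blob)), dtype=torch.uint8)
    return q.reshape(shape).float() / 255.0 * (hi - lo) + lo

def train_step(enc_blobs, enc_meta, enc_labels, online_images, online_labels):
    # Left column: stored first-type data arrives as compressed encrypted features.
    f1 = torch.stack([decrypt_features(b, *m) for b, m in zip(enc_blobs, enc_meta)])
    # Right column: second-type (online) data goes through the same fixed layers.
    with torch.no_grad():
        f2 = fixed(online_images)
    feats = torch.cat([f1, f2])                       # joint batch of mid-layer features
    labels = torch.cat([enc_labels, online_labels])
    loss = F.cross_entropy(trainable(feats), labels)  # only the trainable layers learn
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```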
In the embodiments of this application, the first type of training data is processed by the fixed layers of the neural network to be trained to obtain encrypted features, and the trainable layers of the neural network to be trained are trained based on the encrypted features and the second type of training data, improving the performance of the neural network model while ensuring the security of the first type of training data.
The method provided by this application has been described above. The apparatus provided by this application is described below.
FIG. 6 is a schematic structural diagram of a neural network training apparatus provided by an embodiment of this application. As shown in FIG. 6, the neural network training apparatus may include:
a data processing unit 610, configured to encrypt the first type of training data using the fixed layers of the neural network to be trained, to obtain encrypted features of the first type of training data, where the first type of training data is original supervised data, the fixed layers are the first N layers of the neural network to be trained, the fixed layers include at least one nonlinear layer, and N is a positive integer; and
a training unit 620, configured to train the trainable layers of the neural network to be trained based on the encrypted features and the second type of training data until the neural network to be trained converges, where the second type of training data is training data acquired online.
In a possible embodiment, after encrypting the first type of training data using the fixed layers of the neural network to be trained, the data processing unit 610 further performs specified processing on the encrypted features, where the type of the specified processing includes processing for improving the security of the encrypted features and/or processing for reducing the storage space occupied by the encrypted features;
the training unit 620 training the trainable layers of the neural network to be trained based on the encrypted features and the second type of training data includes:
training the trainable layers of the neural network to be trained based on the processed encrypted features and the second type of training data.
In a possible embodiment, the specified processing includes one or more of the following:
quantization, cropping, and compression.
In a possible embodiment, the training unit 620 training the trainable layers of the neural network to be trained based on the processed encrypted features and the second type of training data includes:
when the specified processing includes compression, decompressing the processed encrypted features; and
training the trainable layers of the neural network to be trained based on the decompressed encrypted features, as well as processing the second type of training data using the fixed layers of the neural network to be trained and training the trainable layers of the neural network to be trained based on the processed second type of training data.
In a possible embodiment, the training unit 620 training the trainable layers of the neural network to be trained based on the encrypted features and the second type of training data includes:
performing feature enhancement on the encrypted features; and
training the trainable layers of the neural network to be trained based on the feature-enhanced encrypted features and the second type of training data.
FIG. 7 is a schematic diagram of the hardware structure of an electronic device provided by an embodiment of this application. The electronic device may include a processor 701 and a memory 702 storing machine-executable instructions. The processor 701 and the memory 702 may communicate via a system bus 703. By reading and executing the machine-executable instructions in the memory 702 that correspond to the encoding control logic, the processor 701 can execute the neural network training method described above.
The memory 702 mentioned herein may be any electronic, magnetic, optical, or other physical storage device, and may contain or store information such as executable instructions and data. For example, the machine-readable storage medium may be a RAM (Random Access Memory), a volatile memory, a non-volatile memory, a flash memory, a storage drive (such as a hard disk drive), a solid-state drive, any type of storage disc (such as a CD or DVD), a similar storage medium, or a combination thereof.
In some embodiments, a machine-readable storage medium is also provided, such as the memory 702 in FIG. 7, in which machine-executable instructions are stored; when executed by a processor, the machine-executable instructions implement the neural network training method described above. For example, the machine-readable storage medium may be a ROM, a RAM, a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, or the like.
It should be noted that, herein, relational terms such as first and second are used only to distinguish one entity or operation from another, and do not necessarily require or imply any such actual relationship or order between these entities or operations. Moreover, the terms "comprise", "include", and any variants thereof are intended to cover non-exclusive inclusion, so that a process, method, article, or device that includes a series of elements includes not only those elements but also other elements not explicitly listed, or elements inherent to such a process, method, article, or device. Without further limitation, an element preceded by "including a ..." does not exclude the existence of other identical elements in the process, method, article, or device that includes the element.
The above are only preferred embodiments of this application and are not intended to limit this application. Any modification, equivalent replacement, improvement, or the like made within the spirit and principles of this application shall fall within the scope of protection of this application.

Claims (16)

1. A neural network training method, characterized by comprising:
    encrypting a first type of training data using fixed layers of a neural network to be trained, to obtain encrypted features of the first type of training data, wherein the first type of training data is original supervised data, the fixed layers are the first N layers of the neural network to be trained, the fixed layers comprise at least one nonlinear layer, and N is a positive integer; and
    training the trainable layers of the neural network to be trained based on the encrypted features and a second type of training data until the neural network to be trained converges, wherein the second type of training data is training data acquired online.
2. The method according to claim 1, characterized in that, after encrypting the first type of training data using the fixed layers of the neural network to be trained, the method further comprises:
    performing specified processing on the encrypted features, wherein the type of the specified processing comprises processing for improving the security of the encrypted features and/or processing for reducing the storage space occupied by the encrypted features;
    wherein training the trainable layers of the neural network to be trained based on the encrypted features and the second type of training data comprises:
    training the trainable layers of the neural network to be trained based on the processed encrypted features and the second type of training data.
3. The method according to claim 2, characterized in that the specified processing comprises one or more of the following:
    quantization, cropping, and compression.
4. The method according to claim 3, characterized in that training the trainable layers of the neural network to be trained based on the processed encrypted features and the second type of training data comprises:
    when the specified processing comprises compression, decompressing the processed encrypted features; and
    training the trainable layers of the neural network to be trained based on the decompressed encrypted features, and processing the second type of training data using the fixed layers of the neural network to be trained and training the trainable layers of the neural network to be trained based on the processed second type of training data.
5. The method according to claim 1, characterized in that training the trainable layers of the neural network to be trained based on the encrypted features and the second type of training data comprises:
    performing feature enhancement on the encrypted features; and
    training the trainable layers of the neural network to be trained based on the feature-enhanced encrypted features and the second type of training data.
6. A neural network training apparatus, characterized by comprising:
    a data processing unit, configured to encrypt a first type of training data using fixed layers of a neural network to be trained, to obtain encrypted features of the first type of training data, wherein the first type of training data is original supervised data, the fixed layers are the first N layers of the neural network to be trained, the fixed layers comprise at least one nonlinear layer, and N is a positive integer; and
    a training unit, configured to train the trainable layers of the neural network to be trained based on the encrypted features and a second type of training data until the neural network to be trained converges, wherein the second type of training data is training data acquired online.
7. The apparatus according to claim 6, characterized in that, after encrypting the first type of training data using the fixed layers of the neural network to be trained, the data processing unit further performs specified processing on the encrypted features, wherein the type of the specified processing comprises processing for improving the security of the encrypted features and/or processing for reducing the storage space occupied by the encrypted features;
    wherein the training unit training the trainable layers of the neural network to be trained based on the encrypted features and the second type of training data comprises:
    training the trainable layers of the neural network to be trained based on the processed encrypted features and the second type of training data.
8. The apparatus according to claim 7, characterized in that the specified processing comprises one or more of the following:
    quantization, cropping, and compression.
9. The apparatus according to claim 8, characterized in that the training unit training the trainable layers of the neural network to be trained based on the processed encrypted features and the second type of training data comprises:
    when the specified processing comprises compression, decompressing the processed encrypted features; and
    training the trainable layers of the neural network to be trained based on the decompressed encrypted features, and processing the second type of training data using the fixed layers of the neural network to be trained and training the trainable layers of the neural network to be trained based on the processed second type of training data.
10. The apparatus according to claim 6, characterized in that the training unit training the trainable layers of the neural network to be trained based on the encrypted features and the second type of training data comprises:
    performing feature enhancement on the encrypted features; and
    training the trainable layers of the neural network to be trained based on the feature-enhanced encrypted features and the second type of training data.
11. An electronic device, characterized by comprising a processor and a memory storing machine-executable instructions executable by the processor, wherein the processor, when executing the machine-executable instructions, is caused to:
    encrypt a first type of training data using fixed layers of a neural network to be trained, to obtain encrypted features of the first type of training data, wherein the first type of training data is original supervised data, the fixed layers are the first N layers of the neural network to be trained, the fixed layers comprise at least one nonlinear layer, and N is a positive integer; and
    train the trainable layers of the neural network to be trained based on the encrypted features and a second type of training data until the neural network to be trained converges, wherein the second type of training data is training data acquired online.
12. The device according to claim 11, characterized in that, after the first type of training data is encrypted using the fixed layers of the neural network to be trained, the processor is further caused to:
    perform specified processing on the encrypted features, wherein the type of the specified processing comprises processing for improving the security of the encrypted features and/or processing for reducing the storage space occupied by the encrypted features;
    wherein training the trainable layers of the neural network to be trained based on the encrypted features and the second type of training data comprises:
    training the trainable layers of the neural network to be trained based on the processed encrypted features and the second type of training data.
13. The device according to claim 12, characterized in that the specified processing comprises one or more of the following:
    quantization, cropping, and compression.
14. The device according to claim 13, characterized in that, when training the trainable layers of the neural network to be trained based on the processed encrypted features and the second type of training data, the processor is caused to:
    when the specified processing comprises compression, decompress the processed encrypted features; and
    train the trainable layers of the neural network to be trained based on the decompressed encrypted features, and process the second type of training data using the fixed layers of the neural network to be trained and train the trainable layers of the neural network to be trained based on the processed second type of training data.
15. The device according to claim 11, characterized in that, when training the trainable layers of the neural network to be trained based on the encrypted features and the second type of training data, the processor is caused to:
    perform feature enhancement on the encrypted features; and
    train the trainable layers of the neural network to be trained based on the feature-enhanced encrypted features and the second type of training data.
16. A machine-readable storage medium, characterized in that machine-executable instructions are stored in the machine-readable storage medium, and the machine-executable instructions, when executed by a processor, implement the method according to any one of claims 1 to 5.
PCT/CN2021/096109 2020-05-26 2021-05-26 Neural network training method and apparatus, electronic device, and readable storage medium WO2021238992A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010456574.5A CN113723604B (en) 2020-05-26 2020-05-26 Neural network training method and device, electronic equipment and readable storage medium
CN202010456574.5 2020-05-26

Publications (1)

Publication Number Publication Date
WO2021238992A1 (en)

Family

ID=78672063

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/096109 WO2021238992A1 (en) 2020-05-26 2021-05-26 Neural network training method and apparatus, electronic device, and readable storage medium

Country Status (2)

Country Link
CN (1) CN113723604B (en)
WO (1) WO2021238992A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117874794B (en) * 2024-03-12 2024-07-05 北方健康医疗大数据科技有限公司 Training method, system and device for large language model and readable storage medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9436835B1 (en) * 2012-01-05 2016-09-06 Gokay Saldamli Homomorphic encryption in computing systems and environments
CN108564587A (en) * 2018-03-07 2018-09-21 浙江大学 A kind of a wide range of remote sensing image semantic segmentation method based on full convolutional neural networks
CN110830515A (en) * 2019-12-13 2020-02-21 支付宝(杭州)信息技术有限公司 Flow detection method and device and electronic equipment
CN111027632A (en) * 2019-12-13 2020-04-17 支付宝(杭州)信息技术有限公司 Model training method, device and equipment

Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9946970B2 (en) * 2014-11-07 2018-04-17 Microsoft Technology Licensing, Llc Neural networks for encrypted data
JP6746139B2 (en) * 2016-09-08 2020-08-26 公立大学法人会津大学 Detection agent system using mobile terminal, machine learning method in detection agent system, and program for implementing the same
FR3057090B1 (en) * 2016-09-30 2018-10-19 Safran Identity & Security METHODS FOR SECURELY LEARNING PARAMETERS FROM A CONVOLVED NEURON NETWORK AND SECURED CLASSIFICATION OF INPUT DATA
CN109214193B (en) * 2017-07-05 2022-03-22 创新先进技术有限公司 Data encryption and machine learning model training method and device and electronic equipment
CN108876864B (en) * 2017-11-03 2022-03-08 北京旷视科技有限公司 Image encoding method, image decoding method, image encoding device, image decoding device, electronic equipment and computer readable medium
CN108921282B (en) * 2018-05-16 2022-05-31 深圳大学 Construction method and device of deep neural network model
CN108776790A (en) * 2018-06-06 2018-11-09 海南大学 Face encryption recognition methods based on neural network under cloud environment
US11575500B2 (en) * 2018-07-25 2023-02-07 Sap Se Encrypted protection system for a trained neural network
CN109325584B (en) * 2018-08-10 2021-06-25 深圳前海微众银行股份有限公司 Federal modeling method and device based on neural network and readable storage medium
CN110674941B (en) * 2019-09-25 2023-04-18 南开大学 Data encryption transmission method and system based on neural network

Also Published As

Publication number Publication date
CN113723604B (en) 2024-03-26
CN113723604A (en) 2021-11-30

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21812204

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21812204

Country of ref document: EP

Kind code of ref document: A1

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 060723)
