WO2021238992A1 - Neural network training method and apparatus, electronic device, and readable storage medium - Google Patents

Neural network training method and apparatus, electronic device, and readable storage medium

Info

Publication number
WO2021238992A1
WO2021238992A1 (application PCT/CN2021/096109, CN2021096109W)
Authority
WO
WIPO (PCT)
Prior art keywords
neural network
type
trained
training data
layer
Prior art date
Application number
PCT/CN2021/096109
Other languages
English (en)
French (fr)
Inventor
浦世亮
徐习明
黄博
Original Assignee
杭州海康威视数字技术股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 杭州海康威视数字技术股份有限公司
Publication of WO2021238992A1

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/60Protecting data
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/60Protecting data
    • G06F21/602Providing cryptographic facilities or services

Definitions

  • This application relates to deep learning technology, and in particular to a neural network training method and apparatus, an electronic device, and a readable storage medium.
  • Online learning is a learning method that uses online unsupervised data for model training, thereby further improving the generalization performance of the model in the actual deployment environment.
  • In an online learning system, it is usually necessary to use some or all of the original supervised data to assist training and ensure model performance. Because of the privacy and confidentiality of the data involved, the original supervised data cannot be stored directly on the deployment side of the online learning system.
  • The usual scheme of storing files encrypted and decrypting them before training carries the risks of secret-key leakage and insecure data in memory. In this situation, encrypted training is an effective solution for keeping the data secure.
  • In encrypted training, the data does not need to be decrypted; it participates in training directly in ciphertext form.
  • Existing encryption training schemes include symmetric encryption schemes, training data plus noise encryption schemes, and autoencoder encryption schemes.
  • The symmetric encryption scheme guarantees that the model trained on encrypted data is identical to the model trained on the original data, thus preserving model performance; however, the original data can be recovered once the secret key is leaked, which is a data security risk. Moreover, symmetric encryption can only be applied to models that contain no nonlinear operations, such as single-layer perceptrons, and cannot be applied to deep neural networks.
  • The training-data-plus-noise scheme encrypts the original data by adding noise to it.
  • Since the noise changes the patterns of the original data, model performance degrades severely if the noise is too strong, while the confidentiality of the original data is insufficient if the noise is too weak.
  • The autoencoder scheme trains an autoencoder to extract features from the original data, using the hidden-layer features, which learn the patterns of the original data, as the encrypted data.
  • If the decoder parameters are leaked, the original data can still be recovered from the hidden-layer features and the decoder, which poses a certain data security risk.
  • When the original data has complex patterns (pictures, videos, etc.) and the data scale is large, it is difficult for the autoencoder to learn hidden-layer features good enough to represent all the patterns of the original data; the performance of a model trained on such encrypted data is therefore also significantly affected.
  • In view of this, the present application provides a neural network training method and apparatus, an electronic device, and a readable storage medium.
  • According to a first aspect, a neural network training method is provided, including:
  • using the fixed layers of the neural network to be trained to process the first type of training data to obtain encrypted features of the first type of training data, where the first type of training data is original supervised data, the fixed layers are the first N layers of the neural network to be trained, the fixed layers include at least one nonlinear layer, and N is a positive integer;
  • training the trainable layers of the neural network to be trained based on the encrypted features and the second type of training data until the neural network to be trained converges, where the second type of training data is training data obtained online.
  • According to a second aspect, a neural network training device is provided, including:
  • a data processing unit configured to encrypt the first type of training data using the fixed layers of the neural network to be trained to obtain encrypted features of the first type of training data, where the first type of training data is original supervised data, the fixed layers are the first N layers of the neural network to be trained, the fixed layers include at least one nonlinear layer, and N is a positive integer;
  • a training unit configured to train the trainable layers of the neural network to be trained based on the encrypted features and the second type of training data until the neural network to be trained converges, where the second type of training data is training data obtained online.
  • According to a third aspect, an electronic device is provided, including a processor and a memory storing machine-executable instructions executable by the processor; when executing the machine-executable instructions, the processor is caused to: encrypt the first type of training data using the fixed layers of the neural network to be trained to obtain encrypted features of the first type of training data, where the first type of training data is original supervised data, the fixed layers are the first N layers of the neural network to be trained, the fixed layers include at least one nonlinear layer, and N is a positive integer; and train the trainable layers of the neural network to be trained based on the encrypted features and the second type of training data until the neural network to be trained converges, where the second type of training data is training data obtained online.
  • According to a fourth aspect, a machine-readable storage medium is provided, in which machine-executable instructions are stored; the aforementioned neural network training method is implemented when the machine-executable instructions are executed by a processor.
  • By processing the first type of training data with the fixed layers of the neural network to be trained to obtain encrypted features, and training the trainable layers of the neural network to be trained based on the encrypted features and the second type of training data until the neural network to be trained converges, the performance of the neural network model is improved while the security of the first type of training data is guaranteed.
  • Fig. 1 is a schematic flowchart of a neural network training method shown in an exemplary embodiment of the present application.
  • Fig. 2 is a schematic diagram of a process of training the trainable layers of a neural network to be trained based on encrypted features and the second type of training data according to an exemplary embodiment of the present application.
  • Fig. 3 is a schematic diagram of another process of training the trainable layers of a neural network to be trained based on encrypted features and the second type of training data according to an exemplary embodiment of the present application.
  • Fig. 4A is a schematic diagram of a process for obtaining encrypted features according to an exemplary embodiment of the present application.
  • Fig. 4B is a schematic flowchart of a neural network training method shown in an exemplary embodiment of the present application.
  • Fig. 5A is a schematic diagram of a neural network shown in an exemplary embodiment of the present application.
  • Fig. 5B is a schematic flowchart of a data encryption process shown in an exemplary embodiment of the present application.
  • Fig. 5C is a schematic flowchart of an online training process shown in an exemplary embodiment of the present application.
  • Fig. 6 is a schematic structural diagram of a neural network training device shown in an exemplary embodiment of the present application.
  • Fig. 7 is a schematic diagram showing the hardware structure of an electronic device according to an exemplary embodiment of the present application.
  • FIG. 1 is a schematic flowchart of a neural network training method provided by an embodiment of this application.
  • As shown in FIG. 1, the neural network training method may include the following steps.
  • Unless otherwise specified, the neural network to be trained refers to a neural network that has completed pre-training; this will not be repeated in the embodiments of the present application.
  • Step S100: Encrypt the first type of training data using the fixed layers of the neural network to be trained to obtain encrypted features of the first type of training data, where the fixed layers are the first N layers of the neural network to be trained, the fixed layers include at least one nonlinear layer, and N is a positive integer.
  • Layers of a neural network such as the convolutional and pooling layers inherently correspond to a lossy feature-extraction process: even if the intermediate features output by these layers and the convolutional layer parameters are known, the original data cannot be recovered. Therefore, in this application, the first type of training data can be encrypted through the convolutional and pooling layers of the neural network, which effectively protects data privacy and security.
  • In addition, since fine-tuning the parameters of the fixed shallow layers of a pre-trained neural network model has little effect on model performance, keeping those parameters unchanged during training also has little effect on the performance of the neural network model.
  • On this basis, to guarantee the performance of the neural network model while keeping the first type of training data secure, a preset number of leading layers of the neural network to be trained can be used as fixed layers (the parameters of the fixed layers do not participate in the training of the neural network), and the fixed layers are used to encrypt the first type of training data, yielding the encrypted features corresponding to the first type of training data.
  • Exemplarily, the first type of training data is original supervised data.
  • To keep the first type of training data secure, the fixed layers used to encrypt it need to include at least one nonlinear layer (such as a pooling layer or an activation layer).
  • Exemplarily, the layers in the first one or two blocks of the neural network can be determined as the fixed layers of the neural network.
  • The encryption of the first type of training data with the fixed layers of the neural network to be trained in step S100 can be performed offline; that is, the first type of training data is encrypted offline, while the neural network itself is trained online.
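  • As a minimal sketch of this offline encryption step, the PyTorch snippet below forward-computes a batch of first-type data through frozen shallow layers; the use of torchvision's ResNet-18 and the choice of the stem plus the first residual block as the fixed layers are illustrative assumptions, not prescribed by the application.

```python
# Sketch of the offline encryption step (assumptions: PyTorch, a
# ResNet-18 backbone, and the stem plus the first residual block as the
# fixed "encryption" layers).
import torch
import torch.nn as nn
from torchvision.models import resnet18

model = resnet18(weights=None)  # the pre-trained weights would be loaded here

# Fixed layers: the first N layers, which include nonlinear layers
# (ReLU, max-pooling), so the mapping is lossy and not invertible.
fixed_layers = nn.Sequential(
    model.conv1, model.bn1, model.relu, model.maxpool, model.layer1
)
fixed_layers.eval()  # fixed-layer parameters never participate in training

@torch.no_grad()
def encrypt(batch: torch.Tensor) -> torch.Tensor:
    """Forward-compute first-type data through the fixed layers offline."""
    return fixed_layers(batch)

# One batch of original supervised images -> encrypted feature maps.
images = torch.randn(8, 3, 224, 224)
encrypted_features = encrypt(images)  # shape: (8, 64, 56, 56)
```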
  • Step S110: Based on the encrypted features and the second type of training data, train the trainable layers of the neural network to be trained until the neural network to be trained converges.
  • After the encrypted features are obtained as described in step S100, the trainable layers of the neural network to be trained can be trained based on the obtained encrypted features and the second type of training data until the neural network to be trained converges.
  • The trainable layers of the neural network to be trained comprise the remaining layers other than the fixed layers, usually the convolutional layers at the higher levels of the network and the fully connected layers; the parameters of the trainable layers are trained during the online training of the neural network.
  • Exemplarily, the second type of training data is training data obtained online, such as online unsupervised data.
  • In this way, the first N layers of the neural network to be trained, including at least one nonlinear layer, are set as fixed layers, the first type of training data is processed with these fixed layers to obtain encrypted features, and the trainable layers are trained based on the encrypted features and the second type of training data until the network converges; this improves the performance of the neural network model while keeping the first type of training data secure.
  • In an embodiment, after the first type of training data is encrypted using the fixed layers of the neural network to be trained in step S100, the method may further include: performing specified processing on the encrypted features to improve their security or/and to reduce the storage space they occupy.
  • In step S110, training the trainable layers of the neural network to be trained based on the encrypted features and the second type of training data may then include:
  • training the trainable layers of the neural network to be trained based on the processed encrypted features and the second type of training data.
  • Exemplarily, to further improve the security of the first type of training data, or/and to reduce the storage space occupied by the encrypted features, the specified processing can be applied to the encrypted features after they are obtained.
  • In one example, the specified processing may include, but is not limited to, one or more of quantization, cropping, and compression.
  • Exemplarily, the aforementioned compression is lossy compression.
  • Correspondingly, after the processed encrypted features are obtained, the trainable layers of the neural network to be trained can be trained online based on the processed encrypted features and the second type of training data; one possible form of the specified processing is sketched below.
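  • In this sketch, the 8-bit min-max quantization, the center-crop size, and per-channel JPEG via Pillow are illustrative assumptions rather than choices fixed by the application.

```python
# Sketch of the specified processing (assumptions: 8-bit min-max
# quantization, a center crop, and per-channel JPEG as the lossy codec).
import io
import numpy as np
from PIL import Image

def quantize(fmap: np.ndarray):
    """Min-max quantize a float feature map to uint8."""
    lo, hi = float(fmap.min()), float(fmap.max())
    q = np.round((fmap - lo) / (hi - lo + 1e-8) * 255).astype(np.uint8)
    return q, (lo, hi)  # the range is kept for later dequantization

def crop(fmap: np.ndarray, size: int) -> np.ndarray:
    """Center-crop each channel to reduce the feature-map size."""
    _, h, w = fmap.shape
    top, left = (h - size) // 2, (w - size) // 2
    return fmap[:, top:top + size, left:left + size]

def compress(fmap_u8: np.ndarray, quality: int = 75) -> list:
    """Lossy-compress each channel with JPEG; return the byte blobs."""
    blobs = []
    for channel in fmap_u8:
        buf = io.BytesIO()
        Image.fromarray(channel).save(buf, format="JPEG", quality=quality)
        blobs.append(buf.getvalue())
    return blobs

feature = np.random.randn(64, 56, 56).astype(np.float32)
quantized, value_range = quantize(crop(feature, 48))
stored_blobs = compress(quantized)  # what is kept on the deployment side
```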
  • In one example, training the trainable layers of the neural network to be trained based on the processed encrypted features and the second type of training data may include the following steps:
  • Step S200: When the specified processing includes compression, decompress the processed encrypted features.
  • Step S210: Train the trainable layers of the neural network to be trained based on the decompressed encrypted features; and process the second type of training data with the fixed layers of the neural network to be trained and train the trainable layers based on the processed second type of training data.
  • Exemplarily, when the neural network to be trained is trained online, if the encrypted features have been compressed, the compressed encrypted features need to be decompressed first before the trainable layers are trained on them; a sketch of this inverse step follows.
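  • The helper below inverts the hypothetical compression sketch above; the per-channel JPEG blobs and the stored (lo, hi) range are the assumptions introduced there.

```python
# Sketch of decompressing stored encrypted features (assumptions: the
# per-channel JPEG blobs and the (lo, hi) quantization range produced
# by the compression sketch above).
import io
import numpy as np
from PIL import Image

def decompress(blobs: list, value_range: tuple) -> np.ndarray:
    """Decode the JPEG channels and dequantize back to float features."""
    lo, hi = value_range
    channels = [np.asarray(Image.open(io.BytesIO(b))) for b in blobs]
    q = np.stack(channels).astype(np.float32)
    return q / 255.0 * (hi - lo) + lo  # lossy inverse of the quantization
```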
  • During online training of the neural network, on the one hand, the trainable layers of the neural network to be trained can be trained based on the decompressed encrypted features; on the other hand, they can be trained based on the second type of training data.
  • The decompressed encrypted features and the second type of training data can together be regarded as one large data set for training the trainable layers of the neural network to be trained.
  • Since the encrypted features are features already processed by the fixed layers of the neural network to be trained, when the encrypted features are input into the network, the fixed layers will not process them again; instead, the encrypted features are used directly to train the trainable layers.
  • When the second type of training data is input into the neural network to be trained, it must first be processed by the fixed layers, and the trainable layers are then trained based on the processed second type of training data. One online training step under these two paths is sketched below.
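  • In the sketch, the trainable head, the cross-entropy loss on the encrypted-feature path, and the entropy-minimization term standing in for an unsupervised loss on the online path are all illustrative assumptions; `fixed_layers` is the frozen module from the encryption sketch above.

```python
# Sketch of one online training step over the two data paths (assumptions:
# PyTorch; an illustrative trainable head; entropy minimization as a
# stand-in unsupervised loss for the online data).
import torch
import torch.nn as nn
import torch.nn.functional as F

trainable_layers = nn.Sequential(  # higher conv layers + fully connected
    nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(128, 10)
)
optimizer = torch.optim.SGD(trainable_layers.parameters(), lr=1e-3)

def train_step(enc_feats, labels, online_batch, fixed_layers):
    optimizer.zero_grad()
    # Path 1: decompressed encrypted features skip the fixed layers and
    # enter the trainable layers directly.
    loss = F.cross_entropy(trainable_layers(enc_feats), labels)
    # Path 2: online (second-type) data is first forward-computed by the
    # frozen fixed layers.
    with torch.no_grad():
        online_feats = fixed_layers(online_batch)
    probs = F.softmax(trainable_layers(online_feats), dim=1)
    entropy = -(probs * probs.clamp_min(1e-8).log()).sum(dim=1).mean()
    loss = loss + 0.1 * entropy  # assumed weighting of the two paths
    loss.backward()
    optimizer.step()
    return loss.item()
```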
  • In an embodiment, in step S110, training the trainable layers of the neural network to be trained based on the encrypted features and the second type of training data may include the following steps:
  • Step S111: Perform feature enhancement on the encrypted features.
  • Step S112: Train the trainable layers of the neural network to be trained based on the feature-enhanced encrypted features and the second type of training data.
  • Exemplarily, to enrich the data and improve the performance of the neural network model, the encrypted features can be enhanced, that is, information or variation can be added to them by certain means, for example by adding Gaussian noise or salt-and-pepper noise; the trainable layers of the neural network to be trained are then trained based on the feature-enhanced encrypted features and the second type of training data. A minimal sketch of such enhancement is given below.
  • Note that if the encrypted features used to train the trainable layers of the neural network to be trained are compressed encrypted features, they must be decompressed before feature enhancement, and the enhancement is applied to the decompressed encrypted features.
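  • The noise types in the sketch match those named above; their magnitudes are illustrative assumptions.

```python
# Sketch of feature enhancement on (decompressed) encrypted features
# (assumptions: Gaussian noise with sigma = 0.1, and salt-and-pepper
# noise flipping about 1% of the elements to the feature min/max).
import torch

def enhance(enc_feats: torch.Tensor, sigma: float = 0.1,
            sp_ratio: float = 0.01) -> torch.Tensor:
    out = enc_feats + sigma * torch.randn_like(enc_feats)  # Gaussian noise
    mask = torch.rand_like(out)
    out[mask < sp_ratio / 2] = enc_feats.min()      # "pepper"
    out[mask > 1 - sp_ratio / 2] = enc_feats.max()  # "salt"
    return out
```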
  • In this embodiment, the neural network training system may include two parts: the first part is an offline encryption subsystem, and the second part is an online training subsystem, where:
  • The offline encryption subsystem uses the shallow layers of the neural network model to be trained (that is, the above first N layers, including at least one nonlinear layer) as encryption layers and processes the first type of training data to obtain encrypted features; the flow can be as shown in Fig. 4A.
  • The first type of training data is forward-computed through the fixed layers of the model to obtain feature maps; the feature maps are then cropped and quantized to reduce their size; afterwards, an image-storage compression algorithm, including but not limited to run-length coding or JPEG (an image format) compression, is used to further compress the feature maps. The features finally obtained by performing this series of processing on the feature maps are the encrypted data of the first type of training data.
  • Since the first type of training data has gone through a series of irreversible operations (convolution, pooling, quantization, cropping, and compression), the resulting encrypted data effectively protects the security of the first type of training data.
  • In addition, the encrypted data, being intermediate-layer features of the model, can be input into the subsequent layers for training, thus preserving model performance.
  • The online training subsystem uses the encrypted features corresponding to the first type of training data together with the second type of training data to train the parameters of the non-fixed layers (that is, the above trainable layers) of the neural network model to be trained, further improving the model's performance in the actual deployment environment; the implementation flow can be as shown in Fig. 4B.
  • Exemplarily, to enrich the data and improve the performance of the neural network model, the encrypted features can be enhanced; the enhanced encrypted features and the second type of training data processed by the fixed layers of the network to be trained are then combined to train the parameters of the trainable layers of the neural network to be trained, thereby improving the performance of the neural network model.
  • FIG. 5A is a schematic diagram of a neural network provided in an embodiment of this application.
  • The neural network includes convolutional layers and a fully connected layer.
  • Exemplarily, pooling layers may also be included between the convolutional layers; they are not shown in the figure.
  • In this example, the convolutional layers include fixed convolutional layers at the bottom (that is, the above fixed layers) and trainable convolutional layers at higher levels.
  • The fixed convolutional layers serve as the encryption layers for encrypting the first type of training data, and their parameters do not participate in training; the parameters of the trainable convolutional layers and the fully connected layer (that is, the above trainable layers) are trained during the online training process. In a deep learning framework this split amounts to freezing the bottom layers, as sketched below.
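  • The toy layer sizes in this sketch are assumptions; the point is only that gradients are disabled for the fixed (encryption) layers and the optimizer receives the trainable parameters alone.

```python
# Sketch of the fixed/trainable split of Fig. 5A (assumptions: PyTorch
# and an illustrative toy stack of layers).
import torch
import torch.nn as nn

fixed_conv = nn.Sequential(  # bottom layers, acting as encryption layers
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2)
)
trainable_part = nn.Sequential(  # higher conv layers + fully connected
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 10)
)
for p in fixed_conv.parameters():
    p.requires_grad = False  # encryption-layer parameters never update

# Only the trainable parameters are handed to the optimizer.
optimizer = torch.optim.SGD(trainable_part.parameters(), lr=1e-3)
```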
  • FIG. 5B is a schematic flowchart of a data encryption process provided by an embodiment of this application.
  • Any picture in the first type of training data (or data set), after forward computation through the fixed convolutional layers, yields feature maps of many channels; these feature maps hide the features of the original picture but retain the data features relevant to the training task. The feature maps are then quantized, cropped, and compressed to obtain the final encrypted features.
  • FIG. 5C is a schematic flowchart of an online training process provided by an embodiment of the application.
  • The encrypted features are decompressed to obtain the corresponding lossy feature maps (left column), while the second type of training data passes through the forward computation of the fixed convolutional layers to obtain the corresponding feature maps (right column).
  • These feature maps are input together into the subsequent trainable convolutional layers and fully connected layer, and the parameters of these trainable layers are trained. Since the encryption of the first type of training data is achieved through the fixed layers of the neural network to be trained, the encrypted features are intermediate-layer features of that network; therefore, using the encrypted features in the training of the trainable layers can improve the performance of the neural network model while keeping the first type of training data secure.
  • Furthermore, the encrypted features are compressed and stored with a lossy compression algorithm and used after decompression during neural network training. Because the information lost in lossy compression has little impact on the data being compressed (that is, the encrypted features), while the compression ratio is significantly greater than that of lossless compression, the security of the first type of training data can be further improved while performance is preserved, and the storage space occupied by the encrypted features is significantly reduced.
  • In the embodiments of this application, the first type of training data is processed with the fixed layers of the neural network to be trained to obtain encrypted features, and the trainable layers of the neural network to be trained are trained based on the encrypted features and the second type of training data, improving the performance of the neural network model while guaranteeing the security of the first type of training data.
  • FIG. 6 is a schematic structural diagram of a neural network training device provided by an embodiment of this application.
  • As shown in FIG. 6, the neural network training device may include:
  • a data processing unit 610 configured to encrypt the first type of training data using the fixed layers of the neural network to be trained to obtain encrypted features of the first type of training data, where the first type of training data is original supervised data, the fixed layers are the first N layers of the neural network to be trained, the fixed layers include at least one nonlinear layer, and N is a positive integer;
  • a training unit 620 configured to train the trainable layers of the neural network to be trained based on the encrypted features and the second type of training data until the neural network to be trained converges, where the second type of training data is training data obtained online.
  • In a possible embodiment, after encrypting the first type of training data using the fixed layers of the neural network to be trained, the data processing unit 610 further performs specified processing on the encrypted features, where the type of specified processing includes processing for improving the security of the encrypted features, or/and processing for reducing the storage space occupied by the encrypted features;
  • the training unit 620 then trains the trainable layers of the neural network to be trained based on the processed encrypted features and the second type of training data.
  • In a possible embodiment, the specified processing includes one or more of the following: quantization, cropping, and compression.
  • In a possible embodiment, when the specified processing includes compression, the training unit 620 decompresses the processed encrypted features, trains the trainable layers of the neural network to be trained based on the decompressed encrypted features, and processes the second type of training data with the fixed layers and trains the trainable layers based on the processed second type of training data.
  • In a possible embodiment, the training unit 620 performs feature enhancement on the encrypted features and trains the trainable layers of the neural network to be trained based on the feature-enhanced encrypted features and the second type of training data.
  • FIG. 7 is a schematic diagram of the hardware structure of an electronic device provided by an embodiment of this application.
  • The electronic device may include a processor 701 and a memory 702 storing machine-executable instructions.
  • The processor 701 and the memory 702 can communicate via a system bus 703; by reading and executing the machine-executable instructions corresponding to the encoding control logic in the memory 702, the processor 701 can execute the neural network training method described above.
  • The memory 702 mentioned herein may be any electronic, magnetic, optical, or other physical storage device, and may contain or store information such as executable instructions and data.
  • For example, the machine-readable storage medium can be: RAM (Random Access Memory), volatile memory, non-volatile memory, flash memory, a storage drive (such as a hard disk drive), a solid-state drive, any type of storage disk (such as a CD or DVD), a similar storage medium, or a combination thereof.
  • In some embodiments, a machine-readable storage medium is also provided, such as the memory 702 in FIG. 7.
  • The machine-readable storage medium stores machine-executable instructions that, when executed by a processor, implement the neural network training method described above.
  • For example, the machine-readable storage medium may be a ROM, a RAM, a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, or the like.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Software Systems (AREA)
  • Computer Security & Cryptography (AREA)
  • Computational Linguistics (AREA)
  • Bioethics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computer Hardware Design (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Image Analysis (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

This application provides a neural network training method and apparatus, an electronic device, and a readable storage medium. The neural network training method includes: processing the first type of training data with the fixed layers of a neural network to be trained to obtain encrypted features; and training the trainable layers of the neural network to be trained based on the encrypted features and the second type of training data until the neural network to be trained converges.

Description

Neural network training method and apparatus, electronic device, and readable storage medium
CROSS-REFERENCE TO RELATED APPLICATIONS
This patent application claims priority to the Chinese patent application No. 202010456574.5, filed on May 26, 2020 and entitled "Neural network training method and apparatus, electronic device, and readable storage medium", the entire contents of which are incorporated herein by reference.
TECHNICAL FIELD
This application relates to deep learning technology, and in particular to a neural network training method and apparatus, an electronic device, and a readable storage medium.
BACKGROUND
Online learning is a learning method that trains a model with online unsupervised data, thereby further improving the generalization performance of the model in the actual deployment environment. In an online learning system, it is usually necessary to use some or all of the original supervised data to assist training and guarantee model performance. Because of the privacy and confidentiality of the data involved, the original supervised data cannot be stored directly on the deployment side of the online learning system. The usual scheme of storing files encrypted and decrypting them before training carries the risks of secret-key leakage and insecure data in memory. In this situation, encrypted training is an effective solution for keeping the data secure.
In encrypted training, the data does not need to be decrypted; it participates in training directly in ciphertext form. Existing encrypted training schemes include symmetric encryption schemes, training-data-plus-noise schemes, and autoencoder schemes.
The symmetric encryption scheme guarantees that the model trained on encrypted data is identical to the model trained on the original data, thus preserving model performance; however, the original data can be recovered once the secret key is leaked, which is a data security risk. Moreover, symmetric encryption can only be applied to models that contain no nonlinear operations, such as single-layer perceptrons, and cannot be applied to deep neural networks.
The training-data-plus-noise scheme encrypts the original data by adding noise to it. Since the noise changes the patterns of the original data, model performance degrades severely if the noise is too strong, while the confidentiality of the original data is insufficient if the noise is too weak.
The autoencoder scheme trains an autoencoder to extract features from the original data, using the hidden-layer features, which learn the patterns of the original data, as the encrypted data. However, once the decoder parameters are leaked, the original data can still be recovered from the hidden-layer features and the decoder, so a certain data security risk remains. In addition, when the original data has complex patterns (pictures, videos, etc.) and the data scale is large, it is difficult for the autoencoder to learn hidden-layer features good enough to represent all the patterns of the original data; the performance of a model trained on such encrypted data is therefore also significantly affected.
SUMMARY
In view of this, this application provides a neural network training method and apparatus, an electronic device, and a readable storage medium.
Specifically, this application is implemented through the following technical solutions:
According to a first aspect of the embodiments of this application, a neural network training method is provided, including:
encrypting the first type of training data with the fixed layers of a neural network to be trained to obtain encrypted features of the first type of training data, where the first type of training data is original supervised data, the fixed layers are the first N layers of the neural network to be trained, the fixed layers include at least one nonlinear layer, and N is a positive integer;
training the trainable layers of the neural network to be trained based on the encrypted features and the second type of training data until the neural network to be trained converges, where the second type of training data is training data obtained online.
According to a second aspect of the embodiments of this application, a neural network training apparatus is provided, including:
a data processing unit configured to encrypt the first type of training data with the fixed layers of a neural network to be trained to obtain encrypted features of the first type of training data, where the first type of training data is original supervised data, the fixed layers are the first N layers of the neural network to be trained, the fixed layers include at least one nonlinear layer, and N is a positive integer;
a training unit configured to train the trainable layers of the neural network to be trained based on the encrypted features and the second type of training data until the neural network to be trained converges, where the second type of training data is training data obtained online.
According to a third aspect of the embodiments of this application, an electronic device is provided, including a processor and a memory, the memory storing machine-executable instructions executable by the processor; when executing the machine-executable instructions, the processor is caused to: encrypt the first type of training data with the fixed layers of a neural network to be trained to obtain encrypted features of the first type of training data, where the first type of training data is original supervised data, the fixed layers are the first N layers of the neural network to be trained, the fixed layers include at least one nonlinear layer, and N is a positive integer; and train the trainable layers of the neural network to be trained based on the encrypted features and the second type of training data until the neural network to be trained converges, where the second type of training data is training data obtained online.
According to a fourth aspect of the embodiments of this application, a machine-readable storage medium is provided, in which machine-executable instructions are stored; when the machine-executable instructions are executed by a processor, the above neural network training method is implemented.
The technical solutions provided by this application can bring at least the following beneficial effects:
By processing the first type of training data with the fixed layers of the neural network to be trained to obtain encrypted features, and training the trainable layers of the neural network to be trained based on the encrypted features and the second type of training data until the neural network to be trained converges, the performance of the neural network model is improved while the security of the first type of training data is guaranteed.
BRIEF DESCRIPTION OF THE DRAWINGS
Fig. 1 is a schematic flowchart of a neural network training method shown in an exemplary embodiment of this application;
Fig. 2 is a schematic flowchart of a process of training the trainable layers of a neural network to be trained based on encrypted features and the second type of training data, shown in an exemplary embodiment of this application;
Fig. 3 is a schematic flowchart of another process of training the trainable layers of a neural network to be trained based on encrypted features and the second type of training data, shown in an exemplary embodiment of this application;
Fig. 4A is a schematic flowchart of obtaining encrypted features, shown in an exemplary embodiment of this application;
Fig. 4B is a schematic flowchart of a neural network training method shown in an exemplary embodiment of this application;
Fig. 5A is a schematic diagram of a neural network shown in an exemplary embodiment of this application;
Fig. 5B is a schematic flowchart of a data encryption process shown in an exemplary embodiment of this application;
Fig. 5C is a schematic flowchart of an online training process shown in an exemplary embodiment of this application;
Fig. 6 is a schematic structural diagram of a neural network training apparatus shown in an exemplary embodiment of this application;
Fig. 7 is a schematic diagram of the hardware structure of an electronic device shown in an exemplary embodiment of this application.
DETAILED DESCRIPTION
Exemplary embodiments will be described in detail here, examples of which are shown in the accompanying drawings. Where the following description refers to the drawings, unless otherwise indicated, the same numerals in different drawings denote the same or similar elements. The implementations described in the following exemplary embodiments do not represent all implementations consistent with this application; rather, they are merely examples of apparatuses and methods consistent with some aspects of this application as detailed in the appended claims.
The terms used in this application are for the purpose of describing particular embodiments only and are not intended to limit this application. The singular forms "a", "said", and "the" used in this application and the appended claims are also intended to include the plural forms, unless the context clearly indicates otherwise.
To enable those skilled in the art to better understand the technical solutions provided by the embodiments of this application, and to make the above objectives, features, and advantages of the embodiments of this application clearer and easier to understand, the technical solutions in the embodiments of this application are described in further detail below with reference to the accompanying drawings.
Referring to Fig. 1, a schematic flowchart of a neural network training method provided by an embodiment of this application, the neural network training method may include the following steps.
It should be noted that in the embodiments of this application, unless otherwise specified, the neural network to be trained refers to a neural network that has completed pre-training; this will not be repeated in the embodiments of this application.
Step S100: encrypt the first type of training data with the fixed layers of the neural network to be trained to obtain encrypted features of the first type of training data, where the fixed layers are the first N layers of the neural network to be trained, the fixed layers include at least one nonlinear layer, and N is a positive integer.
Since layers of a neural network such as convolutional layers and pooling layers inherently correspond to a lossy feature-extraction process, the original data cannot be recovered even when the intermediate features output by these layers and the convolutional layer parameters are known. Therefore, in the embodiments of this application, the first type of training data can be encrypted through the convolutional and pooling layers of the neural network, which effectively guarantees data privacy and security.
In addition, since fine-tuning the parameters of the fixed shallow layers of a pre-trained neural network model has little effect on model performance, keeping the parameters of those fixed shallow layers unchanged during training also has little effect on the performance of the neural network model.
On this basis, to guarantee the performance of the neural network model while guaranteeing the security of the first type of training data, a preset number of leading layers of the neural network to be trained can be used as fixed layers (the parameters of the fixed layers do not participate in the training of the neural network), and the fixed layers are used to encrypt the first type of training data, thereby encrypting the first type of training data and obtaining the encrypted features corresponding to it.
Exemplarily, the first type of training data is original supervised data.
Exemplarily, to guarantee the security of the first type of training data, the fixed layers used to encrypt the first type of training data need to include at least one nonlinear layer (such as a pooling layer or an activation layer).
It should be noted that, since the parameters of the fixed layers do not participate in training, the more fixed layers there are, the greater the impact on the performance of the neural network model; on the other hand, the more fixed layers there are, the higher the security of the data processed by them. Therefore, when setting the fixed layers of the neural network, the performance of the neural network model and the security of the data processed by the fixed layers need to be balanced: too many fixed layers lead to poor model performance, while too few fixed layers lead to poor security of the processed data.
Exemplarily, the layers in the first one or two blocks of the neural network can be determined as the fixed layers of the neural network.
In addition, in the embodiments of this application, the encryption of the first type of training data with the fixed layers of the neural network to be trained in step S100 can be performed offline; that is, the first type of training data is encrypted offline, while the neural network is trained online.
Step S110: train the trainable layers of the neural network to be trained based on the encrypted features and the second type of training data until the neural network to be trained converges.
In the embodiments of this application, after the encrypted features are obtained in the manner described in step S100, the trainable layers of the neural network to be trained can be trained based on the obtained encrypted features and the second type of training data until the neural network to be trained converges.
Exemplarily, the trainable layers of the neural network to be trained include the remaining layers other than the fixed layers, usually the convolutional layers at the higher levels of the network and the fully connected layers; the parameters of the trainable layers are trained during the online training of the neural network.
Exemplarily, the second type of training data is training data obtained online, such as online unsupervised data.
It can be seen that in the process shown in Fig. 1, by setting the first N layers of the neural network to be trained, including at least one nonlinear layer, as fixed layers, processing the first type of training data with these fixed layers to obtain encrypted features, and training the trainable layers based on the encrypted features and the second type of training data until the neural network to be trained converges, the performance of the neural network model is improved while the security of the first type of training data is guaranteed.
In an embodiment, in step S100, after the first type of training data is encrypted with the fixed layers of the neural network to be trained, the method may further include:
performing specified processing on the encrypted features to improve the security of the encrypted features or/and to reduce the storage space occupied by the encrypted features;
in step S110, training the trainable layers of the neural network to be trained based on the encrypted features and the second type of training data may include:
training the trainable layers of the neural network to be trained based on the processed encrypted features and the second type of training data.
Exemplarily, to further improve the security of the first type of training data, or/and to reduce the storage space occupied by the encrypted features, after the first type of training data is encrypted with the fixed layers of the neural network to be trained to obtain the encrypted features, the specified processing can also be performed on the encrypted features.
In one example, the specified processing may include, but is not limited to, one or more of quantization, cropping, and compression.
Exemplarily, the above compression is lossy compression.
Correspondingly, after the processed encrypted features are obtained, during online training the trainable layers of the neural network to be trained can be trained based on the processed encrypted features and the second type of training data.
In one example, as shown in Fig. 2, the above training of the trainable layers of the neural network to be trained based on the processed encrypted features and the second type of training data may include the following steps:
Step S200: when the specified processing includes compression, decompress the processed encrypted features;
Step S210: train the trainable layers of the neural network to be trained based on the decompressed encrypted features, and process the second type of training data with the fixed layers of the neural network to be trained and train the trainable layers based on the processed second type of training data.
Exemplarily, when the neural network to be trained is trained online, if the encrypted features have been compressed, the compressed encrypted features need to be decompressed first to obtain the decompressed encrypted features before the trainable layers are trained on them.
During the online training of the neural network, on the one hand, the trainable layers of the neural network to be trained can be trained based on the decompressed encrypted features; on the other hand, they can be trained based on the second type of training data. Here, the decompressed encrypted features and the second type of training data can be regarded as a whole as one large data set for training the trainable layers of the neural network to be trained.
Since the encrypted features are features already processed by the fixed layers of the neural network to be trained, when the encrypted features are input into the neural network to be trained, the fixed layers will not process them again; instead, the encrypted features are used to train the trainable layers of the neural network to be trained.
When the second type of training data is input into the neural network to be trained, it needs to be processed by the fixed layers of the neural network to be trained, and the trainable layers are trained based on the processed second type of training data.
In an embodiment, as shown in Fig. 3, in step S110, training the trainable layers of the neural network to be trained based on the encrypted features and the second type of training data may include the following steps:
Step S111: perform feature enhancement on the encrypted features.
Step S112: train the trainable layers of the neural network to be trained based on the feature-enhanced encrypted features and the second type of training data.
Exemplarily, to enrich the data and improve the performance of the neural network model, when the trainable layers of the neural network to be trained are trained based on the encrypted features, the encrypted features can be enhanced, that is, some information or variation can be added to them by certain means, for example by adding Gaussian noise or salt-and-pepper noise; the trainable layers of the neural network to be trained are then trained based on the feature-enhanced encrypted features and the second type of training data.
It should be noted that, in this embodiment, if the encrypted features used to train the trainable layers of the neural network to be trained are compressed encrypted features, the compressed encrypted features need to be decompressed before feature enhancement, and the feature enhancement is performed on the decompressed encrypted features.
To enable those skilled in the art to better understand the technical solutions provided by the embodiments of this application, these technical solutions are described below with reference to a specific example.
In this example, the neural network training system may include two parts: the first part is an offline encryption subsystem, and the second part is an online training subsystem, where:
The offline encryption subsystem uses the shallow layers of the neural network model to be trained (that is, the above first N layers, including at least one nonlinear layer) as encryption layers and processes the first type of training data to obtain encrypted features; the flowchart can be as shown in Fig. 4A. The first type of training data is forward-computed through the fixed layers of the model to obtain feature maps; the feature maps are then cropped and quantized to reduce their size; afterwards, an image-storage compression algorithm, including but not limited to run-length coding and JPEG (an image format) compression, is used to further compress and store the feature maps. The features finally obtained by performing this series of processing on the feature maps are the encrypted data of the first type of training data.
Since the first type of training data has gone through a series of irreversible processes such as convolution, pooling, quantization, cropping, and compression, the resulting encrypted data can effectively protect the security of the first type of training data. In addition, the encrypted data, as intermediate-layer features of the model, can be input into subsequent layers for training, thus guaranteeing model performance.
The online training subsystem uses the encrypted features corresponding to the first type of training data together with the second type of training data to train the parameters of the non-fixed layers (that is, the above trainable layers) of the neural network model to be trained, further improving the performance of the model in the actual deployment environment; the implementation flowchart can be as shown in Fig. 4B.
Exemplarily, to enrich the data and improve the performance of the neural network model, the encrypted features can be enhanced; the enhanced encrypted features and the second type of training data processed by the fixed layers of the network to be trained, the two parts of features combined, are then used to train the parameters of the trainable layers of the neural network to be trained, thereby improving the performance of the neural network model.
For example, referring to Fig. 5A, a schematic diagram of a neural network provided by an embodiment of this application, the neural network includes convolutional layers and a fully connected layer.
Exemplarily, pooling layers may also be included between the convolutional layers, which are not shown in the figure.
In this example, the convolutional layers include fixed convolutional layers at the bottom (that is, the above fixed layers) and trainable convolutional layers at higher levels. The fixed convolutional layers are used as encryption layers for encrypting the first type of training data, and their parameters do not participate in training; the parameters of the trainable convolutional layers and the fully connected layer (that is, the above trainable layers) are trained during the online training process.
Fig. 5B is a schematic flowchart of a data encryption process provided by an embodiment of this application. As shown in Fig. 5B, any picture in the first type of training data (or data set), after forward computation through the fixed convolutional layers, yields feature maps of many channels; these feature maps hide the features of the original picture but retain the data features related to the training task. The feature maps are then quantized, cropped, and compressed to obtain the final encrypted features.
Fig. 5C is a schematic flowchart of an online training process provided by an embodiment of this application. As shown in Fig. 5C, the encrypted features are decompressed to obtain the corresponding lossy feature maps (left column), while the second type of training data likewise yields corresponding feature maps through forward computation of the fixed convolutional layers (right column); these feature maps are input together into the subsequent trainable convolutional layers and fully connected layer, and the parameters of these trainable layers are trained. Since the encryption of the first type of training data is achieved by encrypting it through the fixed layers of the neural network to be trained, that is, the encrypted features are intermediate-layer features of the neural network to be trained, using the encrypted features in the training of the trainable layers can improve the performance of the neural network model while guaranteeing the security of the first type of training data. In addition, after the encrypted features are obtained, they are compressed and stored with a lossy compression algorithm and used after decompression during neural network training. Since the information lost in lossy compression has little impact on the data to be compressed (that is, the encrypted features), while the compression ratio is significantly greater than that of lossless compression, the security of the first type of training data can be further improved while performance is guaranteed, and the storage space occupied by the encrypted features can be significantly reduced.
In the embodiments of this application, by processing the first type of training data with the fixed layers of the neural network to be trained to obtain encrypted features, and training the trainable layers of the neural network to be trained based on the encrypted features and the second type of training data, the performance of the neural network model is improved while the security of the first type of training data is guaranteed.
The method provided by this application has been described above. The apparatus provided by this application is described below.
Referring to Fig. 6, a schematic structural diagram of a neural network training apparatus provided by an embodiment of this application, as shown in Fig. 6, the neural network training apparatus may include:
a data processing unit 610 configured to encrypt the first type of training data with the fixed layers of the neural network to be trained to obtain encrypted features of the first type of training data, where the first type of training data is original supervised data, the fixed layers are the first N layers of the neural network to be trained, the fixed layers include at least one nonlinear layer, and N is a positive integer;
a training unit 620 configured to train the trainable layers of the neural network to be trained based on the encrypted features and the second type of training data until the neural network to be trained converges, where the second type of training data is training data obtained online.
In a possible embodiment, after encrypting the first type of training data with the fixed layers of the neural network to be trained, the data processing unit 610 further performs specified processing on the encrypted features, where the type of the specified processing includes processing for improving the security of the encrypted features, or/and processing for reducing the storage space occupied by the encrypted features;
the training unit 620 training the trainable layers of the neural network to be trained based on the encrypted features and the second type of training data includes:
training the trainable layers of the neural network to be trained based on the processed encrypted features and the second type of training data.
In a possible embodiment, the specified processing includes one or more of the following:
quantization, cropping, and compression.
In a possible embodiment, the training unit 620 training the trainable layers of the neural network to be trained based on the processed encrypted features and the second type of training data includes:
when the specified processing includes compression, decompressing the processed encrypted features;
training the trainable layers of the neural network to be trained based on the decompressed encrypted features, and processing the second type of training data with the fixed layers of the neural network to be trained and training the trainable layers based on the processed second type of training data.
In a possible embodiment, the training unit 620 training the trainable layers of the neural network to be trained based on the encrypted features and the second type of training data includes:
performing feature enhancement on the encrypted features;
training the trainable layers of the neural network to be trained based on the feature-enhanced encrypted features and the second type of training data.
Referring to Fig. 7, a schematic diagram of the hardware structure of an electronic device provided by an embodiment of this application, the electronic device may include a processor 701 and a memory 702 storing machine-executable instructions. The processor 701 and the memory 702 can communicate via a system bus 703. And, by reading and executing the machine-executable instructions in the memory 702 corresponding to the encoding control logic, the processor 701 can execute the neural network training method described above.
The memory 702 mentioned herein may be any electronic, magnetic, optical, or other physical storage device and may contain or store information such as executable instructions and data. For example, the machine-readable storage medium may be a RAM (Random Access Memory), a volatile memory, a non-volatile memory, a flash memory, a storage drive (such as a hard disk drive), a solid-state drive, any type of storage disk (such as an optical disc or a DVD), a similar storage medium, or a combination thereof.
In some embodiments, a machine-readable storage medium is also provided, such as the memory 702 in Fig. 7, which stores machine-executable instructions that, when executed by a processor, implement the neural network training method described above. For example, the machine-readable storage medium may be a ROM, a RAM, a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, or the like.
It should be noted that, in this document, relational terms such as first and second are used only to distinguish one entity or operation from another, and do not necessarily require or imply any such actual relationship or order between these entities or operations. Moreover, the terms "include", "comprise", or any other variant thereof are intended to cover non-exclusive inclusion, so that a process, method, article, or device that includes a series of elements includes not only those elements but also other elements not explicitly listed, or elements inherent to such a process, method, article, or device. Without further limitation, an element defined by the phrase "including a ..." does not exclude the existence of additional identical elements in the process, method, article, or device that includes the element.
The above are only preferred embodiments of this application and are not intended to limit this application. Any modification, equivalent replacement, improvement, and the like made within the spirit and principles of this application shall be included within the scope of protection of this application.

Claims (16)

  1. A neural network training method, comprising:
    encrypting the first type of training data with the fixed layers of a neural network to be trained to obtain encrypted features of the first type of training data, wherein the first type of training data is original supervised data, the fixed layers are the first N layers of the neural network to be trained, the fixed layers comprise at least one nonlinear layer, and N is a positive integer;
    training the trainable layers of the neural network to be trained based on the encrypted features and the second type of training data until the neural network to be trained converges, wherein the second type of training data is training data obtained online.
  2. The method according to claim 1, wherein after encrypting the first type of training data with the fixed layers of the neural network to be trained, the method further comprises:
    performing specified processing on the encrypted features, wherein the type of the specified processing comprises processing for improving the security of the encrypted features, or/and processing for reducing the storage space occupied by the encrypted features;
    the training of the trainable layers of the neural network to be trained based on the encrypted features and the second type of training data comprises:
    training the trainable layers of the neural network to be trained based on the processed encrypted features and the second type of training data.
  3. The method according to claim 2, wherein the specified processing comprises one or more of the following:
    quantization, cropping, and compression.
  4. The method according to claim 3, wherein the training of the trainable layers of the neural network to be trained based on the processed encrypted features and the second type of training data comprises:
    when the specified processing comprises compression, decompressing the processed encrypted features;
    training the trainable layers of the neural network to be trained based on the decompressed encrypted features, and processing the second type of training data with the fixed layers of the neural network to be trained and training the trainable layers of the neural network to be trained based on the processed second type of training data.
  5. The method according to claim 1, wherein the training of the trainable layers of the neural network to be trained based on the encrypted features and the second type of training data comprises:
    performing feature enhancement on the encrypted features;
    training the trainable layers of the neural network to be trained based on the feature-enhanced encrypted features and the second type of training data.
  6. A neural network training apparatus, comprising:
    a data processing unit configured to encrypt the first type of training data with the fixed layers of a neural network to be trained to obtain encrypted features of the first type of training data, wherein the first type of training data is original supervised data, the fixed layers are the first N layers of the neural network to be trained, the fixed layers comprise at least one nonlinear layer, and N is a positive integer;
    a training unit configured to train the trainable layers of the neural network to be trained based on the encrypted features and the second type of training data until the neural network to be trained converges, wherein the second type of training data is training data obtained online.
  7. The apparatus according to claim 6, wherein after encrypting the first type of training data with the fixed layers of the neural network to be trained, the data processing unit further performs specified processing on the encrypted features, wherein the type of the specified processing comprises processing for improving the security of the encrypted features, or/and processing for reducing the storage space occupied by the encrypted features;
    the training unit training the trainable layers of the neural network to be trained based on the encrypted features and the second type of training data comprises:
    training the trainable layers of the neural network to be trained based on the processed encrypted features and the second type of training data.
  8. The apparatus according to claim 7, wherein the specified processing comprises one or more of the following:
    quantization, cropping, and compression.
  9. The apparatus according to claim 8, wherein the training unit training the trainable layers of the neural network to be trained based on the processed encrypted features and the second type of training data comprises:
    when the specified processing comprises compression, decompressing the processed encrypted features;
    training the trainable layers of the neural network to be trained based on the decompressed encrypted features, and processing the second type of training data with the fixed layers of the neural network to be trained and training the trainable layers of the neural network to be trained based on the processed second type of training data.
  10. The apparatus according to claim 6, wherein the training unit training the trainable layers of the neural network to be trained based on the encrypted features and the second type of training data comprises:
    performing feature enhancement on the encrypted features;
    training the trainable layers of the neural network to be trained based on the feature-enhanced encrypted features and the second type of training data.
  11. An electronic device, comprising a processor and a memory, the memory storing machine-executable instructions executable by the processor, the processor being caused, when executing the machine-executable instructions, to:
    encrypt the first type of training data with the fixed layers of a neural network to be trained to obtain encrypted features of the first type of training data, wherein the first type of training data is original supervised data, the fixed layers are the first N layers of the neural network to be trained, the fixed layers comprise at least one nonlinear layer, and N is a positive integer;
    train the trainable layers of the neural network to be trained based on the encrypted features and the second type of training data until the neural network to be trained converges, wherein the second type of training data is training data obtained online.
  12. The device according to claim 11, wherein after encrypting the first type of training data with the fixed layers of the neural network to be trained, the processor is further caused to:
    perform specified processing on the encrypted features, wherein the type of the specified processing comprises processing for improving the security of the encrypted features, or/and processing for reducing the storage space occupied by the encrypted features;
    the training of the trainable layers of the neural network to be trained based on the encrypted features and the second type of training data comprises:
    training the trainable layers of the neural network to be trained based on the processed encrypted features and the second type of training data.
  13. The device according to claim 12, wherein the specified processing comprises one or more of the following:
    quantization, cropping, and compression.
  14. The device according to claim 13, wherein when training the trainable layers of the neural network to be trained based on the processed encrypted features and the second type of training data, the processor is caused to:
    when the specified processing comprises compression, decompress the processed encrypted features;
    train the trainable layers of the neural network to be trained based on the decompressed encrypted features, and process the second type of training data with the fixed layers of the neural network to be trained and train the trainable layers of the neural network to be trained based on the processed second type of training data.
  15. The device according to claim 11, wherein when training the trainable layers of the neural network to be trained based on the encrypted features and the second type of training data, the processor is caused to:
    perform feature enhancement on the encrypted features;
    train the trainable layers of the neural network to be trained based on the feature-enhanced encrypted features and the second type of training data.
  16. A machine-readable storage medium, wherein machine-executable instructions are stored in the machine-readable storage medium, and the machine-executable instructions, when executed by a processor, implement the method according to any one of claims 1-5.
PCT/CN2021/096109 2020-05-26 2021-05-26 Neural network training method and apparatus, electronic device, and readable storage medium WO2021238992A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010456574.5 2020-05-26
CN202010456574.5A CN113723604B (zh) 2020-05-26 Neural network training method and apparatus, electronic device, and readable storage medium

Publications (1)

Publication Number Publication Date
WO2021238992A1 (zh)

Family

ID=78672063

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/096109 WO2021238992A1 (zh) 2020-05-26 2021-05-26 Neural network training method and apparatus, electronic device, and readable storage medium

Country Status (2)

Country Link
CN (1) CN113723604B (zh)
WO (1) WO2021238992A1 (zh)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117874794B (zh) * 2024-03-12 2024-07-05 北方健康医疗大数据科技有限公司 一种大语言模型的训练方法、***、装置及可读存储介质

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9436835B1 (en) * 2012-01-05 2016-09-06 Gokay Saldamli Homomorphic encryption in computing systems and environments
CN108564587A (zh) * 2018-03-07 2018-09-21 浙江大学 一种基于全卷积神经网络的大范围遥感影像语义分割方法
CN110830515A (zh) * 2019-12-13 2020-02-21 支付宝(杭州)信息技术有限公司 流量检测方法、装置、电子设备
CN111027632A (zh) * 2019-12-13 2020-04-17 支付宝(杭州)信息技术有限公司 一种模型训练方法、装置及设备

Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9946970B2 (en) * 2014-11-07 2018-04-17 Microsoft Technology Licensing, Llc Neural networks for encrypted data
JP6746139B2 (ja) * 2016-09-08 2020-08-26 公立大学法人会津大学 Sensing agent system using a mobile terminal, machine learning method in the sensing agent system, and program for implementing the same
FR3057090B1 (fr) * 2016-09-30 2018-10-19 Safran Identity & Security Methods for securely learning the parameters of a convolutional neural network and for securely classifying input data
CN109214193B (zh) * 2017-07-05 2022-03-22 创新先进技术有限公司 Data encryption and machine learning model training methods and apparatuses, and electronic device
CN108876864B (zh) * 2017-11-03 2022-03-08 北京旷视科技有限公司 Image encoding and decoding methods and apparatuses, electronic device, and computer-readable medium
CN108921282B (zh) * 2018-05-16 2022-05-31 深圳大学 Method and apparatus for constructing a deep neural network model
CN108776790A (zh) * 2018-06-06 2018-11-09 海南大学 Encrypted face recognition method based on a neural network in a cloud environment
US11575500B2 (en) * 2018-07-25 2023-02-07 Sap Se Encrypted protection system for a trained neural network
CN109325584B (zh) * 2018-08-10 2021-06-25 深圳前海微众银行股份有限公司 Federated modeling method based on a neural network, device, and readable storage medium
CN110674941B (zh) * 2019-09-25 2023-04-18 南开大学 Data encryption transmission method and system based on a neural network


Also Published As

Publication number Publication date
CN113723604A (zh) 2021-11-30
CN113723604B (zh) 2024-03-26

Similar Documents

Publication Publication Date Title
Xiong et al. An integer wavelet transform based scheme for reversible data hiding in encrypted images
Chang et al. Privacy-preserving reversible information hiding based on arithmetic of quadratic residues
Manohar et al. Data encryption & decryption using steganography
US12033233B2 (en) Image steganography utilizing adversarial perturbations
US11275866B2 (en) Image processing method and image processing system for deep learning
Wu et al. Separable reversible data hiding in encrypted images based on scalable blocks
El-Bendary FEC merged with double security approach based on encrypted image steganography for different purpose in the presence of noise and different attacks
WO2021238992A1 (zh) 一种神经网络训练方法、装置、电子设备及可读存储介质
Sadhya et al. Design of a cancelable biometric template protection scheme for fingerprints based on cryptographic hash functions
Yang et al. Efficient color image encryption by color-grayscale conversion based on steganography
Yu et al. Reversible data hiding in encrypted images for coding channel based on adaptive steganography
Saeidi et al. High performance image steganography integrating IWT and Hamming code within secret sharing
CN112529974B (zh) 一种二值图像的彩色视觉密码共享方法和装置
Roselinkiruba et al. Dynamic optimal pixel block selection data hiding approach using bit plane and image encryption
Chai et al. TPE-ADE: Thumbnail-Preserving Encryption Based on Adaptive Deviation Embedding for JPEG Images
Chen et al. Reversible data hiding in encrypted images based on reversible integer transformation and quadtree-based partition
CN111275603B (zh) 一种基于风格转换的安全图像隐写方法与电子装置
Kaur et al. Image steganography using hybrid edge detection and first component alteration technique
CN111598765B (zh) 基于同态加密域的三维模型鲁棒水印方法
Hasan et al. A Novel Compressed Domain Technique of Reversible Steganography
Ishikawa et al. Learnable Cube-based Video Encryption for Privacy-Preserving Action Recognition
CN109660695B (zh) 一种基于遗传模拟退火算法和混沌映射的彩色图像加密方法
Asif et al. High-Capacity Reversible Data Hiding using Deep Learning
Panchikkil et al. A Machine Learning based Reversible Data Hiding Scheme in Encrypted Images using Fibonacci Transform
Choudhary et al. Reversible watermarking scheme for authentication and integrity control in biometric images

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21812204

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21812204

Country of ref document: EP

Kind code of ref document: A1


32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 060723)
