CN114358268A - Software and hardware combined convolutional neural network model intellectual property protection method - Google Patents


Info

Publication number
CN114358268A
Authority
CN
China
Prior art date
Legal status
Granted
Application number
CN202210018007.0A
Other languages
Chinese (zh)
Other versions
CN114358268B (en)
Inventor
张吉良
廖慧芝
伍麟珺
洪庆辉
陈卓俊
关振宇
Current Assignee
Hunan University
Original Assignee
Hunan University
Priority date
Filing date
Publication date
Application filed by Hunan University
Priority to CN202210018007.0A
Publication of CN114358268A
Application granted
Publication of CN114358268B
Legal status: Active


Abstract

The invention discloses a software and hardware combined intellectual property protection method for convolutional neural network models. It constructs a subnet and a non-subnet by retraining the neural network model twice, and modifies the circuit structure of the accelerator's computing unit according to the distribution of the subnet and non-subnet. A key uniquely bound to the hardware is established with a DRAM PUF (a physically unclonable function based on dynamic random-access memory), and different input signals are generated according to the correctness of the key. If the key is correct, the generated input signal makes the accelerator computing-unit circuit select only the subnet weights of the model for computation, so the computation result is correct. Otherwise, the generated input signal makes the circuit select all weights of the model, and the computation result is wrong. Weight selection requires no extra selection time; the DRAM inside the accelerator serves as the PUF that verifies the key, so no separate decryption process is needed and the hardware overhead is extremely low. The method can achieve efficient, low-cost, and highly secure intellectual property protection for neural network model weights.

Description

Software and hardware combined convolutional neural network model intellectual property protection method
Technical Field
The invention relates to the technical field of information, and in particular to a software and hardware combined method for protecting the intellectual property of convolutional neural network models.
Background
CNNs are widely applied in character recognition, face recognition, speech recognition, image classification, and other fields. The success of a CNN model depends directly on a high-quality data set. Many commercial data sets are private because they contain enterprise trade secrets, customer privacy, and the like, and their collection and processing require substantial human and material resources. In addition, training a high-performance CNN model often requires expensive training resources: the accelerators used for training (TPUs, GPUs, FPGAs, etc.) consume considerable energy, and training also takes human effort and time. Parameter tuning, for instance, must be done by experienced engineers applying their own knowledge and experience. Model providers profit by selling CNN usage rights; if the IP of the CNN model is not protected, once a malicious user or attacker eavesdrops on or purchases the CNN model with its weights and biases, the attacker may copy and distribute it to unauthorized end users, which not only reduces profit and market share but may also damage the enterprise's brand reputation. Therefore, the IP of the CNN model needs protection.
IP protection work for neural network models has three main directions: training data set IP protection, accelerator IP protection, and model parameter IP protection. The model provider and the accelerator provider are usually trusted. From the moment the model provider delivers the neural network model to the user, through the user's model inference phase on the accelerator supplied by the accelerator provider, the model weight parameters may be stolen, either while the model is being distributed to the user or during the accelerator inference phase, allowing the thief to use the corresponding neural network model free of charge.
Therefore, to protect the model parameter IP, the model provider must encrypt the model's weights before delivering them to the user, and a user who wants to use the model normally must obtain the correct key from the model provider for decryption. Existing work encrypts models in two main ways. The most common is the first: encrypting the weights with a conventional encryption algorithm. When the encrypted model is to be used, the weights are recovered by decrypting with the key in the model inference phase. Generally, the accelerator used for model inference is considered protected, e.g. by SGX-style schemes, so a runtime attacker cannot directly access or operate inside the accelerator. However, if all weights are decrypted before inference, the decrypted model data can still be stolen by an attacker when the accelerator reads the external weight data; and if the encrypted weights are decrypted inside the accelerator, the accelerator's computation cost increases greatly and its performance drops. The second method obfuscates the weights and exchanges weight positions to protect their intellectual property. Compared with conventional encryption, it reduces the time and hardware cost of encryption and decryption, but a simple obfuscation algorithm is weak, while a complex one again incurs large time and space overhead.
Disclosure of Invention
To solve these problems, the invention provides a software and hardware combined method for protecting the intellectual property of convolutional neural network models. A key uniquely bound to the hardware is established with a DRAM PUF (a physically unclonable function based on dynamic random-access memory), and different input signals are generated according to the correctness of the key. If the key is correct, the generated input signal controls the modified part of the accelerator computing-unit circuit to select only the subnet weights of the model for computation; the result is correct and the model can be used normally. Otherwise, the generated input signal controls the modified circuit to select all weights of the model; the result is wrong and the model cannot be used normally. The method binds the neural network model to the accelerator's specific hardware, effectively improves the security of the convolutional neural network model's weight data, needs no separate decryption process, and has extremely low time and hardware overhead. It can achieve efficient, low-cost, and highly secure protection of neural network model weight intellectual property.
In order to achieve the technical effects, the technical scheme of the invention is as follows:
a method for protecting intellectual property of a convolution neural network model by combining software and hardware comprises the following steps:
step one, a convolutional neural network model provider side obtains a correct training data set D and an incorrect training data set D2;
secondly, a convolutional neural network model provider side adopts a correct training data set D to retrain the convolutional neural network model for the first time, and a subnet part and a non-subnet part are obtained through division, and the weights of the non-subnet part are all set to be 0; then, training the subnet part by using a correct training data set D to obtain weight data of the subnet part and obtain a correct trained convolutional neural network model;
thirdly, the convolutional neural network model provider side adopts an error training data set D2 to retrain the convolutional neural network model for the second time, the weight of the subnet part is kept unchanged in the training process, the weight data of the non-subnet part is obtained by changing the weight of the non-subnet part, and the error trained convolutional neural network model is obtained, so that the error trained convolutional neural network model outputs an error expected result;
step four, dividing off a DRAM region inside the accelerator as the DRAM PUF region; starting the accelerator multiple times to power up the DRAM PUF region and measuring the DRAM power-up initial values over different address ranges, where a DRAM address range serves as a stimulus C of the DRAM PUF and the DRAM initial value obtained for stimulus C is the response R; obtaining a plurality of C-R pairs, CRP for short, as keys;
step five, setting an accelerator calculation module, so that when the input secret key is correct, the accelerator calculates and outputs a correct result by adopting correct weight data of the subnet part of the convolutional neural network model after training and input data of the convolutional neural network model; otherwise, calculating the weight data of the non-subnet part and the subnet part in the convolutional neural network model after error training and the input data of the convolutional neural network model, and outputting an error result;
step six, a user purchases a convolutional neural network model and a corresponding accelerator from a supplier party, and a key is obtained from the convolutional neural network model supplier party, wherein the key comprises a plurality of CRPs;
step seven, the user inputs any CRP among the acquired keys into the accelerator and starts it; the input CRP comprises a stimulus C and its corresponding response R. The stimulus C addresses a DRAM region whose power-up value is read out as the response R', and the similarity between R' and R is computed. If the similarity is not smaller than a preset threshold σ, the key is considered correct, and the accelerator computes with the correct weight data of the subnet part of the trained convolutional neural network model and the model's input data, outputting a correct result; otherwise, the accelerator computes with the weight data of both the subnet and non-subnet parts of the error-trained convolutional neural network model and the model's input data, outputting an erroneous result.
In a further refinement, the erroneous training data set D2 is obtained by re-error labeling of the correct training data set D.
In a further refinement, both the correct training data set D and the incorrect training data set D2 are image data sets.
In a further improvement, the similarity calculation method for R' and R is as follows:
J(R, R') = |R ∩ R'| / |R ∪ R'|
wherein J (R, R ') represents the similarity between R' and R.
In a further improvement, σ is 0.95.
In a further improvement, in step five, when the input key CRP is correct, the selection signal of the accelerator's computing unit is set to 0; otherwise it is set to 1.
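As a minimal sketch of the similarity test between R and R' described above, assuming the responses are bit vectors of DRAM power-up values (the function names are illustrative, not from the patent):

```python
def jaccard(r, r_prime):
    """Jaccard coefficient between two DRAM power-up responses,
    treated as the sets of cell positions that start at 1."""
    set_r = {i for i, b in enumerate(r) if b}
    set_rp = {i for i, b in enumerate(r_prime) if b}
    if not set_r and not set_rp:
        return 1.0  # two all-zero responses are identical
    return len(set_r & set_rp) / len(set_r | set_rp)

def key_is_correct(r, r_prime, sigma=0.95):
    """Key accepted when the similarity reaches the threshold sigma."""
    return jaccard(r, r_prime) >= sigma
```

With σ = 0.95 and a 20-cell response, a single flipped cell (similarity 19/20) is still accepted, which matches the tolerance a noisy PUF readout needs.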
The invention has the beneficial effects that:
the invention constructs the sub-network and the non-sub-network by retraining the neural network model twice, and modifies part of the circuit structure of the accelerator computing unit according to the distribution of the sub-network and the non-sub-network. And establishing a unique key corresponding to hardware by using a DRAM PUF (dynamic random Access memory) PUF (physical random Access memory), generating different input signals according to the correctness of the key, controlling a modification part of an accelerator calculation unit circuit by the generated input signals if the key is correct, selecting the weight of a subnet part of the model to participate in calculation, and enabling the model to be available if the obtained result is a correct result. On the contrary, the generated input signal controls the modifying part of the accelerator computing unit circuit to select all weights of the model to participate in the calculation, the obtained result is an error result, and the model can not be normally used. The key correctness verification of the method adopts lightweight security primitives: the DRAM PUF has the advantages that the safety of the model is bound with the specific accelerator hardware, and the safety is improved. The model uses pre-retraining to construct a subnet, when the model is used, a calculation mode of generating a signal selection weight is utilized according to the verification condition of a secret key, weight selection and weight calculation are synchronous, extra selection time is not needed, DRAM (dynamic random access memory) carried in an accelerator is used as a PUF (physical unclonable function) verification secret key, a specific decryption process is not needed, hardware overhead needed by circuit modification of the accelerator is extremely low, and high-efficiency, low-overhead and high-safety neural network model weight intellectual property protection can be realized.
Drawings
FIG. 1 is a block diagram of the frame of the present invention;
FIG. 2 is a flow chart of the operation of the present invention;
FIG. 3 is an exemplary diagram of subnet training;
FIG. 4 is a diagram showing a circuit modification example of the accelerator;
FIG. 5 is an example diagram of the DRAM PUF.
Detailed Description
The technical solution of the present invention will be described in detail below with reference to the accompanying drawings.
As shown in fig. 1 and fig. 2, a CNN model IP protection method with software and hardware combined mainly includes two major parts:
(1) hardware architecture: modifying the working mode of a calculation unit of the CNN accelerator, and performing circuit level encryption by using CRP of DRAM PUF;
(2) and (3) software architecture: and performing sparse-dense encryption training on the CNN model to modify the CNN weight distribution.
The invention combines the software and hardware architectures to build a neural network IP protection framework in which software and hardware act together. The model provider distributes the key to legitimate users, and a legitimate user runs model prediction with the accelerator and neural network model provided by the model provider. If the user is legitimate, holds the correct key, and runs the model on the specific accelerator, the whole neural network model performs inference normally and gives correct prediction results. Once the user is illegitimate or runs on an unauthorized accelerator, the key error triggers the model's preset fault, for example degraded prediction accuracy or a specific erroneous prediction result, and the model is unavailable.
The specific contents are as follows:
(1) hardware architecture: modifying the working mode of a calculation unit of the CNN accelerator, and performing circuit level encryption by using CRP of DRAM PUF;
the key distributed to the user is CRP of DRAM PUF, and the preset model fault is designed according to the circuit structure of the accelerator. The CRP of the DRAM PUF is obtained by dividing a partial area into a PUF area in a DRAM area inside an accelerator, exciting C into an address range of the DRAM, and responding to a starting value R for dividing the DRAM area when the accelerator is started. After a plurality of measurements, reliable and stable DRAM PUF CRPs are selected and used as keys. As shown in the DRAM PUF example diagram of fig. 5, there are 8 x 8 DRAM arrays, each referred to as 1 DRAM bank.As shown, in a bank, the address range C is selected1,C2,C3Four DRAM cells per region, three regions as three stimuli. Starting the accelerator, powering on the DRAM cell to generate an initial value, C1,C2,C3The distribution of the initial values of the corresponding DRAM areas is marked as R1,R2,R3。C1、R1Is an excitation response pair, and is denoted as CRP1. In the same way, C2、R2Is denoted as CRP2,C3、R3Is denoted as CRP3. When a user predicts input data on an accelerator by using a model, a secret key is input, namely CRP of a DRAM PUF is randomly taken out, an excitation C is input in the area of the DRAM PUF of the accelerator to obtain a response R ', and if the response R' is not obtained, the CRP is used for obtaining the response R
Figure BDA0003460787670000071
The secret key is correct, an input signal 0 is generated, and the signal control calculation unit selects the weight of the model subnet to calculate. If J (R, R') < sigma, the key is determined to be wrong, an input signal 1 is generated, the signal control calculation unit selects all model weights to calculate, a wrong prediction result is obtained, and the model is unavailable. Where J (R, R ') is the Jaccard coefficient, the closer J (R, R ') is to 1, the closer R and R ' are, and σ represents a specific threshold, which ranges from 0 ≦ σ ≦ 1, such as 0.95.
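The verify-then-select flow can be sketched as follows; `read_region` is a hypothetical helper standing in for a fresh power-up read of the DRAM region addressed by C, and the bit-vector response encoding is our assumption:

```python
def select_signal(crp, read_region, sigma=0.95):
    """Generate the computing-unit selection signal from one key CRP.
    crp is a (C, R) pair; read_region(C) returns the fresh power-up
    value R' of the DRAM region addressed by stimulus C."""
    c, r = crp
    r_prime = read_region(c)
    # Jaccard coefficient over the set bits of R and R'
    inter = sum(1 for a, b in zip(r, r_prime) if a and b)
    union = sum(1 for a, b in zip(r, r_prime) if a or b)
    j = inter / union if union else 1.0
    # 0 -> subnet weights only (correct key); 1 -> all weights (wrong key)
    return 0 if j >= sigma else 1
```

In the patent's scheme this signal feeds the multiplexers of the modified computing unit, so verification and weight selection need no separate decryption step.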
Here, the circuit triggering mechanism for calculating whether the accelerator selects the model subnet weight or selects the model total weight is designed as follows:
the accelerator is mainly realized by addition and multiplication when processing convolution operation. In order to achieve partial addition of the weights in the presence of the correct key, it is necessary to insert Multiplexers (MUXs) in the calculation unit of the accelerator to select the weights to be calculated. The computational structure of the addition tree uses MUXs to insert the non-subnet weight computation portion. In a MAC computation structure, finite state machines may be used, which are counted to determine the lanes belonging to a subnet.
First, we explain the accelerated computation mode of a common addition tree, as shown in the circuit-modification example of the accelerator in FIG. 4: a multiply-add tree with four multipliers and three adders, denoted A-M-T. The four multipliers are M1, M2, M3, M4 and the adders are A1, A2, A0. The input vector IM = (i1, i2, i3, i4) is to be combined with the convolution kernel vector KN = (k1, k2, k3, k4), so the result should be W = i1×k1 + i2×k2 + i3×k3 + i4×k4. Here A-M-T is used for the accelerated calculation. The original acceleration mode of the multiply-add tree A-M-T is as follows: the IM vector distributes i1, i2, i3, i4 to multipliers M1, M2, M3, M4, and likewise the convolution kernel vector KN feeds k1, k2, k3, k4 to M1, M2, M3, M4. The four multipliers compute i1×k1, i2×k2, i3×k3, i4×k4 in parallel, obtaining results m1, m2, m3, m4. Next, m1 and m2 are input to adder A1 to give sum1, and m3 and m4 are input to adder A2 to give sum2. Finally, sum1 and sum2 are input to the last adder A0, yielding the result W.
As introduced above, the hardware modification to the addition tree inserts two multiplexers MUX1, MUX2 between the multipliers and the adders A1, A2. Multiplexer MUX1 selects whether a multiplier result, e.g. m2, is transmitted to adder A1: if normal transmission is selected, A1's result is the normal sum1 = m1 + m2; if non-transmission is selected, the result is sum1 = m1. MUX2 works the same way on A2. If MUX1 and MUX2 both choose not to transmit, the final result is W = m1 + m4.
The selection signal of the multiplexers depends on the comparison of J(R, R') with σ: if the Jaccard coefficient is greater than or equal to σ, the selection signal is set to 0, i.e. the signal makes the selector pass only part of the weight inputs; if it is smaller than σ, the selection signal is set to 1 and all weight inputs are selected.
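The gated A-M-T tree of FIG. 4 can be simulated in a few lines. The assignment of m1 and m4 to the subnet follows the W = m1 + m4 example in the text; everything else (function name, tuple encoding) is an illustrative assumption:

```python
def mul_add_tree(im, kn, select):
    """Sketch of the A-M-T multiply-add tree: four multipliers M1..M4,
    adders A1, A2, A0, and two MUXes gating the non-subnet products.
    select = 0 (correct key): only the subnet products m1, m4 pass.
    select = 1 (wrong key): all four products are transmitted."""
    i1, i2, i3, i4 = im
    k1, k2, k3, k4 = kn
    m1, m2, m3, m4 = i1 * k1, i2 * k2, i3 * k3, i4 * k4  # multipliers
    if select == 0:
        sum1, sum2 = m1, m4          # MUX1 blocks m2, MUX2 blocks m3
    else:
        sum1, sum2 = m1 + m2, m3 + m4  # adders A1, A2 see all products
    return sum1 + sum2               # adder A0
```

Note that in the real scheme the two modes use differently trained weight sets, so the "all weights" path produces a deliberately wrong prediction rather than a more complete sum.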
(2) Software architecture: perform sparse-dense retraining on the CNN model to modify the CNN weight distribution.
Here the subnetwork of the CNN must be divided out, i.e. a subnetwork is carved from the original network's weights that can serve in place of the original neural network model. Work on model pruning shows that removing part of a neural network's weights does not noticeably degrade the model's performance, which is why models are routinely pruned for better prediction efficiency. The invention prunes and retrains the neural network to construct the subnet.
The construction of the subnetwork must be designed according to the hardware structure of the neural network accelerator. Each input feature map X has shape (w_x, h_x, c_x), where w_x is the width, h_x the height, and c_x the number of channels of the input image. The parameter tensor W of the CNN model has shape (w, h, c_in, c_out), where w is the width, h the height, and c_in, c_out the numbers of input and output channels.
The CNN model accelerator speeds up computation mainly through parallelization, in two main modes: input-channel parallelism and pixel-level parallelism. In input-channel parallelism, different input channels are computed in parallel and the per-channel results are accumulated in the same addition tree or multiply-accumulate (MAC) unit. In pixel-level parallelism, for a convolution kernel of height h and width w, the weights of the same kernel are computed in parallel in the same addition tree or MAC unit.
The CNN subnet is constructed according to the parallelization mode of the accelerator, as shown in the subnet training example diagram of FIG. 3. If the accelerator is pixel-level parallel, a cross of convolution kernel weights is chosen as the subnet: taking a 3×3 convolution kernel as an example, the 5 weights where the middle row and the middle column intersect form the subnet (filled gray in the figure), and the rest form the non-subnet (filled black). If the accelerator is channel-level parallel, the first i of every n input channels are taken as the subnet and the last n - i as the non-subnet. The choice of i is determined by the encryption performance: performance must be balanced against subnet concealment, keeping i as small as possible. In the figure, the original weights have 6 channels, and the first 1 channel of every 2 channels is taken as the subnet; the subnet part is filled gray and the non-subnet part black.
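A possible sketch of the two subnet-mask shapes just described, a cross mask for pixel-level parallelism and a first-i-of-n mask for channel-level parallelism (the helper names are ours, not the patent's):

```python
def pixel_parallel_mask(h=3, w=3):
    """Cross-shaped subnet mask for an h x w kernel: the middle row and
    middle column are subnet (1), the remaining cells non-subnet (0)."""
    return [[1 if (r == h // 2 or c == w // 2) else 0
             for c in range(w)] for r in range(h)]

def channel_parallel_mask(n_channels, n, i):
    """Channel-level subnet mask: of every n input channels, the first i
    belong to the subnet (1) and the remaining n - i do not (0)."""
    return [1 if (ch % n) < i else 0 for ch in range(n_channels)]
```

For a 3×3 kernel the cross mask selects exactly the 5 weights the text mentions, and `channel_parallel_mask(6, 2, 1)` reproduces the 6-channel, first-1-of-every-2 example from FIG. 3.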
The training process for the subnet is as follows:
the subnet shapes are designed according to the accelerator hardware architecture. And pruning the model according to the subnetworks, and training the subnetworks to enable the subnetworks to achieve the prediction effect of the original model.
Keeping the trained subnet weight unchanged, only changing the non-subnet weight, carrying out error training on the model, reducing the prediction precision or carrying out error classification on the model to make the model unavailable.
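The freeze-then-train idea above reduces to a masked update step. This toy sketch assumes flat weight/gradient lists and plain SGD, neither of which the patent specifies:

```python
def masked_update(weights, grads, mask, lr=0.1):
    """One SGD step that only updates weights where mask == 1.
    Phase 1 (subnet training): mask selects the subnet, so non-subnet
    weights stay at their zero initialization.
    Phase 2 (error training): mask selects the non-subnet weights, so
    the already-trained subnet weights are frozen."""
    return [w - lr * g if m else w
            for w, g, m in zip(weights, grads, mask)]
```

Running phase 2 with the complement of the subnet mask is what guarantees the subnet-only computation path still produces correct predictions while the full-weight path does not.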
The method comprises the following steps:
model provider side:
a first software stage step: preparing data for training a correct training data set D and a backup error training data set D2 of the model;
a second software stage step: retrain the model a first time with training data set D: divide out the subnet, set the non-subnet part to zero, and train the subnet weights so that the subnet model reaches the expected model effect. As in FIG. 3, subnet training is step ①: the subnet portion of the original weights is trained with data set D while the remaining non-subnet weights are held at zero. After the subnet is obtained, the model is retrained a second time with training data set D2; in this training process the trained subnet weights are kept unchanged and only the non-subnet weights are changed, so the trained model is an erroneous, unusable model. As in FIG. 3, training the non-subnet weights is step ②: data set D2 trains the non-subnet weight portion, and throughout this process the subnet weights obtained in step ① remain unchanged.
A first hardware stage step: dividing a DRAM area in an accelerator, using the DRAM area as a DRAM PUF area, starting the accelerator for multiple times to electrify the DRAM PUF area, measuring DRAM starting initial values in different address ranges, wherein the DRAM address range is the excitation C of the DRAM PUF, the DRAM initial value obtained corresponding to the C is a response R, obtaining a plurality of CR pairs, namely CRP for short, and using the CR pairs as a secret key;
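A rough simulation of the CRP enrollment just described, with repeated power-ups and a per-cell majority vote to obtain a stable reference response; the noise model, seeds, and region sizes are purely illustrative:

```python
import random

def enroll_crp(region_seed, n_measurements=9, n_cells=16, noise=0.05):
    """Enroll one CRP for a simulated DRAM PUF region: power the region
    up n_measurements times and majority-vote each cell to obtain a
    stable reference response R."""
    rng = random.Random(12345)            # measurement-noise source
    bias = random.Random(region_seed)     # fixed per-device cell tendency
    stable = [bias.random() < 0.5 for _ in range(n_cells)]
    reads = []
    for _ in range(n_measurements):
        # each read: the cell's stable tendency, occasionally flipped by noise
        reads.append([int(b ^ (rng.random() < noise)) for b in stable])
    # per-cell majority vote across reads -> reliable response bits
    return [int(sum(col) > n_measurements // 2) for col in zip(*reads)]
```

In a real device the per-cell tendency comes from manufacturing variation rather than a seed, which is what makes the resulting CRPs unique to one accelerator.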
a second hardware stage step: modify the circuit structure of the accelerator's computation module, as shown in the accelerator circuit-modification example of FIG. 4. If the key is correct, the signal in the modified module is set to 0 and only part of the weights participate when the computation module computes: the participating weights are exactly the subnet weights, while the non-subnet weights and their products are not passed into the subsequent computation steps, i.e. they do not participate in the computation. If the key is wrong, the signal is set to 1, all weights participate in the computation, and the model's prediction result is wrong, so the model cannot be used normally.
The model uses the user side:
the method comprises the following steps: the model and corresponding accelerator are purchased from a supplier party, and the secret key (CRP) is obtained from the model supplier party.
Step two: use the purchased neural network model and accelerator, e.g. for image classification. Start the accelerator and feed any one of the CRPs into it. The stimulus C determines the DRAM address range to measure; on startup, the DRAM region addressed by C yields a power-up value, which is the response R'. R' and the R corresponding to C are compared with the Jaccard coefficient: if the obtained Jaccard value is close to 1 (not less than the threshold σ), the key is considered correct and the model is used normally; otherwise the key is considered wrong and the model is unavailable.
Different input signals are generated according to the key the user inputs. If the key is correct, the generated input signal controls the modified part of the accelerator computing-unit circuit to select the model's subnet weights for computation, the result is correct, and the model is used normally. Otherwise, the input signal makes the circuit select all model weights, the result is wrong, and the model is unavailable. The method effectively improves the security of the convolutional neural network model's weight data, requires no separate decryption process, and incurs little hardware overhead. It achieves efficient, low-cost, and highly secure protection of neural network model weight intellectual property.
The above description is only one specific embodiment of the invention, but the design concept of the invention is not limited thereto; any insubstantial modification of the invention using this concept falls within the scope of the invention.

Claims (6)

1. A method for protecting intellectual property of a convolution neural network model by combining software and hardware is characterized by comprising the following steps:
step one, the convolutional neural network model provider obtains a correct training data set D and an erroneous training data set D2;
step two, the provider retrains the convolutional neural network model a first time on the correct training data set D, dividing the model into a subnet part and a non-subnet part and setting all weights of the non-subnet part to 0; the subnet part is then trained on D to obtain the subnet weight data, yielding the correctly trained convolutional neural network model;
step three, the provider retrains the convolutional neural network model a second time on the erroneous training data set D2; during this training the subnet weights are kept fixed and only the non-subnet weights are updated, yielding the non-subnet weight data and an erroneously trained convolutional neural network model whose outputs are incorrect;
step four, a DRAM region in the accelerator is partitioned and used as the DRAM PUF region; the accelerator is powered on and the DRAM start-up initial values in different address ranges are measured; a DRAM address range serves as a challenge C of the DRAM PUF, and the start-up value obtained for challenge C is the response R; a plurality of challenge-response pairs (CRPs) are collected and used as the key;
step five, the accelerator calculation module is configured so that, when the input key is correct, the accelerator computes with the correct subnet weight data of the trained convolutional neural network model and the model input data and outputs a correct result; otherwise, it computes with both the subnet and non-subnet weight data of the erroneously trained convolutional neural network model and the model input data and outputs an erroneous result;
step six, a user purchases the convolutional neural network model and the corresponding accelerator from the provider and obtains the key, comprising a plurality of CRPs, from the provider;
step seven, the user inputs any CRP of the obtained key into the accelerator and starts it; the input CRP comprises a challenge C and the corresponding response R; the accelerator reads the start-up value R' of the DRAM region addressed by C and computes the similarity between R' and R; if the similarity is not smaller than a preset threshold σ, the key is considered correct and the accelerator computes with the correct subnet weight data of the trained convolutional neural network model and the model input data, outputting a correct result; otherwise, the accelerator computes with the subnet and non-subnet weight data of the erroneously trained convolutional neural network model and the model input data, outputting an erroneous result.
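The weight-selection behavior of steps five and seven can be illustrated with a small software sketch (the mask, weight values, and layer shape below are illustrative stand-ins, not the accelerator circuit of the invention):

```python
import numpy as np

def effective_weights(key_correct, w_subnet, w_nonsubnet, mask):
    """Select the weights that take part in the computation.

    key_correct -> only subnet weights are used (non-subnet positions
    stay 0), so the output matches the correctly trained model;
    otherwise the wrongly trained non-subnet weights are mixed in and
    the output is erroneous."""
    if key_correct:
        return w_subnet * mask
    return w_subnet * mask + w_nonsubnet * (1 - mask)

# Toy 4x4 layer: a checkerboard subnet mask and fixed weights.
mask = np.array([[1., 0., 1., 0.],
                 [0., 1., 0., 1.],
                 [1., 0., 1., 0.],
                 [0., 1., 0., 1.]])
w_sub = 0.5 * mask          # subnet weights from the first retraining
w_non = 1.0 * (1 - mask)    # non-subnet weights from the second retraining
x = np.ones(4)              # model input data

y_good = x @ effective_weights(True,  w_sub, w_non, mask)   # -> [1. 1. 1. 1.]
y_bad  = x @ effective_weights(False, w_sub, w_non, mask)   # -> [3. 3. 3. 3.]
```

Because the non-subnet positions of the correct weight matrix are already 0, selecting the subnet weights adds no extra selection time on the data path; the wrong path simply stops masking them out.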
2. The method of claim 1, wherein the erroneous training data set D2 is obtained by re-labeling the correct training data set D with incorrect labels.
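A minimal sketch of building D2 by re-labeling D as in claim 2 (the label-shifting rule below is an illustrative assumption; any re-labeling that makes every label wrong would serve):

```python
import random

def make_erroneous_dataset(dataset, num_classes, seed=0):
    """Build D2 from D by replacing each label with a different,
    randomly chosen wrong label from the same label space."""
    rng = random.Random(seed)
    d2 = []
    for x, y in dataset:
        wrong = rng.randrange(num_classes - 1)
        if wrong >= y:        # skip over the correct label
            wrong += 1
        d2.append((x, wrong))
    return d2

# Toy stand-in for an image data set: (sample, label) pairs.
D = [("img0", 0), ("img1", 1), ("img2", 2)]
D2 = make_erroneous_dataset(D, num_classes=3)
```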
3. The method of claim 1, wherein the correct training data set D and the erroneous training data set D2 are both image data sets.
4. The method of claim 1, wherein the similarity between R' and R is calculated as

J(R, R') = |R ∩ R'| / |R ∪ R'|

wherein J(R, R') represents the similarity between R' and R.
5. The method of claim 1, wherein σ is 0.95.
6. The method of claim 1, wherein in step five, when the input CRP of the key is correct, the selection signal of the accelerator computing unit is set to 0; otherwise, the selection signal of the accelerator computing unit is set to 1.
CN202210018007.0A 2022-01-07 2022-01-07 Software and hardware combined convolutional neural network model intellectual property protection method Active CN114358268B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210018007.0A CN114358268B (en) 2022-01-07 2022-01-07 Software and hardware combined convolutional neural network model intellectual property protection method

Publications (2)

Publication Number Publication Date
CN114358268A true CN114358268A (en) 2022-04-15
CN114358268B CN114358268B (en) 2024-04-19

Family

ID=81106842

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210018007.0A Active CN114358268B (en) 2022-01-07 2022-01-07 Software and hardware combined convolutional neural network model intellectual property protection method

Country Status (1)

Country Link
CN (1) CN114358268B (en)

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110113392A1 (en) * 2009-11-09 2011-05-12 Rajat Subhra Chakraborty Protection of intellectual property (ip) cores through a design flow
US20150195088A1 (en) * 2014-01-03 2015-07-09 William Marsh Rice University PUF Authentication and Key-Exchange by Substring Matching
US20180262331A1 (en) * 2017-03-07 2018-09-13 Fujitsu Limited Key generation device and key generation method
WO2018171663A1 (en) * 2017-03-24 2018-09-27 中国科学院计算技术研究所 Weight management method and system for neural network processing, and neural network processor
CN109002883A (en) * 2018-07-04 2018-12-14 中国科学院计算技术研究所 Convolutional neural networks model computing device and calculation method
WO2020012061A1 (en) * 2018-07-12 2020-01-16 Nokia Technologies Oy Watermark embedding techniques for neural networks and their use
US20200193292A1 (en) * 2018-12-04 2020-06-18 Jinan University Auditable privacy protection deep learning platform construction method based on block chain incentive mechanism
CN112272094A (en) * 2020-10-23 2021-01-26 国网江苏省电力有限公司信息通信分公司 Internet of things equipment identity authentication method, system and storage medium based on PUF (physical unclonable function) and CPK (compact public key) algorithm
CN113361682A (en) * 2021-05-08 2021-09-07 南京理工大学 Reconfigurable neural network training with IP protection and using method

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
JIALONG ZHANG ET AL: "Protecting Intellectual Property of Deep Neural Networks with Watermarking", ASIACCS '18: Proceedings of the 2018 on Asia Conference on Computer and Communications Security, 31 May 2018 (2018-05-31) *
ZHANG Jiliang et al.: "A chaos-based publicly verifiable watermark detection scheme for FPGA IP cores", Science China: Information Sciences, vol. 43, no. 09, 31 December 2013 (2013-12-31) *
MIAO Fengjuan; WANG Yiming; TAO Bairui: "Design of a convolutional neural network accelerator based on a software-defined system-on-programmable-chip", Science Technology and Engineering, no. 34, 8 December 2019 (2019-12-08) *

Also Published As

Publication number Publication date
CN114358268B (en) 2024-04-19

Similar Documents

Publication Publication Date Title
CN110995409B (en) Mimicry defense arbitration method and system based on partial homomorphic encryption algorithm
Manikandan et al. PRIVACY PRESERVING DATA MINING USING THRESHOLD BASED FUZZY CMEANS CLUSTERING.
US20190356666A1 (en) Generating Cryptographic Function Parameters From Compact Source Code
US10467389B2 (en) Secret shared random access machine
CN111400766B (en) Method and device for multi-party joint dimension reduction processing aiming at private data
CN1413320B Method of authenticating anonymous users while reducing potential for 'middle man' fraud
US11316665B2 (en) Generating cryptographic function parameters based on an observed astronomical event
CN112883387A (en) Privacy protection method for machine-learning-oriented whole process
US10079675B2 (en) Generating cryptographic function parameters from a puzzle
CN115455476A (en) Longitudinal federal learning privacy protection method and system based on multi-key homomorphic encryption
CN115276947A (en) Privacy data processing method, device, system and storage medium
Soykan et al. A survey and guideline on privacy enhancing technologies for collaborative machine learning
CN115481441A (en) Difference privacy protection method and device for federal learning
Pereteanu et al. Split HE: Fast secure inference combining split learning and homomorphic encryption
Liu et al. DHSA: efficient doubly homomorphic secure aggregation for cross-silo federated learning
CN114036581A (en) Privacy calculation method based on neural network model
CN116170142B (en) Distributed collaborative decryption method, device and storage medium
CN117134945A (en) Data processing method, system, device, computer equipment and storage medium
CN116132017B (en) Method and system for accelerating privacy protection machine learning reasoning
CN114358268B (en) Software and hardware combined convolutional neural network model intellectual property protection method
CN112995189B (en) Method for publicly verifying matrix multiplication correctness based on privacy protection
Wu et al. Efficient privacy-preserving federated learning for resource-constrained edge devices
CN108632033B (en) Homomorphic encryption method based on random weighted unitary matrix in outsourcing calculation
Wang et al. A publicly verifiable outsourcing matrix computation scheme based on smart contracts
Liu et al. Verifiable privacy-preserving neural network on encrypted data

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant