CN110929859B - Memristor computing system security enhancement method - Google Patents

Memristor computing system security enhancement method

Info

Publication number
CN110929859B
CN110929859B (application CN201911015821.1A)
Authority
CN
China
Prior art keywords
rram
stealing
positive
neural network
negative
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201911015821.1A
Other languages
Chinese (zh)
Other versions
CN110929859A (en
Inventor
邹敏辉
王添
张欢欢
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing University of Science and Technology
Original Assignee
Nanjing University of Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing University of Science and Technology filed Critical Nanjing University of Science and Technology
Priority to CN201911015821.1A priority Critical patent/CN110929859B/en
Publication of CN110929859A publication Critical patent/CN110929859A/en
Application granted granted Critical
Publication of CN110929859B publication Critical patent/CN110929859B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/06Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons
    • G06N3/063Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons using electronic means
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/70Protecting specific internal or peripheral components, in which the protection of a component leads to protection of the entire computer
    • G06F21/71Protecting specific internal or peripheral components, in which the protection of a component leads to protection of the entire computer to assure secure computing or processing of information
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02DCLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Mathematical Physics (AREA)
  • General Engineering & Computer Science (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Health & Medical Sciences (AREA)
  • Software Systems (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Computing Systems (AREA)
  • Molecular Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Computational Linguistics (AREA)
  • Computer Hardware Design (AREA)
  • Computer Security & Cryptography (AREA)
  • Neurology (AREA)
  • Storage Device Security (AREA)

Abstract

The invention discloses a security enhancement method for an RRAM computing system, used to prevent attacks that steal the neural network weights stored in an RRAM crossbar. First, the method of mapping neural network weights onto the RRAM crossbar and two methods of stealing the weights from the crossbar are analyzed; then, a prevention method is provided for each of the two stealing methods; finally, the hardware overhead of the second prevention method is optimized using two heuristic algorithms. The method is simple to apply, highly practical, and improves the security of RRAM computing systems.

Description

Memristor computing system security enhancement method
Technical Field
The invention belongs to the field of emerging memristor devices, and particularly relates to a method for enhancing the security of a memristor computing system.
Background
Neural networks (NN) have enjoyed great success in visual object recognition and natural language processing, but such data-intensive applications require significant data movement between compute units and memory. Emerging memristor (RRAM) computing systems show great potential for avoiding this data movement by performing matrix-vector multiplication directly in memory. However, the non-volatility of RRAM devices means that the neural network weights stored in the crossbar can be stolen. An attacker who steals the weights can extract the trained neural network model from them, which greatly damages the intellectual property of the model designer; worse, malicious use of an extracted model may cause social harm.
The existing solution is to encrypt the NN weights and decrypt them each time they are used. However, these methods of encrypting/decrypting NN parameters inevitably require frequent write operations to the RRAM devices. Currently, an RRAM device can sustain only about 10^10 write cycles, so an NN weight encryption/decryption scheme shortens the lifetime of the RRAM computing system. In addition, frequent writing to RRAM devices consumes considerable energy, introduces long delays, and degrades system performance.
Disclosure of Invention
The invention aims to provide a security enhancement method for RRAM computing systems that does not affect the service life of the system and does not introduce extra RRAM write power consumption or delay.
The technical solution realizing the purpose of the invention is as follows: a method for enhancing the security of RRAM computing systems by obfuscating crossbar connections, comprising the steps of:
step 1, evaluating the security of the RRAM crossbar mapping method and analyzing two methods of data stealing; go to step 2;
step 2, for the two stealing methods, using two different prevention methods respectively to enhance the security of the RRAM computing system; go to step 3;
and step 3, optimizing the hardware overhead of the obfuscation module using two heuristic algorithms.
Compared with the prior art, the invention has the remarkable advantages that:
(1) No write operations to the RRAM cells of the RRAM computing system are involved, so the lifetime of the system is not affected.
(2) Because no extra RRAM writes are required, no additional RRAM write power consumption or delay is introduced.
Drawings
FIG. 1 is a flow chart of a method for security enhancement of a RRAM computing system in accordance with the present invention.
Fig. 2 is a schematic diagram of performing matrix vector multiplication in a RRAM crossbar and inserting a row obfuscation module between a positive RRAM crossbar and a negative RRAM crossbar.
FIG. 3 is a schematic diagram of different implementations of a row obfuscation module, where (a) is an implementation diagram of connecting m inputs and m outputs at a time, (b) is an implementation diagram of connecting 1 input and 1 output at a time, and (c) is a diagram of combining (a) and (b) to connect x inputs and x outputs at a time.
Fig. 4 shows the classification accuracy of the NN model extracted without the correct key when only one layer is protected.
Detailed Description
The invention discloses a security enhancement method for an RRAM computing system, used to prevent attacks that steal the network weights. First, the methods of stealing the neural network weights are analyzed; then, a security enhancement technique based on obfuscating the row connections between the positive and negative crossbars is proposed; finally, the hardware overhead of the obfuscation module is optimized using two heuristic algorithms.
The main components of a neural network (NN) are the fully connected layer (FC) and the convolutional layer (Conv). The computation of the FC layer is a matrix-vector multiplication (MVM) described as:
y_j = Σ_{i=1..m} x_i · w_{i,j}
where x_i (i ∈ [1, m]) is the input feature map, m is the number of rows of the neural network weight matrix (m > 1), y_j (j ∈ [1, n]) is the output activation, n is the number of columns of the weight matrix (n > 1), and w_{i,j} is the element in row i, column j of the weight matrix. The computation of the Conv layer is slightly different but can be converted to an MVM.
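The FC-layer computation above can be sketched in plain Python (the dimensions and values here are illustrative, not taken from the patent):

```python
def fc_layer(x, w):
    """Matrix-vector multiplication: x has length m, w is an m x n weight matrix."""
    m, n = len(w), len(w[0])
    assert len(x) == m
    # y_j = sum over i of x_i * w[i][j]
    return [sum(x[i] * w[i][j] for i in range(m)) for j in range(n)]

x = [1.0, 2.0, -1.0]          # input feature map (m = 3)
w = [[0.5, -1.0],
     [1.0,  0.0],
     [2.0,  1.0]]             # weight matrix (3 x 2)
print(fc_layer(x, w))         # [0.5, -2.0]
```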
As shown in fig. 2, in an RRAM computing system the input is the voltage (V) on the crossbar word lines (WL) and the output is the accumulated current (I) on the bit lines (BL). The input voltage, the crossbar cell conductance and the output current obey Kirchhoff's law, which can be regarded as an MVM operation:
I_j = Σ_{i=1..m} V_i · g_{i,j}
where g_{i,j} is the conductance of the cell in row i, column j of the crossbar.
However, a neural network weight w_ij cannot be mapped directly to a conductance g_ij, because w_ij may be positive, negative or zero, while an RRAM crossbar conductance can only be positive. To solve this problem, a pair of crossbars, a positive crossbar and a negative crossbar, is used to represent the weight matrix. The input voltage is inverted before being applied to the negative crossbar, and the BL currents of the positive and negative crossbars are then added to obtain the MVM result, as shown in fig. 2.
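The positive/negative crossbar pair can be illustrated with a small sketch. The split g+ = max(w, 0), g- = max(-w, 0) used here is one simple non-negative encoding chosen for illustration, not necessarily the patent's mapping:

```python
def crossbar_pair_mvm(v, g_pos, g_neg):
    """Positive crossbar sees voltages v; negative crossbar sees -v; BL currents add."""
    m, n = len(g_pos), len(g_pos[0])
    i_pos = [sum(v[i] * g_pos[i][j] for i in range(m)) for j in range(n)]
    i_neg = [sum(-v[i] * g_neg[i][j] for i in range(m)) for j in range(n)]
    return [i_pos[j] + i_neg[j] for j in range(n)]

# Encode w = g_pos - g_neg elementwise (only non-negative conductances allowed).
w = [[0.3, -0.7], [-0.2, 0.5]]
g_pos = [[max(wij, 0.0) for wij in row] for row in w]
g_neg = [[max(-wij, 0.0) for wij in row] for row in w]
v = [1.0, 2.0]
print(crossbar_pair_mvm(v, g_pos, g_neg))   # matches the plain MVM of v with w
```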
RRAM has a gradual reset process, which means that RRAM devices can be continuously tuned from a Low Resistance State (LRS) to a High Resistance State (HRS). Thus, an ideal RRAM device can be tuned to any conductance state between LRS and HRS.
With reference to fig. 1, the method for enhancing security of RRAM computing system according to the present invention includes the following steps:
step 1, evaluating the safety of the RRAM cross switch mapping method, and analyzing a data stealing method.
And 2, aiming at the two stealing methods, respectively using two different prevention methods to enhance the safety of the RRAM computing system.
And 3, optimizing the hardware overhead of the line confusion module by using two heuristic algorithms.
Further, step 1, evaluating the security of the RRAM crossbar mapping method and analyzing the data stealing methods, is specifically as follows:
step 1.1, assume the maximum conductance of an RRAM cell is G_on and the minimum conductance is G_off; each neural network weight matrix is represented by a positive RRAM crossbar connected to positive voltages and a negative RRAM crossbar connected to negative voltages; the element w_ij in row i, column j of the neural network weight matrix is represented by the cell conductance g+_ij in row i, column j of the positive RRAM crossbar and the cell conductance g-_ij in row i, column j of the negative RRAM crossbar, giving
w_ij ∝ g+_ij - g-_ij
step 1.2, mapping method 1 of the RRAM device (weights normalized to [-1, 1]):
g+_ij = (G_on + G_off)/2 + w_ij · (G_on - G_off)/2
g-_ij = (G_on + G_off)/2 - w_ij · (G_on - G_off)/2
mapping method 2 of the RRAM device:
g+_ij = G_off + w_ij · (G_on - G_off), g-_ij = G_off, for w_ij ≥ 0
g+_ij = G_off, g-_ij = G_off - w_ij · (G_on - G_off), for w_ij < 0
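A sketch of two linear mapping schemes consistent with the biases the description states for mapping methods 1 and 2 (the normalization of weights to [-1, 1], the G_on/G_off values, and the exact scaling are assumptions for illustration; the patent's equation images give the authoritative forms):

```python
G_ON, G_OFF = 1.0, 0.001

def map1(w):
    """Both cells biased at (G_on + G_off)/2; the weight is split symmetrically."""
    bias, half = (G_ON + G_OFF) / 2, (G_ON - G_OFF) / 2
    return bias + w * half, bias - w * half

def map2(w):
    """Both cells start at G_off; the weight's sign picks which crossbar encodes |w|."""
    span = G_ON - G_OFF
    if w >= 0:
        return G_OFF + w * span, G_OFF
    return G_OFF, G_OFF - w * span

for w in (-1.0, -0.25, 0.0, 0.6, 1.0):
    for mapper in (map1, map2):
        gp, gn = mapper(w)
        # Conductances stay physical, and the pair difference encodes the weight.
        assert G_OFF - 1e-12 <= gp <= G_ON and G_OFF - 1e-12 <= gn <= G_ON
        assert abs((gp - gn) - w * (G_ON - G_OFF)) < 1e-9
print("both mappings keep conductances in range and encode w in g+ - g-")
```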
step 1.3, according to the two mapping methods, there are two stealing modes: stealing method 1, which accesses one crossbar of each positive/negative crossbar pair, and stealing method 2, which accesses both crossbars of each pair; the corresponding w_ij is then deduced.
Further, the use of two different prevention methods to enhance the security of the RRAM computing system described in step 2 is as follows:
step 2.1, for stealing method 1 of step 1.3, exploring the bias space and applying a different bias to each matrix element;
step 2.2, for stealing method 2 of step 1.3, hiding the row connections of each positive/negative crossbar pair.
Further, step 3 optimizes the hardware overhead of the obfuscation module using two heuristic algorithms, as follows:
step 3.1, optimization technique 1: reducing the number of multiplexers by adding a layer of inverse multiplexers;
step 3.2, optimization technique 2: protecting only part of the layers of the neural network.
The invention is described in further detail below with reference to the figures and the embodiments.
Examples
With reference to fig. 1, the present embodiment discloses a method for enhancing security of an RRAM computing system, which includes the following specific steps:
step 1, analysis of a stealing method: the security of the RRAM crossbar mapping method is evaluated, and two methods of data stealing are analyzed.
Assume the maximum conductance of an RRAM cell is G_on and the minimum conductance is G_off. We use g+_ij to denote the conductance of the cell connected to a positive voltage and g-_ij to denote the conductance of the cell connected to a negative voltage. We can then get:
w_ij ∝ g+_ij - g-_ij
According to the literature, there are two main mapping methods for analog RRAM devices. Mapping method 1 (weights normalized to [-1, 1]) is:
g+_ij = (G_on + G_off)/2 + w_ij · (G_on - G_off)/2
g-_ij = (G_on + G_off)/2 - w_ij · (G_on - G_off)/2
where all RRAM cells are initialized to the bias (G_on + G_off)/2 and then adjusted accordingly.
Mapping method 2 is:
g+_ij = G_off + w_ij · (G_on - G_off), g-_ij = G_off, for w_ij ≥ 0
g+_ij = G_off, g-_ij = G_off - w_ij · (G_on - G_off), for w_ij < 0
Similarly, all RRAM cells are initialized to the bias G_off and then adjusted accordingly.
Stealing method 1: accessing one crossbar of each positive/negative crossbar pair. In both mapping methods the RRAM conductance varies linearly with the weight value, so an attacker can easily deduce w_ij from g+_ij or g-_ij alone.
Stealing method 2: accessing both crossbars of each positive/negative crossbar pair. The attacker can obtain both g+_ij and g-_ij, and since w_ij ∝ g+_ij - g-_ij, it is easy to deduce the corresponding w_ij by simple subtraction.
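The two theft paths can be illustrated under an assumed linear mapping of the mapping-method-2 form (values and scaling are illustrative):

```python
G_ON, G_OFF = 1.0, 0.001
SPAN = G_ON - G_OFF

def map2(w):
    """Assumed linear mapping: positive weights go to the positive crossbar."""
    if w >= 0:
        return G_OFF + w * SPAN, G_OFF
    return G_OFF, G_OFF - w * SPAN

w_true = 0.42
gp, gn = map2(w_true)

# Stealing method 1: read only the positive crossbar and invert the linear map.
w_steal1 = (gp - G_OFF) / SPAN
# Stealing method 2: read both crossbars and subtract.
w_steal2 = (gp - gn) / SPAN

assert abs(w_steal1 - w_true) < 1e-9
assert abs(w_steal2 - w_true) < 1e-9
```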
And 2, aiming at the two stealing methods, respectively using two different prevention methods to enhance the safety of the RRAM computing system.
Prevention method for stealing method 1: explore the bias space and apply a different bias to each matrix element.
To counter stealing method 1, we first show that mapping method 1 and mapping method 2 admit a large space of bias values. By applying a different bias to each matrix element, stealing method 1 can then no longer recover the weight matrix from a single crossbar of a positive/negative pair.
1) Bias space exploration: suppose that
G on =ηG off (5)
Wherein eta is G on /G off The ratio of (a), eta > 1000.
Theoretically, an ideal RRAM cell can be tuned to any conductance state between G_off and G_on. The weight mapping of an RRAM device may then be described as:
g+_ij = b_1 + x_1 · w_ij, g-_ij = b_1, for w_ij ≥ 0
g+_ij = b_2, g-_ij = b_2 - x_2 · w_ij, for w_ij < 0 (6)
where for w ≥ 0 the bias value b_1 ∈ [G_off, G_on]; for w < 0 the bias value b_2 ∈ [G_off, G_on]; x_1 and x_2 are intermediate variables.
To ensure that g+_ij and g-_ij are continuous over the weight range, b_1 and b_2 must satisfy the matching conditions at w_ij = 0, and thus we can get b_1 = b_2. Let b_1 = λ · G_off, where λ (λ ∈ [1, η]) is a bias scaling value. As can be seen from equation (6), g+_ij and g-_ij increase or decrease monotonically with w_ij.
Therefore, the RRAM reaches its maximum or minimum conductance only when w_ij is at either end of its value range (assuming a symmetric range, w_min = -w_max), that is,
g+_ij = G_on at w_ij = w_max, g-_ij = G_on at w_ij = w_min (7)
As can be seen from equations (5), (6) and (7),
x_1 = x_2 = (η - λ) · G_off / w_max (8)
and formula (6) can be rewritten as
g+_ij = λ·G_off + (η - λ)·G_off · w_ij / w_max, g-_ij = λ·G_off, for w_ij ≥ 0
g+_ij = λ·G_off, g-_ij = λ·G_off - (η - λ)·G_off · w_ij / w_max, for w_ij < 0 (9)
Since an RRAM cell can be tuned to any conductance value between G_off and G_on, the bias scaling value λ may be any random value between 1 and η. Although the precision of the peripheral circuit limits the number of distinct bias values b_1 (or b_2), the value range of λ remains quite large.
2) Applying a random bias to the weight matrix: based on the above value range of the bias b_1 (or b_2), the invention proposes to randomize the bias chosen for each element of the weight matrix. Let the number of bias choices be N_b and the number of cells in the crossbar be N_c. The time complexity of inferring the corresponding matrix weights from an analog crossbar is then
O(N_b^N_c)
Suppose N_b = 1000 and N_c = 256; then the number of brute-force trials needed to recover the right weight matrix from an analog RRAM crossbar is 1000^256.
Therefore, the present invention can defend against the stealing method 1 by randomly assigning a bias to each weight.
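The randomized-bias defence can be sketched as follows; the mapping form g+ = b + (G_on - b)·w with b = λ·G_off, and the fixed secret λ standing in for an HRNG output, are assumptions for illustration:

```python
import math

ETA = 1000.0
G_OFF = 0.001
G_ON = ETA * G_OFF

def map_with_bias(w, lam):
    """Illustrative biased mapping for w >= 0: g+ = b + (G_on - b) * w, b = lam * G_off."""
    b = lam * G_OFF
    return b + (G_ON - b) * w

w_true = 0.42
lam = 250.0                         # secret per-element bias scaling (HRNG stand-in)
gp = map_with_bias(w_true, lam)

# An attacker who assumes the default bias lam = 1 recovers a wrong weight.
w_guess = (gp - G_OFF) / (G_ON - G_OFF)
assert abs(w_guess - w_true) > 0.1

# Brute force must try N_b bias choices for each of N_c cells: N_b ** N_c trials.
search_bits = 256 * math.log2(1000)
print(round(search_bits))           # 2551-bit search space for N_b = 1000, N_c = 256
```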
A prevention method of the stealing method 2 comprises the following steps: hiding the row connections of each pair of positive/negative crossbars.
By accessing both the positive and the negative crossbar and performing a subtraction, an attacker can easily deduce the stored neural network weights. In this section we propose to hide the connections between the positive and negative crossbars, so that the weights of the neural network cannot be inferred even if an attacker can access both crossbars simultaneously.
For example, Table I compares the classification accuracy of the wrongly extracted NN model and the original NN model after prevention method 1 and prevention method 2 are applied simultaneously. The NN models are LeNet, AlexNet and VGG16.
TABLE I Comparison of classification accuracy of the original NN model and the wrongly extracted NN model
NN model    Original NN model    Wrongly extracted NN model
LeNet       65.13%               11.82%
AlexNet     73.57%               9.81%
VGG16       90.07%               9.61%
Step 3, optimizing hardware overhead of the confusion module by using two heuristic algorithms, which is specifically as follows:
to hide the crossbar row connections, we have designed a row connection obfuscation module based on multiplexers (as shown in fig. 2 and 3) and inserted it between the positive crossbar and the negative crossbar. The input and output of the inserted alias block are both analogue signals, replaced by analogue switches consisting of a pair of MOSFET transistors, rather than using digital multiplexers, making fig. 3(a) suitable for our case, the alias block hiding the connections between the crossbar rows. Unless specific connection relationships (keys) are known, the accuracy of the neural network is greatly reduced by directly using the wrongly extracted neural network weights stored in the RRAM for calculation.
For an m:m obfuscation module, there are m! possible combinations of connections between inputs and outputs. Taking m = 64 as an example, m! exceeds 2^295, which is very difficult to break by brute-force attack.
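A quick arithmetic check of the key-space claim for m = 64:

```python
import math

m = 64
bits = math.log2(math.factorial(m))   # log2 of the number of possible wirings
assert bits > 295                     # 64! indeed exceeds 2^295
print(int(bits))                      # 295
```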
However, implementing the obfuscation module of fig. 3(a) requires m m:1 multiplexers. Applying such an obfuscation module to every crossbar pair in an RRAM computing system would incur non-negligible additional area overhead. To address this problem, we propose two techniques to reduce the overhead of the multiplexer-based obfuscation module:
(1) optimization technique 1: the number of multiplexers is reduced by adding a layer of inverse multiplexers.
To reduce the area cost of the obfuscation module in fig. 3(a), we can use one m:1 multiplexer and one 1:m inverse multiplexer, as shown in fig. 3(b). However, with this solution only one row of the positive crossbar and its corresponding row of the negative crossbar participate in the computation in each clock cycle, which results in an unacceptable delay cost.
To balance the area/delay overhead of figs. 3(a) and 3(b), we combine the two solutions, as shown in fig. 3(c). Assuming m = x·k (x, k ∈ Z), the combined solution uses 2x k:1 multiplexers and x 1:k inverse multiplexers. The key space of an m:m combined obfuscation module is x!·(k!)^x. In this scheme, x rows of the positive and negative crossbars are computed at a time. However, in an RRAM computing system only part of the WLs are activated at a time anyway, due to the current limitations of the crossbar BLs. Thus, if x equals the number of WLs enabled in each cycle, the combined solution does not incur any additional delay overhead.
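The combined module's key space x!·(k!)^x can be checked numerically; the 256:256 case with x = k = 16 below mirrors the worked example, though the exact split is an assumption:

```python
import math

def combined_keyspace_bits(x, k):
    """log2 of x! * (k!)^x: permute x groups, and k rows within each group."""
    return math.log2(math.factorial(x)) + x * math.log2(math.factorial(k))

print(round(combined_keyspace_bits(16, 16)))   # 752 -- still far beyond brute force
```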
For example, Table II compares the area/delay overhead of these three implementations of a 256:256 row obfuscation module. Nominally, fig. 3(c) has 16× the delay of fig. 3(a). However, assuming 16 WLs are turned on in each clock cycle (so x = 16), the additional delay overhead incurred by fig. 3(c) is the same as that of fig. 3(a), while the area overhead of fig. 3(c) is only 1.10% of that of fig. 3(a).
Table II Delay/area overhead comparison of the implementations in fig. 3
Implementation      FIG. 3(a)   FIG. 3(b)   FIG. 3(c)
Normalized delay    1×          256×        16×
Normalized area     1×          0.0078×     0.0110×
(2) Optimization technique 2: only part of the layers of the neural network are protected.
To further reduce the area overhead of the row obfuscation module while preserving the security of the system, this patent studies how sensitive the neural network classification accuracy is to obfuscating the crossbar rows corresponding to each layer. In the experiment, the crossbar connections corresponding to each layer are obfuscated in turn, and the classification accuracy of the wrongly extracted neural network model is tested; low classification accuracy indicates that the obfuscation method is effective. The results are shown in fig. 4; three neural networks, LeNet, AlexNet and VGG16, were tested. Obfuscating different layers has a different impact on classification accuracy. A layer whose obfuscation yields lower accuracy is called a significant layer, and the most significant layer is the MSL. Layers close to the model input are significant, because errors caused by the protection method propagate through the remaining layers: for example, when we obfuscate the row connections of the first layer's crossbar, the classification accuracy of every wrongly extracted NN is below 45%. In contrast, obfuscation has little effect on layers near the model output, and obfuscating only the FC layers hardly affects the model's classification accuracy.
For example, Table III shows the classification accuracy of NN models wrongly extracted by an attacker when only part of the layers of the neural network are obfuscated. Obfuscating only the two most significant layers of an NN model already reduces the classification accuracy of the wrongly extracted model to below 17%.
TABLE III Classification accuracy of wrongly extracted NN models when only partial layers are protected
NN model    2 MSLs     3 MSLs
LeNet       14.89%     13.08%
AlexNet     10.01%     9.34%
VGG16       16.44%     10.44%
Using optimization technique 1 and optimization technique 2 together, the hardware overhead of the row obfuscation module can be significantly reduced. For example, Table IV shows the proportion by which the hardware overhead is reduced, compared to using no optimization technique, when the classification accuracy of the wrongly extracted NN model is reduced to a threshold α. After applying the optimization techniques, the hardware overhead of the row obfuscation module is reduced by at most 97.45% and at least 33.33%.
TABLE IV Proportion of hardware overhead reduction compared to no optimization
NN model    α = 14%    α = 17%    α = 20%
LeNet       33.33%     50.00%     50.00%
AlexNet     95.49%     95.49%     95.49%
VGG16       96.17%     97.45%     97.45%
For example, Table V shows the hardware area of the row obfuscation module, after applying the optimization techniques, as a proportion of the RRAM crossbar area in the RRAM computing system, when the classification accuracy of the wrongly extracted NN model is reduced to the threshold α. It can be seen that our method incurs very little hardware overhead.
Table V Hardware area of the row obfuscation module as a proportion of the RRAM crossbar, with optimization
NN model    α = 14%     α = 17%     α = 20%
LeNet       1.6220%     1.2166%     1.2166%
AlexNet     0.1098%     0.1098%     0.1098%
VGG16       0.0932%     0.0622%     0.0622%
Overall workflow of security enhanced RRAM computing system:
for RRAM computing systems, in addition to obfuscation modules, Hardware Random Number Generators (HRNG) and tamper-resistant memories (TPM) are embedded. HRNG is used to generate a random offset scaling value λ; the TPM is used to store the key of the obfuscation module. In an uninitialized emulated RRAM computing system, an authorized user first loads his/her NN weights into the on-chip buffer and loads his/her obfuscated key into the TPM. Determining conductance g for each pair of RRAMs using HRNG production lambda + /g - . Each RRAM in the positive crossbar is then adjusted according to the corresponding matrix. The negative crossbar rows are then aligned according to the TPM's obfuscation key, and then each RRAM in the negative crossbar is adjusted. After setting, before using the RRAM computing system to perform inference computation each time, the row confusion module needs to be configured according to the key in the TPM.

Claims (1)

1. A memristor computing system security enhancement method is characterized by comprising the following steps:
step 1, evaluating the security of the RRAM crossbar mapping method and analyzing two methods of data stealing, specifically as follows:
step 1.1, assume the maximum conductance of an RRAM cell is G_on and the minimum conductance is G_off; each neural network weight matrix consists of a positive RRAM crossbar connected to positive voltages and a negative RRAM crossbar connected to negative voltages; the element w_ij in row i, column j of the neural network weight matrix is represented by the cell conductance g+_ij in row i, column j of the positive RRAM crossbar and the cell conductance g-_ij in row i, column j of the negative RRAM crossbar, giving
w_ij ∝ g+_ij - g-_ij
step 1.2, mapping method 1 of the RRAM device (weights normalized to [-1, 1]):
g+_ij = (G_on + G_off)/2 + w_ij · (G_on - G_off)/2
g-_ij = (G_on + G_off)/2 - w_ij · (G_on - G_off)/2
mapping method 2 of the RRAM device:
g+_ij = G_off + w_ij · (G_on - G_off), g-_ij = G_off, for w_ij ≥ 0
g+_ij = G_off, g-_ij = G_off - w_ij · (G_on - G_off), for w_ij < 0
step 1.3, according to the two mapping methods, there are two stealing modes: stealing method 1, which accesses one crossbar of each positive/negative crossbar pair, and stealing method 2, which accesses both crossbars of each pair; the corresponding w_ij is then deduced; go to step 2;
step 2, for the two stealing methods, using two different prevention methods respectively to enhance the security of the RRAM computing system, specifically as follows:
step 2.1, for stealing method 1 of step 1.3, exploring the bias space and applying a different bias to each neural network weight matrix element;
step 2.2, for stealing method 2 of step 1.3, hiding the row connections of each positive/negative crossbar pair by inserting a row obfuscation module; go to step 3;
step 3, optimizing the hardware overhead of the obfuscation module using two heuristic algorithms, specifically as follows:
step 3.1, optimization technique 1: reducing the number of multiplexers by adding a layer of inverse multiplexers;
step 3.2, optimization technique 2: protecting only part of the layers of the neural network.
CN201911015821.1A 2019-10-24 2019-10-24 Memristor computing system security enhancement method Active CN110929859B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911015821.1A CN110929859B (en) 2019-10-24 2019-10-24 Memristor computing system security enhancement method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911015821.1A CN110929859B (en) 2019-10-24 2019-10-24 Memristor computing system security enhancement method

Publications (2)

Publication Number Publication Date
CN110929859A CN110929859A (en) 2020-03-27
CN110929859B true CN110929859B (en) 2022-09-06

Family

ID=69849426

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911015821.1A Active CN110929859B (en) 2019-10-24 2019-10-24 Memristor computing system security enhancement method

Country Status (1)

Country Link
CN (1) CN110929859B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113553793B (en) * 2021-06-08 2024-07-09 南京理工大学 Method for improving memory logic calculation efficiency based on memristor

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107533668A (en) * 2016-03-11 2018-01-02 慧与发展有限责任合伙企业 For the hardware accelerator for the nodal value for calculating neutral net
CN109657787A (en) * 2018-12-19 2019-04-19 电子科技大学 A kind of neural network chip of two-value memristor

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107533668A (en) * 2016-03-11 2018-01-02 慧与发展有限责任合伙企业 For the hardware accelerator for the nodal value for calculating neutral net
CN109657787A (en) * 2018-12-19 2019-04-19 电子科技大学 A kind of neural network chip of two-value memristor

Also Published As

Publication number Publication date
CN110929859A (en) 2020-03-27

Similar Documents

Publication Publication Date Title
Cai et al. Enabling Secure in-Memory Neural Network Computing by Sparse Fast Gradient Encryption.
Cai et al. Enabling secure nvm-based in-memory neural network computing by sparse fast gradient encryption
Li et al. Low cost LSTM implementation based on stochastic computing for channel state information prediction
Zou et al. Security enhancement for rram computing system through obfuscating crossbar row connections
KR101542280B1 (en) Method for protecting a programmable cryptography circuit, and circuit protected by said method
CN110929859B (en) Memristor computing system security enhancement method
Fey et al. Using memristor technology for multi-value registers in signed-digit arithmetic circuits
Khedkar et al. Power profile obfuscation using nanoscale memristive devices to counter DPA attacks
Mavroeidis et al. PCA, eigenvector localization and clustering for side-channel attacks on cryptographic hardware devices
Xu et al. Using deep learning to combine static and dynamic power analyses of cryptographic circuits
Dubey et al. High-fidelity model extraction attacks via remote power monitors
Borowczak et al. S* FSM: a paradigm shift for attack resistant FSM designs and encodings
Gaspar et al. Hardware implementation and side-channel analysis of lapin
Ajmi et al. Efficient and lightweight in-memory computing architecture for hardware security
Ahmed et al. Detection of Crucial Power Side Channel Data Leakage in Neural Networks
CN101783924B (en) Image encrypting and decrypting system and method based on field programmable gate array (FPGA) platform and evolvable hardware
Khedkar et al. RRAM motifs for mitigating differential power analysis attacks (DPA)
Lin et al. ChaoPIM: A PIM-based protection framework for DNN accelerators using chaotic encryption
Khedkar et al. Towards leakage resiliency: memristor-based AES design for differential power attack mitigation
Guan et al. Extending memory capacity of neural associative memory based on recursive synaptic bit reuse
Nomikos et al. Evaluation of Hiding-based Countermeasures against Deep Learning Side Channel Attacks with Pre-trained Networks
Tang et al. Polar differential power attacks and evaluation
Lumbiarres-Lopez et al. A new countermeasure against side-channel attacks based on hardware-software co-design
Carper et al. Transition Recovery Attack on Embedded State Machines Using Power Analysis
Bock et al. Vulnerability assessment of an IHP ECC implementation

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant