INSTRUCTION ISSUE IN A PROCESSOR
This invention relates to the issue of instructions in a processor, and particularly to a method of issuing instructions and to a processor.
The era of digital communications has brought about many technological advancements which make our lives easier, but at the same time pose a new set of problems that need attention. A particular area of concern is data security, where businesses and customers alike have their own security requirements of the services which they supply or receive. Businesses see computer hackers as a hazard to attracting new e-commerce customers, since customers must be assured that their transactions will be secure. Many encryption schemes have been suggested in an attempt to overcome 'eavesdropping' on private or personal digital communications, such as confidential email messages or television broadcasts which have not been paid for, i.e. pay-TV.
Modern cryptography is about ensuring the integrity, confidentiality and authenticity of digital communications. Secret keys are used to encrypt and decrypt the data, and it is essential that these keys remain secure. Whereas in the past secret keys were stored in centralised secure vaults, today's network-aware devices have embedded keys, making the hardware an attractive target for hackers. A great deal of research has gone into algorithm design, so hackers are now more inclined to concentrate their efforts on the hardware in which the cryptographic unit is housed.
One such attack is performed by taking physical measurements of the cryptographic unit, as described by P. Kocher et al in the two articles entitled "Timing Attacks on Implementations of Diffie-Hellman, RSA, DSS and other systems" and "Differential Power Analysis", both in the Advances in Cryptology journal, CRYPTO '96 pages 104-113 (1996) and CRYPTO '99 pages 388-397 (1999) respectively. By taking measurements of power consumption, computing time or EMF radiations over a large number of encryption operations and using known statistical techniques, it is possible to discover the identity of the secret
keys. Kocher goes on to describe three main techniques: i) timing attacks, ii) Simple Power Analysis (SPA) and iii) Differential Power Analysis (DPA).
DPA provides the most powerful attack using very cheap resources. Many people have started to examine this problem, and S. Chari et al provide a worrying analysis regarding the weakness of AES (Advanced Encryption Standard) candidate algorithms on smart cards; see the article entitled "A Cautionary Note Regarding the Evaluation of AES Candidates on Smart-Cards" in the Second Advanced Encryption Standard Conference, Rome, March 1999.
L. Goubin et al propose three general strategies to combat Differential Power Analysis attacks in the article entitled "DES and Differential Power Analysis, The Duplication Method" in Cryptographic Hardware and Embedded Systems, pages 158-172, 1999. These are:
i) Make algorithmic changes to the cryptographic primitives under consideration.
ii) Replace critical assembler instructions with ones whose signature is hard to analyse, or re-engineer the crucial circuitry that performs arithmetic operations or memory transfers.
iii) Introduce random timing shifts so as to decorrelate the output traces on individual runs.
The first approach has been attempted before. For example, Goubin et al suggest splitting the operands into two and duplicating the workload. This, however, means at least doubling the required computing resources. Similarly, Chari proposes masking the internal bits by splitting them up and processing the bit shares in a certain way so that, once recombined, the correct result is obtained.
Kocher et al have attempted the second approach by balancing the Hamming weights of the operands, physical shielding or adding noise circuitry, as discussed for example in "Timing Attacks on Implementations of Diffie-Hellman, RSA, DSS and other systems".
The present invention seeks to improve tamper resistance according to the third approach, that is, by decorrelating the timing of power traces on successive program executions.
Kocher et al also describe two ways of producing the required temporal misalignment: i) introducing random clock signals, and ii) introducing randomness into the execution order. Kocher et al, in "Differential Power Analysis", mention that randomising execution order can help defeat DPA, but can lead to other problems if not done carefully. One randomising approach uses the idea of randomised multi-threading at an instruction level using a set of essentially "shadow" registers. This allows auxiliary threads to execute random encryptions, in the hope of masking the correct encryption operation. The disadvantage is that additional computational tasks are again required, and a more complex processor architecture is needed, having separate banks of registers, one for each thread.
In particular, Chari et al, in an article entitled "Towards Sound Approaches to Counteract Power-Analysis Attacks" in Advances in Cryptology, CRYPTO '99, pages 398-412, show that for a randomised execution sequence to be effective the randomisation needs to be done extensively. However, no mechanism is disclosed in Chari to enable extensive randomised execution. For example, if only the XOR instruction in each DES (Data Encryption Standard) round is randomised, then DPA is still possible by taking around 8 times as much data. DES is the most widely used encryption algorithm and is known as a "block cipher", which operates on plaintext blocks of a given size (64 bits) and returns ciphertext blocks of the same size. DES operates on the 64-bit blocks using a key size of 56 bits. The keys are actually stored as 64 bits long, but every 8th bit in the key is not used (i.e. bits numbered 7, 15, 23, 31, 39, 47, 55, and 63).
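As a small aside, the unused key-bit positions quoted above follow a simple pattern, which the sketch below merely reproduces (numbering the bits of the stored 64-bit key from 0, as the text does):

```python
# Every 8th bit of the 64-bit stored DES key is an unused parity bit
unused_bits = [8 * i + 7 for i in range(8)]
print(unused_bits)  # [7, 15, 23, 31, 39, 47, 55, 63]
```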
Hence for randomised execution order to work it needs to be done in a highly aggressive manner which would preclude the type of local randomisation implied by the descriptions above. In addition this cannot be achieved in software since a software randomiser would work at too high a level of abstraction. The randomised multi-threading idea is close to a solution but suffers from increased CPU time and requires a more complex processor with separate banks of registers, one for each thread.
The aim of the present invention is to allow extensive randomised execution of instructions to be performed at run time so that successive program executions are uncorrelated.
According to one aspect of the present invention there is provided a method of issuing instructions to an execution unit in a processor, the method comprising: identifying in an ordered sequence of instructions a set of instructions for which the order of execution is not critical; and selecting instructions in said set for successive execution on a random basis each time the ordered sequence of instructions is executed.
In the described embodiment, instructions for which the order of execution is not critical are those where the outcome does not depend on their order of execution.
According to a further aspect of the present invention there is provided a method of executing a computer program a plurality of times, the program comprising a sequence of instructions, wherein each time the program is executed a set of said instructions is identified for which the order of execution is not critical, and instructions from that set are selected for successive execution on a random basis whereby the execution profile of the program differs each time the program is executed.
In the described embodiment, the execution profile of the program is the physical indicators that result from execution of a code sequence, for example a power trace.
According to another aspect of the present invention there is provided a processor comprising: a fetch unit for fetching instructions from an instruction memory which holds an ordered sequence of instructions to be executed; an execution unit for executing instructions supplied from the fetch unit; an instruction selection unit connected to control the fetch unit and arranged to select successive executable instructions from said ordered sequence on a random basis, for supply to the execution unit.
A still further aspect of the invention provides a method of operating a computer to effect a series of operations, the method comprising: selectively controlling the order in which the operations are effected between a random order and a predetermined order.
The step of selectively controlling the order in which the operations are effected can be carried out in response to a mode control signal (e.g. mode bit) which selectively enables a random number generator connected to control said order.
A still further aspect provides a processor comprising: a functional unit for effecting a series of operations; a random number generator connected to said functional unit which, when enabled, causes said functional unit to effect the operation in a random sequence; and means for selectively enabling the random number generator to allow said operation to be effected in one of a random order and a predetermined order.
In the following description a processor which provides for the selection of successive executable instructions on a random basis is referred to as a non-deterministic processor.
The present invention will now be described by way of an example with reference to the accompanying drawings, in which:-
Figure 1 shows a block diagram of a generic CPU architecture;
Figure 2 shows a non-deterministic processor executing two instructions compared to other processors;
Figure 3 shows an embodiment of the random issue unit;
Figure 4 shows a flow chart explaining how instructions are issued at random;
Figure 5 shows an example of a two input random selection unit;
Figures 6A and 6B show a generic model and a 16 input random selection unit; and
Figure 7 shows a flow chart describing a method for choosing which random instruction in the issue buffer to execute.
Figure 1 is a block diagram illustrating the standard functional units that make up a pipelined computer system. A program memory 2 contains program instructions, which are addressable at different memory locations. An ADDRESS bus 6 and a DATA bus 4 transfer information to and from the various elements that make up the processor 8. The system contains an instruction fetch unit 10 having a program counter 12 that stores the address of the next instruction to be fetched. For sequential execution of instructions the program counter will normally be incremented by a single addressing unit. However, if a branch instruction is encountered, the program flow is broken and the program counter 12 needs to be loaded with the address of a target instruction (that is, the first instruction of the branch sequence). The instructions are fetched from the program memory and stored in an instruction issue buffer 14. It is worth noting that the program counter referred to herein is used to control instruction fetches from memory. There may also be an execution counter which is used by the execution unit 18 to specify which instruction is currently being executed. Next, the instructions are decoded and supplied to relevant execution units. In this example, only one execution unit 18 or pipeline is shown, however the present invention is intended to be used in conjunction with modern processors which may have several execution units allowing parallel execution paths. Encryption algorithms need a substantial level of computational power and modern processor architectures such as superscalar, VLIW (Very Long Instruction Word) and SIMD (Single Instruction Multiple Data) are ideally suited to the present invention. Finally, the results of the operations
are written back by a result write stage 22 into temporary registers of a register file 20, which is used to load and store data in and out of main memory.
The present invention is concerned mainly with the block of functionality denoted by the reference numeral 24. In particular, the present invention deals with a modified issue buffer 14 which will be described in more detail later. The issue buffer generates an instruction fetch signal 13 to control which instructions are supplied from the fetch unit 10. Also, part of the decode circuitry may be used to decode the instruction dependencies. This will also be described in more detail at a later stage.
The present invention is concerned with a non-deterministic processor. Non-deterministic processing as described herein means that, for successive runs of the program, although the result will be the same, the order of execution of the instructions will be random. This reduces the impact of a DPA-type attack in that the power traces resulting from successive program runs will be different.
Figure 2 serves to highlight the differences between a non-deterministic processor and other known processors when executing a simple program consisting of the following two lines of code:
ADD a, b
XOR c, d
The execution flow on the left of Figure 2 represents a standard processor having a single execution pipeline where the two instructions are executed sequentially, i.e. the ADD instruction is executed in cycle 1 followed by the XOR instruction in cycle 2. The middle execution flow represents a modern Pentium processor having a plurality of execution paths, which execute independent instructions in parallel. The execution flow on the right of Figure 2 represents a non-deterministic processor having a single pipeline.
The important point to note is that the non-deterministic processor allows the instructions to be executed in any order, provided that it has been established that the instructions are independent. So in the first cycle either the ADD or the XOR instruction can be carried out, and in the second cycle the other instruction will be executed. In contrast, the standard processor executes instructions sequentially, and although there is a little "out of order" execution to help with branch prediction, this occurs on a small scale. In any event, in such a processor each time a program is run containing a certain sequence of instructions, the execution sequence will be identical. Although the Pentium processor has a plurality of execution units (A) and (B), which execute the independent instructions in parallel, the processor is still deterministic in that the ADD and the XOR instructions are executed concurrently in pipes (A) and (B).
A slightly more complex code sequence comprising eight instructions is shown in Table 1.
Table 1
It is apparent from the code listing above that the sequential execution of these eight instructions I0, I1 ... I7 is merely one way that the code sequence may be correctly executed. There are in fact 80 different code sequences, i.e. instruction orderings, for executing these eight instructions which will all give the right answer. For example, the LOAD instruction I0 reads the value of register R1 holding a memory address, and the value stored at this address is written into the register R8. It can be seen that the LOAD instructions I0, I1, I3 and I4 are all independent instructions, in that none of them is dependent on the results of execution of another, and an equally valid execution sequence could be, for example, I1, I0, I3, I4, I5, I2, I6, I7. However, an incorrect result occurs if the ADD instruction I2 is executed before the LOAD instruction I1. That is, the purpose of I1 is to LOAD a value addressed by register R2 into register R9. The intention of the code sequence is to add the loaded value from R8 to the value in R9. Therefore, if the ADD instruction I2 is carried out before I1, the old value of R9 will be added to R8, yielding an incorrect value for the resulting summation R10. We say that there is a dependency between the ADD instruction I2 and the LOAD instruction I1.
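By way of illustration (not part of the original specification), the dependency relation just described can be sketched as a simple register-overlap test. The (destination, sources) encodings below are taken from the instructions discussed in the text; the helper name is hypothetical:

```python
def depends_on(later, earlier):
    """True if 'later' must wait for 'earlier': read-after-write,
    write-after-read or write-after-write on a shared register."""
    later_dest, later_srcs = later
    earlier_dest, earlier_srcs = earlier
    return (later_dest == earlier_dest       # write-after-write
            or earlier_dest in later_srcs    # read-after-write
            or later_dest in earlier_srcs)   # write-after-read

I0 = ("R8",  {"R1"})        # LOAD R8  <- memory[R1]
I1 = ("R9",  {"R2"})        # LOAD R9  <- memory[R2]
I2 = ("R10", {"R8", "R9"})  # ADD  R10 <- R8 + R9

print(depends_on(I1, I0))   # False: the two LOADs may run in either order
print(depends_on(I2, I1))   # True: the ADD must wait for the LOAD into R9
```

Any ordering that respects this relation yields the same final register contents, which is exactly the freedom the invention exploits.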
The present invention makes use of the fact that in many code sequences a number of instructions are independent and thus can, in theory, be executed in any order. The invention exploits this by executing the instructions in a random order at run time. This causes the access patterns to memory for either data or instructions to be uncorrelated for successive program executions, and thus causes the power trace to be different each time.
Figure 3 shows an example of the implementation of a random issue unit. The random issue unit comprises an instruction table 32 with an associated dependency matrix table 30. Instructions are prefetched into the instruction table 32 using conventional instruction fetch circuitry. The dependency matrix table has slots and columns, where the slots represent bit-masks associated with each instruction in the instruction table 32. The bit-masks or dependency bits are an indication as to whether an instruction has a dependency on another instruction. Broadly speaking there are two types of dependencies that need to be considered for an instruction:
1) Use dependencies - which are the dependencies on the source registers from which an instruction reads data.
2) Defined dependencies - which are the dependencies on the destination registers to which an instruction writes data.
In Figure 3, a particular instruction will be decoded and the mask bits will be set accordingly in the Used Registers table 34 and Defined Registers table 36. The Used and Defined Register tables 34, 36 shown in Figure 3 each comprise a number of rows and columns. Each row corresponds to a register (or operand) and each column corresponds to a particular slot (or instruction) in the instruction issue table 32. Each row thus comprises a plurality of slots, one for each instruction in the instruction table 32, and forms the so-called bit-mask for a register. The bit-mask for a register is a binary stream where a "1" indicates which instruction has a dependency on that register. As an example, consider the Used and Defined Register tables 34, 36 of Figure 3, where each table has five rows corresponding to registers R1 to R5, i.e. R1 corresponds to the top row and R5 to the bottom row. At run-time the processor performs a logical OR operation 38 of the bit-masks of the Used Registers table 34 and the Defined Registers table 36, thereby creating a new bit-mask stored in a free slot of the dependency matrix 30.
A test can be performed by OR-ing with OR gates 40 each of the dependency bits of a slot of the dependency matrix. If all the dependency bits of a slot associated with a particular instruction are set to zero, then the instruction can be executed and a FIRE signal 42 is generated to the Random Selection Unit 44. Given the result of the OR for each row of the table, a number of zeros (indicating instructions to be executed) and a number of ones (indicating instructions that are blocked) are obtained. The random selection unit 44 selects one of the slots which is indicated at value zero, at random, and causes that instruction to be executed next. In the described embodiment, the dependency bits are overwritten with new values when the dependencies of the next instruction are loaded into the matrix.
All the instructions that have no dependencies are thus identified by fire signals 42 to the random selection unit 44. For purposes of clarity we will assume a single execution pipeline where for each execution cycle the random selection unit
selects by selection signal 46 only one of the fired instructions. However it should be appreciated that, for example, in a superscalar architecture having parallel execution pipelines, a number of instructions could be issued in parallel under the control of the Random Selection Unit 44. The selection signal 46 of the Random Selection Unit 44 points to an instruction to be executed, while at the same time a feedback signal 48 is issued to "free up" future instructions that may have been dependent on the instruction currently being executed.
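The issue mechanism described above can be sketched in software as follows. This is an illustrative model under assumed encodings (integer slot indices, Python sets in place of dependency bits), not the circuit of Figure 3:

```python
import random

def issue_order(dep_bits, rng):
    """Issue every buffered instruction in a random dependency-respecting order.

    dep_bits maps a slot index to the set of slots it must wait for
    (an all-zero dependency mask corresponds to an empty set here).
    """
    pending = {slot: set(deps) for slot, deps in dep_bits.items()}
    order = []
    while pending:
        # FIRE signals 42: slots whose dependency bits are all zero
        fired = [slot for slot, deps in pending.items() if not deps]
        chosen = rng.choice(fired)       # random selection unit 44
        order.append(chosen)
        del pending[chosen]
        for deps in pending.values():    # feedback signal 48 frees
            deps.discard(chosen)         # dependent instructions
    return order

# I2 (the ADD of the earlier example) waits for the LOADs I0 and I1
deps = {0: set(), 1: set(), 2: {0, 1}}
order = issue_order(deps, random.Random(7))
print(order)  # either [0, 1, 2] or [1, 0, 2]; I2 is always issued last
```

On successive runs with fresh randomness, the two LOADs swap order at random while the dependent ADD always issues last, which is the decorrelation effect sought by the invention.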
The random issue unit supplies an instruction to be executed from the instruction table 32 along instruction supply path 50 and loads an instruction into the instruction table 32 along instruction load path 52 at the same time. Figure 4 is a flow chart indicating how the instructions in the instruction issue buffer 14 are issued for execution and loaded concurrently. The load operations are represented by the left branch flow (C), while the issue operations are represented by the right branch flow (D).
The left branch flow (C) of figure 4 relates to an instruction load operation starting at step S1 where the next instruction, specified by the program counter 12, is loaded into the instruction table 32 of the issue buffer 14. The load operations will firstly be described in general terms, and then more specifically in relation to one example. Each instruction defines two source operands 54 and a destination operand 56. These will nearly always be defined as registers although that is not necessary. Direct addresses or immediates are possible. The source and destination operands 54,56 are simultaneously decoded. At S2, the decoded information is translated into bit-masks that are set in the Used Registers and Defined Registers tables 34,36. These bit-masks are OR-ed by OR gate 38 (Figure 3) to create dependency bits indicating on which instructions the loaded instruction depends. At S3, the empty slot E associated with the loaded instruction is then selected for replacement by setting the InValid flag 58 to zero. The dependency bits are loaded into the selected slot E of the dependency matrix. At S4, the bit-masks in column E of the Registers Used and Registers Defined tables 34,36 are set to "1" along path 62 for the corresponding rows of these tables to ensure that future instructions that use those registers are going to wait for the instruction to finish.
A specific example of the load operation will now be described.
The Used and Defined Register tables 34, 36 are set up during the instruction fetch or LOAD sequence, as already indicated. The fetched instruction is decoded and the bit-masks associated with each of the registers specified in the instruction are checked for dependencies with other instructions. For example, assume the instruction ADD R2, R3, R4 is fetched. The bit-masks associated with the registers R2 and R3 in the Used Registers table 34 (i.e. the source registers) are sent to OR gate 38. Also, the bit-mask associated with register R4 in the Defined Registers table 36 (i.e. the destination register) is sent to the OR gate 38. Assuming there are N instructions in the instruction table 32, each bit-mask has N slots, where each slot corresponds to a particular instruction.
The OR gate 38 receives the bit-masks and performs a bit-wise logical OR operation for each slot simultaneously. For example, assume the following bit-masks exist:
The resulting set of dependency bits (or dependency mask) is shown as 00100000, which is then sent from the OR gate 38 to a horizontal slot in the dependency matrix 30 that is associated with the corresponding instruction of the instruction table 32. During the execution stage (which is discussed more fully below with reference to the right branch of Figure 4), the first step includes simultaneously performing a second OR operation 40 across all the dependency bits for each slot of the dependency matrix 30 to determine which instructions have no dependencies. For the example, a "1" set in the third bit of the dependency mask for the instruction in question means that the OR'ed result will be a "1". Therefore this instruction still has dependencies and cannot be fired to the random selection unit 44.
Returning to the load operation (i.e. the left branch of Figure 4), the final step is to set the appropriate bit-masks associated with the currently loaded instruction. The appropriate bit-masks are those of the registers that cannot be used by future instructions until the current instruction has been issued. Thus, for the example instruction (i.e. ADD R2, R3, R4), register R4 in the Used Registers table 34 is set to "1" in the present instruction's column to inform all future instructions that R4 cannot be used as a source register (i.e. read from), because the present instruction uses it as a destination register (i.e. writes to it). Similarly, registers R2 and R3 are source registers for the present instruction, and thus these registers are set to "1" in the Defined Registers table 36 to indicate that they cannot be written to until the present instruction has completed.
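The mask arithmetic of this load example can be sketched as follows. The bit-mask strings are assumptions chosen to reproduce the 00100000 dependency mask discussed above, with the left-most character standing for slot 0:

```python
def or_masks(*masks):
    """Bit-wise OR of equal-length binary strings (the OR gate 38)."""
    width = len(masks[0])
    return "".join("1" if any(m[i] == "1" for m in masks) else "0"
                   for i in range(width))

# Assumed table contents: the instruction in slot 2 will write R2, so
# R2's row in the Used Registers table warns future readers of R2.
used_R2    = "00100000"
used_R3    = "00000000"
defined_R4 = "00000000"

# Dependency mask for ADD R2, R3, R4: source rows from the Used table,
# destination row from the Defined table
dep_mask = or_masks(used_R2, used_R3, defined_R4)
print(dep_mask)         # 00100000
print("1" in dep_mask)  # True: OR gate 40 sees a '1', so no FIRE signal yet
```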
The right branch flow (D) of Figure 4 relates to random instruction issue, starting at S1 where the dependency bits associated with each instruction are checked using an OR operation via OR gate 40. Then all of the independent instructions are flagged as ready for issue and appropriate fire signals are sent to the Random Selection Unit. At step S2, the Random Selection Unit 44 selects one of the instructions via selection signal 46, for example the instruction X, which is issued along instruction supply path 50 to the relevant execution unit. At S3, column X is then cleared (i.e. its bits are set to zero) from the dependency matrix 30 as well as from the Registers Used and Registers Defined tables 34, 36. Also, the InValid flag is set (i.e. to 1). Thus, the dependency column for the instruction currently being executed is erased, indicating that any instruction waiting for this instruction can now be executed. According to step S4, a pointer E is initialised for the next iteration. E is a pointer that points to an empty slot which is available in the issue table. After every instruction has been loaded, E must point to another free slot. One could, for example, use the slot of the instruction previously executed to initialise E. In that way, the pointer E would follow the executed instructions around the table.
Figure 5 represents a two input example of how a random selection unit 44 may be implemented. The truth table for the random selector is shown below:

I0  I1  R   E   |  A
0   0   0   0   |  0
0   0   1   0   |  0
0   1   0   0   |  0
0   1   1   0   |  0
1   0   0   0   |  1
1   0   1   0   |  1
0   0   0   1   |  0
0   0   1   1   |  1
0   1   0   1   |  0
0   1   1   1   |  0
1   0   0   1   |  1
1   0   1   1   |  1

Table 2
Figure 5 shows two inputs 70 and 72 for the random selection unit 44. It should be apparent from Figure 3 that each input I0 or I1 will either be a '0' or a '1'. More generally, a '0' will appear if all of the dependency bits of the relevant slot are '0'. Thus, a '0' indicates an independent instruction, which can be selected by the Random Selection Unit 44. An inspection of Table 2 reveals that if one of the inputs is a '1', then the output 46 of the random selector will always select the other input. Input I1 is shown coupled to an AND gate 76 through an inverting element 75. The AND gate 76 accepts two other inputs, i.e. a random signal R 80 and an enable signal E 78. The output of the AND gate is OR-ed 74 with input I0 to produce the selected output 46 of the random selection unit 44.
As illustrated in Figure 6A, the enable signal E, 78 can be controlled by a mode bit MB. That allows the random number generator to be selectively controlled between an on and an off state. When the random selection unit is on, the output 46 is pseudo-randomly generated and is used as discussed herein. When the random selector is off, the instruction issue operation is carried out normally, that
is in the order of the instruction sequence. This is useful to allow the processor to be operated in a deterministic fashion, for example for debugging and other control purposes.
The random signal R does not have to be truly random. It could typically be generated using a pseudo-random generator that is reseeded regularly with some entropy. The enable signal 78 allows random issue to be disabled, i.e. non-determinism can be turned off, for example to allow a programmer to debug code by stepping through the instructions.
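The behaviour read from Figure 5 and Table 2 can be modelled in a few lines. The Boolean form A = I0 OR (NOT I1 AND R AND E) is an interpretation of the gate description above, where the output A names the index of the chosen slot:

```python
def select(i0, i1, r, e):
    """Two-input random selector of Figure 5 (interpreted, not verified
    against the drawing): returns the index (0 or 1) of the chosen slot.
    A '1' on an input marks a blocked instruction."""
    return i0 | ((1 - i1) & r & e)

print(select(0, 1, 0, 0))  # 0: slot 1 is blocked, so slot 0 is chosen
print(select(1, 0, 0, 0))  # 1: slot 0 is blocked, so slot 1 is chosen
print(select(0, 0, 0, 1))  # 0: both free, random bit R = 0 picks slot 0
print(select(0, 0, 1, 1))  # 1: both free, random bit R = 1 picks slot 1
```

With the enable signal e held at 0 the random term is gated off and the selector always answers 0 unless slot 0 is blocked, which is the deterministic debug behaviour described in the text.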
Figures 6A and 6B show a slightly more complex example of a random selection unit having 16 inputs. As shown, a 16 input random issue unit can be provided by adapting the simple two input structure shown in Figure 5 and connecting it in a cascaded structure. Figure 6A shows a generalised stage of one of the random selection units. The inputs run from I0 to I(2^(k+1) - 1). The generalised stage can be applied to the 16 input random selector shown in Figure 6B.
Sixteen inputs means the selector has inputs I0 to I15, and from the generalised case we can say:

2^(k+1) - 1 = 15
2^(k+1) = 16

Therefore, k = 3.
Therefore in the final stage (i.e. R-box3), the 16 inputs are divided in half, with the even inputs I0, I2, ..., I14 being input to a first multiplexer 82 and the odd inputs I1, I3, ..., I15 being input to a second multiplexer 84. Each multiplexer selects 1 output from 2^k inputs (i.e. 8:1 in the final stage) and each multiplexer accepts control signals from the lower stages A0 ... A(k-1) (i.e. A0, A1, A2 in the final stage). This is confirmed by the diagram on the right, which shows the selected signals from the lower stages being fed back into the higher stages. Then the relevant stage behaves as the two input model shown in Figure 5.
Figure 7 is a flow chart illustrating a method of choosing which instruction in the instruction buffer to execute. At S11, the issue buffer is assigned the symbol B. At S12, the number of instructions remaining in the issue buffer 14 is examined, and if the buffer contains only one instruction then step S13a issues this instruction to the relevant execution unit and the program sequence is completed, i.e. EXIT. If, however, there is more than one instruction in the buffer, step S13b involves dividing the buffer into two sets of roughly equal size, assigned the symbols L and R respectively. Then at S14, the instructions within the L buffer are examined to see if any independent instructions can be issued. If not, step S15b sets the active issue buffer B to look at buffer R and the process is repeated from step S12. If, however, buffer L does contain instructions that are ready for issue, then at step S15a the R buffer is examined to see if it contains any instructions ready for issue. If not, step S16b sets the active buffer B to be buffer L and the process is repeated from step S12. If both L and R contain instructions that are ready for issue, the flow proceeds to step S16a where a random bit is generated. If the random bit is '1' then the process moves to step S16b where the L buffer is selected, or if the bit is '0' then the process moves to step S15b where the R buffer is selected. In both cases, the process is repeated until there is only one instruction in one of the buffers, in which case step S13a is invoked and the program sequence is completed.
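The Figure 7 flow can be sketched as a short routine. The slot numbering and the ready-set representation are assumptions for illustration, and at least one buffered instruction is assumed to be ready:

```python
import random

def choose(buffer, ready, rng):
    """Pick one ready instruction by repeated halving (the Figure 7 flow).

    buffer is a list of slot indices (issue buffer B); ready is the set
    of slots whose FIRE signal is asserted.
    """
    while len(buffer) > 1:                            # S12
        mid = len(buffer) // 2                        # S13b: split into L and R
        left, right = buffer[:mid], buffer[mid:]
        l_ok = any(slot in ready for slot in left)    # S14
        r_ok = any(slot in ready for slot in right)   # S15a
        if l_ok and r_ok:
            # S16a: a random bit decides between the two halves
            buffer = left if rng.getrandbits(1) else right
        elif l_ok:
            buffer = left                             # S16b
        else:
            buffer = right                            # S15b
    return buffer[0]                                  # S13a: issue it

picked = choose(list(range(8)), ready={1, 4, 6}, rng=random.Random(0))
print(picked in {1, 4, 6})  # True: the routine always lands on a ready slot
```

Because each halving step follows only halves that still contain a ready instruction, the descent terminates on a fireable slot, and the random bit at step S16a spreads the choice evenly when both halves qualify.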
Although the specific example outlined in the invention is directed at cryptography, it should be understood that this invention may be equally applied to any situation where it is desired to keep the environmental impact of the processor non-deterministic, for example reducing resonances in small computing devices. Furthermore, it should be appreciated that the random selection unit described herein is only an example of a possible implementation. The present invention may include any features disclosed herein either implicitly or explicitly or any generalisation thereof, irrespective of whether it relates to the presently claimed invention. In view of the foregoing description it will be evident to a person skilled in the art that various modifications may be made within the scope of the invention.