WO2002027478A1 - Instruction issue in a processor - Google Patents

Instruction issue in a processor

Info

Publication number
WO2002027478A1
Authority
WO
WIPO (PCT)
Prior art keywords
instructions
instruction
execution
random
dependency
Application number
PCT/GB2001/004298
Other languages
French (fr)
Inventor
Nigel Paul Smart
Michael David May
Hendrik Lambertus Muller
Original Assignee
University Of Bristol
Application filed by University Of Bristol
Priority to AU2001290111A
Publication of WO2002027478A1

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/50Monitoring users, programs or devices to maintain the integrity of platforms, e.g. of processors, firmware or operating systems
    • G06F21/52Monitoring users, programs or devices to maintain the integrity of platforms, e.g. of processors, firmware or operating systems during program execution, e.g. stack integrity ; Preventing unwanted data erasure; Buffer overflow
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/70Protecting specific internal or peripheral components, in which the protection of a component leads to protection of the entire computer
    • G06F21/71Protecting specific internal or peripheral components, in which the protection of a component leads to protection of the entire computer to assure secure computing or processing of information
    • G06F21/75Protecting specific internal or peripheral components, in which the protection of a component leads to protection of the entire computer to assure secure computing or processing of information by inhibiting the analysis of circuitry or operation
    • G06F21/755Protecting specific internal or peripheral components, in which the protection of a component leads to protection of the entire computer to assure secure computing or processing of information by inhibiting the analysis of circuitry or operation with measures against power attack
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/30Arrangements for executing machine instructions, e.g. instruction decode
    • G06F9/38Concurrent instruction execution, e.g. pipeline or look ahead
    • G06F9/3836Instruction issuing, e.g. dynamic instruction scheduling or out of order instruction execution
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/30Arrangements for executing machine instructions, e.g. instruction decode
    • G06F9/38Concurrent instruction execution, e.g. pipeline or look ahead
    • G06F9/3836Instruction issuing, e.g. dynamic instruction scheduling or out of order instruction execution
    • G06F9/3838Dependency mechanisms, e.g. register scoreboarding
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L9/00Cryptographic mechanisms or cryptographic arrangements for secret or secure communications; Network security protocols
    • H04L9/002Countermeasures against attacks on cryptographic mechanisms
    • H04L9/003Countermeasures against attacks on cryptographic mechanisms for power analysis, e.g. differential power analysis [DPA] or simple power analysis [SPA]
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L2209/00Additional information or applications relating to cryptographic mechanisms or cryptographic arrangements for secret or secure communication H04L9/00
    • H04L2209/12Details relating to cryptographic hardware or logic circuitry
    • H04L2209/125Parallelization or pipelining, e.g. for accelerating processing of cryptographic operations

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • Computer Security & Cryptography (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Hardware Design (AREA)
  • Mathematical Physics (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Advance Control (AREA)

Abstract

A method of issuing instructions to an execution unit in a processor, the method comprising identifying in an ordered sequence of instructions a set of instructions for which the order of execution is not critical, and selecting instructions in said set for successive execution on a random basis each time the ordered sequence of instructions is executed.

Description

INSTRUCTION ISSUE IN A PROCESSOR
This invention relates to the issue of instructions in a processor, and particularly to a method of issuing instructions and to a processor.
The era of digital communications has brought about many technological advancements which make our lives easier, but at the same time pose a new set of problems that need attention. A particular area of concern is data security, where businesses and customers alike have their own security requirements of the services which they supply or receive. Businesses see computer hackers as a hazard to attracting new e-commerce customers, since customers must be assured that their transactions will be secure. Many encryption schemes have been suggested in an attempt to overcome 'eavesdropping' on private or personal digital communications such as confidential email messages, or to prevent reception of television broadcasts which have not been paid for, i.e. pay-TV.
Modern cryptography is about ensuring the integrity, confidentiality and authenticity of digital communications. Secret keys are used to encrypt and decrypt the data and it is essential that these keys remain secure. Whereas in the past secret keys were stored in centralised secure vaults, today's network-aware devices have embedded keys, making the hardware an attractive target for hackers. Because a great deal of research has gone into algorithm design, hackers are more prone to concentrate their efforts on the hardware in which the cryptographic unit is housed.
One such attack is performed by taking physical measurements of the cryptographic unit, as described by P. Kocher et al in the two articles entitled "Timing Attacks on Implementations of Diffie-Hellman, RSA, DSS and Other Systems" and "Differential Power Analysis", both in Advances in Cryptology, CRYPTO '96, pages 104-113 (1996) and CRYPTO '99, pages 388-397 (1999) respectively. By taking measurements of power consumption, computing time or EMF radiation over a large number of encryption operations and using known statistical techniques, it is possible to discover the identity of the secret keys. Kocher goes on to describe three main techniques: i) timing attacks, ii) Simple Power Analysis (SPA) and iii) Differential Power Analysis (DPA).
DPA provides the most powerful attack using very cheap resources. Many researchers have started to examine this problem, and S. Chari et al provide a worrying analysis regarding the weakness of AES (Advanced Encryption Standard) algorithms on smart cards; see the article entitled "A Cautionary Note Regarding the Evaluation of AES Candidates on Smart-Cards" in the Second Advanced Encryption Standard Conference, Rome, March 1999.
L. Goubin et al propose three general strategies to combat Differential Power Analysis attacks in their article entitled "DES and Differential Power Analysis: The Duplication Method" in Cryptographic Hardware and Embedded Systems, pages 158-172, 1999. These are:
i) Make algorithmic changes to the cryptographic primitives under consideration.
ii) Replace critical assembler instructions with ones whose signature is hard to analyse, or re-engineer the crucial circuitry that performs arithmetic operations or memory transfers.
iii) Introduce random timing shifts so as to decorrelate the output traces on individual runs.
The first approach has been attempted before. For example, Goubin et al suggests splitting the operands into two and duplicating the workload. This however means at least doubling the required computer resources. Similarly, Chari proposes masking the internal bits by splitting them up and processing the bit shares in a certain way so that once recombined the correct result is obtained.
Kocher et al have attempted the second approach by balancing the Hamming weights of the operands, physical shielding or adding noise circuitry, as discussed for example in "Timing Attacks on Implementations of Diffie-Hellman, RSA, DSS and Other Systems".
The present invention seeks to improve tamper resistance according to the third approach, that is, by decorrelating the timing of power traces on successive program executions.
Kocher et al also describe two ways of producing the required temporal misalignment: i) introducing random clock signals, and ii) introducing randomness into the execution order. Kocher et al, in "Differential Power Analysis", mention that randomising execution order can help defeat DPA, but can lead to other problems if not done carefully. One randomising approach uses the idea of randomised multi-threading at an instruction level using a set of essentially "shadow" registers. This allows auxiliary threads to execute random encryptions, in the hope of masking the correct encryption operation. The disadvantage is that additional computational tasks are again required, and this in turn requires a more complex processor architecture having separate banks of registers, one for each thread.
In particular, Chari et al, in an article entitled "Towards Sound Approaches to Counteract Power-Analysis Attacks" in Advances in Cryptology, CRYPTO '99, pages 398-412, show that for a randomised execution sequence to be effective the randomisation needs to be done extensively. However, no mechanism is disclosed in Chari to enable extensive randomised execution. For example, if only the XOR instruction in each DES (Data Encryption Standard) round is randomised then DPA is still possible by taking around 8 times as much data. DES is the most widely used encryption algorithm and is known as a "block cipher", which operates on plaintext blocks of a given size (64 bits) and returns ciphertext blocks of the same size. DES operates on the 64-bit blocks using 56-bit keys. The keys are actually stored as 64 bits, but every 8th bit of the key is not used (i.e. bits numbered 7, 15, 23, 31, 39, 47, 55 and 63). Hence for randomised execution order to work it needs to be done in a highly aggressive manner, which would preclude the type of local randomisation implied by the descriptions above. In addition, this cannot be achieved in software, since a software randomiser would work at too high a level of abstraction. The randomised multi-threading idea comes close to a solution but suffers from increased CPU time and requires a more complex processor with separate banks of registers, one for each thread.
The aim of the present invention is to allow extensive randomised execution of instructions to be performed at run time so that successive program executions are uncorrelated.
According to one aspect of the present invention there is provided a method of issuing instructions to an execution unit in a processor, the method comprising: identifying in an ordered sequence of instructions a set of instructions for which the order of execution is not critical; and selecting instructions in said set for successive execution on a random basis each time the ordered sequence of instructions is executed.
In the described embodiment, instructions for which the order of execution is not critical are those where the outcome does not depend on their order of execution.
According to a further aspect of the present invention there is provided a method of executing a computer program a plurality of times, the program comprising a sequence of instructions, wherein each time the program is executed a set of said instructions is identified for which the order of execution is not critical, and instructions from that set are selected for successive execution on a random basis whereby the execution profile of the program differs each time the program is executed.
In the described embodiment, the execution profile of the program is the physical indicators that result from execution of a code sequence, for example a power trace. According to another aspect of the present invention there is provided a processor comprising: a fetch unit for fetching instructions from an instruction memory which holds an ordered sequence of instructions to be executed; an execution unit for executing instructions supplied from the fetch unit; an instruction selection unit connected to control the fetch unit and arranged to select successive executable instructions from said ordered sequence on a random basis, for supply to the execution unit.
A still further aspect of the invention provides a method of operating a computer to effect a series of operations, the method comprising: selectively controlling the order in which the operations are effected between a random order and predetermined order.
The step of selectively controlling the order in which the operations are effected can be carried out in response to a mode control signal (e.g. mode bit) which selectively enables a random number generator connected to control said order.
A still further aspect provides a processor comprising: a functional unit for effecting a series of operations; a random number generator connected to said functional unit which, when enabled, causes said functional unit to effect the operation in a random sequence; and means for selectively enabling the random number generator to allow said operation to be effected in one of a random order and a predetermined order.
In the following description a processor which provides for the selection of successive executable instructions on a random basis is referred to as a non-deterministic processor.
The present invention will now be described by way of an example with reference to the accompanying drawings, in which:-
Figure 1 shows a block diagram of a generic CPU architecture;
Figure 2 shows a non-deterministic processor executing two instructions compared to other processors;
Figure 3 shows an embodiment of the random issue unit;
Figure 4 shows a flow chart explaining how instructions are issued at random;
Figure 5 shows an example of a two input random selection unit;
Figures 6A and 6B show a generic model and a 16 input random selection unit; and
Figure 7 shows a flow chart describing a method for choosing, at random, which instruction in the issue buffer to execute.
Figure 1 is a block diagram illustrating the standard functional units that make up a pipelined computer system. A program memory 2 contains program instructions, which are addressable at different memory locations. An ADDRESS bus 6 and a DATA bus 4 transfer information to and from the various elements that make up the processor 8. The system contains an instruction fetch unit 10 having a program counter 12 that stores the address of the next instruction to be fetched. For sequential execution of instructions the program counter will normally be incremented by a single addressing unit. However, if a branch instruction is encountered, the program flow is broken and the program counter 12 needs to be loaded with the address of a target instruction (that is, the first instruction of the branch sequence). The instructions are fetched from the program memory and stored in an instruction issue buffer 14. It is worth noting that the program counter referred to herein is used to control instruction fetches from memory. There may also be an execution counter which is used by the execution unit 18 to specify which instruction is currently being executed. Next, the instructions are decoded and supplied to the relevant execution units. In this example, only one execution unit 18 or pipeline is shown; however, the present invention is intended to be used in conjunction with modern processors which may have several execution units allowing parallel execution paths. Encryption algorithms need a substantial level of computational power, and modern processor architectures such as superscalar, VLIW (Very Long Instruction Word) and SIMD (Single Instruction Multiple Data) are ideally suited to the present invention. Finally, the results of the operations are written back by a result write stage 22 into temporary registers of a register file 20, which is used to load and store data in and out of main memory.
The present invention is concerned mainly with the block of functionality denoted by the reference numeral 24. In particular, the present invention deals with a modified issue buffer 14 which will be described in more detail later. The issue buffer generates an instruction fetch signal 13 to control which instructions are supplied from the fetch unit 10. Also, part of the decode circuitry may be used to decode the instruction dependencies. This will also be described in more detail at a later stage.
The present invention is concerned with a non-deterministic processor. Non-deterministic processing as described herein means that for successive runs of the program, although the result will be the same, the order of execution of the instructions will be random. This reduces the impact of a DPA-type attack in that the power traces resulting from successive program runs will be different.
Figure 2 serves to highlight the differences between a non-deterministic processor and other known processors when executing a simple program consisting of the following two lines of code:
ADD a, b
XOR c, d
The execution flow on the left of Figure 2 represents a standard processor having a single execution pipeline where the two instructions are executed sequentially, i.e. the ADD instruction is executed in cycle 1 followed by the XOR instruction in cycle 2. The middle execution flow represents a modern Pentium processor having a plurality of execution paths, which execute independent instructions in parallel. The execution flow on the right of Figure 2 represents a non-deterministic processor having a single pipeline. The important point to note is that the non-deterministic processor allows the instructions to be executed in any order provided that it has been established that the instructions are independent. So in the first cycle either the ADD or the XOR instruction can be carried out, and in the second cycle the other instruction will be executed. In contrast, the standard processor executes instructions sequentially and, although there is a little "out of order" execution to help with branch prediction, this occurs on a small scale. In any event, in such a processor each time a program containing a certain sequence of instructions is run, the execution sequence will be identical. Although the Pentium processor has a plurality of execution units (A) and (B), which execute the independent instructions in parallel, the processor is still deterministic in that the ADD and the XOR instructions are executed concurrently in pipes (A) and (B).
A slightly more complex code sequence comprising eight instructions is shown in Table 1.
[Table 1: an eight-instruction code sequence (I0 to I7), reproduced as an image in the original publication.]
It is apparent from the code listing above that the sequential execution of these eight instructions I0, I1, ..., I7 is merely one way in which the code sequence may be correctly executed. There are in fact 80 different code sequences, i.e. instruction orderings, for executing these eight instructions which will all give the right answer. For example, the LOAD instruction I0 reads the value of register R1 holding a memory address, and the value stored at this address is written into the register R8. It can be seen that the LOAD instructions I0, I1, I3 and I4 are all independent instructions, in that none of them depends on the results of execution of another, and an equally valid execution sequence could be, for example, I1, I0, I3, I4, I5, I2, I6, I7. However, an incorrect result occurs if the ADD instruction I2 is executed before the LOAD instruction I1. That is, the purpose of I1 is to LOAD a value addressed by register R2 into register R9. The intention of the code sequence is to add the loaded value from R8 to the value in R9. Therefore, if the ADD instruction I2 is carried out before I1, the old value of R9 will be added to R8, yielding an incorrect value for the resulting summation in R10. We say that there is a dependency between the ADD instruction I2 and the LOAD instruction I1.
The present invention makes use of the fact that in many code sequences a number of instructions are independent and thus can, in theory, be executed in any order. The invention exploits this by executing the instructions in a random order at run time. This causes the access patterns to memory for either data or instructions to be uncorrelated for successive program executions, and thus causes the power trace to be different each time.
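The register-level check underlying this observation can be sketched in software as follows. This is an illustrative model only; the three-operand instruction format, the Instr helper and the omission of memory dependencies between LOADs and STOREs are assumptions rather than part of the described hardware.

    from dataclasses import dataclass
    from typing import Optional, Tuple

    @dataclass
    class Instr:
        op: str
        srcs: Tuple[str, ...]         # registers read (use dependencies)
        dst: Optional[str] = None     # register written (defined dependency)

    def independent(a, b):
        """True if a and b may execute in either order (register hazards only)."""
        if b.dst is not None and b.dst in a.srcs:
            return False              # a reads a register that b writes
        if a.dst is not None and a.dst in b.srcs:
            return False              # b reads a register that a writes
        if a.dst is not None and a.dst == b.dst:
            return False              # both write the same register
        return True

    i1 = Instr("LOAD", ("R2",), "R9")        # I1: load the value addressed by R2 into R9
    i2 = Instr("ADD",  ("R8", "R9"), "R10")  # I2: R10 = R8 + R9
    print(independent(i1, i2))               # False - I2 must wait for I1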
Figure 3 shows an example of the implementation of a random issue unit. The random issue unit comprises an instruction table 32 with an associated dependency matrix table 30. Instructions are prefetched into the instruction table 32 using conventional instruction fetch circuitry. The dependency matrix table has slots (rows) and columns, where each slot holds a bit-mask associated with one instruction in the instruction table 32. The bit-masks or dependency bits are an indication as to whether an instruction has a dependency on another instruction. Broadly speaking there are two types of dependencies that need to be considered for an instruction:
1) Use dependencies - which are the dependencies of the source registers that an instruction uses to read data from.
2) Defined dependencies - which are the dependencies of the destination registers that an instruction defines to write data to.
In Figure 3, a particular instruction will be decoded and the mask bits will be set accordingly in the Used Registers table 34 and Defined Registers table 36. The Used and Defined Register tables 34, 36 shown in Figure 3 each comprise a number of rows and columns. Each row corresponds to a register (or operand) and each column corresponds to a particular slot (or instruction) in the instruction issue table 32. Each row thus comprises a plurality of slots, one for each instruction in the instruction table 32, and forms the so-called bit-mask for that register. The bit-mask for a register is a binary string in which a "1" indicates which instruction has a dependency on that register. As an example, consider the Used and Defined Register tables 34, 36 of Figure 3, where each table has five rows corresponding to registers R1 to R5, i.e. R1 corresponds to the top row and R5 to the bottom row. At run-time the processor performs a logical OR operation 38 of the bit-masks of the Used Registers table 34 and the Defined Registers table 36, thereby creating a new bit-mask stored in a free slot of the dependency matrix 30.
A test can be performed by OR-ing, with OR gates 40, each of the dependency bits of a slot of the dependency matrix. If all the dependency bits of a slot associated with a particular instruction are set to zero, then the instruction can be executed and a FIRE signal 42 is generated to the Random Selection Unit 44. Given the result of the OR for each row of the table, a number of zeros (indicating instructions that can be executed) and a number of ones (indicating instructions that are blocked) are obtained. The random selection unit 44 selects, at random, one of the slots indicated at value zero and causes the corresponding instruction to be executed next. In the described embodiment, the dependency bits are overwritten with new values when the dependencies of the next instruction are loaded into the matrix.
All the instructions that have no dependencies are thus identified by fire signals 42 to the random selection unit 44. For purposes of clarity we will assume a single execution pipeline where, for each execution cycle, the random selection unit selects by selection signal 46 only one of the fired instructions. However, it should be appreciated that, for example, in a superscalar architecture having parallel execution pipelines, a number of instructions could be issued in parallel under the control of the Random Selection Unit 44. The selection signal 46 of the Random Selection Unit 44 points to an instruction to be executed, while at the same time a feedback signal 48 is issued to "free up" future instructions that may have been dependent on the instruction currently being executed.
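The structures and the fire-and-select step of Figure 3 can be modelled roughly as follows. This is a behavioural sketch under assumed sizes and bit conventions, not the hardware itself: each register row and each dependency-matrix slot is held as an integer bit-mask, and the random selection unit 44 is modelled by a pseudo-random choice among the fired slots.

    import random

    N_SLOTS = 8                      # assumed size of the instruction table 32

    # Used Registers table 34: bit j of used_regs[R] set means the instruction in
    # slot j will write R, so R cannot yet be read by a newly loaded instruction.
    used_regs = {f"R{i}": 0 for i in range(1, 6)}
    # Defined Registers table 36: bit j of defined_regs[R] set means the instruction
    # in slot j still has to read R, so R cannot yet be overwritten.
    defined_regs = {f"R{i}": 0 for i in range(1, 6)}
    # Dependency matrix 30: one mask per slot; bit j set means "wait for slot j".
    dependency_matrix = [0] * N_SLOTS
    occupied = [False] * N_SLOTS     # inverse of the InValid flag 58

    def dependency_mask(srcs, dst):
        """OR operation 38: combine the Used rows of the source registers with
        the Defined row of the destination register for a newly fetched instruction."""
        mask = 0
        for r in srcs:
            mask |= used_regs[r]
        return mask | defined_regs[dst]

    def fired_slots():
        """OR gates 40: a slot fires when it is occupied and all its dependency bits are zero."""
        return [s for s in range(N_SLOTS) if occupied[s] and dependency_matrix[s] == 0]

    def select_slot(rng=random):
        """Random selection unit 44: pick one fired slot at random, if any."""
        fired = fired_slots()
        return rng.choice(fired) if fired else None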
The random issue unit supplies an instruction to be executed from the instruction table 32 along instruction supply path 50 and loads an instruction into the instruction table 32 along instruction load path 52 at the same time. Figure 4 is a flow chart indicating how the instructions in the instruction issue buffer 14 are issued for execution and loaded concurrently. The load operations are represented by the left branch flow (C), while the issue operations are represented by the right branch flow (D).
The left branch flow (C) of Figure 4 relates to an instruction load operation starting at step S1, where the next instruction, specified by the program counter 12, is loaded into the instruction table 32 of the issue buffer 14. The load operations will firstly be described in general terms, and then more specifically in relation to one example. Each instruction defines two source operands 54 and a destination operand 56. These will nearly always be defined as registers, although that is not necessary; direct addresses or immediates are possible. The source and destination operands 54, 56 are simultaneously decoded. At S2, the decoded information is translated into bit-masks that are set in the Used Registers and Defined Registers tables 34, 36. These bit-masks are OR-ed by OR gate 38 (Figure 3) to create dependency bits indicating on which instructions the loaded instruction depends. At S3, the empty slot E associated with the loaded instruction is then selected for replacement by setting the InValid flag 58 to zero. The dependency bits are loaded into the selected slot E of the dependency matrix. At S4, the bit-masks in column E of the Registers Used and Registers Defined tables 34, 36 are set to "1" along path 62 for the corresponding rows of these tables to ensure that future instructions that use those registers will wait for the instruction to finish. A specific example of the load operation will now be described.
The Used and Defined Register tables 34, 36 are set up during the instruction fetch or LOAD sequence, as already indicated. The fetched instruction is decoded and the bit-masks associated with each of the registers specified in the instruction are checked for dependencies with other instructions. For example, assume the instruction ADD R2, R3, R4 is fetched. The bit-masks associated with the registers R2 and R3 in the Used Registers table 34 (i.e. the source registers) are sent to OR gate 38. Also, the bit-mask associated with register R4 in the Defined Registers table (i.e. the destination register) is sent to the OR gate 38. Assuming there are N instructions in the instruction table 32, each bit-mask has N slots, where each slot corresponds to a particular instruction.
The OR gate 38 receives the bit-masks and performs a bit-wise logical OR operation for each slot simultaneously. For example, assume the following bit-masks exist:
[Example bit-masks reproduced as an image in the original publication.]
The resulting set of dependency bits (the dependency mask) is 00100000, which is then sent from the OR gate 38 to a horizontal slot in the dependency matrix 30 that is associated with the corresponding instruction of the instruction table 32. During the execution stage (which is discussed more fully below with reference to the right branch of Figure 4), the first step is to perform a second OR operation 40 simultaneously across all the dependency bits for each slot of the dependency matrix 30 to determine which instructions have no dependencies. In this example, the "1" set in the third bit of the dependency mask for the instruction in question means that the OR'ed result will be a "1". Therefore this instruction still has dependencies at this stage and cannot be fired at the random selection unit 44.
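The bit-mask values shown in the figure are not reproduced here; the values below are hypothetical, chosen only so that their OR matches the 00100000 result quoted above:

    # Hypothetical row values (the actual figure is not reproduced here); a '1'
    # marks an instruction-table slot that the fetched ADD R2, R3, R4 must wait for.
    r2_used_row    = 0b00100000   # some in-flight instruction still has to write R2
    r3_used_row    = 0b00000000
    r4_defined_row = 0b00000000

    print(f"{r2_used_row | r3_used_row | r4_defined_row:08b}")   # -> 00100000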
Returning to the load operation (i.e. the left branch of Figure 4), the final step is to set the appropriate bit-masks associated with the currently loaded instruction. The appropriate bit-masks are those of the registers that cannot be used by future instructions until the current instruction has been issued. Thus, for the example instruction (i.e. ADD R2, R3, R4), register R4 in the Used Registers table 34 is set to "1" in the present instruction's column to inform all future instructions that R4 cannot be used as a source register (i.e. read from), because the present instruction uses it as a destination register (i.e. writes to it). Similarly, registers R2 and R3 are source registers for the present instruction and thus these registers are set to "1" in the Defined Registers table 36 to indicate that they cannot be written to until the present instruction has completed.
The right branch flow (D) of Figure 4 relates to random instruction issue, starting at S1 where the dependency bits associated with each instruction are checked using an OR operation via OR gate 40. Then all of the independent instructions are flagged as ready for issue and appropriate fire signals are sent to the Random Selection Unit. At step S2, the Random Selection Unit 44 selects, via selection signal 46, one of the instructions, for example instruction X, which is issued along instruction supply path 50 to the relevant execution unit. At S3, column X is then cleared (i.e. its bits are set to zero) from the dependency matrix 30 as well as from the Registers Used and Registers Defined tables 34, 36. Also, the InValid flag is set (i.e. to 1). Thus, the dependency column for the instruction currently being executed is erased, indicating that any instruction waiting for this instruction can now be executed. According to step S4, a pointer E is initialised for the next iteration. E is a pointer to an empty slot which is available in the issue table. After every instruction has been loaded, E must point to another free slot. One could, for example, use the slot of the instruction previously executed to initialise E. In that way, the pointer E would follow the executed instructions around the table.
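Continuing the behavioural sketch given earlier, the two branches of Figure 4 then amount to the following bookkeeping (again an assumed software model in which bit j stands for the table column of slot j):

    def load_instruction(slot, srcs, dst):
        """Left branch (C) of Figure 4: place a fetched instruction in empty slot `slot`."""
        dependency_matrix[slot] = dependency_mask(srcs, dst)   # steps S2 and S3
        occupied[slot] = True
        for r in srcs:
            defined_regs[r] |= 1 << slot    # step S4: sources may not be overwritten yet
        used_regs[dst] |= 1 << slot         # step S4: destination may not be read yet

    def issue_instruction(x):
        """Right branch (D) of Figure 4: slot x has issued, so clear its column."""
        keep = ~(1 << x)
        for s in range(N_SLOTS):
            dependency_matrix[s] &= keep    # step S3: waiting instructions may now fire
        for r in used_regs:
            used_regs[r] &= keep
            defined_regs[r] &= keep
        occupied[x] = False                 # slot x becomes the next empty slot E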
Figure 5 represents a two input example of how a random selection unit 44 may be implemented. The truth table for the random selector is shown below (the columns are the inputs I0 and I1, the random signal R, the enable signal E, and the output A):

I0  I1  R  E  |  A
 0   0  0  0  |  0
 0   0  1  0  |  0
 0   1  0  0  |  0
 0   1  1  0  |  0
 1   0  0  0  |  1
 1   0  1  0  |  1
 0   0  0  1  |  0
 0   0  1  1  |  1
 0   1  0  1  |  0
 0   1  1  1  |  0
 1   0  0  1  |  1
 1   0  1  1  |  1

Table 2
Figure 5 shows two inputs 70 and 72 for the random selection unit 44. It should be apparent from Figure 3 that each input I0 or I1 will either be a '0' or a '1'. More generally, a '0' will appear if all of the dependency bits of the relevant slot are '0'. Thus, a '0' indicates an independent instruction, which can be selected by the Random Selection Unit 44. An inspection of Table 2 reveals that if one of the inputs is a '1', then the output 46 of the random selector will always take the logical value of the other input. Input I1 is shown coupled to an AND gate 76 through an inverting element 75. The AND gate 76 accepts two other inputs, i.e. a random signal R 80 and an enable signal E 78. The output of the AND gate is OR-ed 74 with input I0 to produce the selected output 46 of the random selection unit 44.
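The gate arrangement just described corresponds to the Boolean function A = I0 OR (NOT I1 AND R AND E), which reproduces every row of Table 2 under the column ordering reconstructed above. A one-line software model, with one plausible reading of the output, is:

    def random_select_2(i0, i1, r, e):
        # Output A of the two-input selector: A = I0 OR (NOT I1 AND R AND E).
        # On one plausible reading, A = 0 points at input I0's instruction and
        # A = 1 at input I1's; when both are ready (i0 == i1 == 0) the choice
        # follows the random bit r, gated by the enable signal e.
        return i0 | ((1 - i1) & r & e)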
As illustrated in Figure 6A, the enable signal E, 78 can be controlled by a mode bit MB. That allows the random number generator to be selectively controlled between an on and an off state. When the random selection unit is on, the output 46 is pseudo-randomly generated and is used as discussed herein. When the random selector is off, the instruction issue operation is carried out normally, that is in the order of the instruction sequence. This is useful to allow the processor to be operated in a deterministic fashion, for example for debugging and other control purposes.
The random signal R does not have to be truly random. It could typically be generated using a pseudo-random generator that is reseeded regularly with some entropy. The enable signal 78 allows random issue to be disabled, i.e. non-determinism can be turned off, for example to allow a programmer to debug code by stepping through the instructions.
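A rough software analogy of such a regularly reseeded generator is given below; the patent does not prescribe any particular construction, and os.urandom is used here only as an assumed entropy source:

    import os
    import random

    class ReseededBitSource:
        """Pseudo-random bit stream that is periodically reseeded with fresh entropy."""

        def __init__(self, reseed_interval=1024):
            self.reseed_interval = reseed_interval
            self.count = 0
            self.rng = random.Random(os.urandom(16))   # assumed entropy source

        def next_bit(self):
            if self.count and self.count % self.reseed_interval == 0:
                self.rng = random.Random(os.urandom(16))
            self.count += 1
            return self.rng.getrandbits(1)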
Figures 6A and 6B show a slightly more complex example of a random selection unit having 16 inputs. As shown, a 16-input random selection unit can be provided by adapting the simple two-input structure shown in Figure 5 and connecting it in a cascaded structure. Figure 6A shows a generalised stage of one of the random selection units. The inputs run from I0 to IN-1, where N = 2^(K+1). The generalised stage can be applied to the 16-input random selector shown in Figure 6B.
Sixteen inputs means the selector has inputs I0 to I15, and from the generalised case we can say:

2^(K+1) - 1 = 15
2^(K+1) = 16

Therefore, K = 3.
Therefore, in the final stage (i.e. R-box 3), the 16 inputs are divided in half, with the even inputs I0, I2, ..., I14 being input to a first multiplexer 82 and the odd inputs I1, I3, ..., I15 being input to a second multiplexer 84. Each multiplexer selects one output from 2^K inputs (i.e. 8:1 in the final stage) and each multiplexer accepts control signals A0 ... A(K-1) from the lower stages (i.e. A0, A1, A2 in the final stage). This is confirmed by the diagram on the right, which shows the selected signals from the lower stages being fed back into the higher stages. Each stage then behaves as the two input model shown in Figure 5.

Figure 7 is a flow chart illustrating a method of choosing which instruction in the instruction buffer to execute. At S11, the issue buffer is assigned the symbol B. At S12, the number of instructions remaining in the issue buffer 14 is examined; if the buffer contains only one instruction, step S13a issues this instruction to the relevant execution unit and the sequence is completed, i.e. EXIT. If, however, there is more than one instruction in the buffer, step S13b divides the buffer into two sets of roughly equal size, assigned the symbols L and R respectively. At S14, the instructions within the L buffer are examined to see if any independent instructions can be issued. If not, step S15b sets the active issue buffer B to buffer R and the process is repeated from step S12. If, however, buffer L does contain instructions that are ready for issue, then at step S15a the R buffer is examined to see if it contains any instructions ready for issue. If not, step S16b sets the active buffer B to buffer L and the process is repeated from step S12. If both L and R contain instructions that are ready for issue, the flow proceeds to step S16a, where a random bit is generated: if the random bit is '1', the process moves to step S16b and the L buffer is selected; if the bit is '0', the process moves to step S15b and the R buffer is selected. In either case, the process is repeated until only one instruction remains in one of the buffers, at which point step S13a is invoked and the sequence is completed.
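A software sketch of the Figure 7 flow is given below; the representation of the issue buffer as an array of 'ready' flags and the helper random_bit() are illustrative assumptions standing in for the hardware described above.

    #include <stdbool.h>
    #include <stdlib.h>

    /* Stand-in for the random bit generated at step S16a. */
    static int random_bit(void) { return rand() & 1; }

    /* ready[lo..hi-1] marks the slots of buffer B that hold an instruction
       ready for issue. Returns the slot chosen for issue, or -1 if the
       buffer contains nothing ready. */
    static int choose_slot(const bool ready[], int lo, int hi)
    {
        if (hi - lo == 1)                        /* S12/S13a: one slot left  */
            return ready[lo] ? lo : -1;

        int mid = lo + (hi - lo) / 2;            /* S13b: split into L and R */

        bool l_ready = false, r_ready = false;
        for (int i = lo;  i < mid; i++) l_ready = l_ready || ready[i];
        for (int i = mid; i < hi;  i++) r_ready = r_ready || ready[i];

        if (!l_ready && !r_ready)
            return -1;
        if (l_ready && !r_ready)
            return choose_slot(ready, lo, mid);  /* S16b path: keep L */
        if (!l_ready && r_ready)
            return choose_slot(ready, mid, hi);  /* S15b path: keep R */

        return random_bit()                      /* S16a: both ready  */
             ? choose_slot(ready, lo, mid)       /* '1' selects L     */
             : choose_slot(ready, mid, hi);      /* '0' selects R     */
    }

The random bit is only consulted when both halves contain ready instructions, so whenever only one option exists the choice remains deterministic, exactly as in the flow through steps S14 to S16b.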
Although the specific example outlined above is directed at cryptography, it should be understood that the invention may equally be applied to any situation where it is desired to keep the environmental impact of the processor non-deterministic, for example reducing resonances in small computing devices. Furthermore, it should be appreciated that the random selection unit described herein is only one example of a possible implementation. The present invention may include any feature disclosed herein either implicitly or explicitly, or any generalisation thereof, irrespective of whether it relates to the presently claimed invention. In view of the foregoing description it will be evident to a person skilled in the art that various modifications may be made within the scope of the invention.

Claims

1. A method of issuing instructions to an execution unit in a processor, the method comprising: identifying in an ordered sequence of instructions a set of instructions for which the order of execution is not critical; and selecting instructions in said set for successive execution on a random basis each time the ordered sequence of instructions is executed.
2. A method according to claim 1, wherein a set of instructions for which the order is not critical is obtained by determining which instructions are independent of the other instructions in the ordered sequence.
3. A method according to claim 2, wherein determining which instructions are independent comprises the following steps: i) loading a sequence of instructions; ii) decoding each instruction to identify its operands; and iii) comparing the operands of the decoded instruction with the operands of each of the preceding instructions in the sequence.
4. A method according to claim 3, wherein the result of said comparison step iii) is stored as a set of dependency bits, wherein there is a set of dependency bits associated with each instruction in the sequence.
5. A method according to claim 4, wherein said comparison step iii) is performed by a logical OR operation executed on corresponding bits of the instruction operands.
6. A method according to claim 4 or 5, which comprises the step of checking for each set of dependency bits whether there is a dependency bit set to indicate a dependency between instructions.
7. A method according to claim 6, wherein a dependency bit having a logical value "1" denotes a dependent instruction.
8. A method according to any of claims 3 to 7, wherein the decoding step ii) comprises decoding the source registers from which the instruction intends to read data and setting corresponding bit masks in a used registers table.
9. A method according to any of claims 3 to 7, wherein the decoding step ii) comprises decoding the destination registers into which the instruction intends to write data and setting corresponding bit masks in a defined registers table.
10. A method of executing a computer program a plurality of times, the program comprising a sequence of instructions wherein each time the program is executed a set of said instructions is identified for which the order of execution is not critical, and instructions from that set are selected for successive execution on a random basis whereby the execution profile of the program differs each time the program is executed, while the end result of each execution sequence is the same.
11. A method according to claim 10, wherein the set of instructions which are selected for successive execution is obtained by determining which instructions are independent of other instructions in the ordered sequence.
12. A method according to claim 11, wherein determining which instructions are independent comprises the following steps: i) loading a sequence of instructions; ii) decoding each instruction to identify its operands; and iii) comparing the operands of the decoded instruction with the operands of each of the preceding instructions in the sequence.
13. A method according to claim 12, wherein the result of said comparison step iii) is stored as a set of dependency bits.
14. A processor comprising: a fetch unit for fetching instructions from an instruction memory which holds an ordered sequence of instructions to be executed; an execution unit for executing instructions supplied from the fetch unit; an instruction selection unit connected to control the fetch unit and arranged to select successive executable instructions from said ordered sequence on a random basis, for supply to the execution unit.
15. A processor according to claim 14, wherein the instruction selection unit comprises: an instruction issue table holding the instructions supplied from the fetch unit; a used registers table holding bit masks corresponding to decoded source operands of said instructions; a defined registers table holding bit masks corresponding to decoded destination operands of said instructions; comparison means arranged to compare said bit masks of said source and destination operands; and a dependency matrix holding sets of dependency bits generated from said comparison means, each set of dependency bits being associated with an instruction of the ordered sequence.
16. A processor according to claim 15, which comprises a random selection unit arranged to select successive instructions from said instruction issue table for supply to said execution unit.
17. A processor according to claim 16, wherein the random selection unit comprises means for checking for each set of dependency bits whether there is a dependency bit set to indicate a dependency between instructions, and for flagging, as a result of said check, a set of independent instructions, wherein the random selection unit is arranged to select from said set of independent instructions on a random basis.
18. A method of operating a computer to effect a series of operations, the method comprising: selectively controlling the order in which the operations are effected between a random order and a predetermined order.
19. A method according to claim 18, wherein the step of selectively controlling is carried out in response to a mode control signal which selectively enables a random number generator connected to control said order.
20. A processor comprising: a functional unit for effecting a series of operations; a random number generator connected to said functional unit which, when enabled, causes said functional unit to effect the operations in a random sequence; and means for selectively enabling the random number generator to allow said operations to be effected in one of a random order and a predetermined order.
PCT/GB2001/004298 2000-09-27 2001-09-26 Instruction issue in a processor WO2002027478A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
AU2001290111A AU2001290111A1 (en) 2000-09-27 2001-09-26 Instruction issue in a processor

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
GBGB0023698.4A GB0023698D0 (en) 2000-09-27 2000-09-27 Instruction issue in a processor
GB0023698.4 2000-09-27

Publications (1)

Publication Number Publication Date
WO2002027478A1 true WO2002027478A1 (en) 2002-04-04

Family

ID=9900250

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/GB2001/004298 WO2002027478A1 (en) 2000-09-27 2001-09-26 Instruction issue in a processor

Country Status (3)

Country Link
AU (1) AU2001290111A1 (en)
GB (1) GB0023698D0 (en)
WO (1) WO2002027478A1 (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5881308A (en) * 1991-06-13 1999-03-09 International Business Machines Corporation Computer organization for multiple and out-of-order execution of condition code testing and setting instructions out-of-order
US5745726A (en) * 1995-03-03 1998-04-28 Fujitsu, Ltd Method and apparatus for selecting the oldest queued instructions without data dependencies
US5710902A (en) * 1995-09-06 1998-01-20 Intel Corporation Instruction dependency chain indentifier
WO2000011548A1 (en) * 1998-08-24 2000-03-02 Advanced Micro Devices, Inc. Mechanism for load block on store address generation and universal dependency vector

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
FORREST ET AL: "Building diverse computer systems", OPERATING SYSTEMS, 1997., THE SIXTH WORKSHOP ON HOT TOPICS IN CAPE COD, MA, USA 5-6 MAY 1997, LOS ALAMITOS, CA, USA,IEEE COMPUT. SOC, US, 5 May 1997 (1997-05-05), pages 67 - 72, XP010226847, ISBN: 0-8186-7834-8 *

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2012066458A1 (en) * 2010-11-16 2012-05-24 Nds Limited Obfuscated hardware multi-threading
US8621186B2 (en) 2010-11-16 2013-12-31 Cisco Technology Inc. Obfuscated hardware multi-threading
EP2860656A2 (en) 2013-10-01 2015-04-15 Commissariat à l'Énergie Atomique et aux Énergies Alternatives Method for execution by a microprocessor of a polymorphic binary code of a predetermined function
US9489315B2 (en) 2013-10-01 2016-11-08 Commissariat à l'énergie atomique et aux énergies alternatives Method of executing, by a microprocessor, a polymorphic binary code of a predetermined function
CN108027762A (en) * 2016-06-24 2018-05-11 Arm 有限公司 The apparatus and method for the tracking stream that generation and the instruction of processing instruction process circuit perform
EP3891635A4 (en) * 2018-12-05 2022-09-14 Micron Technology, Inc. Protection against timing-based security attacks on re-order buffers

Also Published As

Publication number Publication date
GB0023698D0 (en) 2000-11-08
AU2001290111A1 (en) 2002-04-08

Similar Documents

Publication Publication Date Title
May et al. Non-deterministic processors
EP3757854B1 (en) Microprocessor pipeline circuitry to support cryptographic computing
May et al. Random register renaming to foil DPA
US7949883B2 (en) Cryptographic CPU architecture with random instruction masking to thwart differential power analysis
US8417961B2 (en) Apparatus and method for implementing instruction support for performing a cyclic redundancy check (CRC)
US20100246814A1 (en) Apparatus and method for implementing instruction support for the data encryption standard (des) algorithm
US9317286B2 (en) Apparatus and method for implementing instruction support for the camellia cipher algorithm
JP2006510126A (en) Processing action masking in data processing system
US20100246815A1 (en) Apparatus and method for implementing instruction support for the kasumi cipher algorithm
Albert et al. Combatting software piracy by encryption and key management
US7570760B1 (en) Apparatus and method for implementing a block cipher algorithm
CN113673002A (en) Memory overflow defense method based on pointer encryption mechanism and RISC-V coprocessor
JP2000501541A (en) Unpredictable microprocessor or microcomputer
US20210350030A1 (en) Data Protection in Computer Processors
WO2002027478A1 (en) Instruction issue in a processor
WO2002054228A1 (en) Register renaming
JP2004310752A (en) Error detection in data processor
WO2002027474A1 (en) Executing a combined instruction
Boneh et al. Hardware support for tamper-resistant and copy-resistant software
WO2002027479A1 (en) Computer instructions
WO2002027476A1 (en) Register assignment in a processor
US20220075901A1 (en) Attack protection by power signature blurring
US7711955B1 (en) Apparatus and method for cryptographic key expansion
WO2022029443A1 (en) Method and apparatus for reducing the risk of successful side channel and fault injection attacks
Hossain et al. Hexon: Protecting firmware using hardware-assisted execution-level obfuscation

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EE ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NO NZ PL PT RO RU SD SE SG SI SK SL TJ TM TR TT TZ UA UG US UZ VN YU ZA ZW

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): GH GM KE LS MW MZ SD SL SZ TZ UG ZW AM AZ BY KG KZ MD RU TJ TM AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG

121 Ep: the epo has been informed by wipo that ep was designated in this application
DFPE Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed before 20040101)
REG Reference to national code

Ref country code: DE

Ref legal event code: 8642

122 Ep: pct application non-entry in european phase
NENP Non-entry into the national phase

Ref country code: JP