WO2002052409A2 - Method and apparatus for increasing performance of an interpreter - Google Patents

Method and apparatus for increasing performance of an interpreter

Info

Publication number
WO2002052409A2
WO2002052409A2 PCT/US2001/046840
Authority
WO
WIPO (PCT)
Prior art keywords
platform
sequence
independent
loop
code
Prior art date
Application number
PCT/US2001/046840
Other languages
French (fr)
Other versions
WO2002052409A3 (en)
Inventor
David Wallman
Original Assignee
Sun Microsystems, Inc.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sun Microsystems, Inc. filed Critical Sun Microsystems, Inc.
Priority to GB0308866A priority Critical patent/GB2384089B/en
Priority to AU2002245075A priority patent/AU2002245075A1/en
Publication of WO2002052409A2 publication Critical patent/WO2002052409A2/en
Publication of WO2002052409A3 publication Critical patent/WO2002052409A3/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44Arrangements for executing specific programs
    • G06F9/455Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
    • G06F9/45504Abstract machines for programme code execution, e.g. Java virtual machine [JVM], interpreters, emulators

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Devices For Executing Special Programs (AREA)

Abstract

One embodiment of the present invention provides a system for increasing the performance of a platform-independent virtual machine in executing a sequence of platform-independent codes (pi-codes) generated by a high-level language compiler. The system operates by first obtaining a pi-code to be executed by the platform-independent virtual machine. Next, the system locates a sequence of native code instructions that, when executed, will perform the action required by the pi-code. The system then executes the sequence of native code instructions. After the code has been executed, the system stores a copy of the sequence of native code instructions associated with the pi-code in a cache. Finally, if the pi-code defines a loop, the system saves a pointer to a beginning of the loop within the cache which, when used as a reference during execution, will cause the loop to be executed from the cache rather than from the pi-code.

Description

METHOD AND APPARATUS FOR INCREASING PERFORMANCE OF AN INTERPRETER
Inventor: David Wallman
BACKGROUND
Field of the Invention
The present invention relates to interpreters for computer languages. More specifically, the present invention relates to a method and apparatus for increasing the performance of a platform-independent code interpreter.
Related Art
The proliferation of modern computing devices with widely varying architectures has prompted manufacturers to implement device-independent coding methods so that a computer program can be compiled once from a high-level language into a machine-independent form that can be executed on disparate computing devices without modification. The JAVA™ programming language is a well-known example of one of these high-level languages. The terms JAVA, JVM and JAVA VIRTUAL MACHINE are trademarks of SUN Microsystems, Inc. of Palo Alto, California.
A person desiring to write a device-independent program typically writes source code, which includes a series of instructions in a high-level language. A compiler then translates the source code into platform-independent codes (pi-codes), which can be executed on various end devices. Examples of pi-codes are JAVA bytecodes, Pascal P-codes, and COBOL GNT instructions.
Each of the various end devices includes a program, usually called an interpreter, which reads these pi-codes and executes a corresponding set of instructions in the native language of the end device. Interpreters can be very slow because they must first translate a pi-code instruction into corresponding native code before executing the native code. During this process, the interpreter must keep track of where it is operating within the series of pi-codes, and it must look up the corresponding native code instruction or instructions needed to implement the pi-code.
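To make this overhead concrete, the following minimal C sketch shows the kind of dispatch loop such an interpreter runs; the three-opcode encoding, opcode names, and example program are invented for illustration and are not taken from the patent. On every pi-code the loop re-reads its position and re-decodes the opcode through the switch, which is the bookkeeping cost described above.

#include <stdio.h>

/* Illustrative opcodes only; not the pi-code set discussed in the patent. */
enum { OP_LOAD, OP_ADD, OP_PRINT };

int main(void) {
    int program[] = { OP_LOAD, 0, OP_ADD, 1, OP_ADD, 1, OP_PRINT };
    int n = (int)(sizeof program / sizeof program[0]);
    int reg = 0;
    for (int ip = 0; ip < n; ) {      /* track position within the pi-codes      */
        switch (program[ip]) {        /* look up and re-decode every pi-code met */
        case OP_LOAD:  reg  = program[ip + 1]; ip += 2; break;
        case OP_ADD:   reg += program[ip + 1]; ip += 2; break;
        case OP_PRINT: printf("%d\n", reg);    ip += 1; break;
        }
    }
    return 0;                         /* prints 2 */
}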
In an attempt to speed up the execution of programs that have been compiled to pi-codes, some systems use just-in-time (JIT) compilers to convert the pi-codes to the native code of the end computing device. The JIT compiler makes this conversion for a block of code (e.g., a class in the JAVA language) prior to the start of execution of the block of code. Making this conversion normally causes the code to execute faster on the end computing device. At times, though, using a JIT compiler actually slows down the execution of the code because the JIT compiler translates the entire class even though only a small portion may actually be used in the program.
Another drawback to using a JIT compiler is that many of the end computing devices have limited resources. For example, the end computing devices may be cellphones, personal information managers, or other hand-held devices that have little excess memory to store the JIT compiler and the data structures that the JIT compiler uses to translate the pi-code to the native code of the end computing device.
What is needed is a method and apparatus to increase the performance of an interpreter for platform-independent codes while minimizing the additional space and time required for this increase in performance.
SUMMARY
One embodiment of the present invention provides a system for increasing the performance of a platform-independent virtual machine in executing a sequence of platform-independent codes (pi-codes) generated by a high-level language compiler. The system operates by first obtaining a pi-code to be executed by the platform-independent virtual machine. Next, the system locates a sequence of native code instructions that, when executed, will perform the action required by the pi-code. The system then executes the sequence of native code instructions. After the code has been executed, the system stores a copy of the sequence of native code instructions associated with the pi-code in a cache. Finally, if the pi-code defines a loop, the system saves a pointer to a beginning of the loop within the cache which, when used as a reference during execution, will cause the loop to be executed from the cache rather than from the pi-code.
In one embodiment of the present invention, the system repeats the steps of obtaining a pi-code, locating the associated native code, executing the native code, storing the native code, and saving the pointer until there are no more pi-codes to be executed. In one embodiment of the present invention, the system saves the pointer in a table indexed by the position of the pi-code in the sequence of pi-codes.
In one embodiment of the present invention, if the pi-code defines a loop, the system stores a branch instruction in the cache at an end of the sequence of native code instructions associated with the pi-code so that the sequence of native code instructions can be repeated.
In one embodiment of the present invention, the system executes the sequence of native code instructions stored in the cache that is associated with the loop.
In one embodiment of the present invention, the system executes the branch instruction stored in the cache when executing the sequence of native code instructions associated with the loop.
In one embodiment of the present invention, the system repeats the execution of the sequence of native code instructions for the loop after executing the branch instruction stored in the cache without any additional reference to the sequence of pi-codes. In one embodiment of the present invention, the system stops repeating the loop when specified conditions for terminating the loop are achieved.
BRIEF DESCRIPTION OF THE FIGURES
FIG. 1 illustrates a computing device including a platform-independent virtual machine in accordance with an embodiment of the present invention.
FIG. 2 illustrates a sequence of pi-codes for execution by a pi-code interpreter in accordance with an embodiment of the present invention.
FIG. 3A illustrates the state of some of the internal data structures used by pi-code interpreter 110 prior to executing any of the pi-codes in accordance with an embodiment of the present invention.
FIG. 3B illustrates the state of some of the internal data structures used by pi-code interpreter 110 after executing some of the pi-codes in accordance with an embodiment of the present invention.
FIG. 3C illustrates the state of some of the internal data structures used by pi-code interpreter 110 after executing the loop pi-code for the first time in accordance with an embodiment of the present invention.
FIG. 3D illustrates the state of some of the internal data structures used by pi-code interpreter 110 just before executing the loop pi-code for the second and subsequent times in accordance with an embodiment of the present invention.
FIG. 3E illustrates the state of some of the internal data structures used by pi-code interpreter 110 after executing the loop pi-code for the second and subsequent times in accordance with an embodiment of the present invention.
FIG. 3F illustrates the state of some of the internal data structures used by pi-code interpreter 110 after executing the loop pi-code for the final time in accordance with an embodiment of the present invention.
FIG. 4 is a flowchart illustrating the process of executing pi-codes in accordance with an embodiment of the present invention.
Computer program listing 1 is a listing of code that implements an interpreter in accordance with an embodiment of the present invention.
DETAILED DESCRIPTION
The following description is presented to enable any person skilled in the art to make and use the invention, and is provided in the context of a particular application and its requirements. Various modifications to the disclosed embodiments will be readily apparent to those skilled in the art, and the general principles defined herein may be applied to other embodiments and applications without departing from the spirit and scope of the present invention. Thus, the present invention is not intended to be limited to the embodiments shown, but is to be accorded the widest scope consistent with the principles and features disclosed herein.
The data structures and code described in this detailed description are typically stored on a computer readable storage medium, which may be any device or medium that can store code and/or data for use by a computer system. This includes, but is not limited to, magnetic and optical storage devices such as disk drives, magnetic tape, CDs (compact discs) and DVDs (digital versatile discs or digital video discs), and computer instruction signals embodied in a transmission medium (with or without a carrier wave upon which the signals are modulated). For example, the transmission medium may include a communications network, such as the Internet.
Computing Device
FIG. 1 illustrates computing device 100 including platform-independent virtual machine 104 in accordance with an embodiment of the present invention. Computing device 100 may include any type of computer system, including, but not limited to, a computer system based on a microprocessor, a mainframe computer, a digital signal processor, a personal organizer, a device controller, and a computational engine within an appliance.
User 102 uses computing device 100 to execute a sequence of pi-codes 114 to achieve some desired result. Platform-independent virtual machine 104 is located within computing device 100 and contains instruction pointer 106, cache pointer 108, and pi-code interpreter 110. Instruction pointer 106 indicates the current point of execution within pi-codes 114. Instruction pointer 106 also indicates the associated point within loop pointers 118.
Cache pointer 108 indicates the next place available within cache 116 to store native code instructions 112. Native code instructions 112 contain instructions 112a through 112h. Each of native code instructions 112a through 112h contains the native code instructions associated with a specific pi-code stored within pi-codes 114. For example, instructions 112a may be associated with a "load" instruction, instructions 112d may be associated with an "add" instruction, instructions 112h may be associated with a "print" instruction, and instructions 112f may be associated with a "loop" instruction.
Instructions 112 may be the instructions associated with the cases of a switch statement within pi-code interpreter 110.
Cache 116 is used to hold instructions 112 in the order of execution as determined by pi-codes 114. Loop pointers 118 contains pointers to the beginning of any loops of code within cache 116. Both of these data structures are described in more detail with reference to FIGs. 3A through 3F below.
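For orientation, the following declaration-level C sketch shows one way the FIG. 1 data structures might be laid out; the array sizes and identifier names are assumptions chosen for illustration, paralleling (but not copied from) the globals that appear later in Computer Program Listing 1.

#include <stddef.h>

#define CACHE_BYTES 10000   /* assumed size of cache 116                  */
#define MAX_PICODES 100     /* assumed length of the pi-code sequence 114 */

/* Cache 116: native code instructions copied in execution order. */
static unsigned char cache[CACHE_BYTES];

/* Cache pointer 108: next free location within cache 116. */
static unsigned char *cache_ptr = cache;

/* Loop pointers 118: one slot per pi-code position; a non-NULL entry marks
   the start, within cache 116, of the cached code for a loop pi-code. */
static unsigned char *loop_pointers[MAX_PICODES];

/* Instruction pointer 106/124: index of the pi-code currently being executed. */
static size_t instruction_ptr;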
Pi-codes
FIG. 2 illustrates a sequence of pi-codes 114 for execution by pi-code interpreter 110 in accordance with an embodiment of the present invention. Pi-codes 114 represent an example of a simple program used herein to illustrate the operation of the invention. The pi-codes in the example do not relate to any specific pi-code generator; rather, they are representative of pi-codes generated from any high-level language by an arbitrary compiler. Pi-code 202 is a load instruction that loads a value, say zero, into a register (not shown). Pi-code 204 is an add instruction that adds a value, say one, to the register. This addition results in the value in the register being incremented. Pi-code 206 is a print instruction that causes the value within the register to be printed. Finally, pi-code 208 is a loop instruction that causes some number of previous pi-codes to be repeated a predetermined number of times. In this example, the previous two pi-codes, namely pi-codes 204 and 206, will be repeated nine times. Hence, this sequence of pi-codes causes the numbers from one to ten to be printed.
Pi-code interpreter 110 copies sections from its own native code to build the native code stored in cache 116. Computer program listing 1 presents a C language version of pi-code interpreter 110. Referring to computer program listing 1, the JEND macro copies the section of code between the JSTART and JEND macros associated with the current pi-code into cache 116. When a loop is encountered in pi-codes 114, pi-code interpreter 110 uses the corresponding native code stored in cache 116 to execute the loop rather than pi-codes 114. The constructs used within computer program listing 1 can be found in the widely available GNU C compiler.
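The key GNU C constructs here are labels as values (the && operator) and computed goto, which let a program take the address of two labels that bracket a region of compiled code and copy the bytes in between, as the JSTART/JEND macros do with bcopy. The following self-contained sketch illustrates only that mechanism; the label names and buffer size are invented, it should be built with gcc with optimizations disabled so the labels stay in source order, and it merely copies the bytes (executing the copy, as the interpreter does, additionally requires an executable buffer and position-independent code).

#include <stdio.h>
#include <string.h>

int main(void) {
    static unsigned char buf[256];
    int x = 0;

Start:
    x += 1;                     /* the bracketed "native code" region         */
End:
    if (x == 0) goto Start;     /* reference the label so it is not discarded */

    /* &&label (a GNU C extension) yields the address of the label. */
    size_t len = (size_t)((char *)&&End - (char *)&&Start);
    if (len > 0 && len <= sizeof buf) {
        memcpy(buf, (void *)&&Start, len);   /* what JEND does with bcopy() */
        printf("copied %zu bytes of generated code\n", len);
    }
    return 0;
}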
Data Structures
FIGs. 3A through 3F illustrate the state of some of the internal data structures used by pi-code interpreter 110 during execution of pi-codes 114.
More specifically, FIG. 3A illustrates the state of some of the internal data structures used by pi-code interpreter 110 prior to executing any of the pi-codes in accordance with an embodiment of the present invention. Initially, instruction pointer 124 points to pi-code 202 within pi-codes 114. Instruction pointer 124 also points to the corresponding cell within loop pointers 118. Cache pointer 108 points to the first cell of cache 116.
During operation, pi-code interpreter 110 accesses the pi-code pointed to by instruction pointer 124. In the first case, this is pi-code 202, the load instruction. After determining that pi-code 202 is not a loop instruction, pi-code interpreter 110 finds the corresponding instructions within native code instructions 112. As an example, the load instruction may be associated with instructions 112a. Pi-code interpreter 110 then executes instructions 112a. After executing instructions 112a, pi-code interpreter 110 stores a copy of instructions 112a in cache 116. Finally, pi-code interpreter 110 increments cache pointer 108 and instruction pointer 124. Next, pi-code interpreter 110 accesses pi-code 204, the add instruction. Pi-code interpreter 110 then follows the same steps as for pi-code 202 with the exception that the instructions from native code instructions 112 that are associated with the add instruction, say instructions 112d, are selected. Similarly, pi-code interpreter 110 accesses pi-code 206 and selects instructions from native code instructions 112, say instructions 112h, associated with the print instruction. After executing pi-codes 202, 204, and 206, the internal data structures are as shown in FIG. 3B.
FIG. 3B illustrates the state of some of the internal data structures used by pi-code interpreter 110 after executing some of the pi-codes in accordance with an embodiment of the present invention. Instruction pointer 124 now points to pi-code 208, the loop instruction. Pi-code interpreter 110 now accesses pi-code 208. After determining that pi-code 208 is a loop instruction, pi-code interpreter 110 determines if this is the first time that pi-code 208 has been executed by inspecting an internal loop counter (not shown). Since this is the first time executing pi-code 208, instruction pointer 124 is decremented by the first value, two, within pi-code 208 and the internal loop counter is set to the second value, nine, within pi-code 208. After storing the instructions from native code instructions 112 associated with pi-code 208, say instructions 112f, cache pointer 108 is incremented. Next, a branch instruction is stored in cache 116 that, when executed, causes pi-code interpreter 110 to loop through the appropriate instructions stored in cache 116, and cache pointer 108 is again incremented. Continuing, pi-code interpreter 110 determines if a loop has been established by inspecting loop pointers 118 at the location pointed to by instruction pointer 124. Since this is the first time that loop pi-code 208 has been executed, this location is empty. Finally, pi-code interpreter 110 stores the value of cache pointer 108 in loop pointers 118 at the location pointed to by instruction pointer 124. This leaves the internal data structures as shown in FIG. 3C.
FIG. 3C illustrates the state of some of the internal data structures used by pi-code interpreter 110 after executing the loop pi-code for the first time in accordance with an embodiment of the present invention. Pi-code interpreter 110 continues executing pi-codes 114 as described above, executing pi-codes 204 and 206 for the second time. After executing pi-codes 204 and 206 for the second time, the internal data structures are as shown in FIG. 3D.
FIG. 3D illustrates the state of some of the internal data structures used by pi-code interpreter 110 just before executing the loop pi-code for the second and subsequent times in accordance with an embodiment of the present invention. Pi-code interpreter 110 now accesses pi-code 208 for the second time. Upon encountering the loop instruction for the second time, pi-code interpreter 110 stores instructions 112f and the return instruction in cache 116 as before. This leaves the state of the internal data structures as shown in FIG. 3E.
FIG. 3E illustrates the state of some of the internal data structures used by pi-code interpreter 110 after executing the loop pi-code for the second and subsequent times in accordance with an embodiment of the present invention. Pi-code interpreter 110 now determines that instruction pointer 124 is at the beginning of a loop because loop pointers 118 has a pointer in the cell pointed to by instruction pointer 124. At this point, pi-code interpreter 110 executes native code instructions from cache 116. Each time through the instructions associated with the loop, the internal loop counter is decremented. When the internal loop counter becomes zero, pi-code interpreter 110 sets instruction pointer 124 to the next cell in pi-codes 114. This leaves the state of the internal data structures as shown in FIG. 3F.
FIG. 3F illustrates the state of some of the internal data structures used by pi-code interpreter 110 after executing the loop pi-code for the final time in accordance with an embodiment of the present invention. At this point, pi-code interpreter 110 determines if there are any more pi-codes within pi-codes 114. If there are more pi-codes, pi-code interpreter 110 continues executing pi-codes as before. If there are no more pi-codes, the program terminates.
Pi-code Interpreter
FIG. 4 is a flowchart illustrating the process of executing pi-codes in accordance with an embodiment of the present invention. The system starts when pi-code interpreter 110 gets the next pi-code from pi-codes 114 (step 402). Next, pi-code interpreter 110 determines if the pi-code is a loop instruction (step 404).
If the pi-code is not a loop instruction at step 404, pi-code interpreter 110 locates the corresponding native code for the pi-code within native code instructions 112 (step 406). Pi-code interpreter 110 then executes the corresponding native code (step 408). Next, pi-code interpreter 110 stores a copy of the corresponding native code for the pi-code in cache 116 (step 410).
After the native code has been stored in cache 116 in step 410, pi-code interpreter 110 determines if the current pi-code is a loop instruction (step 412). If the pi-code is a loop instruction, pi-code interpreter 110 determines if a loop pointer has been set in loop pointers 118 (step 413). If the loop pointer has not been set in loop pointers 118 at step 413, pi-code interpreter 110 stores a return instruction in cache 116 (step 414). Next, pi-code interpreter 110 stores the current cache pointer 108 in loop pointers 118 (step 416). If the pi-code is a loop instruction at step 404, pi-code interpreter 110 determines if a loop has already been established by examining an internal loop counter (step 420). If a loop has not already been established, pi-code interpreter 110 initializes the internal loop counter (step 422). After initializing the internal loop counter at step 422, pi-code interpreter 110 continues execution from step 406 as described above.
If the loop has already been established at step 420, pi-code interpreter 110 executes the cached code associated with the loop (step 424). Next, pi-code interpreter 110 determines if the end of the loop has been reached (step 426). If the end of the loop has not been reached, pi-code interpreter 110 returns to step 424 to repeat the cached code.
If the instruction is not a loop instruction at step 412, if the return is already in the cache at step 413, if the end of the loop has been reached at step 426, or if the loop pointer has been saved at step 416, pi-code interpreter 110 determines if there are more pi-codes to be processed in pi-codes 114 (step 418). If there are more pi-codes in pi-codes 114 at step 418, pi-code interpreter 110 returns to step 402 to continue processing pi-codes. If there are no more pi-codes at step 418, processing terminates.
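The following self-contained C sketch walks the FIG. 4 flow end to end for the FIG. 2 example program. To stay portable it caches function pointers rather than copying machine code, and it folds the cached branch/return handling into the replay loop, so it is an approximation of the listing's mechanism rather than a reproduction of it; all identifiers are illustrative. Run as written, it prints the numbers 1 through 10.

#include <stdio.h>

typedef void (*action_t)(void);

static int reg;                                    /* the register used by the FIG. 2 program */
static void do_load(void)  { reg = 0; }
static void do_add(void)   { reg += 1; }
static void do_print(void) { printf("%d\n", reg); }

enum { LOAD, ADD, PRINT, LOOP };
/* FIG. 2: load 0; add 1; print; loop back over 2 pi-codes, 9 more times. */
static const int picodes[4][3] = { {LOAD,0,0}, {ADD,0,0}, {PRINT,0,0}, {LOOP,2,9} };
static const action_t native[3] = { do_load, do_add, do_print };   /* native code 112 */

int main(void) {
    action_t cache[64];                  /* cache 116                          */
    int cache_ptr = 0;                   /* cache pointer 108                  */
    int loop_ptr[4] = {0, 0, 0, 0};      /* loop pointers 118 (0 = not set)    */
    int ip = 0, lcntr = 0;               /* instruction pointer, loop counter  */

    while (ip < 4) {                                    /* steps 402/418             */
        int op = picodes[ip][0];
        if (op != LOOP) {                               /* step 404                  */
            action_t a = native[op];                    /* step 406: locate          */
            a();                                        /* step 408: execute         */
            cache[cache_ptr++] = a;                     /* step 410: store in cache  */
            ip++;
            continue;
        }
        if (loop_ptr[ip] == 0) {                        /* steps 420/422: first time  */
            lcntr = picodes[ip][2];                     /* initialize loop counter    */
            loop_ptr[ip] = cache_ptr + 1;               /* steps 413-416: mark where  */
            ip -= picodes[ip][1];                       /* the cached body will start */
        } else {                                        /* step 424: run from cache   */
            int start = loop_ptr[ip] - 1, end = cache_ptr;
            for (;;) {
                lcntr--;                                /* decrement, then test       */
                if (lcntr <= 0) break;                  /* step 426: end of loop      */
                for (int i = start; i < end; i++) cache[i]();
            }
            ip++;
        }
    }
    return 0;
}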
The foregoing descriptions of embodiments of the present invention have been presented for purposes of illustration and description only. They are not intended to be exhaustive or to limit the present invention to the forms disclosed. Accordingly, many modifications and variations will be apparent to practitioners skilled in the art.
Additionally, the above disclosure is not intended to limit the present invention. The scope of the present invention is defined by the appended claims.
/* This is an interpreter implementing a very simple machine:
 *   There are two registers: x and X.  Commands with capital letters are applied
 *   to X; commands with small letters are applied to x.
 *     x?  - load ? to x, where ? is a digit       X? - load ? to X, where ? is a digit
 *     a?  - add ? to x, where ? is a digit or X   A? - add ? to X, where ? is a digit or x
 *     M   - store in memory the value of X        r  - restore x from memory
 *     p   - print the current value of x          P  - print the current value of X
 *     l?# - loop command, where ? and # are digits; the interpreter executes the
 *           last ? commands (not including the l?#) # times
 *   NOTE: no nested loops
 * This interpreter does not check the correctness of commands.  The only purpose
 * for developing this interpreter was to prototype a JIT compilation technology.
 * To compile the JIT and/or the interpreter:
 *   gcc Jit.c -g -DJIT=1 -o jit
 *   gcc Jit.c -g -o int
 * Command line for printing 1..10:  jit x0 a1 p l29
 * Command line for fibonacci(10):   jit x0 X1 p P M Ax r P l48
 * Command line for fibonacci(46):   jit x0 X1 p P M Ax r P l49 M Ax r P l49
 *                                       M Ax r P l49 M Ax r P l49 M Ax r P l44
 */
#include <stdio.h>      /* added: printf */
#include <strings.h>    /* added: bcopy  */

#if DBG > 1
#define PR_STATE() printSt(__LINE__)
#else
#define PR_STATE()
#endif

void Pr_state(int lineN) {
    extern int cur, lcntr, x, X, m; extern char *freePtr;
    printf("LN: %d cur=%d lcntr=%d x=%d\n", lineN, cur, lcntr, x);
}

#if JIT
/* JSTART/JEND bracket the native code for one command; JEND copies that code into the cache. */
#define JSTART(n) Strt_ ## n: { PR_STATE();
#define JEND(n)   PR_STATE(); }            \
    End_ ## n:                             \
    pB = (int) &&Strt_ ## n;               \
    pE = (int) &&End_ ## n;                \
    csz = pE - pB;                         \
    bcopy((void *) pB, freePtr, csz);      \
    freePtr += csz;                        \
    PR_STATE();
#else
#define JSTART(n) PR_STATE();
#define JEND(n)   PR_STATE();
#endif

char ncache[10000];                        /* cache 116                          */
void *nTable[100];                         /* loop pointers 118                  */
char *freePtr = ncache;                    /* cache pointer 108                  */
char ret[40]; int rsize;                   /* jump-back-to-interpreter stub      */
void (*print)(char *, int);
void (*printSt)(int);
void Print(char *s, int x) { printf(" %s = %d\n", s, x); }
int nothing = 0;
/* These are really locals.  They are here for debug purposes. */
int cur, lcntr = 0, x, X, m = 0;

int main(int argc, char *argv[]) {
    char *tkn = argv[1];
    int pB, pE, csz, xl;
    print = Print; printSt = Pr_state;
#if JIT
    /* Set up the jit: build the ret_jmp stub that returns to the interpreter loop. */
    {
        __label__ StrtR, EndR;
        void *rt = &&RetLabel;
        if (nothing) StrtR: goto *rt;
EndR:
        pB = (int) &&StrtR; pE = (int) &&EndR; xl = pE - pB;
        bcopy((void *) pB, ret, xl); rsize = xl;
    }
#endif
    /* Interpreter */
    for (cur = 1; cur < argc; ) {
        switch (tkn[0]) {
        case 'x': JSTART(1) x = tkn[1] - '0'; cur++; tkn = argv[cur]; JEND(1) break;
        case 'X': JSTART(2) X = tkn[1] - '0'; cur++; tkn = argv[cur]; JEND(2) break;
        case 'a': JSTART(3) if (tkn[1] == 'X') x += X; else x += tkn[1] - '0';
                  cur++; tkn = argv[cur]; JEND(3) break;
        case 'A': JSTART(4) if (tkn[1] == 'x') X += x; else X += tkn[1] - '0';
                  cur++; tkn = argv[cur]; JEND(4) break;
        case 'M': JSTART(5) m = X; cur++; tkn = argv[cur]; JEND(5) break;
        case 'r': JSTART(6) x = m; cur++; tkn = argv[cur]; JEND(6) break;
        case 'l':                              /* loop command l?# */
            JSTART(7)
            if (lcntr > 0) { lcntr--; if (lcntr > 0) cur -= tkn[1] - '0'; else cur++; }
            else           { cur -= tkn[1] - '0'; lcntr = tkn[2] - '0'; }
            tkn = argv[cur];
            JEND(7)
#if JIT
            PR_STATE();
            bcopy(ret, freePtr, rsize); freePtr += rsize;
            /* return here from native code after BB is over */
RetLabel:
            /* loop to do jitting */
            if (nTable[cur] != 0) goto *(nTable[cur]);
            else nTable[cur] = freePtr;        /* new nblock */
#endif
            break;
        case 'p': JSTART(8) print("x", x); cur++; tkn = argv[cur]; JEND(8) break;
        case 'P': JSTART(9) print("X", X); cur++; tkn = argv[cur]; JEND(9) break;
        default:  cur++; /* ignore bad command */ tkn = argv[cur];
        }
    }
    return 0;
}
Computer Program Listing 1

Claims

What Is Claimed Is:
1. A method for increasing performance of a platform-independent virtual machine in executing a sequence of platform-independent codes generated by a high-level language compiler for the platform-independent virtual machine, comprising: retrieving a platform-independent code to be executed by the platform-independent virtual machine; locating a sequence of native code instructions that, when executed, will perform an action associated with the platform-independent code; executing the sequence of native code instructions; storing a copy of the sequence of native code instructions associated with the platform-independent code in a cache; and if the platform-independent code defines a loop, saving a pointer to a beginning of the loop within the cache so that the pointer can be used to execute the loop from the cache.
2. The method of claim 1, further comprising repeating the steps of obtaining, locating, executing, storing, and saving until there are no more platform-independent codes to be executed.
3. The method of claim 2, wherein the pointer is saved in a table indexed by a position of the platform-independent code in the sequence of platform-independent codes.
4. The method of claim 3, wherein if the platform-independent code defines a loop, the method further comprises storing a branch instruction in the cache at an end of the sequence of native code instructions associated with the platform-independent code so that the sequence of native code instructions can be repeated.
5. The method of claim 4, further comprising executing the sequence of native code instructions stored in the cache that is associated with the loop.
6. The method of claim 5, wherein executing the sequence of native code instructions associated with the loop includes executing the branch instruction stored in the cache.
7. The method of claim 6, wherein executing the branch instruction stored in the cache causes the sequence of native code instructions for the loop to be repeated without additional reference to the sequence of platform-independent codes.
8. The method of claim 7, wherein repeating the loop is terminated when specified conditions for terminating the loop are achieved.
9. A computer-readable storage medium storing instructions that when executed by a computer causes the computer to perform a method for increasing performance of a platform-independent virtual machine in executing a sequence of platform-independent codes generated by a high-level language compiler for the platform-independent virtual machine, the method comprising: retrieving a platform-independent code to be executed by the platform-independent virtual machine; locating a sequence of native code instructions that, when executed, will perform an action associated with the platform-independent code; executing the sequence of native code instructions; storing a copy of the sequence of native code instructions associated with the platform-independent code in a cache; and if the platform-independent code defines a loop, saving a pointer to a beginning of the loop within the cache so that the pointer can be used to execute the loop from the cache.
10. The computer-readable storage medium of claim 9, the method further comprising repeating the steps of obtaining, locating, performing, storing, and saving until there are no more platform-independent codes to be executed.
11. The computer-readable storage medium of claim 10, wherein the pointer is saved in a table indexed by a position of the platform-independent code in the sequence of platform-independent codes.
12. The computer-readable storage medium of claim 11, wherein if the platform-independent code defines a loop, the method further comprises storing a branch instruction in the cache at an end of the sequence of native code instructions associated with the platform-independent code so that the sequence of native code instructions can be repeated.
13. The computer-readable storage medium of claim 12, the method further comprising executing the sequence of native code instructions stored in the cache that is associated with the loop.
14. The computer-readable storage medium of claim 13, wherein executing the sequence of native code instructions associated with the loop includes executing the branch instruction stored in the cache.
15. The computer-readable storage medium of claim 14, wherein executing the branch instruction stored in the cache causes the sequence of native code instructions for the loop to be repeated without additional reference to the sequence of platform-independent codes.
16. The computer-readable storage medium of claim 15, wherein repeating the loop is terminated when specified conditions for terminating the loop are achieved.
17. An apparatus that facilitates increasing performance of a platform-independent virtual machine in executing a sequence of platform-independent codes generated by a high-level language compiler for the platform-independent virtual machine, comprising: a retrieving mechanism that is configured to retrieve a platform-independent code to be executed by the platform-independent virtual machine; a locating mechanism that is configured to locate a sequence of native code instructions that, when executed, will perform an action associated with the platform-independent code; an executing mechanism that is configured to execute the sequence of native code instructions; a storing mechanism that is configured to store a copy of the sequence of native code instructions associated with the platform-independent code in a cache; and a saving mechanism that is configured to save a pointer to a beginning of a loop within the cache so that the pointer can be used to execute the loop from the cache.
18. The apparatus of claim 17, further comprising a repeating mechanism that is configured to repeat the steps of obtaining, locating, performing, storing, and saving until there are no more platform-independent codes to be executed.
19. The apparatus of claim 18, wherein the saving mechanism is configured to save the pointer in a table indexed by a position of the platform-independent code in the sequence of platform-independent codes.
20. The apparatus of claim 19, wherein the storing mechanism is further configured to store a branch instruction in the cache at an end of the sequence of native code instructions associated with the platform-independent code so that the sequence of native code instructions can be repeated.
21. The apparatus of claim 20, wherein the executing mechanism is further configured to execute the sequence of native code instructions stored in the cache that is associated with the loop.
22. The apparatus of claim 21, wherein the executing mechanism is further configured to execute the sequence of native code instructions associated with the loop including executing the branch instruction stored in the cache.
23. The apparatus of claim 22, wherein executing the branch instruction stored in the cache causes the executing mechanism to repeat the sequence of native code instructions for the loop without additional reference to the sequence of platform-independent codes.
24. The apparatus of claim 23, wherein the repeating mechanism is further configured to terminate the loop when specified conditions for terminating the loop are achieved.
PCT/US2001/046840 2000-11-13 2001-11-08 Method and apparatus for increasing performance of an interpreter WO2002052409A2 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
GB0308866A GB2384089B (en) 2000-11-13 2001-11-08 Method and apparatus for increasing performance of an interpreter
AU2002245075A AU2002245075A1 (en) 2000-11-13 2001-11-08 Method and apparatus for increasing performance of an interpreter

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US71276100A 2000-11-13 2000-11-13
US09/712,761 2000-11-13

Publications (2)

Publication Number Publication Date
WO2002052409A2 true WO2002052409A2 (en) 2002-07-04
WO2002052409A3 WO2002052409A3 (en) 2004-02-26

Family

ID=24863448

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2001/046840 WO2002052409A2 (en) 2000-11-13 2001-11-08 Method and apparatus for increasing performance of an interpreter

Country Status (3)

Country Link
AU (1) AU2002245075A1 (en)
GB (1) GB2384089B (en)
WO (1) WO2002052409A2 (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2005008478A3 (en) * 2003-07-15 2005-09-15 Transitive Ltd Method and apparatus for performing native binding
EP2015177A1 (en) * 2003-07-15 2009-01-14 Transitive Limited Method and apparatus for performing native binding
CN107179935A (en) * 2016-03-11 2017-09-19 华为技术有限公司 A kind of instruction executing method and virtual machine
US11966727B2 (en) 2016-03-11 2024-04-23 Lzlabs Gmbh Load module compiler

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO1998059292A1 (en) * 1997-06-25 1998-12-30 Transmeta Corporation Improved microprocessor

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4991088A (en) * 1988-11-30 1991-02-05 Vlsi Technology, Inc. Method for optimizing utilization of a cache memory
US5768593A (en) * 1996-03-22 1998-06-16 Connectix Corporation Dynamic cross-compilation system and method

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO1998059292A1 (en) * 1997-06-25 1998-12-30 Transmeta Corporation Improved microprocessor

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
ALTMAN E R ET AL: "WELCOME TO THE OPPORTUNITIES OF BINARY TRANSLATION" COMPUTER, IEEE COMPUTER SOCIETY, LONG BEACH., CA, US, US, vol. 33, no. 3, March 2000 (2000-03), pages 40-45, XP001075148 ISSN: 0018-9162 *
HSIEH C-H A ET AL: "Java bytecode to native code translation: the Caffeine prototype and preliminary results" PROCEEDINGS OF THE 29TH. ANNUAL IEEE/ACM INTERNATIONAL SYMPOSIUM ON MICROARCHITECTURE. MICRO-29. PARIS, DEC. 2 - 4, 1996, PROCEEDINGS OF THE ANNUAL IEEE/ACM INTERNATIONAL SYMPOSIUM ON MICROARCHITECTURE. (MICRO), LOS ALAMITOS, IEEE COMP. SOC. PRESS, U, vol. SYMP. 29, 2 December 1996 (1996-12-02), pages 90-97, XP010206088 ISBN: 0-8186-7641-8 *
KAZI I H ET AL: "Techniques for obtaining high performance in Java programs" ACM COMPUTING SURVEYS, ACM, NEW YORK, US, US, vol. 32, no. 3, 3 September 2000 (2000-09-03), pages 213-240, XP002958726 ISSN: 0360-0300 *

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2005008478A3 (en) * 2003-07-15 2005-09-15 Transitive Ltd Method and apparatus for performing native binding
US7434209B2 (en) 2003-07-15 2008-10-07 Transitive Limited Method and apparatus for performing native binding to execute native code
EP2015177A1 (en) * 2003-07-15 2009-01-14 Transitive Limited Method and apparatus for performing native binding
US8091076B2 (en) 2003-07-15 2012-01-03 International Business Machines Corporation Dynamic native binding
US8108842B2 (en) 2003-07-15 2012-01-31 International Business Machines Corporation Method and apparatus for performing native binding
CN107179935A (en) * 2016-03-11 2017-09-19 华为技术有限公司 A kind of instruction executing method and virtual machine
CN107179935B (en) * 2016-03-11 2021-01-29 华为技术有限公司 Instruction execution method and virtual machine
US11966727B2 (en) 2016-03-11 2024-04-23 Lzlabs Gmbh Load module compiler

Also Published As

Publication number Publication date
GB0308866D0 (en) 2003-05-21
WO2002052409A3 (en) 2004-02-26
AU2002245075A1 (en) 2002-07-08
GB2384089B (en) 2005-07-13
GB2384089A (en) 2003-07-16

Similar Documents

Publication Publication Date Title
AU780946B2 (en) Method and apparatus for debugging optimized code
US5999732A (en) Techniques for reducing the cost of dynamic class initialization checks in compiled code
US6513156B2 (en) Interpreting functions utilizing a hybrid of virtual and native machine instructions
US6021273A (en) Interpreter generation and implementation utilizing interpreter states and register caching
US6158048A (en) Method for eliminating common subexpressions from java byte codes
US6078744A (en) Method and apparatus for improving compiler performance during subsequent compilations of a source program
US6381737B1 (en) Automatic adapter/stub generator
US6363522B1 (en) Method and apparatus for handling exceptions as normal control flow
US7124407B1 (en) Method and apparatus for caching native code in a virtual machine interpreter
JP2000267862A (en) Hybrid just-in-time compiler for minimizing consumption of resources
US6243668B1 (en) Instruction set interpreter which uses a register stack to efficiently map an application register state
EP2082318A1 (en) Register-based instruction optimization for facilitating efficient emulation of an instruction stream
WO2000017747A1 (en) Opimizing symbol table lookups in platform-independent virtual machines
WO2003003215A2 (en) Method and apparatus to facilitate debugging a platform-independent virtual machine
US6931638B2 (en) Method and apparatus to facilitate sharing optimized instruction code in a multitasking virtual machine
US7051323B2 (en) Method and apparatus for initializing romized system classes at virtual machine build time
JPH06309178A (en) Method and computer system for processing interruption by interruption processing cord
RU2128362C1 (en) Device for preparation of calling image and its execution
JP4799016B2 (en) Method and device for calling functions
WO2002052409A2 (en) Method and apparatus for increasing performance of an interpreter
JPH11134198A (en) Processor and method for compilation, device and method for program execution, and program storage medium
US20020095664A1 (en) Pre-interpretation and execution method for Java computer program language
WO1997014096A1 (en) System and method for debugging computer software
CN117076052A (en) Mode jump method, device, electronic equipment and storage medium
Cailliau Interpreter for P4-code an emulator for the Pascal HSC

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A2

Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NO NZ PH PL PT RO RU SD SE SG SI SK SL TJ TM TR TT TZ UA UG UZ VN YU ZA ZW

AL Designated countries for regional patents

Kind code of ref document: A2

Designated state(s): GH GM KE LS MW MZ SD SL SZ TZ UG ZW AM AZ BY KG KZ MD RU TJ TM AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG

ENP Entry into the national phase in:

Ref document number: 0308866

Country of ref document: GB

Kind code of ref document: A

Free format text: PCT FILING DATE = 20011108

Format of ref document f/p: F

121 Ep: the epo has been informed by wipo that ep was designated in this application
DFPE Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed before 20040101)
WWE Wipo information: entry into national phase

Ref document number: 0308866.3

Country of ref document: GB

REG Reference to national code

Ref country code: DE

Ref legal event code: 8642

122 Ep: pct application non-entry in european phase
NENP Non-entry into the national phase in:

Ref country code: JP

WWW Wipo information: withdrawn in national office

Country of ref document: JP