EP0314370B1 - Bounded lag distributed discrete event simulation method and apparatus

Info

Publication number
EP0314370B1
Authority
EP
European Patent Office
Prior art keywords
event
subsystem
simulation
time
events
Prior art date
Legal status
Expired - Lifetime
Application number
EP88309768A
Other languages
German (de)
French (fr)
Other versions
EP0314370A2 (en)
EP0314370A3 (en)
Inventor
Boris Dmitrievich Lubachevsky
Current Assignee
AT&T Corp
Original Assignee
AT&T Corp
Priority date
Filing date
Publication date
Application filed by AT&T Corp
Publication of EP0314370A2
Publication of EP0314370A3
Application granted
Publication of EP0314370B1
Anticipated expiration
Expired - Lifetime (current status)

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F30/00: Computer-aided design [CAD]
    • G06F30/20: Design optimisation, verification or simulation

Definitions

  • Block 100 is the flow control block. It tests whether the floor simulation time, T_floor, is less than the end of the simulation time, T_end. As long as T_floor < T_end, the simulation continues by passing control to block 110. When T_floor reaches or exceeds T_end, the simulation ends. Block 110 determines the floor simulation time of the simulated system at each iteration. That is, block 110 determines the lowest event time of the scheduled events, dispersed among the event lists (Π_i) of controllers 51 (C_i), that are yet to be simulated.
  • More specifically, block 110 evaluates the floor simulation time in accordance with the equation T_floor = min{T_i : 1 ≤ i ≤ N}, where N is the total number of node controllers 51 and T_i is the time of the event, e_i, which has the earliest scheduled time among the events e′ in the event list Π_i; i.e., T_i = min{t(e′) : e′ ∈ Π_i}, where t(e′) denotes the scheduled simulation time of event e′. Block 110 can be implemented in each of the controllers by having each controller broadcast to all other controllers (via network 52) its T_i and evaluate for itself the minimum T_i, which defines T_floor. Alternatively, block 110 can be implemented within communications and common processing network 52 by having controllers 51 send their T_i values to network 52, and having network 52 select the minimum T_i and broadcast it as T_floor back to the controllers.
  • Following the evaluation of T_floor, each of the controllers evaluates its earliest "at risk" time. This is accomplished by network 52 distributing the T_i information to neighboring controllers, as required, and each controller C_i evaluates the "at risk" demarcation point, α_i, from the T_i information.
  • This "at risk" point is defined as the earliest time at which changes at the neighboring controllers can affect the history simulated by the controller, based on the neighboring controllers' own scheduled events or based on a response to an event from the controller itself (reflection). This is expressed by the following equation: α_i = min over j ≠ i of min(T_j + d(j,i), T_i + d(i,j) + d(j,i)), where d(j,i) is the minimum delay before activity at node j can affect node i.
  • Each processor C_i then simulates all or some of the scheduled events whose times are earlier than α_i.
  • The time T_i is advanced with each simulation of an event, and the simulated event is deleted from Π_i.
  • If new events are called to be scheduled, those events are sent to network 52 for transmission to the appropriate node controllers.
  • If the execution of events is called to be blocked, that information is also sent to network 52, and thereafter to the appropriate node controllers for modification of the event lists.
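A minimal sketch of one pass through this flowchart may help; the data layout (per-node heaps of (time, event) pairs, a matrix of minimum inter-node delays) and all names here are illustrative assumptions, not the patent's:

```python
import heapq

def bounded_lag_round(event_lists, delays, t_end, B):
    """One pass of the FIG. 3 loop for two or more nodes (sketch only).
    event_lists[i] -- node i's heap of (time, event) pairs
    delays[j][i]   -- minimum lag before node j can affect node i
    B              -- the bounded lag interval"""
    # Block 110: the floor simulation time over all pending events.
    t = [lst[0][0] if lst else float("inf") for lst in event_lists]
    t_floor = min(t)
    if t_floor >= t_end:                 # Block 100: simulation is over.
        return None
    simulated = []
    for i, lst in enumerate(event_lists):
        # alpha_i: earliest time another node's pending event, or a
        # reflection of node i's own earliest event, could affect node i.
        alpha = min(min(t[j] + delays[j][i],
                        t[i] + delays[i][j] + delays[j][i])
                    for j in range(len(t)) if j != i)
        # Simulate only events that are not "at risk" and fall within
        # the bounded lag window; new-event scheduling is omitted here.
        horizon = min(alpha, t_floor + B)
        while lst and lst[0][0] < horizon:
            simulated.append((i,) + heapq.heappop(lst))
    return t_floor, simulated
```

For instance, three fully connected nodes one delay unit apart with pending events at times 0.0, 0.5 and 3.0 yield T_floor = 0.0, and one round simulates the first two events while the third remains at risk.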
  • Since there are no events scheduled for node controller 24 between the time of T_floor and point 50, no progress in simulations is made by this controller. Concurrently, node controller 22 simulates event 33 (since it is positioned at T_floor and no other event can affect it), and node 25 simulates event 34, since neither node 26 nor node 24 (the closest nodes) have any events scheduled prior to the time of event 34. Node 27 probably also simulates event 36, but this is not certain, since FIG. 1 does not show all of the neighbors of node controller 27.
  • On the next iteration, event 39 is simulated and event 37 at node 27 is simulated, but at node 26 event 38 is not simulated because it is beyond the "at risk" point of node 26 caused by the position of event 37 at node 27.
  • The next T_floor moves to the time of event 38, and the process continues to simulate additional events.
  • Here, neighbors(i) refers to nodes that communicate directly with node i.
  • The auxiliary variable β_i(k) represents an estimate of the earliest time when events can affect node i after traversing exactly k links. Starting from β_i(0) = T_i, it is updated as β_i(k+1) = min over j in neighbors(i) of (β_j(k) + d(j,i)), and α_i is obtained as the minimum of the β_i(k) over k ≥ 1. It can be shown that the iteration test (the iteration stops once every β_i(k) has advanced past T_floor + B) is always met within a relatively low number of steps, depending on the value of the bounded lag interval, B. To account for opaque periods, the evaluation of β is augmented to be β_i(k+1) = min over j in neighbors(i) of max(β_j(k) + d(j,i), op_ji), where op_ji is the end of the opaque period (when communication unblocks) for node j in the direction of node i.
  • Where the delays between subsystems are themselves insignificant, each α_i reduces to computing the minimum of the opaque periods relative to block i; to wit: α_i = min over j in neighbors(i) of op_ji.
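The iterative evaluation can be sketched as follows, with the same caveat that the data layout and names are illustrative assumptions (the patent presents the precise recurrences in its figures):

```python
def earliest_effect_times(t, neighbors, delay, B, t_floor, opaque=None):
    """Iterative evaluation of the alpha_i via auxiliary beta values
    (sketch; assumes strictly positive delays so beta keeps advancing).
    After k rounds, beta[i] estimates the earliest time events can
    affect node i after traversing exactly k links."""
    n = len(t)
    beta = list(t)                         # beta_i(0) = T_i
    alpha = [float("inf")] * n
    while min(beta) < t_floor + B:         # the iteration test
        nxt = [float("inf")] * n
        for i in range(n):
            for j in neighbors[i]:
                step = beta[j] + delay[j][i]
                if opaque is not None:     # opaque period from j toward i
                    step = max(step, opaque[j][i])
                nxt[i] = min(nxt[i], step)
        beta = nxt
        alpha = [min(a, b) for a, b in zip(alpha, beta)]
    return alpha
```

On a three-node chain with unit delays, pending times (0.0, 2.0, 5.0), B = 3 and T_floor = 0, the iteration stops after three rounds and yields α = (2.0, 1.0, 2.0); the value at node 0 comes from its own pending event reflected back through node 1.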
  • With each pass through the loop, T_floor increases, because the event determining that value can always be simulated.
  • The rate at which T_floor advances is affected, however, by how closely the nodes are separated and by where the events are scheduled.
  • One other observation that can be made is that the above-described procedure is very conservative, in the sense that an assumption is made that whenever an event is scheduled to be simulated at one node controller, it will always have an effect on the neighboring controller (after the appropriate delay).
  • Knowledge that an event scheduled for simulation will not affect a neighboring node can be put to use to simulate more events concurrently (have fewer node controllers that are idle). This can be accomplished by communicating not only the T_i of the earliest scheduled event in each list, but also the effect that it may have on neighboring node controllers. In fact, each controller can broadcast more than just the earliest scheduled event in its list.
  • the design decision that a practitioner must make, of course, is whether fewer iterations but more complex evaluations are economically justifiable.
  • FIG. 4 presents a block diagram of one realization for node controller 51.
  • Although this embodiment relates to use of the iterative method for evaluating α (with the use of the auxiliary variable β) when the delays between changes in one subsystem and the effect of those changes on other subsystems are not insignificant, it will be appreciated by the skilled artisan that the other realizations for computing α_i are substantially similar to this realization. It will also be appreciated that although node controller 51 is shown in FIG. 2 as one of a plurality of controllers, such a plurality can be realized within a single processor of sufficient computing power.
  • In FIG. 4, state register 56 defines the current state of the simulated subsystem and event list 53 specifies the events that are scheduled to be simulated.
  • Processor 54 is the event simulation processor, and it is responsive to state register 56, to event list 53 and to register 58.
  • Register 58 is the α register and, as the name implies, it stores the value of α for the controller. Based on the value of α and the scheduled time of the event at the top of the event list, processor 54 simulates the event in conformance with the state of the subsystem and develops a new state of the subsystem as well as, perhaps, new events.
  • the new state is stored in register 56, new events scheduled for the controller are stored in event list 53 via line 61, and events affecting other controllers are communicated out via line 62. Events produced at other controllers that may affect this controller are accepted via line 63 and stored in event list 53 through processor 54.
  • As indicated, processor 54 is the event simulation processor and processor 55 is the synchronization processor.
  • Processor 54 is shown in FIG. 4 as a separate processor but in practice a single processor may serve the function of both processor 54 and processor 55.
  • Processor 55 receives information from event list 53 concerning the time of the event in list 53 that possesses the earliest simulation time. It transmits that information to other controllers via line 64 and receives like information from all other relevant nodes via line 65. From that information processor 55 develops the value of T_floor and stores that value in register 57.
  • Processor 55 also receives α_j information from its neighbor controllers (controllers where changes can affect the controller directly) via line 66, and transmits its own α_i values via line 67. With the aid of T_floor and the other incoming information, processor 55 performs the iterative computations to develop the values of β_i(k) and α_i. Those values are stored by processor 55 in registers 57 and 58, respectively.
  • The above description assigns the task of computing T_floor to processor 55.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Hardware Design (AREA)
  • Evolutionary Computation (AREA)
  • Geometry (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)
  • Multi Processors (AREA)

Description

  • This invention relates to methods for simulating behaviour of a system and to systems for performing discrete event simulation.
  • Computer simulation has become very important in recent years because of the many applications where simulation of systems is highly beneficial. One such application is the use of simulation in the design of complex systems. These may be electronic systems such as a telecommunications switching network, robot-based flexible manufacturing systems, process control systems, health care delivery systems, transportation systems, and the like. Design verification through simulation plays an important role in speeding up the design and ensuring that it conforms to the specification. Another application is the use of simulation in analyzing, and tracking down, faults appearing in operating systems. Still another application is optimizing the operation of existing systems through repeated simulations, e.g., the operation of a manufacturing facility, the operation of the telecommunications network, scheduling and dispatching, etc. Yet another application is the use of simulation to predict the operation of systems which for various reasons cannot be tested (e.g., response to catastrophe).
  • Simulations can be classified into three types: continuous time, discrete time, and discrete event. Discrete event simulation means simulation of a system in which phenomena of interest change value or state at discrete moments of time, and no changes occur except in response to an applied stimulus. For example, a bus traveling a prescribed route defines a discrete event system in which the number of passengers can change only when the bus arrives at a bus stop along the route.
  • Of the three simulation classes, from a computation standpoint it appears that discrete event simulation is potentially the least burdensome approach because simulation of time when nothing happens is dispensed with. Of course, synchronization of the event simulations must be considered when parallelism is employed. Most often, a discrete event simulator progresses by operating on an event list. An event at the top of the list is processed, possibly adding events to the list in the course of processing, and the simulation time is advanced. Thereafter, the processed event at the top of the list is removed. This technique limits the speed of simulation to the rate at which a single processor is able to consider the events one at a time. In a parallel scheme, many processors are simultaneously engaged in the task, creating a potential for speeding up the simulation. Although techniques for performing event list manipulation and event simulation in parallel have been suggested, large scale performance improvements are achieved only by eliminating the event list in its traditional form. This is accomplished by distributed simulations.
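The event-list mechanism described above can be sketched as follows; the bus-route events echo the earlier example, and all names are illustrative, not from the patent:

```python
import heapq

def simulate(initial_events, t_end):
    """Minimal sequential discrete event simulator: a heap ordered by
    event time plays the role of the event list (illustrative sketch)."""
    event_list = list(initial_events)        # (time, name) pairs
    heapq.heapify(event_list)
    history = []
    while event_list and event_list[0][0] < t_end:
        time, name = heapq.heappop(event_list)   # event at the top of the list
        history.append((time, name))             # simulation clock advances here
        # Processing an event may add further events to the list, e.g.:
        if name == "bus_departs":
            heapq.heappush(event_list, (time + 5.0, "bus_arrives"))
    return history
```

Running `simulate([(0.0, "bus_departs")], 20.0)` processes the departure at time 0.0 and the arrival it schedules at time 5.0, illustrating how a single processor is limited to considering the events one at a time.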
  • In a distributed simulation, a number of parallel processors form a simulation multicomputer network, and it is the entire network that is devoted to a simulation task. More specifically, each processor within the network is devoted to a specific portion of the system that is simulated; it maintains its own event list and communicates event occurrences to appropriate neighbor processors. Stated conversely, if one views a simulated system as a network of interacting subsystems, distributed simulation maps each subsystem onto a processor of the multicomputer network.
  • Although distributed simulation provides parallelism which has the potential for improving the simulation speed, allocation and synchronization of work among the processors is a major concern which may impede the realization of the improvement goals. One well known approach for distributed simulation has been proposed by Chandy and Misra in "Distributed Simulation: A Case Study in Design and Verification of Distributed Programs," IEEE Transactions on Software Engineering, Vol. SE-5, No. 5, September 1979, pp. 440-452, and by Chandy, Holmes and Misra in "Distributed Simulation of Networks," Computer Networks, Vol. 3, No. 1, February 1979, pp. 105-113. In this approach, they recognize that physical systems to be simulated are composed of independent but interacting entities, and that those entities should be mapped onto a topologically equivalent system of logical nodes. Interaction between nodes is accomplished by the exchange of time-stamped messages which include the desired message information and identify the simulation time of the sending node. In accordance with the Chandy-Holmes-Misra approach, the nodes interact only via messages. There are no global shared variables, each node is activated only in response to a message, each node maintains its own clock, and finally, the time-stamps of the messages generated by each node are non-decreasing (in time). In this arrangement, each of the nodes works independently to process the events assigned to it in the correct simulated order. Thus, independent events can be simulated in parallel, within different nodes, even if they occur at different simulated times.
  • The time stamping is required, of course, to maintain causality so that in a message-receiving node an event that is scheduled for time T is not simulated when other incoming messages can still arrive with a time-stamp of less than T. Because of this, when a particular node is able to receive input from two sender nodes, it cannot simulate an event with any assurance that it would not be called upon to refrain from simulating the event, until it receives a message from both sender nodes. Waiting to receive a message from all inputs slows the simulation process down substantially and can easily result in a deadlock cycle where each node waits for a previous node, which amounts to the situation of a node waiting for itself.
  • To remedy the wait problem, artisans have been employing recovery and avoidance techniques. In the recovery technique, proposed by Chandy and Misra, upon detecting a deadlock, the processors in the network exchange messages in order to determine which of the waiting nodes can process their events in spite of the apparent deadlock. This is described in K. M. Chandy and J. Misra, "Asynchronous Distributed Simulation via a Sequence of Parallel Computations," Communications of the ACM, Vol. 24, No. 4, April 1981, pp. 198-206. In the avoidance technique, on the other hand, certain types of nodes send null messages under specific conditions even when no instructions for other nodes are called for. By this technique, nodes can be advanced more quickly in their simulation time. Jefferson and Sowizral, in "Fast Concurrent Simulation Using the Time Warp Mechanism," Distributed Simulation, 1985, The Society for Computer Simulation Multiconference, San Diego, California, suggest a different technique where each node is allowed to advance in its simulation time "at its own risk," but when a message arrives that would have caused some events to not have been simulated, then a "roll-back" is executed to undo the simulation that was done. Roll-back of a node may not be difficult, perhaps, but the fact that the simulated event(s) that need to be rolled back may have caused messages to be sent to other nodes does complicate the task substantially. To achieve the roll-back, Jefferson et al. suggest the use of "anti-messages," which are messages that parallel the original messages, except that they cause the performance of some action that "undoes" the original action.
  • Neither of these techniques is very good because each potentially expends an inordinate amount of computation time in making sure that the overall simulation advances properly. The null message approach expends computing resources in generating, sending, and reading the null messages; the recovery approach expends computing resources to detect and recover from a deadlock; and the roll-back approach expends computing resources in simulating events and then undoing the work that was previously done.
  • 1986 WINTER SIMULATION CONF. PROC., Washington, DC, 8th - 10th December 1986, pages 417-423, IEEE, New York, US; D.W. JONES: "Concurrent simulation: An alternative to distributed simulation" discloses a new approach to parallel simulation where previous work in this area has concentrated on distributed simulation. The new approach uses spatial decomposition to allow simulations to be run on networks of machines, where the message flow between processors in the network is related closely to the topology of the system being simulated. The disclosed approach, concurrent simulation, is based on temporal decomposition. This allows natural use to be made of the shared memory facilities and load-balancing capabilities of multiprocessors.
  • According to one aspect of this invention there is provided a method as claimed in claim 1.
  • According to another aspect of this invention there is provided a system as claimed in claim 14.
  • Recognizing that in physical systems there is always some delay between the time when one part of the system does something, and the time when another part of the system realizes that something was done, a simulation system is realized that avoids all blocking and advances the simulation time in an efficient manner. The efficiency is achieved by each node independently evaluating for itself a time interval that is not "at risk" and simulating all events that are within that evaluated time. A point in time is not "at risk" for a considered node if no other node in the system has a scheduled event that can affect the simulation at the considered node. Restricting the simulation of scheduled events at any one time to a chosen simulated time segment (bounded lag) beginning with the lowest simulation time found among the nodes also allows the evaluation of the "at risk" time interval to be based on only a subset of the nodes that can potentially affect the simulation at the considered node. This simplification results from the fact that there are delays between nodes, and that the lower bounds for those delays are fixed and known a priori.
  • In simulating systems where some nodes affect other nodes only through intermediate nodes, opaque periods can be experienced when, because of the specific process that is being simulated, such an intermediate node "promises" that a particular route emanating from this node would be busy for a set period of time, and thereby also "promises" that no other node can use this route as a conduit to affect other nodes. That, in effect, increases the propagation delay from the nodes that use the busy intermediate route, which, in turn, increases the allowance for the simulation time within the bounded lag that is not "at risk".
  • Brief Description of the Drawing
    • FIG. 1 illustrates the timing inter-relationships of events processed in a multi-processor environment with recognized delay between events and their effect on other events;
    • FIG. 2 depicts a block diagram of a multi-processor arrangement for simulating events ;
    • FIG. 3 presents a flow chart describing the steps carried out in each of the processors of FIG. 2 in the course of event simulation; and
    • FIG. 4 describes one realization for the node controllers of FIG. 2.
  • Detailed Description
  • As indicated above, one of the major problems with the prior art distributed simulation systems is their failure to realize and take advantage of the fact that a delay is always present between communicating physical subsystems. This invention takes advantage of this inherent delay, as described in detail below.
  • FIG. 1 presents a pictorial explanation that may aid in understanding the invention. Therein, vertical lines 21, 22, 23, 24, 25, 26 and 27 represent seven simulation nodes and their simulation time lines (where time advances upward). The circles along these lines (30-43) represent events that have been, or are scheduled to be, processed (i.e., simulated). These events may cause change of value or state, i.e., other events, at the originating node or at some other nodes. For purposes of discussion, it is assumed that node 24 is the node of concern, but it is understood that the considerations undertaken by node 24 are concurrently undertaken by all other nodes.
  • The horizontal distances between line 24 and the other lines represent the time delays for events in other nodes to affect the value or state at node 24. Accordingly, event 30 processed at node 21 for time T₁ may cause an event 40 at node 24 to be scheduled for some time not earlier than T₂, as shown in FIG. 1. The interval between T₁ and T₂ equals the delay between line 24 and line 21 (i.e., the horizontal distance between the lines).
  • The events depicted in FIG. 1 can be divided into two groups: those that have been simulated (and marked with a cross), and those that are yet to be simulated (un-crossed). The simulated events need not be considered for simulation because their effects are already represented by those events which have not been simulated (for example, event 30 has caused event 40; the first has been simulated, while the second is yet to be considered for processing).
  • Of the events that have yet to be simulated (33-43), event 33 in FIG. 1 is earliest in time from among all of the nodes. In this case, the time of event 33 forms the current floor simulation time of the system. The floor is depicted in FIG. 1 by dashed line 45.
  • A time interval beginning with the floor simulation time is selected for consideration. This time interval, which I call the bounded lag interval, can be a convenient time interval that takes into account the number of nodes in the system, the number of events to be simulated, and the computing power of the processors employed. All events scheduled within the bounded lag interval can be affected by events scheduled at other nodes within the bounded lag time interval, but only if those nodes are at a time distance from the affected node 24 that is less than the selected bounded lag interval. That reduces the number of nodes that need to be considered. In the FIG. 1 depiction, the bounded lag interval ends at dashed line 46; and as drawn, the nodes that need to be considered for their potential effect on node 24 are nodes 22, 23, 25 and 26. Nodes 21 and 27 are outside the bounded lag delay and their scheduled events within (or outside) the bounded lag need not be considered.
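The neighbor-filtering idea described above can be sketched in a few lines. The delay values and the bounded lag interval B below are assumptions chosen only to reproduce the FIG. 1 discussion (nodes 22, 23, 25 and 26 within reach of node 24), not figures taken from the patent.

```python
# Hypothetical delays (in simulated time units) from each other node to node 24.
delays_to_24 = {21: 7.0, 22: 3.0, 23: 1.5, 25: 2.0, 26: 4.0, 27: 8.0}
B = 5.0  # assumed bounded lag interval

# Only nodes whose delay to node 24 is smaller than B can affect events
# that node 24 schedules within the bounded lag interval.
relevant = sorted(node for node, delay in delays_to_24.items() if delay < B)
```

With these assumed values, `relevant` contains nodes 22, 23, 25 and 26, while nodes 21 and 27 are excluded, matching the FIG. 1 depiction.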
  • In considering the effects on node 24 in greater detail, one can take into account the time of the next scheduled event and reduce the number of nodes being considered even further. In the FIG. 1 embodiment, for instance, the next scheduled event is event 35, and as drawn, only nodes 23 and 25 need to be considered.
  • In determining whether event 35 is to be simulated, one can observe that only event 34 is scheduled early enough to have a possible impact on event 35. That event can be analyzed and if it is determined that it does not affect event 35, then event 35 can be simulated. Alternatively, it may prove even more efficient to refrain from processing event 35 simply because of the potential effect by event 34. In the following description, this alternative approach is taken because it saves the process of evaluating what event 34 may do.
  • FIG. 2 presents a block diagram of a concurrent event simulator. It comprises a plurality of node controllers 51 that are connected to a communications and common processing network 52. Network 52 performs the communication and synchronization functions, and in some applications it also performs some of the computations that are needed by the entire system. The node controllers can be realized with conventional stored program computers that may or may not be identical to each other. Each of the node controllers, which corresponds to a node in the FIG. 1 depiction, is charged with the responsibility to simulate a preassigned subsystem of the simulated system. Each controller Ci maintains an event list Πi that is executed by simulating each event in the list in strict adherence to the scheduled event simulation times. It should be remembered that the bounded lag interval is selectively fixed, and that in conformance with the selected bounded lag interval each controller 51 is cognizant of the processors with which it must interact to determine whether events are scheduled. The process by which the event simulations are enabled is carried out by the controllers in accordance with the above-described principles, as shown, for example, by the flowchart of FIG. 3.
  • The process described in FIG. 3 is carried out in iterations that encompass blocks 100 through 140. Block 100 is the flow control block. It tests whether the floor simulation time, Tfloor, is less than the end of the simulation time, Tend. As long as Tfloor < Tend, the simulation continues by passing control to block 110. When Tfloor reaches or exceeds Tend, the simulation ends. Block 110 determines the floor simulation time of the simulated system at each iteration. That is, block 110 determines the lowest event time of the scheduled events dispersed among the event lists (Πi) of controllers 51 (Ci) that are yet to be simulated. Expressed mathematically, block 110 evaluates the floor simulation time in accordance with the equation
    Tfloor = min { Ti : 1 ≤ i ≤ N },
    where N is the total number of node controllers 51 and Ti is the time of the event, ei, which has the earliest scheduled time among the events e' in the event list Πi; i.e.,
    Ti = min { t(e') : e' ∈ Πi }.
    Block 110 can be implemented in each of the controllers by having each controller broadcast to all other controllers (via network 52) its Ti and evaluate for itself the minimum Ti which defines Tfloor. Alternatively, block 110 can be implemented within communications and common processing network 52 by having controllers 51 send their Ti values to network 52, and having network 52 select the minimum Ti and broadcast it as Tfloor back to the controllers.
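The block 110 computation can be sketched directly from the two equations above. The event lists below are assumed example data; in the patented apparatus each controller would hold only its own list and exchange the Ti values via network 52.

```python
# Hypothetical event lists Pi_i, one per node controller; each entry is
# (scheduled simulation time, event payload).
event_lists = {
    1: [(12.0, "e1"), (15.5, "e2")],
    2: [(9.25, "e3")],
    3: [(11.0, "e4"), (30.0, "e5")],
}

# T_i: earliest scheduled time in each controller's event list.
T = {i: min(t for t, _ in events) for i, events in event_lists.items()}

# Tfloor: minimum of the T_i over all node controllers (block 110).
T_floor = min(T.values())
```

Here controller 2 holds the earliest event, so Tfloor comes out to 9.25.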
  • Having established the Tfloor, and knowing the system's bounded lag interval, B, that limits the number of neighboring controllers with which a controller must communicate, in accordance with block 120, each of the controllers evaluates its earliest "at risk" time. This is accomplished by network 52 distributing the Ti information to neighboring controllers, as required, and each controller Ci evaluating the "at risk" demarcation point, αi, from the Ti information. This "at risk" point is defined as the earliest time at which changes at the neighboring controllers can affect the history simulated by the controller, based on the neighboring controllers' own scheduled events or based on a response to an event from the controller itself (reflection). This is expressed by the following equation:
    αi = min over j ≠ i of min ( Tj + d(j,i), Ti + d(i,j) + d(j,i) ),
    where d(j,i) is said delay between changes occurring at subsystem j and their possible effect on subsystem i, and the second term accounts for reflection.
  • Having determined the value of αi, which corresponds to the point in the simulated time beyond which the simulation of events at controller Ci is "at risk", in accordance with block 130, processor Ci simulates all or some of the scheduled events whose times are earlier than αi. In block 140, the time Ti is advanced with each simulation of an event, and the simulated event is deleted from Πi. Concurrently, if in the course of simulating an event new events are called to be scheduled, then those events are sent to network 52 for transmission to the appropriate node controllers. Similarly, if the execution of events is called to be blocked, that information is also sent to network 52, and thereafter to the appropriate node controllers for modification of the event lists.
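One pass through blocks 110-140 can be sketched as follows. The form of αi used here (the neighbors' event times plus delay, together with a reflection term Ti + d(i,j) + d(j,i)), as well as all delay and event values, are assumptions for illustration; a real system would run each node's portion concurrently rather than in this sequential loop.

```python
import heapq

def iteration(event_lists, d, simulate):
    """One synchronous bounded-lag iteration over all node controllers.

    event_lists: {i: heap of (time, event)}, all assumed non-empty
    d: {(j, i): minimum delay from node j to node i}
    simulate: callback invoked for each event that is safe to process
    """
    T = {i: h[0][0] for i, h in event_lists.items()}   # block 110 inputs
    T_floor = min(T.values())                          # block 110
    done = []
    for i in event_lists:
        # Block 120: earliest "at risk" time for node i, including the
        # reflection term T_i + d(i,j) + d(j,i).
        alpha = min(
            min(T[j] + d[j, i], T[i] + d[i, j] + d[j, i])
            for j in T if j != i
        )
        # Blocks 130/140: simulate events strictly earlier than alpha,
        # deleting each simulated event from the list.
        while event_lists[i] and event_lists[i][0][0] < alpha:
            t, e = heapq.heappop(event_lists[i])
            done.append(simulate(i, t, e))
    return T_floor, done

# Two nodes with a symmetric delay of 2.0 time units (assumed values).
lists = {1: [(0.0, "a"), (5.0, "b")], 2: [(3.0, "c")]}
d = {(1, 2): 2.0, (2, 1): 2.0}
t_floor, done = iteration(lists, d, lambda i, t, e: (i, t, e))
```

In this example node 1's event at time 0.0 lies below its "at risk" point (4.0) and is simulated, while node 2's event at 3.0 lies beyond its "at risk" point (2.0) and must wait for the next iteration.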
  • The following carries out the example depicted in FIG. 1, assuming for the sake of simplifying the drawing that the situation remains static -- i.e., none of the depicted events are cancelled and no unshown events are scheduled. After events 30, 31, and 32 have been simulated (denoted by the crossed circles) all of the node controllers communicate their earliest scheduled event times, Ti, to network 52 where Tfloor is evaluated to correspond to dashed line 45. With reference to node 24, the bounded lag defined by the distance between dashed line 45 and dashed line 46 specifies that only nodes 22-26 need to be considered. In the course of that consideration, node 24 determines that scheduled event 34 at node 25 defines an "at risk" demarcation point 50. Since there are no events scheduled for node controller 24 between Tfloor and point 50, no progress in simulations is made by this controller. Concurrently, node controller 22 simulates event 33 (since it is positioned at Tfloor, and no other event can affect it), and node 25 simulates event 34 since neither node 26 nor node 24 (the closest nodes) have any events scheduled prior to the time of event 34. Node 27 probably also simulates event 36, but this is not certain, since FIG. 1 does not show all of the neighbors of node controller 27.
  • With events 33, 34, and 36 simulated and deleted from their respective event lists, the next iteration raises Tfloor to the time of event 37 (and correspondingly raises the horizon, or end, of the bounded lag interval to Tfloor + B). At this new level of Tfloor the "at risk" demarcation point for node 24 is at point 49 (dictated by event 39 of node 23), and in accordance with this new "at risk" point, both events 35 and 40 are simulated within node controller 24. This completes the simulation of events scheduled for node 24 which are shown in FIG. 1. Concurrently, event 39 is simulated at node 23 and event 37 is simulated at node 27, but event 38 at node 26 is not simulated because it is beyond the "at risk" point of node 26 caused by the position of event 37 at node 27. The next Tfloor moves to the time of event 38, and the process continues to simulate additional events.
  • The above description concentrates on evaluating the "at risk" demarcation point in connection with the direct effects of one node on another. In many physical systems, however, there are instances where one subsystem affects another subsystem indirectly, i.e., through some other subsystem. This condition gives rise to the possibility that the intermediate node is either busy and unavailable generally, or is somehow sensitized to serve some nodes and not others. Either situation can yield the condition that the delay from one node, A, to another node, C, through an intermediate node, B, is at times much longer than the sum of the delays A to B and B to C. I call this additional delay an opaque period. Opaque periods have the potential for pushing forward the "at risk" demarcation point and, therefore, it is beneficial to account for this potential in evaluating αi. Such accounting may be achieved by evaluating αi iteratively as follows.
    • 1: Set αi(0) = +∞; βi(0) = Ti; k = 0
    • 2: synchronize
    • 3: evaluate βi(k+1) = min over j ∈ neighbors(i) of ( βj(k) + d(j,i) )
    • 4: evaluate αi(k+1) = min ( αi(k), βi(k+1) )
    • 5: synchronize
    • 6: evaluate A = min over 1 ≤ i ≤ N of βi(k+1), and broadcast the value of A to all nodes
    • 7: if A ≤ Tfloor + B then increment k and return to step 3.
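The seven steps above can be sketched as a synchronous loop. The termination quantity A is taken here to be the minimum of the βi(k+1) values; that reading of step 6, together with the example delays and neighbor sets, is an assumption for illustration (and the two "synchronize" steps become implicit in single-process code).

```python
import math

def evaluate_alpha(T, neighbors, d, T_floor, B):
    """Iterative evaluation of the 'at risk' points alpha_i (steps 1-7)."""
    alpha = {i: math.inf for i in T}                       # step 1
    beta = dict(T)                                         # beta_i^(0) = T_i
    while True:                                            # steps 2/5 implicit
        beta = {                                           # step 3
            i: min(beta[j] + d[j, i] for j in neighbors[i])
            for i in T
        }
        alpha = {i: min(alpha[i], beta[i]) for i in T}     # step 4
        A = min(beta.values())                             # step 6 (assumed form)
        if A > T_floor + B:                                # step 7 (negated)
            return alpha

# Three nodes in a line (1 - 2 - 3) with unit delays; all values assumed.
neighbors = {1: [2], 2: [1, 3], 3: [2]}
d = {(1, 2): 1.0, (2, 1): 1.0, (2, 3): 1.0, (3, 2): 1.0}
alpha = evaluate_alpha({1: 0.0, 2: 2.0, 3: 5.0}, neighbors, d, T_floor=0.0, B=2.0)
```

Because each pass through step 3 adds at least the smallest link delay to every β, A grows monotonically and the loop terminates after a number of iterations governed by B, as the text notes.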
  • In the above, the term neighbors(i) refers to nodes that communicate directly with node i. The auxiliary variable βi(k) represents an estimate of the earliest time when events can affect node i after traversing exactly k links. It can be shown that the iteration test is always met within a relatively low number of steps, depending on the value of the bounded lag interval, B. To account for opaque periods, the evaluation of β is augmented to be
    βi(k+1) = min over j ∈ neighbors(i) of max ( opji, βj(k) + d(j,i) ),
    where opji is the end of the opaque period (when communication unblocks) for node j in the direction of node i.
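The augmentation can be sketched in one line: an influence leaving node j toward node i cannot arrive before the end of the opaque period opji, so each incoming term is clamped with a max(). All values below are assumed for illustration.

```python
def beta_step(beta, neighbors, d, op, i):
    # Augmented step 3: clamp each neighbor j's contribution to node i to
    # no earlier than the end of j's opaque period toward i.
    return min(max(op[j, i], beta[j] + d[j, i]) for j in neighbors[i])

# Node 2's channel toward node 1 is blocked until time 7.0; node 3's is open.
beta = {2: 4.0, 3: 6.0}
d = {(2, 1): 1.0, (3, 1): 2.0}
op = {(2, 1): 7.0, (3, 1): 0.0}
b1 = beta_step(beta, {1: [2, 3]}, d, op, 1)
```

Without the opaque period node 2 would reach node 1 at time 5.0; the clamp pushes that to 7.0, which here becomes the new βi value and pushes node 1's "at risk" point forward, exactly the effect described in the text.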
  • In some simulations it may turn out that the delays between subsystems are very small and that opaque periods predominate. In such systems the value of each αi reduces to the minimum of the opaque periods relative to block i; to wit:
    αi = min over j ∈ neighbors(i) of opji.
  • It may be observed that with each iteration the value of Tfloor increases, because the event determining that value can always be simulated. The rate at which Tfloor advances is affected, however, by the delays between the nodes and by the distribution of scheduled events. One other observation that can be made is that the above-described procedure is very conservative, in the sense that an assumption is made that whenever an event is scheduled to be simulated at one node controller, it will always have an effect on the neighboring controller (after the appropriate delay). In physical systems, however, there are many situations where one part of a system performs many operations that affect none of its neighbor subsystems, or affect only one or very few of them. Knowledge that an event scheduled for simulation will not affect a neighboring node can be put to use to simulate more events concurrently (i.e., to have fewer node controllers that are idle). This can be accomplished by communicating not only the Ti of the earliest scheduled event in each list, but also the effect that it may have on neighboring node controllers. In fact, each controller can broadcast more than just the earliest scheduled event. The design decision that a practitioner must make, of course, is whether fewer iterations but more complex evaluations are economically justifiable.
  • FIG. 4 presents a block diagram of one realization for node controller 51. Although this embodiment relates to use of the iterative method for evaluating α (with the use of the auxiliary variable β) when the delays between changes in one subsystem and the effect of those changes on other subsystems are not insignificant, it will be appreciated by the skilled artisan that the other realizations for computing αi are substantially similar to this realization. It will also be appreciated that although node controller 51 is shown in FIG. 2 as one of a plurality of controllers, such plurality can be realized within a single processor of sufficient computing power.
  • In FIG. 4, state register 56 defines the current state of the simulated subsystem and event list 53 specifies the events that are scheduled to be simulated. Processor 54 is the event simulation processor, and it is responsive to state register 56, to event list 53 and to register 58. Register 58 is the α register and, as the name implies, it stores the value of α for the controller. Based on the value of α and the scheduled time of the event at the top of the event list, processor 54 simulates the event in conformance with the state of the subsystem and develops a new state of the subsystem as well as, perhaps, new events. The new state is stored in register 56, new events scheduled for the controller are stored in event list 53 via line 61, and events affecting other controllers are communicated out via line 62. Events produced at other controllers that may affect this controller are accepted via line 63 and stored in event list 53 through processor 54.
  • Whereas processor 54 is the event simulation processor, processor 55 is the synchronization processor. Processor 54 is shown in FIG. 4 as a separate processor, but in practice a single processor may serve the function of both processor 54 and processor 55. Processor 55 receives information from event list 53 concerning the time of the event in list 53 that possesses the earliest simulation time. It transmits that information to other controllers via line 64 and receives like information from all other relevant nodes via line 65. From that information processor 55 develops the value of Tfloor and stores that value in register 57. Processor 55 also receives βj information from its neighbor controllers (controllers where changes can affect the controller directly) via line 66, and transmits its own βi values via line 67. With the aid of Tfloor and the other incoming information, processor 55 performs the iterative computations to develop the values of βi(k) and αi(k). Those values are stored by processor 55 in registers 57 and 58, respectively.
  • It is to be understood that the foregoing descriptions are merely illustrative of my invention and that other implementations and embodiments which incorporate variations from the above are possible.
  • For example, the above assigns the task of computing Tfloor to processor 55. However, it may be preferable to include computing means in the communication and common processing network of FIG. 2 where the Tfloor is computed and distributed to the various node controllers.

Claims (20)

  1. A method for simulating behavior of a system containing interacting subsystems, where a known minimum delay exists between changes occurring at one of said subsystems and the effects of said changes on others of said subsystems, CHARACTERIZED BY:
    a plurality of simulation steps, with each such step simulating events occurring at a simulation time in each of said subsystems, restricted to events whose simulation time lies within a preselected time interval beginning with the earliest simulation time of events that are yet to be simulated.
  2. A method as claimed in claim 1 wherein said step of simulating is followed by a step of advancing said preselected interval in simulated time.
  3. A method as claimed in claim 1 wherein said step of simulating concurrently performs a plurality of event simulations.
  4. A method as claimed in claim 1 wherein said step of simulating is preceded by a step of evaluating whether a scheduled event is to be simulated, based on states of or changes occurring at a subset of said subsystems in response to previous simulation steps.
  5. A method as claimed in claim 4 wherein the subsystems belonging to said subset are dependent on the simulation time span of said preselected interval.
  6. A method as claimed in claim 4 wherein said subset is controlled by the time span of said preselected interval and said delays between said subsystems.
  7. A method as claimed in claim 4 wherein said subset includes the subsystems whose delays are within said preselected interval.
  8. A method as claimed in claim 4 wherein said step of evaluating computes a value of α and said step of simulating simulates events whose simulated time is not greater than α.
  9. A method as claimed in claim 8 wherein said α for subsystem i, αi, is computed by
    αi = min over j ≠ i of min ( Tj + d(j,i), Ti + d(i,j) + d(j,i) ),
    where d(j,i) is said delay between changes occurring at subsystem j and their possible effect on subsystem i, d(i,j) is said delay between changes occurring at subsystem i and their possible effect on subsystem j, and Tj is the simulation time of the event in subsystem j having the earliest simulation time.
  10. A method as claimed in claim 4 wherein said step of evaluating includes consideration of the condition that the effect of a change in one of said subsystems is blocked from propagating through another one of said subsystems.
  11. A method as claimed in claim 4 wherein said step of evaluating computes a value of α and said step of simulating simulates events whose simulated time is not greater than α, where α for subsystem i, αi, is the value of αi(k) in the final iteration that follows the steps:
    1: Set αi(0) = +∞; βi(0) = Ti; k = 0
    2: synchronize
    3: evaluate βi(k+1) = min over j ∈ neighbors(i) of ( βj(k) + d(j,i) )
    4: evaluate αi(k+1) = min ( αi(k), βi(k+1) )
    5: synchronize
    6: evaluate A = min over 1 ≤ i ≤ N of βi(k+1), and broadcast the value of A to all nodes
    7: if A ≤ Tfloor + B then increment k and return to step 3,
    where Ti is the time of the event in subsystem i having the earliest simulation time, βi(k) is an auxiliary variable value at iteration k, and d(j,i) is said delay between changes occurring at subsystem j and their possible effect on subsystem i.
  12. A method as claimed in claim 4 wherein said step of evaluating computes a value of α and said step of simulating simulates events whose simulated time is not greater than α, where α for subsystem i, αi, is the value of αi(k) in the final iteration that follows the steps:
    1: Set αi(0) = +∞; βi(0) = Ti; k = 0
    2: synchronize
    3: evaluate βi(k+1) = min over j ∈ neighbors(i) of max ( opji, βj(k) + d(j,i) )
    4: evaluate αi(k+1) = min ( αi(k), βi(k+1) )
    5: synchronize
    6: evaluate A = min over 1 ≤ i ≤ N of βi(k+1), and broadcast the value of A to all nodes
    7: if A ≤ Tfloor + B then increment k and return to step 3,
    where Ti is the time of the event in subsystem i having the earliest simulation time, βi(k) is an auxiliary variable value at iteration k, d(j,i) is said delay between changes occurring at subsystem j and their possible effect on subsystem i, and opji is the opaque period for subsystem j in the direction of subsystem i.
  13. A method as claimed in claim 4 wherein said step of evaluating computes a value of α and said step of simulating simulates events whose simulated time is not greater than α, where α for subsystem i, αi, is equal to
    αi = min over j ∈ neighbors(i) of opji,
    where opji is the opaque period for subsystem j in the direction of subsystem i.
  14. A system for performing discrete event simulation of a system having a plurality of interacting subsystems, comprising:
    a plurality of blocks, each block simulating a preselected subsystem and including (a) an event list associated with said preselected subsystem and (b) means for maintaining information concerning the state of said preselected subsystem, and α, where α is an estimate of the earliest time for which the information in said associated state registers or the events in said associated event list can be modified by neighboring subsystems; and characterized by
    a controller system for repetitively performing simulation of events in said event lists whose times are earlier than α and re-evaluating the values of Tfloor and α, said controller system comprising a plurality of controllers communicating with said blocks, with each controller repetitively performing simulation of events in said event lists and re-evaluating the value of said α; and means for interconnecting said controllers, wherein said interconnecting means receives a Ti from each of said event lists, where Ti is the simulated time of the event to be simulated in said event list having the earliest simulation time, develops Tfloor which is min (Ti), and returns Tfloor to each of said controllers.
  15. A system as claimed in claim 14 wherein said controller system comprises a plurality of interconnected controllers communicating with said blocks, with each controller repetitively performing simulation of events in said event lists and re-evaluating the value of said α.
  16. A system as claimed in claim 14 wherein said controller system comprises an event evaluation processor associated with each of said blocks, connected to its associated event list, its associated means for maintaining state information, and to neighboring blocks, which are blocks that simulate subsystems that directly affect, or can be affected by, said preselected subsystem, where said event processor performs simulation of events in said associated event list and modifies said associated event list and the event lists of said neighboring blocks, as required by said simulation of events;
    a synchronization processor associated with each of said blocks, connected to its associated event list and means for maintaining α, for developing values of said α; and
    means for interconnecting said plurality of event evaluation processors and synchronization processors.
  17. A system as claimed in claim 16 wherein said synchronization processor develops a value of α based on values of Tj, from a preselected subset of said subsystems, where Tj is the simulation time of the event in subsystem j having the earliest simulation time.
  18. A system as claimed in claim 16 where each of said synchronization processors develops a value of α for its block based on a consideration of potential changes in neighbors of its block.
  19. A system as claimed in claim 18 wherein said consideration is iterative.
  20. A system as claimed in claim 16 where each of said synchronization processors develops a value of α for its block based on a consideration of opaque periods of its neighbors.
EP88309768A 1987-10-28 1988-10-19 Bounded lag distributed discrete event simulation method and apparatus Expired - Lifetime EP0314370B1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US114369 1987-10-28
US07/114,369 US4901260A (en) 1987-10-28 1987-10-28 Bounded lag distributed discrete event simulation method and apparatus

Publications (3)

Publication Number Publication Date
EP0314370A2 EP0314370A2 (en) 1989-05-03
EP0314370A3 EP0314370A3 (en) 1991-07-31
EP0314370B1 true EP0314370B1 (en) 1996-01-24

Family

ID=22354804

Family Applications (1)

Application Number Title Priority Date Filing Date
EP88309768A Expired - Lifetime EP0314370B1 (en) 1987-10-28 1988-10-19 Bounded lag distributed discrete event simulation method and apparatus

Country Status (6)

Country Link
US (1) US4901260A (en)
EP (1) EP0314370B1 (en)
JP (1) JPH0697451B2 (en)
CA (1) CA1294046C (en)
DE (1) DE3854935T2 (en)
IL (1) IL88146A (en)

Families Citing this family (61)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE3721719A1 (en) * 1987-07-01 1989-01-12 Lawrenz Wolfhard METHOD FOR TESTING A NETWORK BUILDING
JP2583949B2 (en) * 1988-03-10 1997-02-19 松下電器産業株式会社 Logic simulation method and logic simulation device
GB8812849D0 (en) * 1988-05-31 1988-07-06 Int Computers Ltd Logic simulator
US5452231A (en) * 1988-10-05 1995-09-19 Quickturn Design Systems, Inc. Hierarchically connected reconfigurable logic assembly
US5109353A (en) * 1988-12-02 1992-04-28 Quickturn Systems, Incorporated Apparatus for emulation of electronic hardware system
US5329470A (en) * 1988-12-02 1994-07-12 Quickturn Systems, Inc. Reconfigurable hardware emulation system
US5369593A (en) 1989-05-31 1994-11-29 Synopsys Inc. System for and method of connecting a hardware modeling element to a hardware modeling system
US5353243A (en) 1989-05-31 1994-10-04 Synopsys Inc. Hardware modeling system and method of use
US5161115A (en) * 1989-09-12 1992-11-03 Kabushiki Kaisha Toshiba System test apparatus for verifying operability
US5081601A (en) * 1989-09-22 1992-01-14 Lsi Logic Corporation System for combining independently clocked simulators
US5375074A (en) * 1990-01-23 1994-12-20 At&T Corp. Unboundedly parallel simulations
US5436846A (en) * 1990-05-29 1995-07-25 Grumman Aerospace Corporation Method of facilitating construction of a microwave system by appropriate measurements or determination of parameters of selected individual microwave components to obtain overall system power response
US7373587B1 (en) * 1990-06-25 2008-05-13 Barstow David R Representing sub-events with physical exertion actions
WO1992000654A1 (en) * 1990-06-25 1992-01-09 Barstow David R A method for encoding and broadcasting information about live events using computer simulation and pattern matching techniques
JPH0464164A (en) * 1990-07-03 1992-02-28 Internatl Business Mach Corp <Ibm> Simulation method and device
US5272651A (en) * 1990-12-24 1993-12-21 Vlsi Technology, Inc. Circuit simulation system with wake-up latency
WO1992014201A1 (en) * 1991-02-01 1992-08-20 Digital Equipment Corporation Method for multi-domain and multi-dimensional concurrent simulation using a conventional computer
JPH05216712A (en) * 1991-10-23 1993-08-27 Internatl Business Mach Corp <Ibm> Computer system, method for performing inner-viewing task on computer system and i/o processor assembly
US5794005A (en) * 1992-01-21 1998-08-11 The United States Of America As Represented By The Administrator Of The National Aeronautics And Space Administration Synchronous parallel emulation and discrete event simulation system with self-contained simulation objects and active event objects
US5633812A (en) * 1992-09-29 1997-05-27 International Business Machines Corporation Fault simulation of testing for board circuit failures
US5913051A (en) * 1992-10-09 1999-06-15 Texas Instruments Incorporated Method of simultaneous simulation of a complex system comprised of objects having structure state and parameter information
US5566097A (en) * 1993-03-05 1996-10-15 International Business Machines Corporation System for optimal electronic debugging and verification employing scheduled cutover of alternative logic simulations
US6363518B1 (en) * 1993-11-04 2002-03-26 Cadence Design Systems, Inc. Automated positioning of relative instances along a given dimension
US5680583A (en) * 1994-02-16 1997-10-21 Arkos Design, Inc. Method and apparatus for a trace buffer in an emulation system
GB2338325B (en) * 1994-10-03 2000-02-09 Univ Westminster Data processing method and apparatus for parallel discrete event simulation
US5617342A (en) * 1994-11-14 1997-04-01 Elazouni; Ashraf M. Discrete-event simulation-based method for staffing highway maintenance crews
US6058252A (en) * 1995-01-19 2000-05-02 Synopsys, Inc. System and method for generating effective layout constraints for a circuit design or the like
US6053948A (en) * 1995-06-07 2000-04-25 Synopsys, Inc. Method and apparatus using a memory model
US5684724A (en) * 1995-08-30 1997-11-04 Sun Microsystems, Inc. Flashback simulator
US5809283A (en) * 1995-09-29 1998-09-15 Synopsys, Inc. Simulator for simulating systems including mixed triggers
US5784593A (en) * 1995-09-29 1998-07-21 Synopsys, Inc. Simulator including process levelization
JP2927232B2 (en) * 1996-01-29 1999-07-28 富士ゼロックス株式会社 Distributed simulation apparatus and distributed simulation method
US5841967A (en) * 1996-10-17 1998-11-24 Quickturn Design Systems, Inc. Method and apparatus for design verification using emulation and simulation
US6195628B1 (en) 1997-03-13 2001-02-27 International Business Machines Corporation Waveform manipulation in time warp simulation
US5850538A (en) * 1997-04-23 1998-12-15 The United States Of America As Represented By The Administrator Of The National Aeronautics And Space Administration Priority queues for computer simulations
US6389379B1 (en) 1997-05-02 2002-05-14 Axis Systems, Inc. Converification system and method
US6421251B1 (en) 1997-05-02 2002-07-16 Axis Systems Inc Array board interconnect system and method
US6009256A (en) * 1997-05-02 1999-12-28 Axis Systems, Inc. Simulation/emulation system and method
US6134516A (en) * 1997-05-02 2000-10-17 Axis Systems, Inc. Simulation server system and method
US6321366B1 (en) 1997-05-02 2001-11-20 Axis Systems, Inc. Timing-insensitive glitch-free logic system and method
US6026230A (en) * 1997-05-02 2000-02-15 Axis Systems, Inc. Memory simulation system and method
US6031987A (en) 1997-05-06 2000-02-29 At&T Optimistic distributed simulation based on transitive dependency tracking
US5960191A (en) * 1997-05-30 1999-09-28 Quickturn Design Systems, Inc. Emulation system with time-multiplexed interconnect
US6059835A (en) * 1997-06-13 2000-05-09 International Business Machines Corporation Performance evaluation of processor operation using trace pre-processing
US5970240A (en) * 1997-06-25 1999-10-19 Quickturn Design Systems, Inc. Method and apparatus for configurable memory emulation
US6278963B1 (en) 1997-07-01 2001-08-21 Opnet Technologies, Inc. System architecture for distribution of discrete-event simulations
CA2295983A1 (en) * 1997-07-01 1999-01-14 Mil 3, Inc. System architecture for distribution of discrete-event simulations
JP3162006B2 (en) 1997-11-10 2001-04-25 核燃料サイクル開発機構 Simulation method of extraction system
JP3746371B2 (en) * 1998-04-09 2006-02-15 株式会社日立製作所 Performance simulation method
US6356862B2 (en) * 1998-09-24 2002-03-12 Brian Bailey Hardware and software co-verification employing deferred synchronization
DE19921128A1 (en) * 1999-05-07 2000-11-23 Vrije Universiteit Brussel Bru Process for generating target system-specific program codes, simulation processes and hardware configuration
US7257526B1 (en) * 2000-04-05 2007-08-14 Lucent Technologies Inc. Discrete event parallel simulation
US20020133325A1 (en) * 2001-02-09 2002-09-19 Hoare Raymond R. Discrete event simulator
US7219047B2 (en) * 2001-03-29 2007-05-15 Opnet Technologies, Inc. Simulation with convergence-detection skip-ahead
US7315805B2 (en) * 2004-02-05 2008-01-01 Raytheon Company Operations and support discrete event stimulation system and method
US7502725B2 (en) * 2004-04-29 2009-03-10 International Business Machines Corporation Method, system and computer program product for register management in a simulation environment
US20070130219A1 (en) * 2005-11-08 2007-06-07 Microsoft Corporation Traversing runtime spanning trees
US20080270213A1 (en) * 2007-04-24 2008-10-30 Athena Christodoulou Process risk estimation indication
JP5251586B2 (en) * 2009-02-19 2013-07-31 富士通セミコンダクター株式会社 Verification support program, verification support apparatus, and verification support method
WO2018054465A1 (en) * 2016-09-22 2018-03-29 Siemens Aktiengesellschaft Method and devices for the synchronised simulation and emulation of automated production systems
DE102018111851A1 (en) * 2018-05-17 2019-11-21 Dspace Digital Signal Processing And Control Engineering Gmbh Method for event-based simulation of a system

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4204633A (en) * 1978-11-20 1980-05-27 International Business Machines Corporation Logic chip test system with path oriented decision making test pattern generator
US4527249A (en) * 1982-10-22 1985-07-02 Control Data Corporation Simulator system for logic design validation
JPS59119466A (en) * 1982-12-27 1984-07-10 Fujitsu Ltd Inter-unit communication system of hardware logic simulator
JPS59163645A (en) * 1983-03-09 1984-09-14 Hitachi Ltd Function simulating method
GB8309692D0 (en) * 1983-04-09 1983-05-11 Int Computers Ltd Verifying design of digital electronic systems
US4636967A (en) * 1983-10-24 1987-01-13 Honeywell Inc. Monitor circuit
US4751637A (en) * 1984-03-28 1988-06-14 Daisy Systems Corporation Digital computer for implementing event driven simulation algorithm
DE3445030A1 (en) * 1984-12-11 1986-06-19 Philips Patentverwaltung Gmbh, 2000 Hamburg TRAFFIC SIMULATION DEVICE FOR TESTING SWITCHING SYSTEMS WITH CONSIDERATION OF THE SUBSCRIBER-SYSTEM INTERACTION
JPS61237162A (en) * 1985-04-15 1986-10-22 Mitsubishi Electric Corp Event simulating system

Also Published As

Publication number Publication date
JPH0697451B2 (en) 1994-11-30
JPH01155462A (en) 1989-06-19
IL88146A0 (en) 1989-06-30
DE3854935D1 (en) 1996-03-07
EP0314370A2 (en) 1989-05-03
US4901260A (en) 1990-02-13
EP0314370A3 (en) 1991-07-31
IL88146A (en) 1992-03-29
CA1294046C (en) 1992-01-07
DE3854935T2 (en) 1996-09-19

Similar Documents

Publication Publication Date Title
EP0314370B1 (en) Bounded lag distributed discrete event simulation method and apparatus
Righter et al. Distributed simulation of discrete event systems
Peacock et al. Distributed simulation using a network of processors
Heidelberger et al. Computer performance evaluation methodology
Bailey et al. Parallel logic simulation of VLSI systems
Bagrodia et al. MIDAS: Integrated design and simulation of distributed systems
Madisetti et al. WOLF: A rollback algorithm for optimistic distributed simulation systems
Cota et al. A modification of the process interaction world view
Bagrodia et al. A unifying framework for distributed simulation
Kurose et al. Computer-aided modeling, analysis, and design of communication networks
Fromm et al. Experiences with performance measurement and modeling of a processor array
Hsu et al. Performance measurement and trace driven simulation of parallel CAD and numeric applications on a hypercube multicomputer
Ayani Parallel simulation
Abrams et al. Implementing a global termination condition and collecting output measures in parallel simulation
Reed Parallel discrete event simulation: a case study
Tay et al. Performance analysis of time warp simulation with cascading rollbacks
Kumar et al. A study of achievable speedup in distributed simulation via null messages
MacNair An introduction to the research queueing package
Smith et al. Performance analysis of software for an MIMD computer
Chiola et al. Exploiting timed petri net properties for distributed simulation partitioning
Peterson et al. Performance of a globally-clocked parallel simulator
Theodoropoulos et al. Analyzing the Timing Error in Distributed Simulations of Asynchronous Computer Architectures.
Binns et al. An implementation for hybrid continuous variable/discrete event dynamic systems
Gimarco Distributed simulation using hierarchical rollback
Zarei et al. Performance analysis of automatic lookahead generation by control flow graph: some experiments

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

AK Designated contracting states

Kind code of ref document: A2

Designated state(s): DE GB

PUAL Search report despatched

Free format text: ORIGINAL CODE: 0009013

AK Designated contracting states

Kind code of ref document: A3

Designated state(s): DE GB

17P Request for examination filed

Effective date: 19920122

17Q First examination report despatched

Effective date: 19940228

RAP3 Party data changed (applicant data changed or rights of an application transferred)

Owner name: AT&T CORP.

GRAA (expected) grant

Free format text: ORIGINAL CODE: 0009210

AK Designated contracting states

Kind code of ref document: B1

Designated state(s): DE GB

REF Corresponds to:

Ref document number: 3854935

Country of ref document: DE

Date of ref document: 19960307

PLBE No opposition filed within time limit

Free format text: ORIGINAL CODE: 0009261

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: NO OPPOSITION FILED WITHIN TIME LIMIT

26N No opposition filed

REG Reference to a national code

Ref country code: GB

Ref legal event code: 732E

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: GB

Payment date: 20000914

Year of fee payment: 13

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: DE

Payment date: 20001221

Year of fee payment: 13

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: GB

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20011019

REG Reference to a national code

Ref country code: GB

Ref legal event code: IF02

GBPC Gb: european patent ceased through non-payment of renewal fee

Effective date: 20011019

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: DE

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20020702