GB2112186A - Improved distributed processing system - Google Patents


Info

Publication number
GB2112186A
Authority
GB
United Kingdom
Prior art keywords
shared memory
data
master controller
processor
system bus
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
GB08229487A
Other versions
GB2112186B (en)
Inventor
Simon S Chen
Artur Ichnowski
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Intersil Corp
Original Assignee
Intersil Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Intersil Inc filed Critical Intersil Inc
Publication of GB2112186A
Application granted
Publication of GB2112186B
Status: Expired

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F15/00 Digital computers in general; Data processing equipment in general
    • G06F15/16 Combinations of two or more digital computers each having at least an arithmetic unit, a program unit and a register, e.g. for a simultaneous processing of several programs
    • G06F15/163 Interprocessor communication
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F12/00 Accessing, addressing or allocating within memory systems or architectures
    • G06F12/02 Addressing or allocation; Relocation
    • G06F12/06 Addressing a physical block of locations, e.g. base addressing, module addressing, memory dedication
    • G06F12/0615 Address space extension
    • G06F12/0623 Address space extension for memory modules

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Hardware Design (AREA)
  • Software Systems (AREA)
  • Multi Processors (AREA)
  • Small-Scale Networks (AREA)
  • Information Transfer Systems (AREA)

Abstract

A distributed processing system has a master controller (29) for transferring data between processor modules (12). Each processor module has a shared memory which can be mapped into a shared memory window of the master controller address space. Once mapped into the shared memory window, the master controller has access to the mapped shared memory to read data from or write data into the mapped shared memory. Each shared memory on the system bus (14) has the same address with respect to the master controller; to address a particular shared memory, the master controller enables it to accept and recognise the address signals from the master controller, while the other shared memories on the system bus remain disabled from recognising the address signals. <IMAGE>

Description

SPECIFICATION

Improved distributed processing system

The present invention relates to data processing systems, and more particularly, to data processing systems having a plurality of distributed processors.
Data processing systems which have a single processor (often referred to as the central processing unit or CPU) are limited by the size and speed of the processing unit itself. In order to increase the data handling capabilities of a system, one approach has been to add one or more processors to the system rather than increase the size or speed of the single CPU.
Systems having more than one processor are often typically referred to as distributed processing systems. The architecture of a distributed system can take one of a variety of forms.
One approach has been to couple each added CPU to the system bus interconnecting the main CPU, memory and the input/output (I/O) devices.
An inherent disadvantage with this approach is that each CPU must compete with the other CPU's for access to the system bus in order to transfer data to or receive data from the system memory or any of the I/O devices on the bus.
Another disadvantage of this approach is that it generally increases the complexity of the system software. Both of these disadvantages can slow down the operating speed of each CPU. These drawbacks can be particularly troublesome for certain real time applications such as communication controllers which often process and transfer a large amount of data between I/O devices and cannot tolerate significant CPU delays caused by system bus contention.
One modification to the above architecture has been to combine each additional CPU with an I/O device to form a single module. Each module can include direct memory access transfer logic, and in some cases, execution memory for the CPU. This approach can lessen some of the bus contention problems since data transfers between the I/O device and the CPU of the module can take place off the system bus. However, even with this approach, significant delays caused by contention over the system bus can still occur, which the programming of the CPU must tolerate.
Also, the addition of each CPU-I/O module to the system bus further increases the complexity of the system software.
In order to reduce contention over the system bus, several different processor interconnection schemes have been proposed. For example, Lehman et al., U.S. Patent No. 3,551,894, suggest a system where each processor has its own serial data bus to connect the processor to every device to which data is to be transferred.
Other proposals include the system shown in Webster U.S. Patent No. 3,815,095, in which each processor has a multiplexer which selectively accepts data from a plurality of data buses, each data bus being connected to the output of a processor. In both of these systems, data bus contention problems are reduced since each processor has its own output data bus connected to each data destination. However, the physical interconnections between the processors are relatively complicated as a result of the large number of data buses. In addition, this approach does not readily lend itself to adding additional processors since the data bus of each processor must be connected to each device with which it communicates.
Still another approach to improve data transfers between distributed processors has been to provide each processor with a shared (or dual port) memory through which all data transfers to other processors within the system are passed.
The advantage of this architecture is that after a processor has completed processing a portion of data, the processed data may be placed in the shared memory for transfer to another processor.
The source processor is then free to turn to other tasks and is not delayed waiting for the bus to be available in order to transfer data. One such shared memory architecture is shown in Pirz, U.S.
Patent No. 4,149,242. The system in Pirz also has a separate data bus connecting each processor module to the processor module with which it is to communicate.
To avoid the complexity of multiple data buses interconnecting the processors, the shared memories of the processor modules can be interconnected by a single system bus. Data transfers between shared memories can be performed by a host computer or a central data transfer unit coupled to the system bus. Such an architecture is described in Kober, U.S. Patent 4,181,936, and also in an article entitled "Dual Port RAM Hikes Throughput In Input/Output Controller Board", Electronics, August 17, 1978.
In the system described in the Electronics article, each distributed processor has an input/output section and a shared memory to form a processor module. All data transfers between the processor and the system pass through the shared memory of the module. Each shared memory is assigned a unique portion of the system address space, which is defined by all the locations addressable by the host computer or other central data transfer unit coupled to the system bus. To transfer data from one processor shared memory to another, the host computer addresses the source shared memory and reads the data. The destination shared memory is then addressed to write the data. Since each processor module appears to the system as just another block of memory, additional modules can be added to the system with minimal impact on the system bus and the system software. However, a disadvantage of this approach is that each shared memory occupies a distinct portion of the system address space. Thus, the number of modules that can be added to the system bus is limited by the total address space of the host CPU. For example, if the host CPU has an address space of 64K (i.e., 65,536 memory address locations) and each block of shared memory occupies 8K (8,192 locations) of this space, then the system can only accommodate 8 such modules, leaving the host CPU with no remaining address space for addressing other devices such as the host memory. In addition, since each shared memory has a unique block of addresses, the host CPU can address only one shared memory at a time. For many applications, it is desirable to have the capability to write data simultaneously to several processors.
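The arithmetic behind this limitation can be made concrete. The snippet below is only an illustration using the example sizes given in the text (a 64K host address space and 8K shared memory blocks):

```python
# Address-space limitation of the unique-block scheme, using the
# example sizes from the text.
HOST_ADDRESS_SPACE = 64 * 1024   # 65,536 addressable locations
SHARED_MEMORY_SIZE = 8 * 1024    # 8,192 locations per module

# Each module claims a distinct block, so the host address space
# caps how many modules the bus can carry:
max_modules = HOST_ADDRESS_SPACE // SHARED_MEMORY_SIZE
print(max_modules)  # 8, with no space left over for host memory or I/O
```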
Accordingly, it is an object of the present invention to provide an improved distributed processing system capable of accommodating a large number of distributed processors.
It is another object of the present invention to provide an improved shared memory architecture in which data can be simultaneously written into one or more shared memories.
It is still another object of the present invention to provide a communication controller having a distributed processing architecture which facilitates the addition of further processor modules.
These and other objects and advantages are achieved in a distributed processing system which has a plurality of processor modules coupled to a system bus. Each processor module has a shared memory and a processor which communicates with the other processor modules through the shared memory. The shared memory of each processor module is coupled to the system bus and may be accessed by either the processor of the module or by a master controller also coupled to the system bus. The master controller effectuates the transfer of data from the shared memory of one processor module to the shared memory of another processor module over the system bus. The master controller transfers the data by addressing the shared memory of the originating processor module (i.e., the source shared memory), reading the data, and then addressing the destination shared memory to write the data into the destination shared memory.
Each shared memory on the system bus has the same address with respect to the master controller. That is, each shared memory is assigned the same portion of the master controller address space on the system bus. Thus, the shared memories occupy the same amount of the master controller address space regardless of the number of shared memories on the system bus.
To address a particular shared memory to read data from or write data into that shared memory, the master controller enables that particular shared memory to accept and recognize the address signals from the master controller while the other shared memories on the system bus remain disabled from recognizing the address signals. Prior to a write operation, the master controller can enable any number of the shared memories to accept the address signals and thereby simultaneously write the data into any number of the shared memories on the system bus.
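The enable-based scheme described above can be sketched in software. The model below is purely hypothetical (the patent describes hardware enable signals, not code); it shows how identically addressed memories combined with per-module enables permit a single write to land in several destinations at once:

```python
# Hypothetical software model of the enable-based addressing scheme:
# every shared memory answers to the same address range, but only
# enabled memories latch the address and data signals.

class SharedMemory:
    def __init__(self):
        self.mem = [0] * 8192   # 8K block, per the example sizes
        self.enabled = False

    def write(self, addr, value):
        # Disabled memories ignore the address signals entirely.
        if self.enabled:
            self.mem[addr] = value

modules = [SharedMemory() for _ in range(4)]

# Enable two destination memories, then perform one write cycle
# that is accepted by both simultaneously.
modules[1].enabled = True
modules[3].enabled = True
for m in modules:
    m.write(0x100, 42)

print([m.mem[0x100] for m in modules])  # [0, 42, 0, 42]
```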
In the Drawings: Figure 1 is a schematic block diagram of a communication controller in accordance with the present invention; Figure 2 is a schematic block diagram of a data link control unit of the communication controller of Figure 1; Figure 3 is schematic block diagram of a master controller of the communication controller of Figure 1; Figure 4 is a schematic representation of the mapping of portions of two data control unit shared memories into the master controller shared memory window of the master controller address space; Figure 5 is a more detailed block diagram of the parallel input/output port and shared memory of the data link control unit of Figure 2; Figure 6 is a schematic diagram of the memory selection logic of Figure 5; Figure 7 is a schematic diagram of the contention logic of Figure 5; and Figure 8 is a schematic block diagram of a line interface module of the communication controller of Figure 1.
A communication controller 10 is shown in Figure 1 to have a distributed processing architecture which includes a plurality of processor modules 12 which are coupled to a system bus 14. Each processor module has a plurality of input/output (I/O) ports 16 which are connected to a variety of peripheral devices such as CRT terminals 22 and line printers 24. In addition, one or more host computers 26 may be connected to an I/O port 16.
The controller 10 routes and switches data being transferred among the various devices connected to the controller I/O ports. For example, the communication controller 10 can function as a cluster controller for a plurality of CRT terminals 22 and perform as a front end processor for a host computer 26.
Each processor module has a data link control unit (DLCU) 20 and one or more line interface modules (LIM) 18 which interface between the I/O ports 16 and the DLCU of the module 12. The data link control unit 20 inputs the data transmitted from the I/O ports 16 of the associated line interface modules 18, performs any necessary processing and transmits the data to the processor module to which the destination device (e.g., printer 24) is connected. The destination DLCU performs additional processing of the data as required and transmits the data to the destination device through the appropriate LIM 18 and I/O ports 16.
Each line interface module 18 contains isolation, protection, and voltage conversion circuits as required for the particular device or classes of devices connected to the I/O ports 16.
In addition, each LIM 18 includes circuits for handling the "link level" functions specified by the "protocols" used by the devices connected to that LIM. A protocol is a set of rules or procedures for the transmission of data, which is observed by the transmitting and receiving devices. Accordingly, the communication controller 10 must observe the data transmission procedures expected by the transmitting and receiving devices connected to the I/O ports 16 of the controller 10.
The link level functions are a subset of those rules and include setting up and disconnecting a link and data formatting. These link level functions are implemented by the LIM's under the control of the DLCU's. The higher level aspects of the protocols which depend upon the particular application are performed by the DLCU's. If the destination device for the inputted data uses a different protocol than that which is used by the source device, the DLCU's can be programmed to convert the protocol of the source device to the protocol of the destination device.
Each DLCU 20 has a shared memory which is accessible by the local DLCU processor and also by a master controller 29 which effectuates the transfer of blocks of data between DLCU's. To transfer a block of data, the source DLCU signals the master controller over the system bus 14 that it has data stored in its shared memory, which is to be transferred. As will become more clear in the following detailed description, the master controller 29 maps a block of shared memory of the source DLCU (containing the data) into a portion of the address space of the master controller. This portion of the master controller address space will be referred to as the "shared memory window" of the master controller address space. Upon determining the identity of the destination DLCU, the master controller also maps a block of the destination DLCU shared memory into the master controller shared memory window and then reads the data from the source shared memory and writes the data into the destination shared memory, each word of data being transferred over the system bus 14.
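As a rough software analogy of this map-read-map-write sequence (class and method names below are illustrative, not from the patent):

```python
# Minimal, hypothetical model of the mapping-and-copy sequence:
# map the source input section into the window, read the data,
# map the destination output section, write the data.

class DLCU:
    def __init__(self):
        self.input_section = [0] * 4096   # data staged for transfer
        self.output_section = [0] * 4096  # data delivered by the master

class MasterController:
    def __init__(self):
        self.mapped_input = None
        self.mapped_output = None

    def transfer(self, source, dest, length):
        # Map the source input section into the window and read.
        self.mapped_input = source
        data = self.mapped_input.input_section[:length]
        # Map the destination output section into the window and write.
        self.mapped_output = dest
        self.mapped_output.output_section[:length] = data

a, b = DLCU(), DLCU()
a.input_section[:4] = [10, 20, 30, 40]
MasterController().transfer(a, b, 4)
print(b.output_section[:4])  # [10, 20, 30, 40]
```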
Each processor module 12 also includes a DLCU/LIM bus 30 through which the input and output data and control signals between the DLCU of the processor module and the LIM's associated with that DLCU pass. A typical DLCU is shown in Figure 2 to have a microprocessing unit 32 (MPU) which communicates with the LIM's 18 and the other elements of the DLCU through an internal bus 34. A LIM interface circuit 35 buffers the internal bus 34 to the DLCU/LIM bus 30. The MPU 32 includes a high speed microprocessor which may be a Zilog Z80A microprocessor, for example, which is described in the "Z80A CPU Technical Manual". The MPU 32 also includes logic to buffer the data, address and control lines of the internal bus 34.
The MPU 32, under the control of the program stored in a read-only-memory (ROM) 36 and a local random-access-memory 38, reads the input data from the LIM's 18. The data is then stored in the local memory 38 for further processing or is transferred directly to the shared memory 28 for transfer to the appropriate destination DLCU.
In the illustrated embodiment, the shared memory 28 includes a block of 8K (8,192) bytes of random-access-memory. The 8K bytes of shared memory are divided into two sections, an input section and an output section, of 4K bytes each. The shared memory 28 has dual ports to provide accessibility by both the MPU 32 and the master controller 29. Data to be transferred to another DLCU is placed in the input section of the shared memory 28. This data is then read by the master controller and stored in the output section of the destination DLCU. Each shared memory 28 has logic to resolve any contentions caused by an attempt by the master controller 29 and the local MPU to simultaneously access the shared memory 28 of the DLCU.
Control signals between the master controller 29 and a DLCU pass through a parallel input/output port 40 of each DLCU 20. For example, the MPU 32 can transmit an interrupt signal through the parallel input/output port 40 and the system bus 14 to the master controller or vice-versa. In addition, in order to map an input section or an output section of the shared memory 28 into the address space of the master controller, the master controller stores an enable signal in the parallel input/output port 40. The mapping operation will be described in greater detail below.
The DLCU also has a counter/timer circuit 42 to provide timing signals to the processor module 12. In the illustrated embodiment, the counter/timer circuit 42 is implemented with a Zilog Z80A-CTC chip which is compatible with the Zilog Z80A microprocessor. The Z80A-CTC has 4 independent channels, two of which are used to provide clock driven interrupt signals to the MPU 32. The other two channels provide a real time clock.
As shown in Figure 3, the master controller includes a microprocessing unit 44 which may be similar to the MPU 32 of each DLCU 20. The master controller 29 also has its own local memory 46 which is connected to the MPU 44 through the system bus 14. A floppy disc controller 47 may be used to control a floppy disc (not shown) to load programs into the master controller memory 46 and the local memory 38 (Figure 2) of each DLCU.
In the illustrated embodiment, the master controller MPU 44 has a memory address space of 64K bytes. That is, it can address up to 65,536 individual memory locations for read and write operations. A 64K master controller MPU address space is graphically represented in Figure 4 as a rectangular area 48. The top of the area 48 represents address 0 and the bottom of the area represents the last address of the address space, 65,535 (designated "64K").
A portion of the master controller address space is reserved for a shared memory window 50 which is used to address the shared memories 28 of the DLCU's 20. Here, 8K of the master controller address space is reserved for the shared memory window 50 since each DLCU memory has 8K memory locations. The shared memory window 50 is subdivided into an input section 52 and an output section 54, each of which is a 4K block of addresses.
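Using the example sizes, the window layout can be expressed numerically. The base address below is an assumption matching Figure 4, which places the window in the last 8K of the address space:

```python
# Hypothetical layout of the master controller address space,
# following the example sizes in the text (window in the top 8K).
ADDRESS_SPACE = 64 * 1024
WINDOW_SIZE = 8 * 1024
WINDOW_BASE = ADDRESS_SPACE - WINDOW_SIZE      # 0xE000
INPUT_BASE = WINDOW_BASE                       # 4K input section 52
OUTPUT_BASE = WINDOW_BASE + 4 * 1024           # 4K output section 54

def in_window(addr):
    """True if an address falls inside the shared memory window."""
    return WINDOW_BASE <= addr < ADDRESS_SPACE

print(hex(WINDOW_BASE), hex(OUTPUT_BASE))  # 0xe000 0xf000
print(in_window(0x8000))                   # False
```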
The MPU 32 of each DLCU also has a 64K byte address space. Two such blocks of 64K memory locations for two DLCU's 20a and 20b are represented by two rectangular areas 56 and 58 in Figure 4, respectively. Within each DLCU memory space is a shared memory which is indicated at 28a and 28b for the memory spaces of the DLCU's 20a, and 20b, respectively. As previously mentioned, each shared memory is subdivided into an input section and an output section of 4K bytes each. In Figure 4, the input sections for the shared memories 28a and 28b are indicated at 60a and 60b, respectively, with the output sections indicated at 62a and 62b, respectively.
In order for the master controller 29 to transfer data from a source DLCU such as DLCU 20a to a destination DLCU such as DLCU 20b, the master controller 29 maps the input section 60a of the DLCU 20a shared memory into the input section 52 of the master controller address space. The master controller 29 then maps the output section 62b of the DLCU 20b memory space into the output section 54 of the master controller address space. The master controller may then address the data stored within the input section 60a to read the data as if the memory locations at 60a were a portion of the local memory of the master controller. Similarly, the master controller addresses the memory locations in the output section 62b of the DLCU 20b to write the data read from the DLCU 20a into the output section 62b of the DLCU 20b. The particular manner in which the master controller 29 maps an input section or an output section of a DLCU shared memory into the master controller address space will be discussed in more detail below.
Although the block of addresses reserved for the shared memory window is shown in Figure 4 as located in the last 8K address block of the master controller address space, the shared memory window 50 may be located anywhere within the master controller address space.
Similarly, the block of shared memory locations may be located anywhere within the DLCU memory space. Furthermore, the sizes of the shared memory and the master controller address space are given for purposes of illustration only and are not intended to limit the scope of the present invention.
To initiate the mapping function and the transfer of data from one DLCU to another, the source DLCU having the data to be transferred transmits an "interrupt request" signal to the master controller 29 through the parallel input/output port 40 (Figure 2) of the DLCU. As shown in Figure 5, the parallel input/output port 40 includes a parallel input/output (PIO) 64. In the illustrated embodiment, the PIO 64 is a two port programmable device which provides a TTL (transistor-transistor-logic) compatible interface between the master controller 29 and the MPU 32 of the DLCU. The PIO may, for example, be implemented with a Zilog Parallel I/O controller integrated circuit chip which is compatible with the Zilog Z80A microprocessor.
The PIO 64 has a plurality of control registers for storing control signals such as interrupt signals and enable signals. In order for a DLCU to interrupt the master controller, the MPU 32 of the DLCU sets a bit in a control register of the PIO 64 which causes an interrupt request signal to be generated on a line 66 which is transmitted via interrupt logic 70 through the system bus 14 to the MPU 44 of the master controller.
When the MPU 44 of the master controller receives an interrupt request, the MPU 44 transmits an "interrupt acknowledge" signal over the system bus 14. The PIO 64 of the particular DLCU which generated the interrupt request signal responds by gating the contents of a control register onto the system bus 14 through a set of transceivers 68 interconnecting the PIO 64 to the system bus 14.
The data gated onto the system bus 14 in response to an interrupt acknowledge signal is referred to as an "interrupt vector" and is inputted by the MPU 44 of the master controller. The interrupt vector informs the master controller of the identity of the particular PIO circuit 64 (and DLCU 20) which generated the interrupt request, and of the particular subroutine to handle the interrupt request. The contents of the interrupt vector control register are typically set by the MPU 44 of the master controller at the time the system power is applied.
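A minimal sketch of this vectored dispatch follows; the vector value and handler name are hypothetical, chosen only to illustrate how a vector read during the acknowledge cycle selects a service subroutine:

```python
# Hypothetical sketch of vectored interrupt dispatch: the vector
# gated onto the bus selects the handler for the requesting DLCU.

def handle_source_request(dlcu_id):
    return f"map input section of DLCU {dlcu_id}"

# Vector table written by the master controller MPU at power-up;
# each PIO's vector register holds its key into this table.
vector_table = {0x10: handle_source_request}

def acknowledge_interrupt(vector, dlcu_id):
    # The vector read during the acknowledge cycle directs the
    # master controller to the proper service subroutine.
    return vector_table[vector](dlcu_id)

print(acknowledge_interrupt(0x10, 3))  # map input section of DLCU 3
```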
The PIO circuits 64 have built-in logic to determine the highest priority port of the PIO chips requesting an interrupt at the same time.
Utilizing the internal interrupt logic, the PIO chips of the DLCU's may be interconnected together in a "daisy chain" fashion to provide automatic interrupt priority control without external logic.
However, with a large number of DLCU's and hence a large number of PIO circuits coupled to the system bus 14, it may be desirable to add "look ahead" logic to accommodate a large number of such chips. An example of such a "look ahead" logic is described in the "PIO Technical Manual" and is represented by the interrupt logic 70 of Figure 5 for each DLCU and the interrupt control logic 72 (Figure 3) of the master controller 29.
Referring further to Figure 5, the shared memory 28 of each DLCU includes a dual port random-access memory (RAM) 74, which may be accessed by both the local MPU 32 and also by the master controller 29. One port of the RAM 74 is connected by a set of transceivers 76 to the DLCU internal bus 34 and the other port is connected by a set of transceivers 78 to the system bus 14. The transceivers 68, 76, and 78 may, for example, be implemented with LS244 and LS245-type integrated gate circuit chips.
To gain access to the RAM 74 of the shared memory 28, the local MPU 32 addresses the RAM 74 by placing address signals (which correspond to memory locations in the RAM 74) on the DLCU internal bus 34. A memory selection logic 80 decodes the high order bits of the address signals and generates a "select" signal on a line 82 to a contention logic 84 associated with the RAM 74. If the RAM 74 is not also being addressed by the master controller 29, the contention logic 84 produces an "enable" signal on a line 86 to the transceivers 76 which causes the transceivers 76 to gate the address signals and data signals from the DLCU internal bus 34 to the shared memory RAM 74. In this manner, the local MPU 32 can address the RAM 74 of the shared memory 28 and write data to the input section of the RAM 74, for transfer to another DLCU. After the data is written to the input section, the local MPU 32 sets a control bit in the PIO circuit 64 to generate an interrupt request signal as previously described. At this time, the MPU 32 also sets the counter/timer circuit 42 to generate a local interrupt request signal on a line 88 to the local MPU 32 if the system interrupt is not acknowledged by the master controller 29 within a predetermined time period.
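The arbitration rule can be modelled as a pure function. This is a simplification of the actual contention logic 84, which is combinational hardware described with Figure 7; the function name and signature are illustrative:

```python
# Hypothetical model of the dual-port contention rule: a port's
# select signal becomes an enable only if the other port is not
# already accessing the RAM.

def contention(local_select, master_select, master_busy, local_busy):
    """Return (local_enable, master_enable) for the dual-port RAM."""
    local_enable = local_select and not master_busy
    master_enable = master_select and not local_busy
    return local_enable, master_enable

# The local MPU gets through when the master controller is idle...
print(contention(True, False, False, False))   # (True, False)
# ...but is held off while the master controller holds the RAM.
print(contention(True, False, True, False))    # (False, False)
```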
Upon acknowledging the system interrupt request signal and determining the identity of the DLCU requesting the interrupt, the master controller 29 maps the input section of the shared memory 28 of that DLCU into the input section 52 of the shared memory window 50 of the master controller address space. To accomplish this, the master controller 29 addresses the PIO circuit 64 of the requesting DLCU and sets an input section control bit of a mapping control register in the PIO circuit 64 of the parallel input/output port 40.
The port 40 has an I/O address decoder 90 which decodes the address signals from the master controller and generates a "PIO enable" signal on a line 92 to the PIO circuit 64 if the address signals correspond to the address of the mapping control register of the PIO circuit 64. The PIO enable signal enables the mapping control register of the PIO circuit 64 to accept the data from the master controller on the system bus 14 through the transceivers 68.
When set, the input section control bit of the mapping control register generates an input section "mapping control" signal on a line 94 to a memory selection logic 96. The input section mapping control signal enables the input section of the shared memory 28 of that DLCU to accept address signals from the master controller on the system bus 14 as represented in Figure 4. The mapping control register has a second bit which is set by the master controller 29, to generate an "input acknowledge" signal on a line 93 to the counter/timer circuit 42. The arrival of the input acknowledge signal causes the counter/timer circuit 42 to generate a local interrupt request signal on line 88 to the local MPU.
Once mapped into its address space, the master controller can address the input section by providing address signals on the system bus 14.
The same address signals are presented to the shared memory 28 of each DLCU but only the input section which is mapped into the master controller address space will respond to the address signals from the master controller. The high order bits of the address signals are decoded by the memory selection logic 96 of the shared memory 28 which produces a "select" signal on a line 98 to the contention logic 84 when the input section mapping control signal on line 94 is active. If the local MPU is not already accessing the shared memory 28, the contention logic 84 will generate an "enable" signal on a line 100 to the transceivers 78 to gate the lower order address bits through the set of transceivers 78 to the shared memory RAM 74.
Having access to the input section of the source DLCU, the master controller can read the initial portion of the data stored therein to determine the identity of the destination DLCU.
These data signals are gated to the system bus 14 through the master controller enabled transceivers 78.
The output section of the destination DLCU shared memory is then mapped into the output section 54 of the shared memory window of the master controller address space in a manner similar to that of the input section. Thus, the master controller addresses the mapping control register of the PIO circuit 64 of the destination DLCU to set an output section control bit of the mapping control register which generates an output section mapping control signal on the line 95 of that DLCU. This mapping control signal enables the output section of the shared memory 28 of the destination DLCU to accept the address signals and data signals from the master controller through the transceivers 78.
In addition, more than one output section can be mapped into the shared memory window of the master controller. Thus, the master controller can set the output section control bit of more than one DLCU prior to addressing the output sections of the DLCU shared memories. In this manner the master controller can read data from the input section of the source DLCU and write data to one or more destination DLCU's.
After the master controller 29 has transferred the data from the input section of the source DLCU to the output section of a destination DLCU, the master controller sets a fourth bit in the mapping control register of the PIO circuit 64 of the destination DLCU to generate an "output request" signal on a line 104 from the PIO circuit 64 to the counter/timer circuit 42. The counter/timer circuit 42, in response to the output request signal, generates an interrupt request on line 88 to the MPU 32 which informs the MPU 32 that data has been transferred to the output section of its shared memory 28. The MPU 32, in response to the interrupt signal, reads the data from the output section of its shared memory, processes the data and transmits the data to the external destination device through the appropriate LIM 18 and I/O port 16 of the processor module.
The memory selection logic 96 of the shared memory 28 is shown in greater detail in Figure 6.
The memory selection logic 96 includes a 1-of-8 decoder 110 which may be an LS138 type integrated circuit chip, for example. The decoder 110 has three selection inputs A, B and C connected to three high order system address bits, SA12-SA14, respectively, and an enable input connected to the highest order address bit, SA15, of the system bus 14. These four high order system address bits are used to select a particular 4K block of memory locations addressable by the master controller.
The memory selection logic 96 further includes an AND gate 112 having one inverted input connected to the input section mapping control line 94 from the PIO circuit 64 (Figure 5) and another inverted input connected by a strap 114 to one of the eight outputs of the decoder 110. In the illustrated embodiment, the strap 114 is shown connected to an output 116 of the decoder 110. The output line 116 will be active, that is, a logical low, in response to a particular combination of states of the address bits SA12-SA15. That combination of states is the address of the input section of the shared memory of each DLCU. The address of the input section with respect to the master controller is easily shifted simply by connecting the strap 114 to another output of the decoder 110.
The output of the AND gate 112 is connected to an input of a NOR gate 118 which has an output 120 connected to the inverting input of a second AND gate 122. The output of the AND gate 122 is the memory select line 98 for master controller accesses, which is connected to the contention logic 84 (Figure 5). If the input section mapping control signal on line 94 is active (logical low) and the address of the input section of the shared memory is presented at the inputs of the decoder 110, the memory select signal on line 98 will become active (logical low) if the master controller is accessing memory (i.e., "SMEQ" is active). If the local MPU 32 is not already accessing the shared memory, the contention logic 84 (Figure 5) enables the transceivers 78 to transmit the address signals (and data signals) from the master controller to the RAM 74 of the shared memory. In this manner, the input section mapping control signal enables the shared memory input section to accept the address signals from the master controller.
The memory selection logic 96 further includes an additional AND gate 124 which has inverting inputs connected to the output section mapping control line 95 and to one of the eight outputs of the decoder 110 by strap 126. The output of the AND gate 124 is connected to the other input of the NOR gate 118. The logic operates in a similar manner to produce a memory select signal on the line 98 when the address of the output section is presented to the decoder 110 while the output section mapping control signal on line 95 is active during a memory request by the master controller 29. Here too, the output sections of the shared memories 28 may be assigned any 4K block of addresses within the master controller address space by the selective connecting of the straps 126 to the decoders 110 of each DLCU.
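The decode chain of Figure 6 can be sketched as a few Boolean functions. This is an illustrative model only: active-low hardware signals are represented here as Python booleans where `True` means "asserted", the strap settings are modelled as block numbers 0-7, and the assumption that the decoder is enabled when SA15 is low is a guess from the text (the polarity could equally be inverted).

```python
# Gate-level sketch of the memory selection logic of Figure 6.
# True = signal asserted (active low in the hardware); names besides the
# figure references are illustrative.

def decoder_output(addr, strap):
    """1-of-8 decoder (LS138-style): selection inputs SA12-SA14, gated by
    SA15. Returns True when the strapped output fires (assumes the
    decoder is enabled while SA15 is low)."""
    sa15 = (addr >> 15) & 1
    block = (addr >> 12) & 0x7           # SA12-SA14 pick 1 of 8 outputs
    return sa15 == 0 and block == strap

def memory_select(addr, in_map_active, out_map_active, smreq_active,
                  in_strap, out_strap):
    """True when this DLCU's shared memory should answer a master access."""
    in_hit = in_map_active and decoder_output(addr, in_strap)    # AND 112
    out_hit = out_map_active and decoder_output(addr, out_strap) # AND 124
    hit = in_hit or out_hit                                      # NOR 118
    return hit and smreq_active                                  # AND 122
```

Moving a strap to a different decoder output simply changes which 4K block the section answers to, exactly as the text notes for straps 114 and 126.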
Referring now to Figure 7, the contention logic 84 of the shared memory 28 is shown to include a pair of D-type flip flops 130 and 132. The flip flop 130 is set whenever the master controller is accessing the shared memory 28 of the DLCU.
Similarly, the flip flop 132 is set whenever the local MPU of the DLCU is accessing the shared memory 28. Accordingly, the D input of the flip flop 130 is connected to the master controller memory select line 98 from the master controller memory selection logic 96. The select line 98 is also connected to the input of a NAND gate 134 which has another input connected to the Q output of the second flip flop 132. Accordingly, if the master controller attempts to access the shared memory 28 (MC memory select signal active) while the local MPU is already accessing the shared memory (flip flop 132 set), a wait signal is generated on a line 136 to the system bus 14 (Figure 5) which is transmitted to the MPU of the master controller. This inhibits the master controller from simultaneously accessing the shared memory while the local MPU of the DLCU is accessing the shared memory. Similarly, a NAND gate 138 generates a wait signal on a line 140 to the local MPU 32 if the DLCU is attempting to access the shared memory (DLCU memory select signal active) while the master controller is already accessing the shared memory (flip flop 130 set).
The flip flops 130 and 132 can change state only in the presence of a clock signal (CPU φ) which is provided to the clock inputs of the flip flops 130 and 132. However, a time delay 142 is used to connect the clock signal line CPU φ to the clock input of the second flip flop 132 to induce a delay such that the clock signal arrives at the flip flop 130 before the clock signal arrives at the other flip flop 132. Thus, if the master controller and local MPU 32 should attempt to access the shared memory at the same time, the clock signal will arrive at the flip flop 130 first such that the Q output of the flip flop 130 will change state first.
Thus, the Q output of the flip flop 130 will become active (logical high) causing a wait signal to be transmitted on the line 140 to the local MPU.
When the Q output of the flip flop 130 becomes active, the Q̄ output also becomes active (logical low) which generates an enable signal on the line 100 to the transceivers 78 (Figure 5) connecting the system bus 14 to the RAM 74 of the shared memory 28. The Q̄ output of the flip flop 130 is also connected to the "clear" input of the second flip flop 132. Thus, when the Q̄ output of the flip flop 130 becomes active, the Q̄ output of the flip flop 132 becomes inactive (logical high) which disables the transceivers 76 (Figure 5) connecting the DLCU internal bus 34 to the RAM 74 of the shared memory 28. In this manner, the address and data busses of the system bus 14 are coupled to the RAM 74 and the address and data busses of the DLCU internal bus 34 are uncoupled from the RAM 74 when the flip flop 130 is set.
The Q̄ output of the flip flop 132 is similarly connected to the clear input of the flip flop 130 to reset the flip flop 130 and disable the transceivers 78 when the Q output of the flip flop 132 becomes active. This enables the transceivers 76 to conduct the address and data signals from the local MPU 32 to the shared memory 28 while uncoupling the address and data busses of the system bus 14 from the RAM 74.
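The contention behaviour of Figure 7 can be summarised with a small Python model. The names are illustrative, and the hardware clock-delay tiebreak is approximated by always resolving a simultaneous request in the master controller's favour, which is the net effect the text describes.

```python
# Behavioural sketch of the contention logic of Figure 7.
# mc = master controller access, mpu = local MPU access.

def arbitrate(mc_request, mpu_request, mc_holding, mpu_holding):
    """One arbitration step. mc_holding / mpu_holding model flip flops
    130 / 132 already set from a previous cycle.

    Returns (mc_wait, mpu_wait, enable_78, enable_76)."""
    # NAND 134: master controller waits while the local MPU holds the RAM.
    mc_wait = mc_request and mpu_holding
    # NAND 138: local MPU waits while the master controller holds the RAM.
    mpu_wait = mpu_request and mc_holding
    # Dead heat: the delayed clock lets flip flop 130 set first,
    # which clears flip flop 132 and makes the local MPU wait.
    if mc_request and mpu_request and not (mc_holding or mpu_holding):
        mpu_wait = True
    enable_78 = mc_request and not mc_wait    # system bus 14  -> RAM 74
    enable_76 = mpu_request and not mpu_wait  # internal bus 34 -> RAM 74
    return mc_wait, mpu_wait, enable_78, enable_76
```

At most one of the two transceiver enables is ever asserted, which is the property the cross-coupled clear inputs guarantee in hardware.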
As previously mentioned, each DLCU 20 can control up to four line interface modules 18 through which the data to and from the peripheral devices is transferred. A typical line interface module is shown in greater detail in Figure 8. The primary function of each line interface module 18 is to perform serial/parallel data conversion. For example, the LIM 18 can assemble five to eight bit characters from a binary serial data stream received from an I/O port 16.
The assembled characters are then inputted by the DLCU. Similarly, the LIM 18 serializes the parallel data from the DLCU into a transmitted sequence of binary pulses through the I/O port 16 to a peripheral device.
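The two conversions the LIM performs — assembling five- to eight-bit characters from a serial stream, and serializing them again — can be sketched as follows. The LSB-first bit order is an assumption (it is usual for asynchronous serial links but is not stated in the text), and the function names are invented for this sketch.

```python
# Sketch of the LIM's serial/parallel conversion, LSB transmitted first
# (an assumption; names are illustrative).

def assemble(bits, char_len=8):
    """Pack a list of 0/1 bits into characters of char_len bits each."""
    chars = []
    for i in range(0, len(bits) - char_len + 1, char_len):
        c = 0
        for j in range(char_len):
            c |= bits[i + j] << j   # bit j arrived j-th, LSB first
        chars.append(c)
    return chars

def serialize(chars, char_len=8):
    """Flatten characters back into a serial bit stream."""
    return [(c >> j) & 1 for c in chars for j in range(char_len)]
```

The `char_len` parameter stands in for the five- to eight-bit character sizes the text mentions.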
In the illustrated embodiment, each LIM has four input/output channels, each of which is represented by a card indicated in broken line at 152. Each input/output channel 152 includes a serial communication control circuit 154 which performs the formatting of the data for serial data communication as previously described. The serial communication control circuit may be implemented, for example, with a Zilog Z80 PIO controller integrated circuit chip as described previously and in addition, a Zilog Serial I/O controller (SIO) integrated circuit chip.
The SIO circuit is a programmable, dual channel device which is capable of handling asynchronous, synchronous, and synchronous bit oriented protocols such as IBM Bisync (binary synchronous communications), HDLC (high level data link control), SDLC (synchronous data link control), and other serial protocols. The SIO and PIO circuits of the serial communication control circuits 154 can, under the control of the DLCU, perform data link handling functions such as CRC (cyclic redundancy check) generation and checking, automatic flag or sync character insertion, and automatic zero insertion and deletion.
Each I/O channel 152 is provided with loop back gates indicated at 156, which are controlled by an output bit from the PIO circuit. The loop back gates 156 are used in an internal test mode in which the SIO circuit is disconnected from the user system and the SIO transmitter outputs of the channel are connected to the SIO receiver inputs.
This allows each DLCU to test whether the data is being properly transmitted.
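A minimal model of this loop-back self test follows; with the loop-back gates closed, whatever the SIO transmits should reappear unchanged at its own receiver without touching the external line. The class and method names are invented for illustration.

```python
# Minimal model of the loop-back self test. Names are illustrative.

class Channel:
    def __init__(self):
        self.loopback = False
        self.line_out = []   # bits driven onto the external line
        self.rx = []         # bits seen by the SIO receiver

    def transmit(self, bit):
        if self.loopback:
            self.rx.append(bit)       # gates 156 route TX back to RX
        else:
            self.line_out.append(bit)

def self_test(channel, pattern):
    """Return True if a transmitted pattern is received unchanged."""
    channel.loopback = True           # PIO output bit closes the gates
    channel.rx.clear()
    for bit in pattern:
        channel.transmit(bit)
    channel.loopback = False
    return channel.rx == list(pattern)
```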
The CRT terminal, computer or other external device is connected to the communication controller 10 at the connector 158. Each of the I/O channels 152 of the LIM 18 shown in Figure 8 is designed to meet the RS-232-C interface standard. Accordingly, the connector 158 is a 25 pin connector. A set of jumpers 160 allows variable assignment of the data and control signal lines to the pins of the connector 158. Similarly, a set of jumpers 162 allows the output and input pins of the SIO and PIO circuits of the serial communication control 154 to be variably assigned. For example, the RS-232-C line interface module can be jumpered to function as either data terminal equipment (DTE) or data communication equipment (DCE).
A set of transient suppressors 164 protect the circuit components of the communication controller 10 from voltage and current transients occurring on any of the data or control signal lines emanating from the external device connected to the communication controller 10 at the connector 158. A set of RS-232 drivers 166 and RS-232 receivers 168 convert the voltages of the RS-232 specifications to the voltage levels which are compatible with the circuits of the LIM 18 and communication controller 10. Further isolation is provided by a set of optical couplers 170.
Although the LIM 18 shown in Figure 8 has been designed to meet the RS-232 physical interface specification, the communication controller 10 may be provided with other line interface modules to interface with external devices which require other interface standards.
A bit rate detection circuit 172 is provided to determine the data transmission rate which is read by the DLCU 20. The DLCU can then program a bit rate generator 174 to provide clock signals at a rate which is appropriate for the detected data transmission rate. The bit rate detection circuit 172 may be implemented, for example, with a Zilog Z80 counter/timer circuit and the bit rate generator 174 may be implemented by a COM 5016 integrated circuit chip, for example.
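The detection-then-programming step can be sketched as below. The clock frequency, the table of standard rates, and the nearest-rate selection are all illustrative assumptions — the text says only that a counter/timer measures the incoming rate and that the DLCU programs the bit rate generator accordingly.

```python
# Sketch of bit rate detection: a counter/timer measures a bit-cell
# width in clock ticks, and the nearest standard rate is chosen to
# program the bit rate generator. Clock and rate table are assumptions.

CLOCK_HZ = 2_000_000
STANDARD_RATES = [110, 300, 600, 1200, 2400, 4800, 9600, 19200]

def detect_bit_rate(ticks_per_bit):
    """Map a measured bit-cell width (in clock ticks) to the nearest
    standard rate, which the DLCU would then program into the generator."""
    measured = CLOCK_HZ / ticks_per_bit
    return min(STANDARD_RATES, key=lambda r: abs(r - measured))
```

Snapping to the nearest table entry tolerates the small measurement error inherent in counting whole clock ticks.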
The DLCU can address the integrated circuit chips of the serial communication control 154, the bit rate detection circuit 172, or the bit rate generator 174 by supplying the appropriate address signals which are decoded by an address decode and parity check logic 176. The address decode logic 176 provides an enable signal to the integrated circuit chip which is addressed by the DLCU. The DLCU/LIM bus is connected to the integrated circuit chips of the I/O channel 152 by a set of data bus transceivers 178.
Interrupts to the DLCU are generated by the SIO and PIO circuits of the serial communication control 154. These circuits can utilize the built-in "daisy chain" interrupt priority structure. Where a large number of these chips are interconnected, a "look-ahead" logic may be utilized as previously described. This circuitry is represented by the bus and interrupt control logic 180 and interrupt logic 182 (Figure 2) of the DLCU.
It is clear from the foregoing that the communication controller of the present invention can accommodate a large number of processor modules and is not limited by the address space of the master controller which transfers data from one processor module to another. Furthermore, the above described architecture allows data to be simultaneously transferred to more than one processor module.
It will, of course, be understood that modifications of the present invention, and its various aspects, will be apparent to those skilled in the art, some being apparent only after study and others being merely matters of routine electronic design. Other embodiments are also possible, with their specific designs dependent upon the particular application. As such, the scope of the invention should not be limited by the particular embodiment herein described, but should be defined only by the appended claims and the equivalents thereof.

Claims (8)

Claims
1. A data processing system comprising: a plurality of processors for processing data, each processor having a shared memory for storing data to be transferred to and received from the shared memory of another processor with each shared memory having a block of addresses in common; data transfer means for transferring data from one shared memory to another shared memory, said data transfer means having means for addressing each of the shared memories to read data from or write data into a shared memory and enabling means for enabling a particular shared memory to accept an address from the data transfer means wherein all non-enabled shared memories ignore the address from the data transfer means.
2. A data processing system comprising: a system bus; a plurality of processor modules, each having a processor and a shared memory associated therewith which is operably connected to the system bus, wherein each shared memory has a block of addresses, with respect to the system bus, in common with the other shared memories, and wherein each processor has write means for writing data into the associated shared memory to be transferred to another processor module shared memory and read means for reading data from the associated shared memory which was transferred from another processor module shared memory; and a master controller operably connected to the system bus for supplying common address signals to each of the shared memories to read data from a selected shared memory and to write data into at least one selected shared memory, the master controller having supply means for supplying an enable signal to a particular processor module to select that processor module shared memory; each processor module further having enabling means responsive to an enable signal for enabling the associated shared memory of the processor module to accept the common address signals from the master controller.
3. The data processing system of claim 2 wherein: each shared memory includes an input section having a block of addresses, with respect to the system bus, in common with the input sections of the other shared memories, and an output section having a block of addresses, with respect to the system bus, in common with the output sections of the other shared memories; the supply means includes means for supplying an input enable signal to a particular processor module to select that processor shared memory input section and also an output enable signal to one or more processor modules to select those processor module shared memory output sections; and the enabling means includes means for enabling the associated shared memory input section to accept the common address signals in response to an input enable signal and for enabling the associated shared memory output section to accept the common address signals in response to an output enable signal.
4. The data processor of claim 2 wherein the enabling means comprises a register operably connected to the system bus for storing the enable signal from the master controller, and a plurality of gates responsive to the enable signal and operably connecting the shared memory to the system bus, for gating the address signals from the system bus and data signals to and from the shared memory when enabled.
5. A data processing system comprising: a plurality of processing means for processing data, each processing means having a memory which includes a block of shared memory having a common block of addresses with the shared memory blocks of the other processing means; data transfer means for transferring data from one shared memory block to another and having an address space which includes a shared memory window defined by the common block of addresses of the shared memory blocks of the plurality of processing means; and mapping means for mapping the shared memory block of one or more processing means into the shared memory window of the data transfer means so that the mapped shared memory will accept an address within the common block of addresses and the non-mapped shared memories do not respond.
6. A communication controller comprising: a system bus; a plurality of input/output ports; a plurality of processors, each processor being associated with at least one input/output port and having interface means for interfacing between the processor and the associated input and output ports, each processor further having a shared memory operably connected to the system bus and means for inputting data from an input/output port, processing the data and storing the data in the associated shared memory to be transferred to the shared memory of another processor; and a master controller operably connected to the system bus for supplying common address signals to each of the shared memories to read data from a selected shared memory and to write data into at least one selected shared memory, the master controller having means for supplying an enable signal to a particular processor to select the processor shared memory; each processor further having a register operably connected to the system bus for storing an enable signal from the master controller, means responsive to the stored enable signal for enabling the associated shared memory of the processor to accept the common address signals from the master controller, and means for reading data written into the associated shared memory by the master controller, and outputting the data to an input/output port associated with that processor.
7. A data processing system constructed and arranged to operate substantially as hereinbefore described with reference to and as illustrated in the accompanying drawings.
8. A communications controller constructed and arranged to operate substantially as hereinbefore described with reference to and as illustrated in the accompanying drawings.
GB08229487A 1981-12-22 1982-10-15 Improved distributed processing system Expired GB2112186B (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
US33348681A 1981-12-22 1981-12-22

Publications (2)

Publication Number Publication Date
GB2112186A true GB2112186A (en) 1983-07-13
GB2112186B GB2112186B (en) 1985-09-11

Family

ID=23302991

Family Applications (1)

Application Number Title Priority Date Filing Date
GB08229487A Expired GB2112186B (en) 1981-12-22 1982-10-15 Improved distributed processing system

Country Status (4)

Country Link
JP (1) JPS58109960A (en)
DE (1) DE3247083A1 (en)
FR (1) FR2518781B1 (en)
GB (1) GB2112186B (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0182044A2 (en) * 1984-11-13 1986-05-28 International Business Machines Corporation Initialization apparatus for a data processing system with a plurality of input/output and storage controller connected to a common bus.
GB2175421A (en) * 1985-05-13 1986-11-26 Singer Link Miles Ltd Computing system
US4674033A (en) * 1983-10-24 1987-06-16 British Telecommunications Public Limited Company Multiprocessor system having a shared memory for enhanced interprocessor communication
EP0318270A2 (en) * 1987-11-25 1989-05-31 Fujitsu Limited A multiprocessor system and corresponding method
EP0428329A2 (en) * 1989-11-13 1991-05-22 International Business Machines Corporation Extended addressing circuitry
US5228127A (en) * 1985-06-24 1993-07-13 Fujitsu Limited Clustered multiprocessor system with global controller connected to each cluster memory control unit for directing order from processor to different cluster processors

Families Citing this family (3)

Publication number Priority date Publication date Assignee Title
JPS58123148A (en) * 1982-01-18 1983-07-22 Hitachi Ltd Data transmitting system
JPH0378421U (en) * 1989-11-29 1991-08-08
DE4202852A1 (en) * 1992-02-01 1993-08-05 Teldix Gmbh Transmission of information to all units of multiprocessor system - has simultaneous telegram containing identification address and data transmitted to all units

Family Cites Families (7)

Publication number Priority date Publication date Assignee Title
US3909790A (en) * 1972-08-25 1975-09-30 Omnus Computer Corp Minicomputer with selector channel input-output system and interrupt system
DE2546202A1 (en) * 1975-10-15 1977-04-28 Siemens Ag COMPUTER SYSTEM OF SEVERAL INTERCONNECTED AND INTERACTING INDIVIDUAL COMPUTERS AND PROCEDURES FOR OPERATING THE COMPUTER SYSTEM
DE2641741C2 (en) * 1976-09-16 1986-01-16 Siemens AG, 1000 Berlin und 8000 München Computing system made up of several individual computers connected and interacting with one another via a manifold system and a control computer
US4158227A (en) * 1977-10-12 1979-06-12 Bunker Ramo Corporation Paged memory mapping with elimination of recurrent decoding
US4285039A (en) * 1978-03-28 1981-08-18 Motorola, Inc. Memory array selection mechanism
JPS5561866A (en) * 1978-11-02 1980-05-09 Casio Comput Co Ltd Memory designation system
AT361726B (en) * 1979-02-19 1981-03-25 Philips Nv DATA PROCESSING SYSTEM WITH AT LEAST TWO MICROCOMPUTERS

Cited By (11)

Publication number Priority date Publication date Assignee Title
US4674033A (en) * 1983-10-24 1987-06-16 British Telecommunications Public Limited Company Multiprocessor system having a shared memory for enhanced interprocessor communication
EP0182044A2 (en) * 1984-11-13 1986-05-28 International Business Machines Corporation Initialization apparatus for a data processing system with a plurality of input/output and storage controller connected to a common bus.
EP0182044A3 (en) * 1984-11-13 1989-01-18 International Business Machines Corporation Initialization apparatus for a data processing system with a plurality of input/output and storage controller connected to a common bus.
GB2175421A (en) * 1985-05-13 1986-11-26 Singer Link Miles Ltd Computing system
GB2175421B (en) * 1985-05-13 1989-11-29 Singer Link Miles Ltd Computing system
US5017141A (en) * 1985-05-13 1991-05-21 Relf Richard S Computing system
US5228127A (en) * 1985-06-24 1993-07-13 Fujitsu Limited Clustered multiprocessor system with global controller connected to each cluster memory control unit for directing order from processor to different cluster processors
EP0318270A2 (en) * 1987-11-25 1989-05-31 Fujitsu Limited A multiprocessor system and corresponding method
EP0318270A3 (en) * 1987-11-25 1990-10-31 Fujitsu Limited A multiprocessor system
EP0428329A2 (en) * 1989-11-13 1991-05-22 International Business Machines Corporation Extended addressing circuitry
EP0428329A3 (en) * 1989-11-13 1991-10-16 International Business Machines Corporation Extended addressing circuitry

Also Published As

Publication number Publication date
JPS6246025B2 (en) 1987-09-30
JPS58109960A (en) 1983-06-30
GB2112186B (en) 1985-09-11
DE3247083A1 (en) 1983-07-07
FR2518781B1 (en) 1988-04-29
FR2518781A1 (en) 1983-06-24

Similar Documents

Publication Publication Date Title
EP1047994B1 (en) Intelligent data bus interface using multi-port memory
US4471427A (en) Direct memory access logic system for a data transfer network
US4447878A (en) Apparatus and method for providing byte and word compatible information transfers
US4590551A (en) Memory control circuit for subsystem controller
EP0834135B1 (en) Architecture for an i/o processor that integrates a pci to pci bridge
US5860021A (en) Single chip microcontroller having down-loadable memory organization supporting &#34;shadow&#34; personality, optimized for bi-directional data transfers over a communication channel
US5913045A (en) Programmable PCI interrupt routing mechanism
CA1297994C (en) Input output interface controller connecting a synchronous bus to an asynchronous bus and methods for performing operations on the buses
US4428043A (en) Data communications network
US6209042B1 (en) Computer system having two DMA circuits assigned to the same address space
EP0486167A2 (en) Multiple computer system with combiner/memory interconnection system
CA1129110A (en) Apparatus and method for providing byte and word compatible information transfers
US4443850A (en) Interface circuit for subsystem controller
EP0117836B1 (en) Address-controlled automatic bus arbitration and address modification
EP0518488A1 (en) Bus interface and processing system
US4945473A (en) Communications controller interface
US4939636A (en) Memory management unit
US4456970A (en) Interrupt system for peripheral controller
US5896549A (en) System for selecting between internal and external DMA request where ASP generates internal request is determined by at least one bit position within configuration register
EP0444711A2 (en) Bus control system in a multi-processor system
GB2112186A (en) Improved distributed processing system
US4430710A (en) Subsystem controller
US5872940A (en) Programmable read/write access signal and method therefor
EP0074300B1 (en) Memory control circuit for subsystem controller
EP0060535A2 (en) Multiprocessor network

Legal Events

Date Code Title Description
PCNP Patent ceased through non-payment of renewal fee