GB2176920A - Content addressable memory - Google Patents

Content addressable memory

Info

Publication number
GB2176920A
GB2176920A GB08612679A GB8612679A GB2176920A GB 2176920 A GB2176920 A GB 2176920A GB 08612679 A GB08612679 A GB 08612679A GB 8612679 A GB8612679 A GB 8612679A GB 2176920 A GB2176920 A GB 2176920A
Authority
GB
United Kingdom
Prior art keywords
lines
memory
signals
bit
page
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
GB08612679A
Other versions
GB8612679D0 (en)
GB2176920B (en)
Inventor
John H Crawford
Paul S Ries
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Intel Corp
Original Assignee
Intel Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Intel Corp filed Critical Intel Corp
Publication of GB8612679D0 publication Critical patent/GB8612679D0/en
Publication of GB2176920A publication Critical patent/GB2176920A/en
Application granted granted Critical
Publication of GB2176920B publication Critical patent/GB2176920B/en
Expired legal-status Critical Current

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F7/00Methods or arrangements for processing data by operating upon the order or content of the data handled
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F12/00Accessing, addressing or allocating within memory systems or architectures
    • G06F12/14Protection against unauthorised use of memory or access to memory
    • G06F12/1416Protection against unauthorised use of memory or access to memory by checking the object accessibility, e.g. type of access defined by the memory independently of subject rights
    • G06F12/145Protection against unauthorised use of memory or access to memory by checking the object accessibility, e.g. type of access defined by the memory independently of subject rights the protection being virtual, e.g. for virtual blocks or segments before a translation mechanism
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F12/00Accessing, addressing or allocating within memory systems or architectures
    • G06F12/02Addressing or allocation; Relocation
    • G06F12/08Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
    • G06F12/10Address translation
    • G06F12/1027Address translation using associative or pseudo-associative address translation means, e.g. translation look-aside buffer [TLB]

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Security & Cryptography (AREA)
  • Memory System Of A Hierarchy Structure (AREA)

Abstract

The memory (CAM) comprises a plurality of buffers 56, 57, each receiving first signals and providing corresponding signals BIT, BIT/ in true and complementary form on line pairs 59, 60 etc. A plurality of memory cells e.g. 67, ... are coupled between the pairs of lines, in rows generally perpendicular to the pairs, a plurality of row comparator lines e.g. 60 being provided, one for each of said rows of cells. Comparators e.g. 61-64, one for each of said memory cells, compare the binary state stored in the memory cell with the line signals, unless disabled by maintaining the corresponding pair of lines in the same binary state, in which case the corresponding cell is ignored in the comparison. A "hit" exists if all the enabled cells in a row contain data agreeing with that on the corresponding lines 59, 60 etc.

Description

SPECIFICATION
Content addressable memory 1 10 Backgroundoftheinvention 1. Field of the invention.
The invention relates to a content addressable memory suitable for an address translation unit for memory management, particularly in a microprocessor system.
2. Prior art
There are many well-known mechanisms for memory management. In some systems, a larger address (virtual address) is translated to a smaller physical address. In others, a smaller address is used to access a larger memory space, for instance, by using bank switching. The present invention relates to the former category, that is, where a larger virtual address is used to access a limited physical memory.
In memory management systems, it is also known to provide various protection mechanisms. For example, a system may prevent a user from writing into an operating system or perhaps even from reading the operating system to external ports. As will be seen, the present invention implements a protection mechanism as part of a broader control scheme which assigns "attributes" to data on two distinct levels.
The closest prior art known to Applicant is that described in U.S. Patent 4,442,484. This patent describes the memory management and protection mechanism embodied in a commercially available microprocessor, the Intel 286. This microprocessor includes segmentation descriptor registers containing segment base addresses, limit information and attributes (e.g., protection bits). The segment descriptor table and the segment descriptor registers both contain bits defining various control mechanisms such as privilege level, types of protection, etc. These control mechanisms are described in detail in U.S. Patent 4,442,484.
One problem with the Intel 286 is that the segment offset is limited to 64k bytes. It also requires consecutive locations in physical memory for a segment, which is not always easy to maintain. As will be seen, one advantage to the invented system is that the segment offset is as large as the physical address space. Yet, the invented system still provides compatibility with the prior segmentation mechanism found in the Intel 286. Other advantages and distinctions between the prior art system discussed in the above-mentioned patent and its commercial realization (Intel 286 microprocessor) will be apparent from the detailed description of the present invention.
Summary of the invention
According to the present invention there is described a content addressable memory (CAM) comprising:
a plurality of buffers, each for receiving first signals and for providing said first signals and second signals, said second signals being complements of said first signals; a plurality of generally parallel pairs of lines, each pair being coupled to receive one of said first and second signals; a plurality of memory cells coupled between each pair of lines, said cells being arranged in rows generally perpendicular to said pairs of lines; a plurality of row comparator lines, one associated with each of said rows of cells; a plurality of comparators, one for coupling between each of said memory cells, its respective pair of lines and one of said comparator lines, said comparators for comparing a binary state stored in said memory cell with said first and second signals; loading means for loading data from said pairs of lines to said cells; said comparators being disabled when its respective pairs of lines are both maintained at a certain binary state; whereby by causing at least some of said buffers to provide said certain binary state for said first and second signals, selected ones of said cells can be ignored for said comparison.
Brief description of the drawings
Figure 1 is a block diagram showing the overall architecture of the microprocessor in which the present invention is currently realized.
Figure 2 is a block diagram illustrating the segmentation mechanism embodied in the microprocessor of Figure 1.
Figure 3 is a block diagram illustrating the page field mapping for a hit or match in the page cache memory.
Figure 4 is a block diagram illustrating the page field mapping for no hit or match in the page cache memory of Figure 3. For this condition, the page directory and page table in main memory are used and, hence, are shown in Figure 4.
Figure 5 is a diagram used to illustrate the attributes stored in the page directory, page table and page cache memory.
Figure 6 is a block diagram illustrating the organization of the content addressable memory of the present invention and the data storage contained within the page cache memory.
Figure 7 is an electrical schematic of a portion of the content addressable memory of Figure 6.
Figure 8 is an electrical schematic of the logic circuits associated with the detector of Figure 6.
Detailed description
A microprocessor system and, in particular, a memory management mechanism for the system is described. In the following description, numerous specific details are set forth such as specific number of bits, etc., in order to provide a thorough understanding of the present invention. It will be obvious, however, to one skilled in the art that the present invention may be practiced without these specific details. In other instances, well-known structures are not shown in detail in order not to unnecessarily obscure the present invention.
The microprocessor system includes the microprocessor 10 of Figure 1. This microprocessor is fabricated on a single silicon substrate using complementary metal-oxide-semiconductor (CMOS) processing.
Any one of many well-known CMOS processes may be employed; moreover, it will be obvious that the microprocessor may be realized with other technologies, for instance, n-channel, bipolar, SOS, etc.
The memory management mechanism for some conditions requires access to tables stored in main memory. A random-access memory (RAM) 13 which functions as the main memory for the system is shown in Figure 1. An ordinary RAM may be used such as one employing dynamic memories.
As shown in Figure 1, the microprocessor 10 has a physical address of 32 bits, and the processor itself is a 32-bit processor. Other components of a microprocessor system commonly used such as drivers, mathematical processors, etc., are not shown in Figure 1.
The memory management makes use of both segmentation and paging. Segments are defined by a set of segment descriptor tables that are separate from the page tables used to describe the page translation.
The two mechanisms are completely separate and independent. A virtual address is translated to a physical address in two distinct steps, using two distinct mapping mechanisms. A segmentation technique is used for the first translation step, and a paging technique is used for the second translation step. The paging translation can be turned off to produce a one-step translation with segmentation only, which is compatible with the 286.
Segmentation (the first translation) translates a 48-bit virtual address to a 32-bit linear (intermediate) address. The 48-bit virtual address is composed of a 16-bit segment selector, and a 32-bit offset within this segment. The 16-bit segment selector identifies the segment, and is used to access an entry from the segment descriptor table. This segment descriptor entry contains a base address of the segment, the size (limit) of the segment, and various attributes of the segment. The translation step adds the segment base to the 32-bit offset in the virtual address to obtain a 32-bit linear address. At the same time, the 32-bit offset in the virtual address is compared against the segment limit, and the type of the access is checked against the segment attributes. A fault is generated and the addressing process is aborted if the 32-bit offset is outside the segment limit, or if the type of the access is not allowed by the segment attributes.
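As a behavioral illustration of this first translation step, the short C sketch below adds the segment base to the offset after the limit and access-type checks. It is a minimal sketch: the descriptor fields, the single writable flag, and indexing the descriptor table directly by the raw selector are simplifying assumptions, not the actual selector or descriptor format.

```c
#include <stdint.h>
#include <stdbool.h>

/* Hypothetical descriptor layout; the field names and the single
 * 'writable' flag are illustrative, not the actual descriptor encoding. */
typedef struct {
    uint32_t base;      /* segment base address        */
    uint32_t limit;     /* segment size (limit)        */
    bool     writable;  /* simplified access attribute */
} segment_descriptor;

/* First translation step: selector + 32-bit offset -> 32-bit linear address.
 * Returns false to model the fault/abort case described above. */
bool segment_translate(const segment_descriptor *table,
                       uint16_t selector, uint32_t offset,
                       bool is_write, uint32_t *linear)
{
    const segment_descriptor *d = &table[selector];  /* descriptor table entry */

    if (offset > d->limit)          /* offset outside the segment limit      */
        return false;
    if (is_write && !d->writable)   /* access type not allowed by attributes */
        return false;

    *linear = d->base + offset;     /* segment base added to the offset      */
    return true;
}
```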
Paging (the second translation) translates a 32-bit linear address to a 32-bit physical address using a two-level paging table, in a process described in detail below.
The two steps are totally independent. This permits a (large) segment to be composed of several pages, or a page to be composed of several (small) segments.
A segment can start on any boundary, and be of arbitrary size, and is not limited to starting on a page boundary, or to have a length that is an exact multiple of pages. This allows segments to describe separately protected areas of memory that start at arbitrary addresses and to be of arbitrary size.
Segmentation can be used to cluster a number of small segments, each with its unique protection attributes and size, into a single page. In this case, segmentation provides the protection attributes, and paging provides a convenient method of physical memory mapping for a group of related units that must be protected separately.
Paging can be used to break up very large segments into small units for physical memory management. This provides a single identifier (the segment selector), and a single descriptor (the segment descriptor) for a separately protected unit of memory, rather than requiring the use of a multitude of page descriptors. Within a segment, paging provides an additional level of mapping that allows large segments to be mapped into separate pages that need not be contiguous in physical memory. In fact, paging allows a large segment to be mapped so that only a few pages at a time are resident in physical memory, with the remaining parts of the segment mapped onto disk. Paging also supports the definition of substructure within a large segment, for example, to write protect some pages of a large segment, while other pages can be written into.
Segmentation provides a very comprehensive protection model which works on the "natural" units used by a programmer: arbitrary sized pieces of linearly addressed memory. Paging provides the most convenient method for managing physical memory, both system main memory and backing disk memory. The combination of the two methods provides a very flexible and powerful memory protection model.
Overall microprocessor architecture
In Figure 1, the microprocessor includes a bus interface unit 14. The bus unit includes buffers for permitting transmission of the 32-bit address signals and for receiving and sending the 32 bits of data. Internal to the microprocessor, unit 14 communicates over the internal bus 19. The bus unit includes a pre-fetch unit for fetching instructions from the RAM 12 and a prefetch queue which communicates with the instruction unit of the instruction decode unit 16. The queued instructions are processed within the execution unit 18 (arithmetic logic unit) which includes a 32-bit register file. This unit, as well as the decode unit, communicates with the internal bus 19.
The address translation unit 20 provides two functions; one associated with the segment descriptor registers, and the other with the page descriptor cache memory. The segment registers are for the most part known in the prior art; even so, they are described in more detail in conjunction with Figure 2. The page cache memory and its interaction with the page directory and page table stored within the main memory 13 is discussed in conjunction with Figures 3-7.
Segmentation mechanism
The segmentation unit of Figure 1 receives a virtual address from the execution unit 18 and accesses the appropriate register's segmentation information. The register contains the segment base address which, along with the offset from the virtual address, is coupled over lines 23 to the page unit.
Figure 2 illustrates the accessing of the tables in main memory when the segmentation registers are loaded with mapping information for a new segment.
The segment field indexes the segment descriptor table in the main memory 13. The contents of the table provide a base address and, additionally, provide attributes associated with the data in the segment. The base address and offset are compared to the segment limits in comparator 27, the output of this comparator providing a fault signal. The adder 26 which is part of the microprocessor combines the base and offset to provide a "physical" address on lines 31. This address may be used by the microprocessor as a physical address or used by the paging unit. This is done to provide compatibility with certain programs written for a prior microprocessor (Intel 286). For the Intel 286, the physical address space is 24 bits.
The segment attributes, including details on the descriptors employed such as the various privilege levels, are set forth in U.S. Patent 4,442,484.
The fact that the segmentation mechanism is known in the prior art is represented in Figure 2 by the dotted line 28 which indicates the prior art structures to the left of the dotted line.
The page field mapping block 30 which includes the page unit of Figure 1 as well as its interaction with the page directory and page table stored in the main memory is shown in Figures 3 through 7.
While the segmentation mechanism uses shadow registers, it also could be implemented with a cache memory as is done with the paging mechanism.
Page descriptor cache memory
In Figure 3 the page descriptor cache memory of the page unit 22 of Figure 1 is shown within dotted line 22a. This memory comprises two arrays, a content addressable memory (CAM) 34 and a page data (base) memory 35. Both memories are implemented with static memory cells. The organization of memories 34 and 35 is described in conjunction with Figure 6. The specific circuitry used for CAM 34 with its unique masking feature is described in conjunction with Figures 7 and 8.
The linear address from the segment unit 21 is coupled to the page unit 22 of Figure 1. As shown in Figure 3, this linear address comprises two fields, the page information field (20 bits) and a displacement field (12 bits). Additionally, there is a four-bit page attribute field provided by the microcode. The 20-bit page information field is compared with the contents of the CAM 34. Also, the four attribute bits ("dirty", "valid", "U/S", and "W/R") must also match those in the CAM before a hit occurs. (There is an exception to this when "masking" is used, as will be discussed.) For a hit condition, the memory 35 provides a 20-bit base word which is combined with the 12-bit displacement field of the linear address, as represented by summer 36 of Figure 3, and the resultant physical address selects from a 4k byte page frame in main memory 13.
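A minimal behavioral model of this hit path in C, assuming a simple flat list of entries rather than the set-associative arrangement described later; the structure fields, the packing of the four attribute bits, and the function names are illustrative only.

```c
#include <stdint.h>
#include <stdbool.h>

/* One page-cache entry: a 20-bit tag, the 4 attribute bits matched with it,
 * and the 20-bit page base held in the data array. */
typedef struct {
    uint32_t page_info;  /* upper 20 bits of the linear address             */
    uint8_t  attrs;      /* "dirty", "valid", "U/S", "W/R" packed in 4 bits */
    uint32_t base;       /* 20-bit physical page base                       */
} tlb_entry;

/* On a hit, the base word replaces the page information field and the
 * 12-bit displacement passes through unchanged (summer 36). */
bool page_cache_lookup(const tlb_entry *cache, int entries,
                       uint32_t linear, uint8_t attrs, uint32_t *physical)
{
    uint32_t page_info    = linear >> 12;    /* 20-bit page information field */
    uint32_t displacement = linear & 0xFFF;  /* 12-bit displacement field     */

    for (int i = 0; i < entries; i++) {
        if (cache[i].page_info == page_info && cache[i].attrs == attrs) {
            *physical = (cache[i].base << 12) | displacement;
            return true;                     /* hit */
        }
    }
    return false;                            /* no hit: walk the page tables */
}
```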
Page addressing for the no-hit condition
A page directory 13a and a page table 13b are stored in the main memory 13 (see Figure 4). The base address for the page directory is provided from the microprocessor and is shown in Figure 4 as the page directory base 38. Ten bits of the page information field are used as an index (after being scaled by a factor of 4) into the page directory as indicated by the summer 40 in Figure 4. The page directory provides a 32-bit word. Twenty bits of this word are used as a base for the page table. The other 10 bits of the page information field are similarly used as an index (again being scaled by a factor of 4) into the page table as indicated by the summer 41. The page table also provides a 32-bit word, 20 bits of which are the page base of the physical address. This page base address is combined as indicated by summer 42 with the 12-bit displacement field to provide a 32-bit physical address.
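The same walk can be sketched behaviorally in C. The read32() helper models a main-memory read over a byte array, and masking off the low 12 bits of each 32-bit entry to recover the 20-bit base follows from the attribute layout described next; everything else (names, little-endian packing) is an assumption for illustration.

```c
#include <stdint.h>

/* Little-endian 32-bit read from a modeled main memory. */
static uint32_t read32(const uint8_t *mem, uint32_t addr)
{
    return (uint32_t)mem[addr]             |
           ((uint32_t)mem[addr + 1] << 8)  |
           ((uint32_t)mem[addr + 2] << 16) |
           ((uint32_t)mem[addr + 3] << 24);
}

/* Two-level walk for the no-hit case: 10 bits index the page directory,
 * 10 bits index the page table (each index scaled by 4, since entries are
 * 32-bit words), and the 12-bit displacement is appended to the page base. */
uint32_t page_walk(const uint8_t *mem, uint32_t directory_base, uint32_t linear)
{
    uint32_t dir_index    = (linear >> 22) & 0x3FF;   /* upper 10 bits of page field */
    uint32_t table_index  = (linear >> 12) & 0x3FF;   /* lower 10 bits of page field */
    uint32_t displacement = linear & 0xFFF;           /* 12-bit displacement         */

    uint32_t dir_entry    = read32(mem, directory_base + dir_index * 4);
    uint32_t table_base   = dir_entry & 0xFFFFF000;   /* 20-bit page table base      */

    uint32_t table_entry  = read32(mem, table_base + table_index * 4);
    uint32_t page_base    = table_entry & 0xFFFFF000; /* 20-bit page base            */

    return page_base | displacement;                  /* 32-bit physical address     */
}
```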
Five bits from the 12-bit fields of the page directory and table are used for attributes, particularly "dirty", "accessed", "U/S", "R/W" and "present". These will be discussed in more detail in conjunction with Figure 5. The remaining bits of this field are unassigned.
The stored attributes from the page directory and table are coupled to control logic circuit 75 along with the 4 bits of attribute information associated with the linear address. Parts of this logic circuit are shown in subsequent figures and are discussed in conjunction with these figures.
Page directory attributes
In Figure 5 the page directory word, page table word and CAM word are again shown. The protective/control attributes assigned to the four bits of the page directory word are listed within bracket 43. The same four attributes with one additional attribute are used for the page table word and are set forth within bracket 44. The four attributes used for the CAM word are set forth within bracket 45.
The attributes are used for the following purpose:
1. DIRTY. This bit indicates whether a page has been written into. The bit is changed once a page has been written into. This bit is used, for instance, to inform the operating system that an entire page is not "clean". This bit is stored in the page table and in the CAM (not in the page directory). The processor sets this bit in the page table when a page is written into.
2. ACCESSED. This bit is stored in only the page directory and table (not in the CAM) and is used to indicate that a page has been accessed. Once a page is accessed, this bit is changed in the memory by the processor. Unlike the dirty bit, this bit indicates whether a page has been accessed either for writing or reading.
3. U/S. The state of this bit indicates whether the contents of the page are user- and supervisor-accessible (binary 1) or supervisor only (binary zero).
4. R/W. This read/write protection bit must be a binary 1 to allow the page to be written into by a user-level program.
5. PRESENT. This bit in the page table indicates if the associated page is present in the physical memory. This bit in the page directory indicates if the associated page table is present in physical memory.
6. VALID. This bit, which is stored only in the CAM, is used to indicate if the contents of the CAM are valid. This bit is set to a first state on initialization, then changed when a valid CAM word is loaded.
The five bits from the page directory and table are coupled to control logic circuit 75 to provide appropriate fault signals within the microprocessor.
The user/supervisor bits from the page directory and table are logically ANDed as indicated by gate 46 to provide the U/S bit stored in the CAM 34 of Figure 3. Similarly, the read/write bits from the page directory and table are logically ANDed through gate 47 to provide the W/R bit stored in the CAM. The dirty bit from the page table is stored in the CAM. These gates are part of the control logic 75 of Figure 4.
The attributes stored in the CAM are "automatically" tested since they are treated as part of the address and matched against the four bits from the microcode. A fault condition results, even if a valid page base is stored in the CAM, if, for instance, the linear address indicates that a "user" write cycle is to occur into a page with R/W=0.
The ANDing of the U/S bits from the page directory and table ensures that the "worst case" is stored in the cache memory. Similarly, the ANDing of the R/W bits provides the worst case for the cache memory.
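A one-function C sketch of this worst-case combination (gates 46 and 47); the type and field names are ours.

```c
#include <stdbool.h>

/* Attributes as they would be loaded into the CAM word: the directory and
 * table bits are ANDed so the more restrictive setting always wins. */
typedef struct { bool user, writable, dirty; } cam_attrs;

cam_attrs combine_attributes(bool dir_user, bool dir_writable,
                             bool tbl_user, bool tbl_writable, bool tbl_dirty)
{
    cam_attrs a;
    a.user     = dir_user && tbl_user;         /* U/S: supervisor-only if either level says so */
    a.writable = dir_writable && tbl_writable; /* W/R: read-only if either level says so       */
    a.dirty    = tbl_dirty;                    /* dirty bit comes from the page table only     */
    return a;
}
```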
Organization of the page descriptor cache memory
The CAM 34 as shown in Figure 6 is organized in 8 sets with 4 words in each set. Twenty-one bits (17 address and 4 attributes) are used to find a match in this array. The four comparator lines from the four stored words in each set are connected to a detector. For instance, the comparator lines for the four words of set 1 are connected to detector 53. Similarly, the comparator lines for the four words in sets 2 through 8 are connected to detectors. The comparator lines are sensed by the detectors to determine which word in the set matches the input (21 bits) to the CAM array. Each of the detectors contains "hard-wired" logic which permits selection of one of the detectors depending upon the state of the 3 bits from the 20-bit page information field coupled to the detectors. (Note that the other 17 bits of this 20-bit page information field are coupled to the CAM array.)
For purposes of explanation, eight detectors are implied from Figure 6. In the current embodiment only one detector is used with the three bits selecting one set of four lines for coupling to the detector. The detector itself is shown in Figure 8.
The data storage portion of the cache memory is organized into four arrays shown as arrays 35a-d. The data words corresponding to each set of the CAM are distributed with one word being stored in each of the four arrays. For instance, the data word (base address) selected by a hit with word 1 of set 1 is in array 35a, the data word selected by a hit with word 2 of set 1 is in array 35b, etc. The three bits used to select a detector are also used to select a word in each of the arrays. Thus, simultaneously, words are selected from each of the four arrays. The final selection of a word from the arrays is done through the multiplexer 55. This multiplexer is controlled by the four comparator lines in the detector.
When the memory cache is accessed, the matching process, which is a relatively slow process, begins through use of the 21 bits. The other three bits are able to immediately select a set of four lines and the detector is prepared for sensing a drop in potential on the comparator lines. (As will be discussed, all the comparator (row) lines are precharged, with the selected (hit) line remaining charged while the non-selected lines discharge.) Simultaneously, four words from the selected set are accessed in arrays 35a-35d. If and when a match occurs, the detector is able to identify the word within the set and this information is transmitted to the multiplexer 55, allowing the selection of the data word. This organization improves access time in the cache memory.
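Behaviorally, the 8-set by 4-way arrangement amounts to the C sketch below. Which three bits of the page information field select the set is not stated here, so taking the low three bits is an assumption; the returned way number stands in for the comparator line sensed by the detector and fed to multiplexer 55.

```c
#include <stdint.h>

#define SETS 8
#define WAYS 4

/* One stored word: 17 address bits and 4 attribute bits matched in the CAM,
 * plus the base word held in the corresponding data array (35a-35d). */
typedef struct {
    uint32_t tag17;
    uint8_t  attrs;
    uint32_t base;
} cache_word;

static cache_word cache[SETS][WAYS];

/* Returns the matching way (0-3), or -1 for no hit. */
int cache_match(uint32_t page_info, uint8_t attrs, uint32_t *base)
{
    uint32_t set   = page_info & 0x7;  /* 3 set-select bits (assumed to be the low bits) */
    uint32_t tag17 = page_info >> 3;   /* remaining 17 bits matched in the CAM           */

    for (int way = 0; way < WAYS; way++) {
        if (cache[set][way].tag17 == tag17 && cache[set][way].attrs == attrs) {
            *base = cache[set][way].base;  /* word selected through multiplexer 55 */
            return way;
        }
    }
    return -1;
}
```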
Content addressable memory (CAM)
In Figure 7, the 21 bits which are coupled to the CAM array are again shown, with 17 of the bits being coupled to the complement generator and override circuit 56 and with the 4 attribute bits coupled to the VUDW logic circuit 57. The 3 bits associated with the selection of the detectors described in conjunction with Figure 6 are not shown in Figure 7.
The circuit 56 generates the true and complement signal for each of the address signals and couples them to parallel lines in the CAM array, such as lines 59 and 60. Similarly, the VUDW logic 57 generates both the true and complement signals for the attribute bits and couples them to parallel lines in the array. The lines 59 and 60 are duplicated for each of the true and complement bit lines (i.e., 21 pairs of bit and bit/ lines).
Each of the 32 rows in the CAM array has a pair of parallel row lines such as lines 68 and 70. An ordinary static memory cell such as cell 67 is coupled between each of the bit and bit/ lines (columns) and is associated with the pair of row lines. In the presently preferred embodiment, the memory cells comprise ordinary flip-flop static cells using p-channel transistors. One line of each pair of row lines (line 70) permits the memory cell to be coupled to the bit and bit/ lines when data is written into the array. Otherwise, the content of the memory cell is compared to the data on the column lines and the results of the comparison are coupled to the hit line 68. The comparison is done by comparators, one associated with each cell. The comparator comprises the n-channel transistors 61-64. Each pair of the comparator transistors, for example, transistors 61 and 62, are coupled between one side of the memory cell and the opposite bit line.
Assume that data is stored in the memory cell 67 and that the node of the cell closest to bit line 59 is high. When the contents of the CAM are examined, first the hit line 68 is precharged through transistor 69. Then the signals coupled to the CAM are placed on the column lines. Assume first that line 59 is high. Transistor 62 does not conduct since line 60 is low. Transistor 63 does not conduct since the side of the cell to which it is connected is low. For these conditions, line 68 is not discharged, indicating that a match has occurred in the cell. The hit line 68 provides ANDing of the comparisons occurring along the row. If a match does not occur, one or more of the comparators will cause the hit line to discharge.
During precharging, the circuits 56 and 57 generate an override signal causing all column lines (both bit and bit/) to be low. This prevents the comparators from draining the charge from the hit lines before the comparison begins.
It should be noted that the comparators examine the "binary one" condition and, in effect, ignore the "binary zero" condition. That is, for instance, if the gate of transistor 64 is high (line 59 high) then transistors 63 and 64 control the comparison. Similarly, if the bit/ line 60 is high, then transistors 61 and 62 control the comparison. This feature of the comparator permits cells to be ignored. Thus, when a word is coupled to the CAM, certain bits can be masked from the matching process by making both the bit and bit/ line low. This makes it appear that the contents of the cell match the condition on the column lines. This feature is used by the VUDW logic circuit 57.
Microcode signals coupled to logic circuit 57 cause the bit and bit/ lines for selected ones of the attribute bits to be low as a function of the microcode bits. This results in the attribute associated with that bit being ignored. This feature is used, for instance, to ignore the U/S bit in the supervisory mode. That is, the supervisory mode can access user data. Similarly, the read/write bit can be ignored when reading or when the supervisory mode is active. The dirty bit is also ignored when reading. (The feature is not used for the valid bit.) When the attribute bits are stored in main memory, they can be accessed and examined and logic circuits used to control accessing, for instance, based on the one or zero state of the U/S bit. However, with the cache memory no separate logic is used. The forcing of both the bit and bit/ lines low, in effect, provides the extra logic by allowing a match (or preventing a fault) even though the bit patterns of the attribute bits are not matched.
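Functionally, forcing both bit and bit/ low turns a column into a "don't care": that column can never discharge the hit line, so it always appears to match. A minimal C model of one row follows, with illustrative bit positions for the masked attribute.

```c
#include <stdint.h>
#include <stdbool.h>

/* One CAM row: columns whose 'care' bit is 0 are the columns driven with
 * both bit and bit/ low, so they cannot discharge the precharged hit line. */
bool cam_row_match(uint32_t stored, uint32_t search, uint32_t care)
{
    return ((stored ^ search) & care) == 0;  /* any cared-for mismatch kills the hit */
}

#define COMPARE_BITS 21            /* 17 address bits + 4 attribute bits         */
#define ATTR_US_BIT  (1u << 18)    /* position of the U/S bit: illustrative only */

/* Example: in the supervisory mode the U/S column is masked, so an entry
 * marked supervisor-only still matches a supervisor access. */
bool supervisor_lookup(uint32_t stored, uint32_t search)
{
    uint32_t care = ((1u << COMPARE_BITS) - 1) & ~ATTR_US_BIT;
    return cam_row_match(stored, search, care);
}
```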
The detector from Figure 6, as shown in Figure 8, includes a plurality of NOR gates such as gates 81, 82, 83 and 84. Three of the hit lines from the selected set of CAM lines are coupled to gate 81; these are shown as lines A, B, and C. A different combination of the lines is connected to each of the other NOR gates.
For instance, NOR gate 84 receives the hit lines D, A, and B. The output of each of the NOR gates is an input to a NAND gate such as NAND gate 86. A hit line provides one input to each NAND gate. This line is the one (of the four A, B, C, D) that is not an input to the NOR gate. This is also the bit line from the set entry to be selected. For example, gate 86 should select the set that is associated with hit line D. For instance, in the case of NOR gate 81, hit line D is coupled to the NAND gate 86. Similarly, for the NAND gate 90, the hit line C, in addition to the output of gate 84, are inputs to this gate. An enable read signal is also coupled to the NAND gates to prevent the outputs of this logic from being enabled for a write. The outputs of the NAND gates, such as line 87, are used to control the multiplexer 55 of Figure 6. In practice, the signal from the NAND gate, such as the signal on line 87, controls the multiplexer through p-channel transistors. For purposes of explanation, an additional inverter 88 is shown with an output line 89.
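A gate-level sketch in C of one detector slice following this NOR/NAND arrangement; the signal names and the active-low output convention are assumptions for illustration.

```c
#include <stdbool.h>

/* Four hit lines A-D from the selected set: a line is 'true' while it is
 * still charged (a match), 'false' once it has discharged.  Each NAND
 * output goes low only when its own hit line is still charged, the NOR of
 * the other three shows they have all discharged, and reads are enabled. */
void detect(bool a, bool b, bool c, bool d, bool enable_read, bool out[4])
{
    bool hit[4] = { a, b, c, d };

    for (int i = 0; i < 4; i++) {
        /* NOR gate over the other three hit lines (e.g. gate 81 for word D) */
        bool nor_others = !(hit[(i + 1) % 4] || hit[(i + 2) % 4] || hit[(i + 3) % 4]);

        /* NAND of the own hit line, the NOR output and the read enable
         * (e.g. gate 86); the low (active) output steers multiplexer 55. */
        out[i] = !(hit[i] && nor_others && enable_read);
    }
}
```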
The advantage to this detector is that it enables precharge lines to be used in the multiplexer 55. Alternately, a static arrangement could be used, but this would require considerably more power. With the arrangement as shown in Figure 8, the output from the inverters will remain in the same state until one of the hit lines drops in potential. When that occurs, only a single output line will drop in potential, permitting the multiplexer to select the correct word.
The present application has been divided out of our copending U.K. Patent Application No. 8519991 in which there is described and claimed, in a microprocessor system which includes a microprocessor and a data memory where the microprocessor has a segmentation mechanism for translating a virtual memory address to a second memory address and for controlling data based on attributes, an improvement comprising:
a page cache memory integral with said microprocessor for receiving a first field of said second memory address and for comparing it with contents of said page cache memory to provide a second field under certain conditions; said data memory including storage for page mapping data, said first field of said second memory address being coupled to said data memory to select a third field from said page data when said certain conditions of said page cache memory are not met; said microprocessor system including a circuit for combining one of said second and third fields with an offset field from said first address to provide a physical address for said data memory; whereby the physical addressability of said data memory is improved.

Claims (5)

1. A content addressable memory (CAM) comprising:
a plurality of buffers, each for receiving first signals and for providing said first signals and second signals, said second signals being complements of said first signals; a plurality of generally parallel pairs of lines, each pair being coupled to receive one of said first and second signals; a plurality of memory cells coupled between each pair of lines, said cells being arranged in rows generally perpendicular to said pairs of lines; a plurality of row comparator lines, one associated with each of said rows of cells; a plurality of comparators, one for coupling between each of said memory cells, its respective pair of lines and one of said comparator lines, said comparators for comparing a binary state stored in said memory cell with said first and second signals; loading means for loading data from said pairs of lines to said cells; said comparators being disabled when its respective pairs of lines are both maintained at a certain binary state; whereby by causing at least some of said buffers to provide said certain binary state for said first and second signals, selected ones of said cells can be ignored for said comparison.
2. The CAM defined by Claim 1 wherein said row comparator lines are precharged lines.
3. The CAM defined by Claim 2 including a storage memory which comprises a plurality of sections and wherein data is accessed simultaneously in all of said sections and an output from one of said sections being selected through said row lines.
4. The CAM defined by Claim 3 including detectors coupled to a predetermined number of said row lines, said detectors for sensing which one of said predetermined number of lines remains charged.
5. The CAM defined by Claim 4 wherein said selection of said output from one of said sections is made by said detectors.
GB8612679A 1985-06-13 1986-05-23 Content addressable memory Expired GB2176920B (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
US74438985A 1985-06-13 1985-06-13

Publications (3)

Publication Number Publication Date
GB8612679D0 GB8612679D0 (en) 1986-07-02
GB2176920A true GB2176920A (en) 1987-01-07
GB2176920B GB2176920B (en) 1989-11-22

Family

ID=24992533

Family Applications (2)

Application Number Title Priority Date Filing Date
GB8519991A Expired GB2176918B (en) 1985-06-13 1985-08-08 Memory management for microprocessor system
GB8612679A Expired GB2176920B (en) 1985-06-13 1986-05-23 Content addressable memory

Family Applications Before (1)

Application Number Title Priority Date Filing Date
GB8519991A Expired GB2176918B (en) 1985-06-13 1985-08-08 Memory management for microprocessor system

Country Status (8)

Country Link
JP (1) JPH0622000B2 (en)
KR (1) KR900005897B1 (en)
CN (1) CN1008839B (en)
DE (1) DE3618163C2 (en)
FR (1) FR2583540B1 (en)
GB (2) GB2176918B (en)
HK (1) HK53590A (en)
SG (1) SG34090G (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO1988007721A1 (en) * 1987-04-02 1988-10-06 Unisys Corporation Associative address translator for computer memory systems
DE10248065B4 (en) * 2001-10-12 2005-12-22 Samsung Electronics Co., Ltd., Suwon Content-Addressable Memory Device
EP1654657A1 (en) * 2003-07-29 2006-05-10 Cisco Technology, Inc. Force no-hit indications for cam entries based on policy maps
US7689485B2 (en) 2002-08-10 2010-03-30 Cisco Technology, Inc. Generating accounting data based on access control list entries
US11074191B2 (en) 2007-06-01 2021-07-27 Intel Corporation Linear to physical address translation with support for page attributes

Families Citing this family (26)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5251308A (en) * 1987-12-22 1993-10-05 Kendall Square Research Corporation Shared memory multiprocessor with data hiding and post-store
US5341483A (en) * 1987-12-22 1994-08-23 Kendall Square Research Corporation Dynamic hierarchial associative memory
US5226039A (en) * 1987-12-22 1993-07-06 Kendall Square Research Corporation Packet routing switch
US5055999A (en) * 1987-12-22 1991-10-08 Kendall Square Research Corporation Multiprocessor digital data processing system
US5761413A (en) 1987-12-22 1998-06-02 Sun Microsystems, Inc. Fault containment system for multiprocessor with shared memory
CA2078312A1 (en) 1991-09-20 1993-03-21 Mark A. Kaufman Digital data processor with improved paging
US5313647A (en) * 1991-09-20 1994-05-17 Kendall Square Research Corporation Digital data processor with improved checkpointing and forking
CA2078315A1 (en) * 1991-09-20 1993-03-21 Christopher L. Reeve Parallel processing apparatus and method for utilizing tiling
GB2260629B (en) * 1991-10-16 1995-07-26 Intel Corp A segment descriptor cache for a microprocessor
US5895489A (en) * 1991-10-16 1999-04-20 Intel Corporation Memory management system including an inclusion bit for maintaining cache coherency
CN1068687C (en) * 1993-01-20 2001-07-18 联华电子股份有限公司 Dynamic allocation method storage with stored multi-stage pronunciation
EP0613090A1 (en) * 1993-02-26 1994-08-31 Siemens Nixdorf Informationssysteme Aktiengesellschaft Method for checking the admissibility of direct memory accesses in a data processing systems
US5548746A (en) * 1993-11-12 1996-08-20 International Business Machines Corporation Non-contiguous mapping of I/O addresses to use page protection of a process
US5590297A (en) * 1994-01-04 1996-12-31 Intel Corporation Address generation unit with segmented addresses in a mircroprocessor
US6622211B2 (en) * 2001-08-15 2003-09-16 Ip-First, L.L.C. Virtual set cache that redirects store data to correct virtual set to avoid virtual set store miss penalty
US7171539B2 (en) 2002-11-18 2007-01-30 Arm Limited Apparatus and method for controlling access to a memory
GB2396034B (en) 2002-11-18 2006-03-08 Advanced Risc Mach Ltd Technique for accessing memory in a data processing apparatus
US7149862B2 (en) 2002-11-18 2006-12-12 Arm Limited Access control in a data processing apparatus
AU2003278350A1 (en) 2002-11-18 2004-06-15 Arm Limited Secure memory for protecting against malicious programs
GB2396930B (en) 2002-11-18 2005-09-07 Advanced Risc Mach Ltd Apparatus and method for managing access to a memory
US7900017B2 (en) * 2002-12-27 2011-03-01 Intel Corporation Mechanism for remapping post virtual machine memory pages
US20060090034A1 (en) * 2004-10-22 2006-04-27 Fujitsu Limited System and method for providing a way memoization in a processing environment
GB2448523B (en) * 2007-04-19 2009-06-17 Transitive Ltd Apparatus and method for handling exception signals in a computing system
KR101671494B1 (en) 2010-10-08 2016-11-02 삼성전자주식회사 Multi Processor based on shared virtual memory and Method for generating address translation table
FR3065826B1 (en) * 2017-04-28 2024-03-15 Patrick Pirim AUTOMATED METHOD AND ASSOCIATED DEVICE CAPABLE OF STORING, RECALLING AND, IN A NON-VOLATILE MANNER, ASSOCIATIONS OF MESSAGES VERSUS LABELS AND VICE VERSA, WITH MAXIMUM LIKELIHOOD
US10930350B2 (en) * 2018-12-20 2021-02-23 SK Hynix Inc. Memory device for updating micro-code, memory system including the memory device, and method for operating the memory device

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB1055630A (en) * 1963-04-01 1967-01-18 Gen Electric Content addressed memory system
GB1281387A (en) * 1969-11-22 1972-07-12 Ibm Associative store
GB1360585A (en) * 1971-12-30 1974-07-17 Ibm Functional memories
GB1457423A (en) * 1973-01-17 1976-12-01 Nat Res Dev Associative memories
GB1543736A (en) * 1976-06-21 1979-04-04 Nat Res Dev Associative processors
US4377855A (en) * 1980-11-06 1983-03-22 National Semiconductor Corporation Content-addressable memory

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4376297A (en) * 1978-04-10 1983-03-08 Signetics Corporation Virtual memory addressing device
GB1595740A (en) * 1978-05-25 1981-08-19 Fujitsu Ltd Data processing apparatus
GB2127994B (en) * 1982-09-29 1987-01-21 Apple Computer Memory management unit for digital computer
US4442482A (en) * 1982-09-30 1984-04-10 Venus Scientific Inc. Dual output H.V. rectifier power supply driven by common transformer winding
USRE37305E1 (en) * 1982-12-30 2001-07-31 International Business Machines Corporation Virtual memory address translation mechanism with controlled data persistence

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB1055630A (en) * 1963-04-01 1967-01-18 Gen Electric Content addressed memory system
GB1281387A (en) * 1969-11-22 1972-07-12 Ibm Associative store
GB1360585A (en) * 1971-12-30 1974-07-17 Ibm Functional memories
GB1457423A (en) * 1973-01-17 1976-12-01 Nat Res Dev Associative memories
GB1543736A (en) * 1976-06-21 1979-04-04 Nat Res Dev Associative processors
US4377855A (en) * 1980-11-06 1983-03-22 National Semiconductor Corporation Content-addressable memory

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO1988007721A1 (en) * 1987-04-02 1988-10-06 Unisys Corporation Associative address translator for computer memory systems
DE10248065B4 (en) * 2001-10-12 2005-12-22 Samsung Electronics Co., Ltd., Suwon Content-Addressable Memory Device
US7689485B2 (en) 2002-08-10 2010-03-30 Cisco Technology, Inc. Generating accounting data based on access control list entries
EP1654657A1 (en) * 2003-07-29 2006-05-10 Cisco Technology, Inc. Force no-hit indications for cam entries based on policy maps
EP1654657A4 (en) * 2003-07-29 2008-08-13 Cisco Tech Inc Force no-hit indications for cam entries based on policy maps
US11074191B2 (en) 2007-06-01 2021-07-27 Intel Corporation Linear to physical address translation with support for page attributes

Also Published As

Publication number Publication date
JPH0622000B2 (en) 1994-03-23
CN1008839B (en) 1990-07-18
FR2583540B1 (en) 1991-09-06
FR2583540A1 (en) 1986-12-19
DE3618163C2 (en) 1995-04-27
DE3618163A1 (en) 1986-12-18
GB2176918A (en) 1987-01-07
SG34090G (en) 1990-08-03
KR870003427A (en) 1987-04-17
GB8612679D0 (en) 1986-07-02
KR900005897B1 (en) 1990-08-13
JPS61286946A (en) 1986-12-17
GB2176918B (en) 1989-11-01
GB2176920B (en) 1989-11-22
HK53590A (en) 1990-07-27
GB8519991D0 (en) 1985-09-18
CN85106711A (en) 1987-02-04

Similar Documents

Publication Publication Date Title
GB2176920A (en) Content addressable memory
US4972338A (en) Memory management for microprocessor system
US5526504A (en) Variable page size translation lookaside buffer
US5412787A (en) Two-level TLB having the second level TLB implemented in cache tag RAMs
KR920005280B1 (en) High speed cache system
US5604879A (en) Single array address translator with segment and page invalidate ability and method of operation
US5173872A (en) Content addressable memory for microprocessor system
US4136385A (en) Synonym control means for multiple virtual storage systems
EP0496288B1 (en) Variable page size per entry translation look-aside buffer
US6493812B1 (en) Apparatus and method for virtual address aliasing and multiple page size support in a computer system having a prevalidated cache
US4803621A (en) Memory access system
US3764996A (en) Storage control and address translation
US5568415A (en) Content addressable memory having a pair of memory cells storing don&#39;t care states for address translation
US3761881A (en) Translation storage scheme for virtual memory system
US5265227A (en) Parallel protection checking in an address translation look-aside buffer
US4685082A (en) Simplified cache with automatic update
US5053951A (en) Segment descriptor unit for performing static and dynamic address translation operations
EP0095033A2 (en) Set associative sector cache
US5530824A (en) Address translation circuit
US7398362B1 (en) Programmable interleaving in multiple-bank memories
US6745292B1 (en) Apparatus and method for selectively allocating cache lines in a partitioned cache shared by multiprocessors
US5218687A (en) Method and apparatus for fast memory access in a computer system
JPH08227380A (en) Data-processing system
US5535351A (en) Address translator with by-pass circuit and method of operation
US5530822A (en) Address translator and method of operation

Legal Events

Date Code Title Description
PCNP Patent ceased through non-payment of renewal fee

Effective date: 20010808