US20180190339A1 - Apparatuses and methods for accessing and scheduling between a plurality of row buffers - Google Patents

Apparatuses and methods for accessing and scheduling between a plurality of row buffers

Info

Publication number
US20180190339A1
Authority
US
United States
Prior art keywords
row
dram
dram array
buffer
bit lines
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
US15/394,860
Other versions
US10068636B2 (en)
Inventor
Berkin Akin
Shigeki Tomishima
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tahoe Research Ltd
Original Assignee
Intel Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Intel Corp filed Critical Intel Corp
Priority to US15/394,860 priority Critical patent/US10068636B2/en
Assigned to INTEL CORPORATION reassignment INTEL CORPORATION ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: AKIN, BERKIN, TOMISHIMA, SHIGEKI
Publication of US20180190339A1 publication Critical patent/US20180190339A1/en
Application granted granted Critical
Publication of US10068636B2 publication Critical patent/US10068636B2/en
Assigned to TAHOE RESEARCH, LTD. reassignment TAHOE RESEARCH, LTD. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: INTEL CORPORATION
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G11INFORMATION STORAGE
    • G11CSTATIC STORES
    • G11C11/00Digital stores characterised by the use of particular electric or magnetic storage elements; Storage elements therefor
    • G11C11/21Digital stores characterised by the use of particular electric or magnetic storage elements; Storage elements therefor using electric elements
    • G11C11/34Digital stores characterised by the use of particular electric or magnetic storage elements; Storage elements therefor using electric elements using semiconductor devices
    • G11C11/40Digital stores characterised by the use of particular electric or magnetic storage elements; Storage elements therefor using electric elements using semiconductor devices using transistors
    • G11C11/401Digital stores characterised by the use of particular electric or magnetic storage elements; Storage elements therefor using electric elements using semiconductor devices using transistors forming cells needing refreshing or charge regeneration, i.e. dynamic cells
    • G11C11/4063Auxiliary circuits, e.g. for addressing, decoding, driving, writing, sensing or timing
    • G11C11/407Auxiliary circuits, e.g. for addressing, decoding, driving, writing, sensing or timing for memory cells of the field-effect type
    • G11C11/409Read-write [R-W] circuits 
    • G11C11/4093Input/output [I/O] data interface arrangements, e.g. data buffers
    • GPHYSICS
    • G11INFORMATION STORAGE
    • G11CSTATIC STORES
    • G11C11/00Digital stores characterised by the use of particular electric or magnetic storage elements; Storage elements therefor
    • G11C11/21Digital stores characterised by the use of particular electric or magnetic storage elements; Storage elements therefor using electric elements
    • G11C11/34Digital stores characterised by the use of particular electric or magnetic storage elements; Storage elements therefor using electric elements using semiconductor devices
    • G11C11/40Digital stores characterised by the use of particular electric or magnetic storage elements; Storage elements therefor using electric elements using semiconductor devices using transistors
    • G11C11/401Digital stores characterised by the use of particular electric or magnetic storage elements; Storage elements therefor using electric elements using semiconductor devices using transistors forming cells needing refreshing or charge regeneration, i.e. dynamic cells
    • G11C11/4063Auxiliary circuits, e.g. for addressing, decoding, driving, writing, sensing or timing
    • G11C11/407Auxiliary circuits, e.g. for addressing, decoding, driving, writing, sensing or timing for memory cells of the field-effect type
    • G11C11/4076Timing circuits
    • GPHYSICS
    • G11INFORMATION STORAGE
    • G11CSTATIC STORES
    • G11C11/00Digital stores characterised by the use of particular electric or magnetic storage elements; Storage elements therefor
    • G11C11/21Digital stores characterised by the use of particular electric or magnetic storage elements; Storage elements therefor using electric elements
    • G11C11/34Digital stores characterised by the use of particular electric or magnetic storage elements; Storage elements therefor using electric elements using semiconductor devices
    • G11C11/40Digital stores characterised by the use of particular electric or magnetic storage elements; Storage elements therefor using electric elements using semiconductor devices using transistors
    • G11C11/401Digital stores characterised by the use of particular electric or magnetic storage elements; Storage elements therefor using electric elements using semiconductor devices using transistors forming cells needing refreshing or charge regeneration, i.e. dynamic cells
    • G11C11/4063Auxiliary circuits, e.g. for addressing, decoding, driving, writing, sensing or timing
    • G11C11/407Auxiliary circuits, e.g. for addressing, decoding, driving, writing, sensing or timing for memory cells of the field-effect type
    • G11C11/409Read-write [R-W] circuits 
    • G11C11/4091Sense or sense/refresh amplifiers, or associated sense circuitry, e.g. for coupled bit-line precharging, equalising or isolating
    • GPHYSICS
    • G11INFORMATION STORAGE
    • G11CSTATIC STORES
    • G11C11/00Digital stores characterised by the use of particular electric or magnetic storage elements; Storage elements therefor
    • G11C11/21Digital stores characterised by the use of particular electric or magnetic storage elements; Storage elements therefor using electric elements
    • G11C11/34Digital stores characterised by the use of particular electric or magnetic storage elements; Storage elements therefor using electric elements using semiconductor devices
    • G11C11/40Digital stores characterised by the use of particular electric or magnetic storage elements; Storage elements therefor using electric elements using semiconductor devices using transistors
    • G11C11/401Digital stores characterised by the use of particular electric or magnetic storage elements; Storage elements therefor using electric elements using semiconductor devices using transistors forming cells needing refreshing or charge regeneration, i.e. dynamic cells
    • G11C11/4063Auxiliary circuits, e.g. for addressing, decoding, driving, writing, sensing or timing
    • G11C11/407Auxiliary circuits, e.g. for addressing, decoding, driving, writing, sensing or timing for memory cells of the field-effect type
    • G11C11/409Read-write [R-W] circuits 
    • G11C11/4094Bit-line management or control circuits
    • GPHYSICS
    • G11INFORMATION STORAGE
    • G11CSTATIC STORES
    • G11C11/00Digital stores characterised by the use of particular electric or magnetic storage elements; Storage elements therefor
    • G11C11/21Digital stores characterised by the use of particular electric or magnetic storage elements; Storage elements therefor using electric elements
    • G11C11/34Digital stores characterised by the use of particular electric or magnetic storage elements; Storage elements therefor using electric elements using semiconductor devices
    • G11C11/40Digital stores characterised by the use of particular electric or magnetic storage elements; Storage elements therefor using electric elements using semiconductor devices using transistors
    • G11C11/401Digital stores characterised by the use of particular electric or magnetic storage elements; Storage elements therefor using electric elements using semiconductor devices using transistors forming cells needing refreshing or charge regeneration, i.e. dynamic cells
    • G11C11/4063Auxiliary circuits, e.g. for addressing, decoding, driving, writing, sensing or timing
    • G11C11/407Auxiliary circuits, e.g. for addressing, decoding, driving, writing, sensing or timing for memory cells of the field-effect type
    • G11C11/409Read-write [R-W] circuits 
    • G11C11/4096Input/output [I/O] data management or control circuits, e.g. reading or writing circuits, I/O drivers or bit-line switches 
    • GPHYSICS
    • G11INFORMATION STORAGE
    • G11CSTATIC STORES
    • G11C7/00Arrangements for writing information into, or reading information out from, a digital store
    • G11C7/22Read-write [R-W] timing or clocking circuits; Read-write [R-W] control signal generators or management 
    • GPHYSICS
    • G11INFORMATION STORAGE
    • G11CSTATIC STORES
    • G11C2207/00Indexing scheme relating to arrangements for writing information into, or reading information out from, a digital store
    • G11C2207/22Control and timing of internal memory operations
    • G11C2207/2209Concurrent read and write
    • GPHYSICS
    • G11INFORMATION STORAGE
    • G11CSTATIC STORES
    • G11C2207/00Indexing scheme relating to arrangements for writing information into, or reading information out from, a digital store
    • G11C2207/22Control and timing of internal memory operations
    • G11C2207/2245Memory devices with an internal cache buffer
    • GPHYSICS
    • G11INFORMATION STORAGE
    • G11CSTATIC STORES
    • G11C7/00Arrangements for writing information into, or reading information out from, a digital store
    • G11C7/10Input/output [I/O] data interface arrangements, e.g. I/O data control circuits, I/O data buffers
    • G11C7/1015Read-write modes for single port memories, i.e. having either a random port or a serial port
    • G11C7/1039Read-write modes for single port memories, i.e. having either a random port or a serial port using pipelining techniques, i.e. using latches between functional memory parts, e.g. row/column decoders, I/O buffers, sense amplifiers

Definitions

  • the present disclosure generally relates to computer memory systems and, more particularly, to Dynamic Random Access Memory (DRAM).
  • the present disclosure further relates to methods and interfaces between DRAM and data row buffers, including scheduling of DRAM.
  • Memory systems typically comprise a plurality of Dynamic Random Access Memory (DRAM) integrated circuits, referred to herein as DRAM devices or chips, which are connected to one or more processors via one or more memory channels.
  • On each chip or DRAM die, one or more DRAM banks are formed, which typically work together to respond to a memory request.
  • In each bank, multiple arrays (also known as subarrays or mats) are formed, each array including a row buffer to act as a cache.
  • Conventional DRAM architectures use a single row buffer for each array in the DRAM.
  • DRAM is considered dynamic in nature because DRAM cells periodically lose their state over time. Information stored in the rows and columns of the array is “sensed” by bit lines of the DRAM. In order to utilize bit lines in the DRAM, there must be a precharging process.
  • In conventional DRAM bank precharging of bit lines, a precharge command cannot be overlapped with other operations.
  • When scheduling such DRAM architectures, multiple commands, including precharging a row in the array or sensing a row into the single row buffer, are scheduled in a pipelined manner.
  • However, the effective access latency is increased because the required serialization of commands creates a bottleneck in the pipeline.
  • Write recovery latency becomes part of the critical path when switching rows after a write.
  • FIG. 1 shows an example of a DRAM array
  • FIG. 2A shows a block diagram of a top hierarchical view of a DRAM system according to an example
  • FIG. 2B shows a block diagram of a middle hierarchical view of a DRAM bank according to an example
  • FIG. 2C shows a block diagram of a lower hierarchical view of a DRAM double row buffer with dual sense amplifier sets according to an example
  • FIG. 3A illustrates a timing diagram of a conventional row address strobe (RAS) operation of a single row buffer system
  • FIG. 3B illustrates a timing diagram of a modified RAS operation using the example DRAM array
  • FIG. 4A illustrates a flow chart of a row data cycle from start to end according to an example
  • FIG. 4B illustrates a flow chart of a plurality of row data cycles according to an example
  • FIG. 5 illustrates a detailed timing diagram of scheduling of one or more data cycles using the example DRAM array
  • FIG. 6 illustrates a detailed timing diagram of a read variation using the example DRAM array
  • FIG. 7 illustrates a detailed timing diagram of a second read variation using the example DRAM array
  • FIG. 8 illustrates a detailed timing diagram of a write variation using the example DRAM array
  • memory circuits include dynamic volatile memory, which may include DRAM (dynamic random access memory), or some variant such as synchronous DRAM (SDRAM).
  • Systems utilizing DRAM as main memory, multi-level memory, caching, etc., may be included.
  • a memory subsystem as described herein may be compatible with a number of memory technologies, such as DDR3 (dual data rate version 3, original release by JEDEC (Joint Electronic Device Engineering Council) on Jun. 27, 2007, currently on release 21), DDR4 (DDR version 4, initial specification published in September 2012 by JEDEC), DDR4E (DDR version 4, extended, currently in discussion by JEDEC), LPDDR3 (low power DDR version 3, JESD209-3B, August 2013 by JEDEC), LPDDR4 (LOW POWER DOUBLE DATA RATE (LPDDR) version 4, JESD209-4, originally published by JEDEC in August 2014), WIO2 (Wide I/O 2 (WideIO2), JESD229-2, originally published by JEDEC in August 2014), HBM (HIGH BANDWIDTH MEMORY DRAM, JESD235, originally published by JEDEC in October 2013), DDR5 (DDR version 5, currently in discussion by JEDEC), LPDDR5 (currently in discussion by JEDEC), HBM2 (HBM version 2, currently in discussion by JEDEC), and/or others, and technologies based on derivatives or extensions of such specifications.
  • non-volatile memory technologies include block addressable memory devices, such as NAND or NOR technologies.
  • memory technologies can also include future generation non-volatile devices, such as a three-dimensional crosspoint memory device or other byte-addressable nonvolatile memory devices, or memory devices that use chalcogenide-phase change material (e.g., chalcogenide glass).
  • the memory technologies can be or include multi-threshold level NAND flash memory, NOR flash memory, single or multi-level Phase Change Memory (PCM), a resistive memory, nanowire memory, ferroelectric transistor random access memory (FeTRAM), spin transfer torque (STT)-MRAM, or a combination of any of the above, or other memory.
  • a conventional DRAM chip comprises multiple DRAM banks sharing input/output (I/O) means, e.g., I/O pins.
  • Each bank has multiple DRAM cell arrays and each DRAM array has a row buffer.
  • an “array” may also refer to a subarray, mat, or, in aggregate, a bank or subsection of a bank of the DRAM chip.
  • FIG. 1 shows an example of a proposed solution to the aforementioned latency issues: a DRAM array with a double row buffer (herein also known as a Double Row Buffer DRAM or DRB-DRAM 100 ).
  • a DRAM array 110 comprises a plurality of bit lines 120 connectable, respectively, to at least two row buffers 130 a, b of the DRAM array 110 .
  • the two row buffers may be respectively connectable to data I/O lines.
  • two row buffers 130 a, b may be integrated within the DRAM array and will be used interchangeably so as to provide the roles of serving and backing row buffers, respectively.
  • a serving row buffer is a row buffer connected to input/output.
  • a backing row buffer is a row buffer connected to bit lines. Each of the plurality of bit lines is connectable to the row buffers in that either row buffer may be, at any time, connected to a bit line.
  • the two row buffers 130 a, b are configured to be electrically connected to the bit lines 120 and data I/O lines 140 in a mutually exclusive manner. That is, the row buffers 130 a, b may be either serving row buffers or backing row buffers, but may not be both. Further, only one or the other may fulfill a respective role.
  • the plurality of bit lines 120 are coupled, respectively, to the two row buffers 130 a, b via a bit line access gate transistor 132 a, b , whereby when one of the two row buffers 130 a, b is electrically connected to a bit line 120 , another of the two row buffers is not electrically connected to a bit line.
  • the plurality of data I/O lines 140 are coupled, respectively, to the two row buffers 130 a, b via a data I/O access gate transistor 134 a, b , whereby when one of the two row buffers is electrically connected to a data I/O line, another of the two row buffers is not electrically connected to a data I/O line.
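As an illustration of the mutually exclusive gating described in the preceding bullets, the following minimal sketch (not the patent's circuit; class and variable names are hypothetical) models two row buffers whose bit line and data I/O connections are always complementary.

```python
# Conceptual sketch only: models the mutually exclusive BA/DA gating of the
# two row buffers 130 a, b. Names are illustrative, not from the disclosure.
class DoubleRowBufferArray:
    def __init__(self):
        self.backing = "RB0"   # buffer whose bit line access gate (BA) is on
        self.serving = "RB1"   # buffer whose data I/O access gate (DA) is on

    def toggle_roles(self):
        """Swap the BA and DA connections so the two buffers exchange roles."""
        self.backing, self.serving = self.serving, self.backing
        # Mutual exclusion: a buffer is never on the bit lines and I/O lines at once.
        assert self.backing != self.serving

array = DoubleRowBufferArray()
print(array.backing, array.serving)   # RB0 RB1
array.toggle_roles()
print(array.backing, array.serving)   # RB1 RB0
```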
  • Any of the above proposed configurations may be implemented as: a DRAM array; a DRAM chip comprising at least one DRAM array; a DRAM module, comprising a plurality of DRAM chips, etc.
  • When a new row is being activated within the DRAM array, the row is sensed into a backing row buffer.
  • the two row buffers change roles (i.e., the serving buffer becomes the backing buffer and vice versa).
  • the serving row buffer thus performs column I/O operations while the backing row buffer restores an updated row to the DRAM array and precharges the bit lines in preparation to sense the next row.
  • a DRAM module further comprises a signal interface configured to receive: a first micro-operation for sensing a first row of the DRAM array in a row cycle; and a second micro-operation for restoring contents of a second row of the DRAM array in the row cycle.
  • a DRAM controller may be implemented, the DRAM controller configured to issue micro-operations to perform the aforementioned steps of sensing and restoring. However, the issuance of micro-operations may be made internal to the DRAM module.
  • Row activation is considered to be a disruptive read in the DRB-DRAM system. After a row is sensed, the only valid copy will be in the serving row buffer. The value in the serving row buffer, with any potential updates carried out through the current row cycle, is to be restored back in the DRAM array in a next row cycle.
  • a single row cycle RAS timing is divided into two sections: sense and restore. That is, in a proposed configuration, the micro-operation for sensing is separable from the micro-operation for restoring in the row cycle.
  • the micro-operation for sensing senses a first row of the DRAM array with a first row buffer of the DRAM array connected via a bit line in a first row cycle.
  • the micro-operation for restoring may restore contents of a second row buffer to a second row of the DRAM array in the first row cycle.
  • a restore operation may restore the updated row in the backing row buffer from the previous row cycle to its original location in the DRAM array. This allows the proposed DRB-DRAM solution to take the write recovery timing T WR off the critical path.
  • an example of the DRB-DRAM implementation can skip explicit write recovery, as the updated row in the serving row buffer will be restored in the array in the next row cycle, off the critical path, overlapped with column I/O.
  • a micro-operation is performed for precharging the bit lines in the first row cycle after restoring contents of the second row buffer to the second row of the DRAM array.
  • a subsequent access request to sense another row is performed after precharging the bit lines of the DRAM array in the first row cycle.
  • bit lines and the backing row buffer will be precharged in preparation to sense the next row upon a potential row buffer miss in the serving row buffer. Meanwhile, the serving row buffer will continue to perform column I/O.
  • the backing row buffer is ready to sense the next row upon a miss in the serving row buffer, taking precharge timing off the critical path of the row miss. Concurrent to this, a row hit access is still directly served from the serving row buffer.
  • a proposed DRB-DRAM system has at least an advantage over a conventional DRAM in that the DRB-DRAM architecture allows for overlapping precharge and restore (write recovery) with data I/O.
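To visualize the overlap claimed above, the following toy timeline (durations are arbitrary illustrative units, not DRAM timing parameters) shows restore and precharge on the backing side completing while column I/O continues on the serving side.

```python
# Illustrative timeline only; durations are arbitrary units, not real timings.
activities = [
    ("serving buffer: column I/O",     0, 30),
    ("backing buffer: restore (RES)",  0, 15),
    ("bit lines: precharge (PRE)",    15, 26),
]
for name, start, end in activities:
    print(f"{name:32s} [{start:2d} .. {end:2d}]")
# Restore and precharge finish at t = 26 while column I/O is still in flight,
# so neither appears on the critical path of a subsequent row miss.
```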
  • DRB-DRAM 100 (or double row buffer DRAM array) includes an additional row buffer 130 b beyond the conventional single row buffer 130 a of the DRAM array.
  • the DRB-DRAM 100 includes at least one DRAM array 110 .
  • a DRAM array 110 comprises a plurality of rows 110 n , where n is a positive integer.
  • Each row stores columns of cells, which hold data to be read out and written to by a memory system utilizing the DRAM array 110 .
  • a plurality of bit lines 120 are connectable to each row 110 n of the DRAM array 110 such that data may be accessed; that is, row data may be read out of the row 110 n by a bit line 120 whereby the data on said row 110 n degrades.
  • Before accessing a row 110 n , however, the bit lines 120 must be precharged (PRE); precharging of the bit lines 120 occurs after closing every row. The act of precharging causes a reference voltage V ref to be applied identically to all bit lines, so that all bit lines are at the same potential. Then, an individual row to be read out is activated using the voltage of a bit line: the connection of the memory cells to the bit lines causes the bit line voltage to change slightly, which enables readout. Precharging the bit lines is thus a prerequisite step to the row access operation subsequently performed.
  • an outside signal is given to the DRB-DRAM 100 to activate (ACT) a particular row 110 n in the DRAM array 110 .
  • the word line (WL) of the corresponding row is activated (ACT), making the bit lines 120 carry data from a respective row 110 n .
  • cells of the row to be activated discharge their contents onto the bit lines, causing a change of the voltage on the bit line that corresponds to the stored logical content.
  • the read-out content is stored in a row buffer 130 .
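As a toy numeric illustration of the precharge-and-sense sequence just described (voltages are arbitrary and not taken from the disclosure), the bit lines are first equalized to a reference potential, and connecting a cell perturbs that potential slightly, which the sense amplifier resolves into a logic value.

```python
# Toy illustration only: arbitrary voltages, not real DRAM parameters.
VDD = 1.1
V_REF = VDD / 2          # precharge equalizes every bit line to the same potential

def sense(cell_stores_one: bool, delta: float = 0.05) -> int:
    """Return the logic value resolved by the sense amplifier."""
    bitline = V_REF + (delta if cell_stores_one else -delta)  # cell perturbs the bit line
    return 1 if bitline > V_REF else 0

print(sense(True), sense(False))   # 1 0
```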
  • the plurality of bit lines 120 are connectable to at least two row buffers 130 a , 130 b of the DRAM array.
  • the bit lines 120 carry the row data between the DRAM array 110 and the row buffers 130 .
  • Data is accessed from the row buffers 130 a, b by the system through connection to data I/O lines 140 .
  • FIGS. 2A-C give a block diagram of a top-down hierarchical view of the DRB-DRAM and system utilizing said DRB-DRAM, to which concepts proposed herein may be applied.
  • FIG. 2A is a block diagram of a DRAM system 200 utilizing a DRAM chip with a double row buffer of FIG. 1 .
  • the DRAM system 200 may be integrated on a chip itself or may comprise several components that are separated. It should be understood that the system may be implemented in many possible combinations and that the DRAM system is not limited to the configuration of FIG. 2A .
  • the DRAM system 200 comprises at least one DRAM die 206 (also known as a DRAM chip).
  • An example of a DRAM system 200 may comprise a plurality of DRAM chips 206 , such DRAM chips making up a DRAM module (not shown).
  • DRAM system 200 may comprise a memory controller 250 , which is configured to, in part, initiate operations for a DRAM chip or module.
  • the memory controller 250 of an example of the present disclosure may be integrated into a microprocessor 260 or may be separate from microprocessor 260 .
  • the memory controller 250 of microprocessor 260 may be coupled to the common data bus or DRAM chip input/output pad 230 for bidirectional communication of data signals 240 .
  • the microprocessor 260 may include at least one memory controller 250 but this number is not to be limiting. If a microprocessor 260 supports multiple memory channels, such a microprocessor 260 may be configured to include a separate memory controller 250 for each memory channel.
  • Data signals 240 may include any combination of DRAM command signals.
  • the microprocessor 260 may be a single or multi-core microprocessor.
  • the memory controller 250 issues signals to the DRAM chip 206 , causing it to, e.g., precharge bit lines within the DRAM chip 206 , activate a row of the DRAM chip, and sense contents of the memory cells of a row. These signals may be part of the data signals 240 directed to the DRAM chip or module or to individual components of the DRAM chip 206 itself.
  • a DRAM chip 206 may have, as an example, one or more DRAM banks 210 sharing input/output means, e.g., I/O pins.
  • FIG. 2B gives a next, lower-tier example of the DRAM system hierarchy according to an example of the present disclosure.
  • Each bank 210 may contain multiple DRB-DRAMs 100 described in FIG. 1 .
  • a DRB-DRAM 100 may contain a DRAM array 110 , which contains an array of memory cells organized by row and by column.
  • the DRB-DRAM 100 may also contain two or more row buffers 130 a, b.
  • a row buffer 130 a, b holds a most recently accessed row, so any access request to the DRAM array 110 that seeks data of the most recent row will be considered a “hit” and shall be serviced directly from a row buffer. That is, a row in the DRAM array need not be activated if said row has already been sensed to a row buffer. However, if an access command is sent for data outside of that which has been stored in a row buffer, this will be considered a “miss,” and another row must be activated. Thus, if a “miss” occurs, then the cycle must be repeated of PRE, ACT, and READ, as issued by the memory controller 250 .
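The hit/miss behavior described above can be summarized with a short sketch; the controller model is hypothetical, but the command names (PRE, ACT, READ) follow the text.

```python
# Hypothetical controller sketch; PRE/ACT/READ follow the command names above.
def commands_for_read(requested_row: str, open_row: str | None) -> list[str]:
    if requested_row == open_row:       # row buffer hit: served directly
        return ["READ"]
    return ["PRE", "ACT", "READ"]       # row buffer miss: the cycle repeats

print(commands_for_read("row7", "row7"))   # ['READ']
print(commands_for_read("row9", "row7"))   # ['PRE', 'ACT', 'READ']
```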
  • a value stored or sensed in a row of the DRAM array will initially be destroyed with every read operation.
  • Automatic write-back of data, or write-recovery, is conventionally performed at the end of each READ.
  • a write-recovery micro-operation RES is issued by the memory controller or generated and handled internally by DRAM control logic to cause a restore from another row buffer than the row buffer used for the preceding read-out.
  • FIG. 2C provides a more detailed example of the row buffer 130 a, b of the DRB-DRAM architecture of the present disclosure.
  • the DRB-DRAM architecture includes one or more dual or double row buffers 330 a , 330 b , each of which comprise a sense amplifier 310 and electrical components. That is, a double row buffer may be alternatively known as a set of “sense amplifiers”.
  • Each row buffer 330 a , 330 b may include bit line access gate transistors 340 (also known as bit line access connection gates), which respectively connect the row buffer to a bit line 320 when a bit line access (BA) signal is asserted.
  • Each row buffer 330 a , 330 b may include data I/O access gate transistors 350 (also known as data I/O access connection gates), which respectively connect the row buffer to local data I/O lines 370 when a data I/O access (DA) signal is asserted.
  • the sense amplifiers 310 a, b in the row buffers are connected to the bit lines 320 via bit line access connection gates 340 controlled by BA signals.
  • the sense amplifiers 310 a, b are connected to column select transistors 360 (which are eventually connected to local I/O and global I/O) via data I/O access connection gates 350 controlled by DA signals.
  • the local data I/O line 370 is electrically connected to the sense amplifier 310 b of the second row buffer 330 b .
  • the bit line access signal BA and the data I/O access signal DA may be respectively toggled, or switched from one state or effect to another, in any manner of timings and data combinations, e.g., DA may change from 0 to 1, or from 1 to 0, during a period of time when BA is 0 or 1, etc.
  • the two row buffers 330 a, b receive BA and DA access signals that are inverted with respect to each other.
  • An advantage of the aforementioned configuration is that it allows one of the row buffers to hold an active row and to be accessed via the data I/O lines while the second row buffer restores (or write-recovers) its values to the DRAM array.
  • the bit lines of the DRAM array may be precharged while data is still being accessed from another row buffer. This allows decoupling of the local I/O data lines from precharge and charge restore. This technology is used to implement early precharge and late restore, which reduces the critical path latency of row buffer misses.
  • examples may be implemented using a novel modified RAS timing that is divided into distinct stages or phases of Sense and Restore.
  • a Restore phase according to an example of the present disclosure is controlled with a proposed restore (RES) micro-operation in the DRAM.
  • RAS timing is implemented to first sense a selected row, disrupting it in the DRAM array. Subsequently, a RES micro-operation restores a row that had been modified from the previous row cycle (RC). A disrupted row from this row cycle will be restored in the next row cycle after being modified in a row buffer.
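The sense-then-restore ordering can be sketched as follows (conceptual only; the function is not the DRAM's control logic): each row cycle senses the newly requested row and then restores the row that was served, and possibly modified, in the previous cycle.

```python
# Conceptual sketch of the modified RAS ordering described above.
def modified_ras(requests):
    previous = None                      # row held (and possibly modified) last cycle
    for row in requests:
        print(f"ACT {row}: sense row {row} into the backing row buffer")
        if previous is not None:
            print(f"RES {previous}: restore row {previous} to the DRAM array")
        previous = row                   # buffers swap roles; the new row is now served

modified_ras(["A", "B", "C"])
```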
  • FIG. 3A is an example of a conventional method of RAS timing using a single buffer.
  • Utilization of a single buffer in part means that only one row may be cached at a time.
  • When a row in the DRAM array is activated (ACT), the row is first sensed into the row buffer through the precharged bit lines. At that point, the row in the DRAM array is disrupted, i.e., the data previously stored in the row has been compromised. Data must be restored back into the row in order to preserve the row contents, but conventional DRAM systems use only one row buffer. This means the sensed row stored in the row buffer must be restored back into the disrupted row at its original location in the DRAM array.
  • the RAS timing thus must include both sense and restore timings in a serial manner, i.e., consecutively performed, where the word-line of the selected row (WL A in FIG. 3A ) remains high, establishing the connection between the DRAM array and the row buffer.
  • the conventional RAS timing is divided into the two phases: Sense and Restore.
  • the sense and restore timings need not be performed serially but may instead be performed among other operations.
  • FIG. 3B exhibits a new DRAM micro-operation (μOp) called “restore” (RES) to effectively change the conventional RAS operation.
  • the RAS timing may now be divided into two phases.
  • the initial phase is the sensing time T SEN , which is the time it takes to sense the row into the row buffer connected to a bit line.
  • T SEN may be thought of as T RAS minus the new time T RES of the restore micro-operation.
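Restated in equation form, using the symbols above:

```latex
T_{RAS} = T_{SEN} + T_{RES} \quad\Longleftrightarrow\quad T_{SEN} = T_{RAS} - T_{RES}
```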
  • word-lines A and B are both processed with the modified RAS operation using RES μOps. It is assumed in FIG. 3B that word line B has already been sensed into one of the row buffers in a previous row cycle.
  • An operation signal is received to activate word-line A in a DRAM row (ACT A).
  • the RES micro-operation closes the word line of the current row and opens the word line B of the modified row from the previous row cycle (RES B).
  • During the restore timing T RES , the modified row will be restored in the DRAM array.
  • the bit line access signal will remain high until a new activate request is received (ACT X), whereby word line A is restored from the row buffer back into the DRAM array (RES A).
  • the timing diagram of the present disclosure is meant to be a conceptual timing diagram and is not limited to real or exact timings.
  • the timing of the ACT A command may not be exact, e.g., the real, internal word-line rising timing may not align to the ACT A command exactly but may generally be delayed inside the DRAM chip.
  • Utilizing double row buffers in the DRB DRAM system allows for efficient operation and reduced latency in data cycles. Further implementing the modified RAS timing with double row buffers allows for further reduction in latency through at least two important features: early precharge and “lazy” restore.
  • In single-buffer DRAM systems with conventional RAS timing, precharge must occur serially after activation of a particular row. However, early precharge, as in an example of the present disclosure, may occur while a particular row has been activated and sensed into a row buffer.
  • FIG. 4A shows a row data cycle from start to end.
  • a row is activated (ACT) in the DRAM array (S 1 ).
  • the current row is subsequently sensed (READ) in an initial row buffer RB 0 , which is connected to bit lines in the DRAM array (S 2 ).
  • a restore micro-operation (RES μOp) is issued (S 3 A), which operates to toggle the bit line access signal so as to connect another row buffer RB 1 to the bit lines BL. It is assumed that row buffer RB 1 has contents from a previous row cycle.
  • the bit line access signal is switched from 1 to 0, which connects the bit lines BL from an initial row buffer RB 0 to the other row buffer RB 1 (S 4 A).
  • the initial row buffer RB 0 has been decoupled from the bit lines (S 4 A).
  • a data I/O access signal is switched from 0 to 1, which connects the data I/O lines (LIO) to the initial row buffer RB 0 (S 3 B).
  • a valid open row is thus held so that data I/O may be performed from the row buffer RB 0 (S 4 B).
  • data I/O (S 4 B) may therefore be performed in an overlapping timing with bit line precharging (S 6 A).
  • the row data cycle is completed, at time T RAS +T RP .
  • any subsequent row hit will be served from the initial row buffer RB 0 with a latency of time T CL , that is, the Column Address Strobe (CAS) latency, or the number of cycles between sending a column address to the DRAM memory and the beginning of the data I/O in response.
  • Any row miss will have a latency of time T RCD , that is, the row address to column address delay, or the minimum number of clock cycles required between opening a row of memory and accessing columns within the row, plus time T CL (T RCD +T CL ).
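Using illustrative cycle counts (hypothetical values, not taken from any specification), the hit and miss latencies compare as follows.

```python
# Illustrative cycle counts only; not values from a JEDEC specification.
T_CL, T_RCD = 11, 11

hit_latency  = T_CL            # row hit: served directly from the serving row buffer
miss_latency = T_RCD + T_CL    # row miss: open the new row, then access the column

print(hit_latency, miss_latency)   # 11 22
```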
  • FIG. 4B shows a plurality of data cycles of a row buffer in the double row buffer DRAM system, according to a principle of “lazy” restore.
  • According to “lazy” restore, an initial row buffer RB 0 is decoupled from the bit lines BL (S L 3 ) after a current row is activated and sensed (S L 1 and S L 2 ) into the row buffer RB 0 .
  • all access requests for data are serviced directly from the initial row buffer RB 0 holding the current row (S L 4 and S L 5 ).
  • the row of the DRAM array corresponding to the contents of RB 0 has been disrupted.
  • the valid value of the aforementioned row is maintained in RB 0 .
  • DRB-DRAM allows activating the new row (S L 7 ) without restoring the contents of RB 0 to the DRAM array, hence avoiding serialized T WR latency on the critical path.
  • a restore micro-operation (RES μOp) is issued, which connects the initial row buffer RB 0 to the bit lines BL (S L 9 ) and which asserts the word-line WL of the row stored in the initial row buffer RB 0 (S L 10 ). This restores the disrupted row in the DRAM array according to the modified row in row buffer RB 0 .
  • an activation request for a new row can be issued immediately after a READ or WRITE hit.
  • the activation request may avoid waiting for serialized latencies of, for example: T RTP , i.e., the read to precharge delay or the time that takes between the reading of data in the row and the closing of the row; T RP , i.e., the row precharge time; and T WR , i.e., the write recovery time or the time that must elapse between the last write command to a row and the precharge of said row.
  • FIG. 4A and FIG. 4B describe the case where RB 0 acts as serving row buffer and RB 1 acts as backing row buffer initially.
  • RB 0 and RB 1 are not limited to these roles and may toggle them; hence RB 0 and RB 1 can be switched in the flow diagrams of FIGS. 4A and 4B .
  • FIG. 5 is a detailed timing diagram of scheduling of one or more data cycles using the example DRAM array and the modified RAS operation above. It can be understood by those skilled in the art that the timing diagram herein is meant to show relationships between listed stages of the scheduling and is not directed to specific time intervals.
  • initial conditions are in place such that a row buffer RB 1 (as referenced in, e.g., FIG. 1 as row buffer 130 b ) holds row A, which has been modified from a previous row cycle.
  • After opening row A for data access, at least T RC amount of time has passed, implying that the bit lines BL have been precharged.
  • an access request has been made for row B.
  • An activation operation (ACT B) is sent from a memory controller and arrives at the DRAM, requesting that row B be activated in the DRAM array.
  • the word-line B goes high, and row buffer RB 0 starts sensing row B through the precharged bit lines (PRE B).
  • time T RCD has elapsed after receiving the activation operation, corresponding to the row address to column address delay.
  • a read (RD) command is sent by the memory controller to read data of row B from the row buffer RB 0 .
  • In a fourth stage S T 4 , the first part, Sensing, of the modified RAS timing has elapsed at time T RAS minus T RES .
  • row B has been fully sensed in row buffer RB 0 .
  • Both rows A and B are disrupted in the DRAM array at the onset of the fourth stage, but row A is to be restored.
  • Word-line A is asserted to restore (RES A) modified row A in row buffer RB 1 to its corresponding location in the DRAM array.
  • time T CL , or the CAS latency, may elapse such that data D may now be sent as a response to the RD command.
  • In a fifth stage S T 5 , the second part, Restore, of the modified RAS timing has elapsed (B→A), as measured from the fourth stage, at time T RES .
  • the total time elapsed from the activation request ACT B is T RAS (or T SEN +T RES ). That is, row A has been restored from row buffer RB 1 back into the DRAM array. Meanwhile, row buffer RB 0 serves column accesses to the open row B.
  • a precharge operation PRE is immediately started to precharge bit lines and row buffer RB 1 (A→PRE).
  • In a sixth stage S T 6 , the precharge PRE has been completed. From now on, any access request (read RD/write WR) to the current row will be served from row buffer RB 0 . If the access request results in a miss, the bit line BL has already been precharged and row buffer RB 1 has been connected to the bit line BL so as to be ready to sense a new row.
  • a new row cycle is started with the arrival of an access request for row C. Access requests to the open row B in row buffer RB 0 are still served directly. However, word line C goes high such that row buffer RB 1 starts to immediately sense row C through the precharged bit lines (PRE C).
  • row B has been fully restored back in the DRAM array (C→B) such that time T RAS has again elapsed.
  • the bit lines and row buffer RB 0 are ready to be precharged (B→PRE). Meanwhile, row buffer RB 1 is holding the open row C and performing column I/O.
  • the stages repeat as part of a data row cycle, which starts upon receipt of a row activation request.
  • row B has been open and read from a first column X
  • row C has been made open and read from a second column Y.
  • Conventional DRAM needs to wait for timing T RTP , i.e., the read to precharge delay, to issue the precharge after the READ operation to column X in row B. Then, it waits for timing T RP for the precharge. Finally, there is a wait of timings T RCD and T CL to get the column Y in row C.
  • DRB-DRAM only waits for timing T CCD to send the ACT for row C after the READ column X in row B, since the bit lines are already precharged and RB 0 is ready to sense a new row, assuming that more than timing T RC elapsed in the current row cycle while performing data I/O on row B. Then similarly, DRB-DRAM waits for timings T RCD and T CL to get the column Y.
  • the latency that can be saved amounts to T RP plus T RTP , i.e., the read to precharge delay, minus T CCD , i.e., the minimum column-to-column command delay.
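A worked example of the read-to-read row switch comparison above, with illustrative timing values (in cycles, not from any specification), confirms that the saving equals T RP plus T RTP minus T CCD.

```python
# Illustrative timing values in memory-clock cycles; hypothetical, not JEDEC values.
T_RTP, T_RP, T_RCD, T_CL, T_CCD = 6, 11, 11, 11, 4

conventional = T_RTP + T_RP + T_RCD + T_CL   # READ X, wait, PRE, ACT, then READ Y
drb_dram     = T_CCD + T_RCD + T_CL          # ACT for row C issued T_CCD after READ X

print(conventional, drb_dram)                # 39 26
print(conventional - drb_dram)               # 13 == T_RP + T_RTP - T_CCD
```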
  • row B has been open and written to at a first column X
  • row C has been made open and read from a second column Y.
  • Conventional DRAM first waits for the timings T CL and T CCD for writing into column X in the open row B. Then, it needs to wait for timing T WR to restore the updated row buffer to the array. Afterwards, it issues a precharge and waits for timing T RP . Finally, conventional DRAM activates the new row C and reads column Y after timings T RCD and T CL .
  • the activation of row C can be issued within timing T CCD after writing column X in the open row B, which is held in RB 1 .
  • DRB-DRAM waits for T RCD and T CL amount of time to activate row C in RB 0 and read column Y.
  • Updated row B in RB 1 is restored with the RES μOp in the next row cycle, overlapped with the data I/O from RB 0 , which holds the new row C.
  • the latency that can be saved amounts to T CL , i.e., the CAS latency, plus T WR , i.e., the write-recovery latency, plus T RP .
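A corresponding worked example for the write case, again with illustrative timing values, confirms that the saving equals T CL plus T WR plus T RP.

```python
# Illustrative timing values in cycles; hypothetical, not taken from a specification.
T_CL, T_CCD, T_WR, T_RP, T_RCD = 11, 4, 12, 11, 11

conventional = T_CL + T_CCD + T_WR + T_RP + T_RCD + T_CL   # write, recover, PRE, ACT, read
drb_dram     = T_CCD + T_RCD + T_CL   # restore of the written row is deferred off the critical path

print(conventional, drb_dram)          # 60 26
print(conventional - drb_dram)         # 34 == T_CL + T_WR + T_RP
```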
  • the examples of the present disclosure keep the low-latency access of row buffer hits while reducing the long-latency command sequences involving a row buffer miss, e.g., a read RD/write WR hit followed by a miss.
  • the examples give a DRAM array the ability to keep an open row while providing the low-latency miss benefit of a closed row. The aforementioned advantageous effects are realized in DRAM/eDRAM based memories implemented as last level cache, multi-level memory and main memory.
  • the proposed DRAM architecture may be implemented in conjunction with other DRAM systems.
  • a marked advantage may be seen, however, when using a double row buffer DRAM array and/or modified RAS timing.
  • Dual row buffers may increase the area overhead of the DRAM since the number of sense amplifiers is doubled in the DRAM along with extra connection gate transistors per array.
  • a DRB-DRAM example as disclosed provides a solution comparable to doubling the number of banks in the DRAM.
  • a DRB-DRAM may issue early precharge and deferred restore while performing I/O.
  • a proposed DRB-DRAM solution can reduce the critical path latency of a row buffer miss.
  • DRB-DRAM can reduce the long latency of a row buffer miss even within a single bank without depending on parallelism of multiple banks. Both bank increase and DRB-DRAM may be implemented together, but it is noted that doubling the number of banks only reduces the chances of a row buffer miss. When a row miss happens, DRB-DRAM can reduce the latency cost of said miss.
  • a first example is a dynamic random access memory (DRAM) array, comprising row buffers; and a plurality of bit lines connectable, respectively, to at least two row buffers of the row buffers.
  • the at least two row buffers are respectively connectable to data input/output (I/O) lines.
  • the plurality of bit lines are coupled, respectively, to the at least two row buffers via a bit line access gate transistor, whereby when one of the at least two row buffers is electrically connected to a bit line, another of the at least two row buffers is not electrically connected to a bit line.
  • the plurality of data I/O lines are coupled, respectively, to the at least two row buffers via a data I/O access gate transistor, whereby when one of the at least two row buffers is electrically connected to a data I/O line, another of the at least two row buffers is not electrically connected to a data I/O line.
  • Example 6 is a dynamic random access memory (DRAM) chip comprising at least one DRAM array of any of examples 1 to 5.
  • the plurality of bit lines of the DRAM array are coupled, respectively, to the at least two row buffers via a bit line access gate transistor, wherein when one of the two row buffers is electrically connected to a bit line, another of the at least two row buffers is not electrically connected to a bit line.
  • the DRAM chip of example 6 or 7 further comprises a signal interface configured to receive a first micro-operation for sensing that causes a sensing of a first row of the DRAM array in a row cycle; and a second micro-operation for restoring that causes a restoring of contents of a second row of the DRAM array in the row cycle.
  • Example 9 is a method for initiation of micro operations at a dynamic random access memory (DRAM) array comprising initiating a micro-operation for sensing a first row of the DRAM array in a row cycle; and initiating a micro-operation for restoring contents of a second row of the DRAM array in the row cycle.
  • initiating the micro-operation for sensing is separable from initiating the micro-operation for restoring.
  • initiating the micro-operation for sensing causes a sensing of a first row of the DRAM array with a first row buffer of the DRAM array connected via a bit line in a first row cycle
  • initiating the micro-operation for restoring causes a restoring of contents of a second row buffer to a second row of the DRAM array in the first row cycle.
  • the method of any of examples 9 to 11 further comprises initiating a micro-operation for precharging bit lines of the DRAM array in the row cycle.
  • Example 13 is a dynamic random access memory (DRAM) system, comprising a DRAM chip of any of examples 6 to 8 and at least a DRAM controller configured to initiate micro operations for the DRAM chip.
  • the DRAM controller comprises: an output interface configured to output a micro-operation for sensing that causes a sensing of a first row of the DRAM array into a first row buffer in a first row cycle; and to output a micro-operation for restoring contents that causes a restoring of a second row of the DRAM array from a second row buffer in the first row cycle.
  • the output interface of the DRAM controller is further configured to output a micro-operation for precharging that causes a precharging of bit lines of the DRAM array in the first row cycle.
  • Example 16 is a method of accessing a dynamic random access memory (DRAM) array, the method comprising sensing a first row of the DRAM array with a first row buffer connected via bit lines in a first row cycle; coupling the bit lines to a second row buffer of the DRAM array; and restoring contents of the second row buffer to a second row of the DRAM array in the first row cycle.
  • the method of accessing a DRAM array of example 16 further comprises precharging the bit lines in the first row cycle after restoring contents of the second row buffer to the second row of the DRAM array.
  • coupling the bit lines to the first row buffer or second row buffer of the DRAM array comprises toggling a bit line access signal.
  • the method of accessing a DRAM array of any of examples 16 to 18 further comprises coupling a data input/output (I/O) line to the first row buffer of the DRAM array in the first row cycle.
  • coupling the data I/O lines to the first row buffer or the second row buffer of the DRAM array comprises toggling a data I/O access signal.
  • the method of accessing a DRAM array of any of examples 16 to 20 further comprises receiving an access request to sense a third row of the DRAM array in a second row cycle; and sensing the third row of the DRAM array with the second row buffer of the DRAM array.
  • receiving an access request to sense a third row is performed after precharging the bit lines of the DRAM array in the first row cycle.
  • the method of accessing a DRAM array of any of examples 16 to 22, further comprises coupling the bit line to the first row buffer of the DRAM array; and restoring contents of the first row buffer to a row of the DRAM array in the second row cycle.
  • Examples may further be a computer program having a program code for supporting one or more of the above methods, when the computer program is executed on a computer or processor. Steps, operations or processes of various above-described methods may be performed by programmed computers or processors. Examples may also cover program storage devices such as digital data storage media, which are machine, processor or computer readable and encode machine-executable, processor-executable or computer-executable programs of instructions. The instructions perform or cause performing some or all of the acts of the above-described methods.
  • the program storage devices may comprise or be, for instance, digital memories, magnetic storage media such as magnetic disks and magnetic tapes, hard drives, or optically readable digital data storage media.
  • Further examples may also cover computers, processors or control units programmed to perform the acts of the above-described methods, or (field) programmable logic arrays ((F)PLAs) or (field) programmable gate arrays ((F)PGAs) programmed to perform the acts of the above-described methods.
  • a functional block denoted as “means for . . . ” performing a certain function may refer to a circuit that is configured to perform a certain function.
  • a “means for s.th.” may be implemented as a “means configured to or suited for s.th.”, such as a device or a circuit configured to or suited for the respective task.
  • any functional blocks labeled as “means,” “means for providing a sensor signal,” “means for generating a transmit signal,” etc. may be implemented in the form of dedicated hardware, such as “a signal provider,” “a signal processing unit,” “a processor,” “a controller,” etc. as well as hardware capable of executing software in association with appropriate software.
  • the functions may be provided by a single dedicated processor, by a single shared processor, or by a plurality of individual processors, some of which or all of which may be shared.
  • The terms “processor” or “controller” are by far not limited to hardware exclusively capable of executing software, but may include digital signal processor (DSP) hardware, network processor, application specific integrated circuit (ASIC), field programmable gate array (FPGA), read only memory (ROM) for storing software, random access memory (RAM), and non-volatile storage.
  • Other hardware, conventional and/or custom, may also be included.
  • a block diagram may, for instance, illustrate a high-level circuit diagram implementing the principles of the disclosure.
  • a flow chart, a flow diagram, a state transition diagram, a pseudo code, and the like may represent various processes, operations or steps, which may, for instance, be substantially represented in computer readable medium and so executed by a computer or processor, whether or not such computer or processor is explicitly shown.
  • Methods disclosed in the specification or in the claims may be implemented by a device having means for performing each of the respective acts of these methods.
  • each claim may stand on its own as a separate example. While each claim may stand on its own as a separate example, it is to be noted that—although a dependent claim may refer in the claims to a specific combination with one or more other claims—other examples may also include a combination of the dependent claim with the subject matter of each other dependent or independent claim. Such combinations are explicitly proposed herein unless it is stated that a specific combination is not intended. Furthermore, it is intended to include also features of a claim to any other independent claim even if this claim is not directly made dependent to the independent claim.

Abstract

The present disclosure relates to a dynamic random access memory (DRAM) array, which comprises a plurality of bit lines connectable, respectively, to at least two row buffers of the DRAM array. The two row buffers are respectively connectable to data input/output (I/O) lines and are configured to be electrically connected to the bit lines and data I/O lines in a mutually exclusive manner.

Description

    FIELD
  • The present disclosure generally relates to computer memory systems and, more particularly, to Dynamic Random Access Memory (DRAM). The present disclosure further relates to methods and interfaces between DRAM and data row buffers, including scheduling of DRAM.
  • BACKGROUND
  • Memory systems typically comprise a plurality of Dynamic Random Access Memory (DRAM) integrated circuits, referred to herein as DRAM devices or chips, which are connected to one or more processors via one or more memory channels. On each chip or DRAM die, one or more DRAM banks are formed, which typically work together to respond to a memory request. Typically, in each bank, multiple arrays (also known as subarrays or mats) are formed, each array including a row buffer to act as a cache. Conventional DRAM architectures use a single row buffer for each array in the DRAM.
  • DRAM is considered dynamic in nature because DRAM cells periodically lose their state over time. Information stored in the rows and columns of the array is “sensed” by bit lines of the DRAM. In order to utilize bit lines in the DRAM, there must be a precharging process.
  • Based on the conventional DRAM architecture, there are several commands that are serialized due to the limitations of the DRAM design. Specifically, in DRAM bank precharging of bit lines, any precharge command cannot be overlapped with other operations. When scheduling such DRAM architectures, multiple commands, including precharging a row in the array or sensing a row into the single row buffer, are scheduled in a pipelined manner. However, the effective access latency is increased because the required serialization of commands creates a bottleneck in the pipeline. Write recovery latency becomes part of the critical path when switching rows after a write.
  • Thus, there is a need for concepts allowing the reduction of access latency and write recovery latency in DRAM architectures.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • Some examples of apparatuses and/or methods will be described in the following by way of example only, and with reference to the accompanying figures, in which:
  • FIG. 1 shows an example of a DRAM array;
  • FIG. 2A shows a block diagram of a top hierarchical view of a DRAM system according to an example;
  • FIG. 2B shows a block diagram of a middle hierarchical view of a DRAM bank according to an example;
  • FIG. 2C shows a block diagram of a lower hierarchical view of a DRAM double row buffer with dual sense amplifier sets according to an example;
  • FIG. 3A illustrates a timing diagram of a conventional row address strobe (RAS) operation of a single row buffer system;
  • FIG. 3B illustrates a timing diagram of a modified RAS operation using the example DRAM array;
  • FIG. 4A illustrates a flow chart of a row data cycle from start to end according to an example;
  • FIG. 4B illustrates a flow chart of a plurality of row data cycles according to an example;
  • FIG. 5 illustrates a detailed timing diagram of scheduling of one or more data cycles using the example DRAM array;
  • FIG. 6 illustrates a detailed timing diagram of a read variation using the example DRAM array;
  • FIG. 7 illustrates a detailed timing diagram of a second read variation using the example DRAM array;
  • FIG. 8 illustrates a detailed timing diagram of a write variation using the example DRAM array;
  • DESCRIPTION OF EMBODIMENTS
  • Various examples will now be described more fully with reference to the accompanying drawings in which some examples are illustrated. In the figures, the thicknesses of lines, layers and/or regions may be exaggerated for clarity.
  • Accordingly, while further examples are capable of various modifications and alternative forms, some particular examples thereof are shown in the figures and will subsequently be described in detail. However, this detailed description does not limit further examples to the particular forms described. Further examples may cover all modifications, equivalents, and alternatives falling within the scope of the disclosure. Like numbers refer to like or similar elements throughout the description of the figures, which may be implemented identically or in modified form when compared to one another while providing for the same or a similar functionality.
  • It will be understood that when an element is referred to as being “connected” or “coupled” to another element, the elements may be directly connected or coupled via one or more intervening elements. If two elements A and B are combined using an “or,” this is to be understood as a logical OR function and thus understood to disclose all possible combinations, i.e., “only A,” “only B,” as well as “A and B.” An alternative wording for the same combinations is “at least one of A and B.” The same applies for combinations of more than two elements.
  • The terminology used herein for the purpose of describing particular examples is not intended to be limiting for further examples. Whenever a singular form such as “a,” “an,” and “the” is used and whenever using only a single element is neither explicitly nor implicitly defined as being mandatory, further examples may also use plural elements to implement the same functionality. Likewise, when a functionality is subsequently described as being implemented using multiple elements, further examples may implement the same functionality using a single element or processing entity. It will be further understood that the terms “comprises,” “comprising,” “includes,” and/or “including,” when used, specify the presence of the stated features, integers, steps, operations, processes, acts, elements, and/or components, but these terms do not preclude the presence or addition of one or more other features, integers, steps, operations, processes, acts, elements, components, and/or any group thereof.
  • Unless otherwise defined, all terms (including technical and scientific terms) are used herein in their ordinary meaning of the art to which the examples belong.
  • In an example, memory circuits include dynamic volatile memory, which may include DRAM (dynamic random access memory), or some variant such as synchronous DRAM (SDRAM). Systems utilizing DRAM as main memory, multi-level memory, caching, etc., may be included.
  • A memory subsystem as described herein may be compatible with a number of memory technologies, such as DDR3 (double data rate version 3, original release by JEDEC (Joint Electronic Device Engineering Council) on Jun. 27, 2007, currently on release 21), DDR4 (DDR version 4, initial specification published in September 2012 by JEDEC), DDR4E (DDR version 4, extended, currently in discussion by JEDEC), LPDDR3 (low power DDR version 3, JESD209-3B, August 2013 by JEDEC), LPDDR4 (LOW POWER DOUBLE DATA RATE (LPDDR) version 4, JESD209-4, originally published by JEDEC in August 2014), WIO2 (Wide I/O 2 (WideIO2), JESD229-2, originally published by JEDEC in August 2014), HBM (HIGH BANDWIDTH MEMORY DRAM, JESD235, originally published by JEDEC in October 2013), DDR5 (DDR version 5, currently in discussion by JEDEC), LPDDR5 (currently in discussion by JEDEC), HBM2 (HBM version 2, currently in discussion by JEDEC), and/or others, and technologies based on derivatives or extensions of such specifications.
  • In one example, non-volatile memory technologies include block addressable memory devices, such as NAND or NOR technologies. Thus, memory technologies can also include future generation non-volatile devices, such as a three-dimensional crosspoint memory device or other byte-addressable nonvolatile memory devices, or memory devices that use chalcogenide-phase change material (e.g., chalcogenide glass). In an example, the memory technologies can be or include multi-threshold level NAND flash memory, NOR flash memory, single or multi-level Phase Change Memory (PCM), a resistive memory, nanowire memory, ferroelectric transistor random access memory (FeTRAM), spin transfer torque (STT)-MRAM, or a combination of any of the above, or other memory.
  • Generally, a conventional DRAM chip comprises multiple DRAM banks sharing input/output (I/O) means, e.g., I/O pins. Each bank has multiple DRAM cell arrays and each DRAM array has a row buffer. For purposes of the present disclosure, an “array” may also refer to a subarray, mat, or, in aggregate, a bank or subsection of a bank of the DRAM chip.
  • As a conceptual overview of the present disclosure, FIG. 1 shows an example of a proposed solution to the aforementioned latency issues: a DRAM array with a double row buffer (herein also known as a Double Row Buffer DRAM or DRB-DRAM 100). A DRAM array 110 comprises a plurality of bit lines 120 connectable, respectively, to at least two row buffers 130 a, b of the DRAM array 110. The two row buffers may be respectively connectable to data I/O lines.
  • In a proposed configuration, two row buffers 130 a, b (also referred to herein as RB0 and RB1, though the number is not limited to two) may be integrated within the DRAM array and are used interchangeably, taking on the roles of serving row buffer and backing row buffer, respectively. A serving row buffer is a row buffer connected to input/output. A backing row buffer is a row buffer connected to bit lines. Each of the plurality of bit lines is connectable to the row buffers in that either row buffer may, at any given time, be connected to a bit line.
  • In a proposed configuration, the two row buffers 130 a, b are configured to electrically connect to the bit lines 120 and data I/O lines 140 in a mutually exclusive manner. That is, the row buffers 130 a, b may be either serving row buffers or backing row buffers, but may not be both. Further, only one or the other may fulfill a respective role at any given time.
  • In a proposed configuration, the plurality of bit lines 120 are coupled, respectively, to the two row buffers 130 a, b via a bit line access gate transistor 132 a, b, whereby when one of the two row buffers 130 a, b is electrically connected to a bit line 120, another of the two row buffers is not electrically connected to a bit line.
  • In a proposed configuration, the plurality of data I/O lines 140 are coupled, respectively, to the two row buffers 130 a, b via a data I/O access gate transistor 134 a, b, whereby when one of the two row buffers is electrically connected to a data I/O line, another of the two row buffers is not electrically connected to a data I/O line.
  • Any of the above proposed configurations may be implemented as: a DRAM array; a DRAM chip comprising at least one DRAM array; a DRAM module, comprising a plurality of DRAM chips, etc.
  • When a new row is being activated within the DRAM array, the row is sensed into a backing row buffer. When the row is sensed in the backing row buffer, the two row buffers change roles (i.e., the serving buffer becomes the backing buffer and vice versa). The serving row buffer thus performs column I/O operations while the backing row buffer restores an updated row to the DRAM array and precharges the bit lines in preparation to sense the next row.
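  • As an illustrative aid (not part of the disclosed hardware), the role swap between the serving and backing row buffers described above may be modeled in software as the following minimal sketch; all class, field, and method names are hypothetical.

```python
# Minimal software model of the serving/backing role swap described above.
# All names (DoubleRowBuffer, activate_row, ...) are illustrative only and do
# not correspond to any disclosed circuit.

class DoubleRowBuffer:
    def __init__(self):
        # RB0 starts as the serving buffer (connected to data I/O);
        # RB1 starts as the backing buffer (connected to the bit lines).
        self.serving = {"name": "RB0", "row": None}
        self.backing = {"name": "RB1", "row": None}

    def activate_row(self, new_row):
        """Sense new_row into the backing buffer, then swap roles."""
        # 1. The backing buffer senses the newly activated row via the bit lines.
        self.backing["row"] = new_row
        # 2. Roles are exchanged: the freshly sensed row is now served to I/O,
        #    while the previous serving buffer (holding the possibly updated
        #    old row) becomes the backing buffer.
        self.serving, self.backing = self.backing, self.serving
        # 3. The new backing buffer then restores its row to the array and the
        #    bit lines are precharged, off the critical path (not modeled here).
        return self.serving

drb = DoubleRowBuffer()
print(drb.activate_row("row A"))  # RB1 now serves row A
print(drb.activate_row("row B"))  # RB0 serves row B; RB1 holds row A pending restore
```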
  • In a proposed configuration, a DRAM module further comprises a signal interface configured to receive: a first micro-operation for sensing a first row of the DRAM array in a row cycle; and a second micro-operation for restoring contents of a second row of the DRAM array in the row cycle. A DRAM controller may be implemented, the DRAM controller configured to issue micro-operations to perform the aforementioned steps of sensing and restoring. However, the issuance of micro-operations may be made internal to the DRAM module.
  • Row activation is considered to be a disruptive read in the DRB-DRAM system. After a row is sensed, the only valid copy will be in the serving row buffer. The value in the serving row buffer, with any potential updates carried out through the current row cycle, is to be restored back in the DRAM array in a next row cycle. Hence, a single row cycle RAS timing is divided into two sections: sense and restore. That is, in a proposed configuration, the micro-operation for sensing is separable from the micro-operation for restoring in the row cycle.
  • In a proposed configuration, the micro-operation for sensing senses a first row of the DRAM array with a first row buffer of the DRAM array connected via a bit line in a first row cycle. The micro-operation for restoring may restore contents of a second row buffer to a second row of the DRAM array in the first row cycle.
  • That is, in a row cycle, when the sense is completed, a restore operation may restore the updated row in the backing row buffer from the previous row cycle to its original location in the DRAM array. This allows the proposed DRB-DRAM solution to mask the write recovery timing TWR. When there is a subsequent row buffer miss after a write, an example of the DRB-DRAM implementation can skip explicit write recovery, as the updated row in the serving row buffer will be restored in the array in the next row cycle, off the critical path, overlapped with column I/O.
  • In a proposed configuration, a micro-operation is performed for precharging the bit lines in the first row cycle after restoring contents of the second row buffer to the second row of the DRAM array. A subsequent access request to sense another row is performed after precharging the bit lines of the DRAM array in the first row cycle.
  • That is, after the restore operation is completed, i.e., the backing row buffer is restored in the array, bit lines and the backing row buffer will be precharged in preparation to sense the next row upon a potential row buffer miss in the serving row buffer. Meanwhile, the serving row buffer will continue to perform column I/O. When the precharge of the bit lines and the row buffer is done, the backing row buffer is ready to sense the next row upon a miss in the serving row buffer, taking precharge timing off the critical path of the row miss. Concurrent to this, a row hit access is still directly served from the serving row buffer.
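  • The overlap described above can be summarized with the following conceptual two-lane timeline, which is purely illustrative and assumes simplified, equal-length steps rather than actual DRAM timings.

```python
# Conceptual two-lane timeline for one DRB-DRAM row cycle: after the role swap,
# the serving buffer performs column I/O while the backing buffer restores the
# previous row and precharges the bit lines. Purely illustrative.

serving_lane = ["SENSE new row", "column I/O",  "column I/O",    "column I/O"]
backing_lane = ["holds old row", "RES old row", "PRE bit lines", "ready to sense"]

for step, (serving, backing) in enumerate(zip(serving_lane, backing_lane)):
    print(f"step {step}: serving buffer: {serving:14s} | backing buffer: {backing}")
```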
  • A proposed DRB-DRAM system has at least an advantage over a conventional DRAM in that the DRB-DRAM architecture allows for overlapping precharge and restore (write recovery) with data I/O.
  • With reference to FIG. 1, DRB-DRAM 100 (or double row buffer DRAM array) includes an additional row buffer 130 b beyond the conventional single row buffer 130 a of the DRAM array. According to the example of FIG. 1, the DRB-DRAM 100 includes at least one DRAM array 110. A DRAM array 110 comprises a plurality of rows 110 n, where n is a positive integer.
  • Each row stores columns of cells, which hold data to be read out and written to by a memory system utilizing the DRAM array 110. A plurality of bit lines 120 (or BL 120) are connectable to each row 110 n of the DRAM array 110 such that data may be accessed; that is, row data may be read out of the row 110 n by a bit line 120 whereby the data on said row 110 n degrades.
  • Before accessing a row 110 n, however, the bit lines 120 must be precharged (PRE); precharging the bit lines 120 occurs after every row is closed. The act of precharging applies a reference voltage Vref identically to all bit lines, so that all bit lines sit at the same potential. An individual row is then read out by activating it: connecting the memory cells of that row to the bit lines causes each bit-line voltage to change slightly, and this change is what is read out. Precharging the bit lines is thus a prerequisite to the row access operation subsequently performed.
  • To read data, an outside signal is given to the DRB-DRAM 100 to activate (ACT) a particular row 110 n in the DRAM array 110. The word line (WL) of the corresponding row is activated (ACT), making the bit lines 120 carry data from the respective row 110 n. Particularly, cells of the activated row discharge their contents onto the bit lines, causing a change of the voltage on each bit line that corresponds to the stored logical content.
  • The read-out content is stored in a row buffer 130. In an example, the plurality of bit lines 120 are connectable to at least two row buffers 130 a, 130 b of the DRAM array. The bit lines 120 carry the row data between the DRAM array 110 and the row buffers 130. Data is accessed from the row buffers 130 a, b by the system through connection to data I/O lines 140.
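  • As a rough numerical illustration of the precharge-then-sense behavior just described (with arbitrary example voltages, not device data), consider the following sketch.

```python
# Toy illustration of precharge and charge sharing on a single bit line.
# The voltage levels and the swing value are arbitrary example numbers.

V_REF = 0.5   # precharge reference level, e.g. VDD/2 (illustrative)
SWING = 0.05  # small shift caused by connecting a cell to the bit line (illustrative)

def sense(cell_value, v_bitline=V_REF):
    """Charge sharing nudges the precharged bit line up (stored 1) or down
    (stored 0); the sense amplifier then resolves the full logic level."""
    v_bitline += SWING if cell_value == 1 else -SWING
    return 1 if v_bitline > V_REF else 0

assert sense(1) == 1 and sense(0) == 0
```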
  • FIGS. 2A-C give a block diagram of a top-down hierarchical view of the DRB-DRAM and system utilizing said DRB-DRAM, to which concepts proposed herein may be applied. FIG. 2A is a block diagram of a DRAM system 200 utilizing a DRAM chip with a double row buffer of FIG. 1. The DRAM system 200 may be integrated on a chip itself or may comprise several components that are separated. It should be understood that the system may be implemented in many possible combinations and that the DRAM system is not limited to the configuration of FIG. 2A. The DRAM system 200 comprises at least one DRAM die 206 (also known as a DRAM chip). An example of a DRAM system 200 may comprise a plurality of DRAM chips 206, such DRAM chips making up a DRAM module (not shown).
  • Another example of a DRAM system 200 may comprise a memory controller 250, which is configured to, in part, initiate operations for a DRAM chip or module. The memory controller 250 of an example of the present disclosure may be integrated into a microprocessor 260 or may be separate from microprocessor 260.
  • The memory controller 250 of microprocessor 260 may be coupled to the common data bus or DRAM chip input/output pad 230 for bidirectional communication of data signals 240. The microprocessor 260 may include at least one memory controller 250, but this number is not limiting. If a microprocessor 260 supports multiple memory channels, such a microprocessor 260 may be configured to include a separate memory controller 250 for each memory channel. Data signals 240 may include any combination of DRAM command signals. The microprocessor 260 may be a single or multi-core microprocessor.
  • The memory controller 250 issues signals to the DRAM chip 206, causing it to, e.g., precharge bit lines within the DRAM chip 206, activate a row of the DRAM chip, and sense contents of the memory cells of a row. These signals may be part of the data signals 240 directed to the DRAM chip or module or to individual components of the DRAM chip 206 itself.
  • A DRAM chip 206 may have, as an example, one or more DRAM banks 210 sharing input/output means, e.g., I/O pins. FIG. 2B gives a next, lower-tier example of the DRAM system hierarchy according to an example of the present disclosure. Each bank 210 may contain multiple DRB-DRAMs 100 described in FIG. 1.
  • A DRB-DRAM 100 may contain a DRAM array 110, which contains an array of memory cells organized by row and by column. The DRB-DRAM 100 may also contain two or more row buffers 130 a, b.
  • A row buffer 130 a, b holds a most recently accessed row, so any access request to the DRAM array 110 that seeks data of the most recent row will be considered a “hit” and shall be serviced directly from a row buffer. That is, a row in the DRAM array need not be activated if said row has already been sensed to a row buffer. However, if an access command is sent for data outside of that which has been stored in a row buffer, this will be considered a “miss,” and another row must be activated. Thus, if a “miss” occurs, the cycle of PRE, ACT, and READ must be repeated, as issued by the memory controller 250.
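  • The hit/miss decision described above may be sketched, purely for illustration, as the following hypothetical helper; only the PRE, ACT, and READ command names come from the preceding paragraph.

```python
# Illustrative hit/miss check for a row buffer acting as a one-row cache.
# The helper itself is hypothetical; only the PRE/ACT/READ sequence follows
# the description above.

def service(request_row, open_row):
    """Return the command sequence needed to serve an access to request_row
    when open_row is currently held in the serving row buffer."""
    if request_row == open_row:
        return ["READ"]                                # row buffer hit
    return ["PRE", "ACT " + request_row, "READ"]       # row buffer miss

print(service("row A", "row A"))   # ['READ']
print(service("row B", "row A"))   # ['PRE', 'ACT row B', 'READ']
```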
  • With every read operation, the value stored in a row of the DRAM array is initially destroyed when it is sensed. Automatic write-back of data, or write recovery, is conventionally performed at the end of each READ. In DRB-DRAM, a write-recovery micro-operation RES is issued by the memory controller or generated and handled internally by DRAM control logic to cause a restore from a row buffer other than the row buffer used for the preceding read-out.
  • FIG. 2C provides a more detailed example of the row buffer 130 a, b of the DRB-DRAM architecture of the present disclosure. The DRB-DRAM architecture includes one or more dual or double row buffers 330 a, 330 b, each of which comprises a set of sense amplifiers 310 and associated electrical components. That is, a double row buffer may alternatively be known as a set of “sense amplifiers.” Each row buffer 330 a, 330 b may include bit line access gate transistors 340 (also known as bit line access connection gates), which are respectively controlled by a bit line access (BA) signal to connect to a bit line 320. Each row buffer 330 a, 330 b may include data I/O access gate transistors 350 (also known as data I/O access connection gates), which are respectively controlled by a data I/O access (DA) signal to connect to local data I/O lines 370. The sense amplifiers 310 a, b in the row buffers are connected to the bit lines 320 via the bit line access connection gates 340 controlled by BA signals. Similarly, the sense amplifiers 310 a, b are connected to column select transistors 360 (which are eventually connected to local I/O and global I/O) via the data I/O access connection gates 350 controlled by DA signals.
  • The double row buffers 330 a, 330 b are accessed through the bit line access connection gates 340 such that when the bit line access signal is high (BA=1), the bit lines 320 are connected to the sense amplifiers 310 a of the first row buffer 330 a. Conversely, when the bit line access signal is low (BA=0), the bit lines 320 are connected to the sense amplifiers 310 b of the second row buffer 330 b.
  • With data I/O access, if a column select signal (CS) of the column select transistors 360 is low (CS=0), then no data is to be retrieved from either of the two row buffers 330 a, 330 b and neither row buffer's sense amplifier 310 a, 310 b is to be connected to a local data I/O line 370. However, if column select is high (CS=1), and if the data access signal is high (DA=1), then the local data I/O line 370 is electrically connected to the sense amplifier 310 a of the first row buffer 330 a. If the data access signal is low (DA=0), then the local data I/O line 370 is electrically connected to the sense amplifier 310 b of the second row buffer 330 b. The bit line access signal BA and the data access signal may be respectively toggled, or switched from one state or effect to another, in any manner of timings and data combinations, e.g., DA will change from 0 to 1, or 1 to 0 at a period of time when BA is 0 or 1, etc.
  • It can thus be understood that, at any given time, only one of the row buffers is connected to the bit lines (BL) and to the data I/O lines (LIO) in a mutually exclusive manner. That is, if one row buffer is connected to a bit line, another cannot be connected to a bit line. Further, if one row buffer is connected to a data I/O line, then another cannot be connected to a data I/O line. Hence, the row buffers 330 a, b have inverted access signals to BA and DA.
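  • The gating described for FIG. 2C can be restated as a small truth-table-style sketch; the function below is illustrative only and simply maps the BA, DA, and CS levels onto which sense-amplifier set is connected where.

```python
# Sketch of the connection logic implied by the BA, DA and CS signals above.
# Returns which of the two sense-amplifier sets (row buffer 330a or 330b) is
# tied to the bit lines and which, if any, to the local data I/O line.

def connections(ba, da, cs):
    bitline_buffer = "330a" if ba == 1 else "330b"
    if cs == 0:
        lio_buffer = None                      # column not selected: no I/O path
    else:
        lio_buffer = "330a" if da == 1 else "330b"
    return bitline_buffer, lio_buffer

# The complementary gating keeps the roles mutually exclusive, e.g.:
print(connections(ba=0, da=1, cs=1))   # ('330b', '330a')
print(connections(ba=1, da=0, cs=1))   # ('330a', '330b')
print(connections(ba=1, da=0, cs=0))   # ('330a', None)
```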
  • An advantage of the aforementioned configuration is that it allows one of the row buffers to hold an active row and be accessed by the data I/O lines while the second row buffer restores (or write-recovers) its values to the DRAM array. The bit lines of the DRAM array may be precharged while data is still being accessed from another row buffer. This allows decoupling of the local I/O data lines from precharge and charge restore. This technology is used to implement early precharge and late restore, which reduces the critical path latency of row buffer misses.
  • As a further example of the present disclosure, examples may be implemented using a novel modified RAS timing that is divided into distinct stages or phases of Sense and Restore. As a general overview, a Restore phase according to an example of the present disclosure is controlled with a proposed restore (RES) micro-operation in the DRAM. In another example of the present disclosure, RAS timing is implemented to first sense a selected row, disrupting it in a DRAM array. Subsequently, a RES micro-operation restores a row that had been modified in the previous row cycle (RC). A disrupted row from the current row cycle will be restored in the next row cycle after being modified in a row buffer. The division of RAS timing into two distinct phases allows for implementation of “lazy” restore and early precharge operations, which are operations that may be overlapped with data I/O. With the aforementioned implementations, DRAM access latency is decreased for several command sequences having a row buffer miss.
  • FIG. 3A is an example of a conventional method of RAS timing using a single buffer. Utilization of a single buffer in part means that only one row may be cached at a time. As stated above, when a row in the DRAM array is activated (ACT), the row is first sensed in the row buffer through precharged bit lines. At that point, the row in the DRAM array is disrupted, i.e., the data previously stored in the row has been compromised. Data must be restored back into the row in order to preserve the row contents, but conventional DRAM systems only use one row buffer. This means, the sensed row stored in the row buffer must be restored back in the disrupted row to its original location in the DRAM array. The RAS timing thus must include both sense and restore timings in a serial manner, i.e., consecutively performed, where the corresponding row of word-line A of FIG. 3A (WL A) of the selected row remains high, establishing the connection between the DRAM array and the row buffer.
  • In an example of the present disclosure, the conventional RAS timing is divided into the two phases: Sense and Restore. Thus, the sense and restore timings need not be performed serially but may instead be performed among other operations. FIG. 3B exhibits a new DRAM micro-operation (μOp) called “restore” (RES) that effectively changes the conventional RAS operation.
  • With the micro-operations received, the RAS timing may now be divided into two phases. The initial phase is the sensing time TSEN, which is the time it takes to sense the row into the row buffer connected to a bit line. Alternatively, TSEN may be thought of as TRAS minus the new time TRES of the restore micro-operation. At the end of the sense timing, a selected row is in a row buffer but is consequently disrupted in the array. As a marked difference from the conventional RAS timing method, the row shall be restored in the DRAM array after completion of the current row cycle.
  • With the DRB-DRAM system, more than one row may be processed with the plurality of data row buffers. In the example of FIG. 3B, word-lines A and B are both processed with the modified RAS operation using the RES μOp. It is assumed in FIG. 3B that word line B has already been sensed in one of the row buffers in a previous row cycle. An operation signal is received to activate word-line A in a DRAM row (ACT A). As such, the bit line access signal goes low (BA=0) to sense word line A into the row buffer that is not currently storing word-line B. Once word line A has been sensed, the RES micro-operation closes the word line of the current row and opens the word line B of the modified row from the previous row cycle (RES B). Concurrent to this, the bit line access signal is toggled to go high (BA=1), which disconnects the bit lines from the sensed row of word-line A and connects them to the other row buffer containing the modified row (word line B) sensed in the previous row cycle. After a restore timing TRES, the modified row will be restored in the DRAM array. The bit line access signal will remain high until a new activate request is received (ACT X), whereby word line A is restored from the row buffer back into the DRAM array (RES A).
  • The timing diagram of the present disclosure is meant to be a conceptual timing diagram and is not limited to real or exact timings. For example, the timing of the ACT A command may not be exact, e.g., the real, internal word-line rising timing may not align to the ACT A command exactly but may generally be delayed inside the DRAM chip.
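  • As a simple numerical aid (using placeholder values, not figures from any datasheet or specification), the split of the RAS window into Sense and Restore phases can be expressed as follows.

```python
# Back-of-the-envelope split of the RAS window into Sense and Restore phases.
# The values are placeholders in nanoseconds, not datasheet numbers.

T_RAS = 35.0               # full row-access window (illustrative)
T_RES = 15.0               # restore phase (illustrative)
T_SEN = T_RAS - T_RES      # sense phase, as defined above

print(f"sense phase   TSEN = {T_SEN} ns")
print(f"restore phase TRES = {T_RES} ns (overlapped with column I/O in DRB-DRAM)")
```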
  • Utilizing double row buffers in the DRB DRAM system allows for efficient operation and reduced latency in data cycles. Further implementing the modified RAS timing with double row buffers allows for further reduction in latency through at least two important features: early precharge and “lazy” restore.
  • In single-buffer DRAM systems with conventional RAS timing, precharge must occur serially after activation of a particular row. However, early precharge, as with an example in the current embodiment, may occur while a particular row has been activated and sensed in a row buffer.
  • As an example of early precharge, FIG. 4A shows a row data cycle from start to end. In FIG. 4A, a row is activated (ACT) in the DRAM array (S1). The current row is subsequently sensed (READ) in an initial row buffer RB0, which is connected to bit lines in the DRAM array (S2). A restore micro-operation (RES μOp) is issued (S3A), which provides operation to toggle the bit line access signal to connect another row buffer RB1 to the bit lines BL. It is assumed that row buffer RB1 holds contents from a previous row cycle. The bit line access signal is switched from 1 to 0, which disconnects the bit lines BL from the initial row buffer RB0 and connects them to the other row buffer RB1 (S4A). Then the previous row in the other row buffer RB1 is restored in the DRAM array (S5A). After the RES μOp has been completed at time TRAS after ACT, an early precharge (PRE) command is executed in the system whereby the bit lines are precharged (S6A). After time TRP, indicating the row precharge time or the minimum number of clock cycles required between issuing the precharge command and opening a subsequent row, the bit lines BL and the other row buffer RB1 will be precharged, ready to sense a subsequent row.
  • Concurrent with the operations of row buffer RB1, the initial row buffer RB0 has been decoupled from the bit lines (S4A). A data I/O access signal is switched from 0 to 1, which connects the data I/O lines (LIO) to the initial row buffer RB0 (S3B). A valid open row is thus held so that data I/O may be performed from the row buffer RB0 (S4B). From FIG. 4A, data I/O (S4B) may therefore be performed in an overlapping timing with bit line precharging (S6A). After the PRE operation, the row data cycle is completed, at time TRAS+TRP.
  • From early precharge, as exhibited above, any subsequent row hit will be served from the initial row buffer RB0 with a latency of time TCL, that is, the Column Address Strobe (CAS) latency, or the number of cycles between sending a column address to the DRAM memory and the beginning of the data I/O in response. Any row miss will have a latency of time TRCD, that is, the row address to column address delay, or the minimum number of clock cycles required between opening a row of memory and accessing columns within the row, plus time TCL (TRCD+TCL). The shortened timing is possible because the bit lines and the other row buffer RB1 have already been precharged and are thus ready to be used by the DRAM system.
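  • The hit and miss latencies just described can be checked with the following sketch; all timing parameters are illustrative placeholders rather than values taken from any specification.

```python
# Illustrative access-latency comparison once early precharge is in place.
# Timing parameters are placeholders (ns), not datasheet values.

T_CL  = 14.0   # CAS latency
T_RCD = 14.0   # row-address to column-address delay
T_RP  = 14.0   # row precharge time

hit_latency  = T_CL             # served directly from the serving row buffer
miss_latency = T_RCD + T_CL     # bit lines already precharged, so TRP is off the path
closed_page_miss = T_RP + T_RCD + T_CL   # for comparison, without early precharge

print(hit_latency, miss_latency, closed_page_miss)
```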
  • According to an example of the present disclosure, “lazy” restore also works to reduce system latency. FIG. 4B shows a plurality of data cycles of a row buffer in the double row buffer DRAM system, according to a principle of “lazy” restore. In “lazy” restore, an initial row buffer RB0 is decoupled from bit lines BL (SL 3) after a current row is activated and sensed (SL 1 and SL 2) into the row buffer RB0. Once sensing finishes, all access requests for data are serviced directly from the initial row buffer RB0 holding the current row (SL 4 and SL 5). However, the row of the DRAM array corresponding to the contents of RB0 has been disrupted. The valid value of the aforementioned row is maintained in RB0.
  • When an I/O access misses (SL 6) the current row active in row buffer RB0, that is, when row buffer RB0 does not contain the address of the I/O access request, an activation request (ACT) is issued (SL 7), indicating the start of a new data cycle, and activation for a new row in the DRAM array is immediately serviced, whereby the new row is sensed (READ) into another row buffer RB1 immediately after the activation request (SL 8). This is possible since bit lines are already precharged and RB1 is ready to sense a new row, as explained previously. Additionally, in the example of the present disclosure, a middle write-recovery step is avoided and deferred to the next row cycle. Such a step would normally be required after a data I/O (WRITE) and before a precharge; DRB-DRAM allows activating the new row (SL 7) without restoring the contents of RB0 to the DRAM array, hence avoiding serialized TWR latency on the critical path.
  • After sensing the new row has completed, a restore micro-operation (RES μOp) is issued, which connects the initial row buffer RB0 to the bit lines BL (SL 9) and which asserts the word-line WL of the row stored in the initial row buffer RB0 (SL 10). This restores the disrupted row in the DRAM array according to the modified row in row buffer RB0.
  • The data cycle above is repeated for each row miss. Thus, it can be understood that an activation request for a new row can be issued immediately after a READ or WRITE hit. The activation request may avoid waiting for serialized latencies of, for example: TRTP, i.e., the read to precharge delay or the time that takes between the reading of data in the row and the closing of the row; TRP, i.e., the row precharge time; and TWR, i.e., the write recovery time or the time that must elapse between the last write command to a row and the precharge of said row.
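  • The “lazy” restore flow of FIG. 4B can be summarized, purely as an illustration, by the following event-level sketch; the generator and its event labels are hypothetical.

```python
# Event-level sketch of the "lazy" restore flow: the ACT for the new row
# precedes the restore of the old one, so write recovery (TWR) never sits on
# the critical path. Names and labels are illustrative.

def lazy_restore_cycle(old_row, new_row):
    yield ("miss detected", old_row)     # access misses the open row in RB0
    yield ("ACT", new_row)               # issued immediately (bit lines precharged)
    yield ("SENSE into RB1", new_row)    # new row becomes the served row
    yield ("RES from RB0", old_row)      # deferred write recovery of the old row
    yield ("PRE", "bit lines")           # ready for the next row miss

for event in lazy_restore_cycle("row B", "row C"):
    print(event)
```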
  • The examples in FIG. 4A and FIG. 4B describe the case where RB0 initially acts as the serving row buffer and RB1 acts as the backing row buffer. However, RB0 and RB1 are not limited to these roles and may toggle them; hence, RB0 and RB1 can be switched in the flow diagrams of FIG. 4A and FIG. 4B.
  • FIG. 5 is a detailed timing diagram of scheduling of one or more data cycles using the example DRAM array and the modified RAS operation above. It can be understood by those skilled in the art that the timing diagram herein is meant to show relationships between listed stages of the scheduling and is not directed to specific time intervals.
  • According to a first stage ST 1, initial conditions are in place such that a row buffer RB1 (as referenced in, e.g., FIG. 1 as row buffer 130 b) holds row A, which has been modified from a previous row cycle. Row buffer RB1 is connected to local data I/O lines such that the data access signal DA=0. After opening the row A for data access, at least TRC amount of time has passed, implying that the bit line BL has been precharged. The bit line BL is connected to a row buffer RB0 (as referenced in, e.g., FIG. 1 as row buffer 130 a) such that the bit line access signal BA=1.
  • According to a second stage ST2, an access request has been made for row B. An activation operation (ACT B) is sent from a memory controller and arrives at the DRAM, requesting that row B be activated in the DRAM array. The word-line B goes high, and row buffer RB0 starts sensing row B through the precharged bit lines (PRE B).
  • According to a third stage ST 3, time TRCD has elapsed after receiving the activation operation, corresponding to the row-address-to-column-address delay. A read (RD) command is sent by the memory controller to read data of row B from the row buffer RB0. The data access signal switches to DA=1 such that row buffer RB0 is connected to the local data I/O lines.
  • According to a fourth stage ST 4, the first part, Sensing, of a modified RAS timing has elapsed at time TRAS minus TRES. At this time, row B has been fully sensed in row buffer RB0. Both rows A and B are disrupted in the DRAM array at the onset of the fourth stage, but row A is to be restored. As such, the bit line access signal is toggled to BA=0 to connect the bit line BL to row buffer RB1. Word-line A is asserted to restore (RES A) modified row A in row buffer RB1 to its corresponding location in the DRAM array. During this stage, time TCL, or CAS latency, may elapse such that data D may now be sent as a response to the RD command.
  • According to a fifth stage ST 5, the second part, Restore, of the modified RAS timing has elapsed (B→A), as measured from the fourth stage, at time TRES. The total time elapsed from the activation request ACT B is TRAS (or TSEN+TRES). That is, row A has been restored from row buffer RB1 back into the DRAM array. Meanwhile, row buffer RB0 serves column accesses to the open row B. Now that TRAS is completed, a precharge operation (PRE) is immediately started to precharge bit lines and row buffer RB1 (A→PRE).
  • According to a sixth stage ST 6, the precharge PRE has been completed. From now on, any access request (read RD/write WR) to the current row will be served from row buffer RB0. If the access request results in a miss, the bit line BL has already been precharged and row buffer RB1 has been connected to the bit line BL so as to be ready to sense a new row.
  • According to a seventh stage ST 7, a new row cycle is started with the arrival of an access request for row C. Access requests to the open row B in row buffer RB0 are still served directly. However word line C goes high such that row buffer RB1 starts to immediately sense row C through precharged bit lines (PRE C).
  • According to an eighth stage ST 8, row buffer RB0 still holds the modified row B but the data I/O access signal has been switched such that DA=0, connecting the local data I/O lines to row buffer RB1. The original location of row B has been disrupted in the DRAM array. Therefore, after row C has been fully sensed in row buffer RB1, the word-line WL of row B goes high and the bit line access signal BA=1, which connects row buffer RB0 to the bit lines BL in order to restore (RES B) the modified row B back in the DRAM array.
  • According to a ninth stage ST9, row B has been fully restored back in the DRAM array (C→B) such that time TRAS has again elapsed. The bit lines and row buffer RB0 are ready to be precharged (B→PRE). Meanwhile, row buffer RB1 is holding the open row C and performing column I/O.
  • The stages repeat as part of a data row cycle, which starts upon receipt of a row activation request.
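  • For readability, the nine stages above can be collected into the following compact listing; the data structure is only an illustrative restatement of the text, not an additional embodiment.

```python
# Compact restatement of the scheduling stages of FIG. 5, to make the signal
# transitions easier to follow. Purely illustrative.

stages = [
    ("ST1", "RB1 serves modified row A (DA=0); precharged bit lines tied to RB0 (BA=1)"),
    ("ST2", "ACT B arrives; word line B rises; RB0 starts sensing row B"),
    ("ST3", "after TRCD: RD issued; DA=1 connects RB0 to the local data I/O lines"),
    ("ST4", "after TRAS-TRES: row B fully sensed; BA=0; RES A restores row A from RB1"),
    ("ST5", "after TRES: row A restored; PRE starts on the bit lines and RB1"),
    ("ST6", "precharge done; hits served from RB0; RB1 ready for the next miss"),
    ("ST7", "ACT C arrives; RB1 immediately senses row C through the precharged bit lines"),
    ("ST8", "DA=0 connects RB1 to I/O; BA=1; RES B restores row B from RB0"),
    ("ST9", "row B restored; bit lines and RB0 ready to precharge; RB1 serves row C"),
]

for name, description in stages:
    print(name, "-", description)
```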
  • The above examples decrease the latency required to, e.g., open a subsequent row as measured from issuing the precharge command. This is particularly exhibited when encountering an open page miss. As seen in FIG. 6, row A has been open and data I/O has been performed through RB1. However, because the bit lines and row buffer RB0 have been precharged during data I/O from RB1 holding row A, any access request to row B will be started immediately, saving latency TRP.
  • Similarly, as seen in FIG. 7, row B has been open and read from a first column X, and then row C has been made open and read from a second column Y. Conventional DRAM needs to wait for timing TRTP, i.e., the read to precharge delay, to issue the precharge after the READ operation to column X in row B. Then, it waits for timing TRP for the precharge. Finally, there is a wait of timings TRCD and TCL to get the column Y in row C. In contrast, DRB-DRAM only waits for timing TCCD to send the ACT for row C after the READ of column X in row B, since the bit lines are already precharged and RB0 is ready to sense a new row, assuming that more than timing TRC elapsed in the current row cycle while performing data I/O on row B. Then similarly, DRB-DRAM waits for timings TRCD and TCL to get the column Y. The latency that can be saved amounts to TRP plus TRTP, i.e., the read to precharge delay, minus TCCD, i.e., the minimum column-to-column command delay.
  • Further, as seen in FIG. 8, row B has been open and written to at a first column X, and then row C has been made open and read from a second column Y. Conventional DRAM first waits for the timings TCL and TCCD for writing into column X in the open row B. Then, it needs to wait for timing TWR to restore the updated row buffer to the array. Afterwards, it issues a precharge and waits for timing TRP. Finally, conventional DRAM activates the new row C and reads column Y after timings TRCD and TCL. In contrast, for DRB-DRAM, again assuming that at least TRC amount of time has elapsed in the current row cycle, the activation of row C can be issued in timing TCCD after writing column X in the open row B, which is held in RB1. Afterwards, DRB-DRAM waits for TRCD and TCL amount of time to sense row C into RB0 and read column Y. The updated row B in RB1 is restored with the RES μOp in the next row cycle, overlapped with the data I/O from RB0, which holds the new row C. The latency that can be saved amounts to TCL, i.e., the CAS latency, plus TWR, i.e., the write-recovery latency, plus TRP.
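  • The savings stated for FIG. 7 and FIG. 8 can be reproduced arithmetically with the following sketch; all timing values are illustrative placeholders rather than numbers from any JEDEC specification.

```python
# Arithmetic check of the latency savings stated for FIG. 7 (read then read
# with a row miss) and FIG. 8 (write then read with a row miss). All timing
# values are illustrative placeholders in nanoseconds.

T_CL, T_RCD, T_RP  = 14.0, 14.0, 14.0
T_RTP, T_WR, T_CCD = 7.5, 15.0, 5.0

# FIG. 7: conventional DRAM waits TRTP + TRP before the new ACT; DRB-DRAM only TCCD.
read_read_saving = T_RTP + T_RP - T_CCD

# FIG. 8: the stated saving after a write amounts to TCL + TWR + TRP.
write_read_saving = T_CL + T_WR + T_RP

print(f"FIG. 7 saving: {read_read_saving} ns")
print(f"FIG. 8 saving: {write_read_saving} ns")
```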
  • As memory latency is an important bottleneck in performance and power, implementation of a double row buffer DRAM system and/or modified RAS timing reduces the memory access latency to the DRAM. Specifically, the examples of the present disclosure keep the low latency access of row buffer hits while reducing the latency of long command sequences having a row buffer miss, e.g., a read RD/write WR hit and then a miss. The examples give a DRAM array the ability to keep an open row while providing the low latency miss benefit of a closed row. The aforementioned advantageous effects are realized for DRAM/eDRAM based memories implemented as last level cache, multi-level memory, and main memory.
  • Furthermore, implementing examples of the proposed DRAM architecture of the present disclosure only changes the row buffer circuitry and I/O circuitry, keeping the DRAM cell array unchanged. Hence, the disclosed approaches are a cost-effective option for implementing and adopting such technology.
  • The proposed DRAM architecture may be implemented in conjunction with other DRAM systems. A marked advantage may be seen, however, when using a double row buffer DRAM array and/or modified RAS timing.
  • Dual row buffers may increase the area overhead of the DRAM, since the number of sense amplifiers is doubled along with extra connection gate transistors per array. As an alternative, there exist several variations that double the number of DRAM banks (while keeping the DRAM capacity the same). Doubling the number of DRAM banks increases the available parallelism and decreases the likelihood of bank conflicts. Twice as many banks can reduce the chance that consecutive accesses leading to a row miss are mapped onto the same bank.
  • However, changing the number of DRAM banks does not give the same performance benefit as implementing a double row buffer system. Increasing the number of DRAM banks cannot, by itself, reduce the critical path latency of a row miss. Nor is the miss latency issue solved when consecutive accesses are mapped onto the same bank.
  • A DRB-DRAM example as disclosed provides a solution complementary to doubling the number of banks in the DRAM. By decoupling precharge, restore, and data I/O operations so that they are not serially performed, a DRB-DRAM may issue an early precharge and a deferred restore while performing I/O. As such, a proposed DRB-DRAM solution can reduce the critical path latency of a row buffer miss.
  • Furthermore, DRB-DRAM can reduce the long latency of a row buffer miss even within a single bank without depending on parallelism of multiple banks. Both bank increase and DRB-DRAM may be implemented together, but it is noted that doubling the number of banks only reduces the chances of a row buffer miss. When a row miss happens, DRB-DRAM can reduce the latency cost of said miss.
  • In one example, there exist several variations of cached DRAM, such as Virtual Channel SDRAM (VCRAM) and Enhanced SDRAM (ESDRAM). These proposals manage to keep multiple rows open, which increases the probability of a row buffer hit. When a row is open in the cache structure, the DRAM array can be precharged for the next access. However, the cached DRAM solution still suffers from the fact that the updated rows in the cache structure need to be written back into the DRAM array in a serialized way when switching rows. Implementation of the proposed DRAM architecture of the present disclosure avoids the need for such serialization.
  • The skilled person having benefit from the present disclosure will appreciate that the various examples described herein can be implemented individually or in combination.
  • A first example is a dynamic random access memory (DRAM) array, comprising row buffers; and a plurality of bit lines connectable, respectively, to at least two row buffers of the row buffers.
  • In example 2, in the DRAM array of example 1, the at least two row buffers are respectively connectable to data input/output (I/O) lines.
  • In example 3, in the DRAM array of example 2, the at least two row buffers are configured to electrically connect to the bit lines and data I/O lines in a mutually exclusive manner.
  • In example 4, in the DRAM array of examples 1 to 3, the plurality of bit lines are coupled, respectively, to the at least two row buffers via a bit line access gate transistor, whereby when one of the at least two row buffers is electrically connected to a bit line, another of the at least two row buffers is not electrically connected to a bit line.
  • In example 5, in the DRAM array of examples 2 to 4, the plurality of data I/O lines are coupled, respectively, to the at least two row buffers via a data I/O access gate transistor, whereby when one of the at least two row buffers is electrically connected to a data I/O line, another of the at least two row buffers is not electrically connected to a data I/O line.
  • Example 6 is a dynamic random access memory (DRAM) chip comprising at least one DRAM array of any of examples 1 to 5.
  • In example 7, in the DRAM chip of example 6, the plurality of bit lines of the DRAM array are coupled, respectively, to the at least two row buffers via a bit line access gate transistor, wherein when one of the two row buffers is electrically connected to a bit line, another of the at least two row buffers is not electrically connected to a bit line.
  • In example 8, the DRAM chip of example 6 or 7 further comprises a signal interface configured to receive a first micro-operation for sensing that causes a sensing of a first row of the DRAM array in a row cycle; and a second micro-operation for restoring that causes a restoring of contents of a second row of the DRAM array in the row cycle.
  • Example 9 is a method for initiation of micro operations at a dynamic random access memory (DRAM) array comprising initiating a micro-operation for sensing a first row of the DRAM array in a row cycle; and initiating a micro-operation for restoring contents of a second row of the DRAM array in the row cycle.
  • In example 10, in the method of example 9, initiating the micro-operation for sensing is separable from initiating the micro-operation for restoring.
  • In example 11, in the method of example 10, initiating the micro-operation for sensing causes a sensing of a first row of the DRAM array with a first row buffer of the DRAM array connected via a bit line in a first row cycle, and initiating the micro-operation for restoring causes a restoring of contents of a second row buffer to a second row of the DRAM array in the first row cycle.
  • In example 12, the method of any of examples 9 to 11 further comprises initiating a micro-operation for precharging bit lines of the DRAM array in the row cycle.
  • Example 13 is a dynamic random access memory (DRAM) system, comprising a DRAM chip of any of examples 6 to 8 and at least a DRAM controller configured to initiate micro operations for the DRAM chip.
  • In example 14, in the DRAM system of example 13, the DRAM controller comprises: an output interface configured to output a micro-operation for sensing that causes a sensing of a first row of the DRAM array into a first row buffer in a first row cycle; and to output a micro-operation for restoring contents that causes a restoring of a second row of the DRAM array from a second row buffer in the first row cycle.
  • In example 15, in the DRAM system of example 13 or 14, the output interface of the DRAM controller is further configured to output a micro-operation for precharging that causes a precharging of bit lines of the DRAM array in the first row cycle.
  • Example 16 is a method of accessing a dynamic random access memory (DRAM) array, the method comprising sensing a first row of the DRAM array with a first row buffer connected via bit lines in a first row cycle; coupling the bit lines to a second row buffer of the DRAM array; and restoring contents of the second row buffer to a second row of the DRAM array in the first row cycle.
  • In example 17, the method of accessing a DRAM array of example 16 further comprises precharging the bit lines in the first row cycle after restoring contents of the second row buffer to the second row of the DRAM array.
  • In example 18, in the method of accessing a DRAM array of any of examples 16 or 17, coupling the bit lines to the first row buffer or second row buffer of the DRAM array comprises toggling a bit line access signal.
  • In example 19, the method of accessing a DRAM array of any of examples 16 to 18 further comprises coupling a data input/output (I/O) line to the first row buffer of the DRAM array in the first row cycle.
  • In example 20, in the method of accessing a DRAM array of example 19, coupling the data I/O lines to the first row buffer or the second row buffer of the DRAM array comprises toggling a data I/O access signal.
  • In example 21, the method of accessing a DRAM array of any of examples 16 to 20 further comprises receiving an access request to sense a third row of the DRAM array in a second row cycle; and sensing the third row of the DRAM array with the second row buffer of the DRAM array.
  • In example 22, in the method of accessing a DRAM array of example 21, receiving an access request to sense a third row is performed after precharging the bit lines of the DRAM array in the first row cycle.
  • In example 23, the method of accessing a DRAM array of any of examples 16 to 22, further comprises coupling the bit line to the first row buffer of the DRAM array; and restoring contents of the first row buffer to a row of the DRAM array in the second row cycle.
  • The aspects and features mentioned and described together with one or more of the previously detailed examples and figures, may as well be combined with one or more of the other examples in order to replace a like feature of the other example or in order to additionally introduce the feature to the other example.
  • Examples may further be a computer program having a program code for supporting one or more of the above methods, when the computer program is executed on a computer or processor. Steps, operations or processes of various above-described methods may be performed by programmed computers or processors. Examples may also cover program storage devices such as digital data storage media, which are machine, processor or computer readable and encode machine-executable, processor-executable or computer-executable programs of instructions. The instructions perform or cause performing some or all of the acts of the above-described methods. The program storage devices may comprise or be, for instance, digital memories, magnetic storage media such as magnetic disks and magnetic tapes, hard drives, or optically readable digital data storage media. Further examples may also cover computers, processors or control units programmed to perform the acts of the above-described methods or (field) programmable logic arrays ((F)PLAs) or (field) programmable gate arrays ((F)PGAs), programmed to perform the acts of the above-described methods.
  • The description and drawings merely illustrate the principles of the disclosure. It will thus be appreciated that those skilled in the art will be able to devise various arrangements that, although not explicitly described or shown herein, embody the principles of the disclosure and are included within its spirit and scope. Furthermore, all examples recited herein are principally intended expressly to be only for pedagogical purposes to aid the reader in understanding the principles of the disclosure and the concepts contributed by the inventor(s) to furthering the art, and are to be construed as being without limitation to such specifically recited examples and conditions. Moreover, all statements herein reciting principles, aspects, and examples of the disclosure, as well as specific examples thereof, are intended to encompass equivalents thereof.
  • A functional block denoted as “means for . . . ” performing a certain function may refer to a circuit that is configured to perform a certain function. Hence, a “means for s.th.” may be implemented as a “means configured to or suited for s.th.”, such as a device or a circuit configured to or suited for the respective task.
  • Functions of various elements shown in the figures, including any functional blocks labeled as “means,” “means for providing a sensor signal,” “means for generating a transmit signal,” etc., may be implemented in the form of dedicated hardware, such as “a signal provider,” “a signal processing unit,” “a processor,” “a controller,” etc. as well as hardware capable of executing software in association with appropriate software. When provided by a processor, the functions may be provided by a single dedicated processor, by a single shared processor, or by a plurality of individual processors, some of which or all of which may be shared. However, the term “processor” or “controller” is by far not limited to hardware exclusively capable of executing software, but may include digital signal processor (DSP) hardware, network processor, application specific integrated circuit (ASIC), field programmable gate array (FPGA), read only memory (ROM) for storing software, random access memory (RAM), and non-volatile storage. Other hardware, conventional and/or custom, may also be included.
  • A block diagram may, for instance, illustrate a high-level circuit diagram implementing the principles of the disclosure. Similarly, a flow chart, a flow diagram, a state transition diagram, a pseudo code, and the like may represent various processes, operations or steps, which may, for instance, be substantially represented in computer readable medium and so executed by a computer or processor, whether or not such computer or processor is explicitly shown. Methods disclosed in the specification or in the claims may be implemented by a device having means for performing each of the respective acts of these methods.
  • It is to be understood that the disclosure of multiple acts, processes, operations, steps or functions disclosed in the specification or claims may not be construed as to be within the specific order, unless explicitly or implicitly stated otherwise, for instance for technical reasons. Therefore, the disclosure of multiple acts or functions will not limit these to a particular order unless such acts or functions are not interchangeable for technical reasons. Furthermore, in some examples a single act, function, process, operation or step may include or may be broken into multiple sub-acts, -functions, -processes, -operations or -steps, respectively. Such sub acts may be included and part of the disclosure of this single act unless explicitly excluded.
  • Furthermore, the following claims are hereby incorporated into the detailed description, where each claim may stand on its own as a separate example. While each claim may stand on its own as a separate example, it is to be noted that—although a dependent claim may refer in the claims to a specific combination with one or more other claims—other examples may also include a combination of the dependent claim with the subject matter of each other dependent or independent claim. Such combinations are explicitly proposed herein unless it is stated that a specific combination is not intended. Furthermore, it is intended to include also features of a claim to any other independent claim even if this claim is not directly made dependent to the independent claim.

Claims (20)

1. A dynamic random access memory (DRAM) array, comprising:
row buffers; and
a plurality of bit lines, wherein each bit line of the plurality of bit lines is connectable to at least two row buffers of the row buffers,
wherein the plurality of bit lines are coupled, respectively, to the at least two row buffers via a bit line access gate transistor, whereby when one of the at least two row buffers is electrically connected to the plurality of bit lines, another of the at least two row buffers is not electrically connected to the plurality of bit lines,
wherein the at least two row buffers are respectively connectable to data input/output (I/O) lines,
wherein the data I/O lines are coupled, respectively, to the at least two row buffers via a data I/O access gate transistor, whereby when one of the at least two row buffers is electrically connected to the data I/O lines, another of the at least two row buffers is not electrically connected to the data I/O lines.
2. (canceled)
3. The DRAM array of claim 1, wherein the at least two row buffers are configured to electrically connect to the plurality of bit lines and data I/O lines in a mutually exclusive manner.
4. (canceled)
5. (canceled)
6. A dynamic random access memory (DRAM) chip comprising at least one DRAM array, the DRAM array comprising:
row buffers;
a plurality of bit lines, wherein each bit line of the plurality of bit lines is connectable to at least two row buffers of the row buffers, and
a signal interface configured to receive a first micro-operation for sensing that causes a sensing of a first row of the DRAM array in a row cycle; and a second micro-operation for restoring that causes a restoring of contents of a second row of the DRAM array in the row cycle,
wherein the plurality of bit lines are coupled, respectively, to the at least two row buffers via a bit line access gate transistor, whereby when one of the at least two row buffers is electrically connected to the plurality of bit lines, another of the at least two row buffers is not electrically connected to the plurality of bit lines.
7. (canceled)
8. (canceled)
9. A method for initiation of micro operations at a dynamic random access memory (DRAM) array comprising:
initiating a micro-operation for sensing a first row of the DRAM array in a row cycle;
initiating a micro-operation for restoring contents of a second row of the DRAM array in the row cycle; and
initiating a micro-operation for precharging bit lines of the DRAM array in the row cycle.
10. The method of claim 9, wherein initiating the micro-operation for sensing is separable from initiating the micro-operation for restoring.
11. The method of claim 10,
wherein initiating the micro-operation for sensing causes a sensing of a first row of the DRAM array with a first row buffer of the DRAM array connected via a bit line in a first row cycle; and
wherein initiating the micro-operation for restoring causes a restoring of contents of a second row buffer to a second row of the DRAM array in the first row cycle.
12. (canceled)
13. A method of accessing a dynamic random access memory (DRAM) array, the method comprising:
sensing a first row of the DRAM array with a first row buffer connected via bit lines in a first row cycle;
coupling the bit lines to a second row buffer of the DRAM array; and
restoring contents of the second row buffer to a second row of the DRAM array in the first row cycle.
14. The method of accessing a DRAM array of claim 13, the method further comprising:
precharging the bit lines in the first row cycle after restoring contents of the second row buffer to the second row of the DRAM array.
15. The method of accessing a DRAM array of claim 13, wherein coupling the bit lines to the first row buffer or second row buffer of the DRAM array comprises toggling a bit line access signal.
16. The method of accessing a DRAM array of claim 13, the method further comprising:
coupling a data input/output (I/O) line to the first row buffer of the DRAM array in the first row cycle.
17. The method of accessing a DRAM array of claim 16, wherein coupling the data I/O lines to the first row buffer or the second row buffer of the DRAM array comprises toggling a data I/O access signal.
18. The method of accessing a DRAM array of claim 13, the method further comprising:
receiving an access request to sense a third row of the DRAM array in a second row cycle; and
sensing the third row of the DRAM array with the second row buffer of the DRAM array.
19. The method of accessing a DRAM array of claim 18, wherein receiving an access request to sense a third row is performed after precharging the bit lines of the DRAM array in the first row cycle.
20. The method of accessing a DRAM array of claim 13, the method further comprising:
coupling the bit lines to the first row buffer of the DRAM array; and
restoring contents of the first row buffer to a row of the DRAM array in the second row cycle.
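The access sequence of claims 13 through 20 can be read as a schedule over two row cycles. The sketch below is a non-authoritative summary under our own naming assumptions (row numbers, buffer names, and the toggle helper are all illustrative): the first row buffer senses one row, the bit lines are handed to the second row buffer for a restore, the bit lines are precharged in the same row cycle, and in the next row cycle the now-free second row buffer senses a third row.

# Illustrative two-row-cycle schedule; names and ordering are assumptions.
def toggle(access_signal):
    # Toggling the bit line (or data I/O) access signal switches which of the
    # two row buffers is electrically connected to the shared lines.
    return "RB1" if access_signal == "RB0" else "RB0"

bit_line_access = "RB0"
schedule = []

# First row cycle
schedule.append(("SENSE", "row 1", "RB0"))    # sense first row with first row buffer
bit_line_access = toggle(bit_line_access)     # couple the bit lines to the second row buffer
schedule.append(("RESTORE", "row 2", "RB1"))  # restore second row buffer to second row
schedule.append(("IO_READ", "row 1", "RB0"))  # data I/O lines can serve RB0 meanwhile
schedule.append(("PRECHARGE", "-", "-"))      # precharge the bit lines in the same row cycle

# Second row cycle
schedule.append(("SENSE", "row 3", "RB1"))    # reuse the second row buffer for a third row

for op, target, buffer in schedule:
    print(f"{op:9s} {target:6s} {buffer}")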
US15/394,860 2016-12-30 2016-12-30 Apparatuses and methods for accessing and scheduling between a plurality of row buffers Active US10068636B2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US15/394,860 US10068636B2 (en) 2016-12-30 2016-12-30 Apparatuses and methods for accessing and scheduling between a plurality of row buffers

Publications (2)

Publication Number Publication Date
US20180190339A1 true US20180190339A1 (en) 2018-07-05
US10068636B2 US10068636B2 (en) 2018-09-04

Family

ID=62712080

Family Applications (1)

Application Number Title Priority Date Filing Date
US15/394,860 Active US10068636B2 (en) 2016-12-30 2016-12-30 Apparatuses and methods for accessing and scheduling between a plurality of row buffers

Country Status (1)

Country Link
US (1) US10068636B2 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10275352B1 (en) 2017-12-28 2019-04-30 Advanced Micro Devices, Inc. Supporting responses for memory types with non-uniform latencies on same channel

Citations (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5305280A (en) * 1991-04-04 1994-04-19 Mitsubishi Denki Kabushiki Kaisha Semiconductor memory device having on the same chip a plurality of memory circuits among which data transfer is performed to each other and an operating method thereof
US5701095A (en) * 1994-02-25 1997-12-23 Kabushiki Kaisha Toshiba High speed, low noise CMOS multiplexer with precharge
US5831924A (en) * 1995-09-07 1998-11-03 Mitsubishi Denki Kabushiki Kaisha Synchronous semiconductor memory device having a plurality of banks distributed in a plurality of memory arrays
US6154385A (en) * 1997-09-30 2000-11-28 Nec Corporation Semiconductor memory with built-in row buffer and method of driving the same
US20020031035A1 (en) * 2000-09-08 2002-03-14 Mitsubishi Denki Kabushiki Kaisha Matsushita Electric Industrial Co., Ltd. Multi-bank semiconductor memory device
US20030214845A1 (en) * 2002-05-17 2003-11-20 Mitsubishi Denki Kabushiki Kaisha Semiconductor integrated circuit device having data input/output configuration variable
US20030231535A1 (en) * 2002-06-14 2003-12-18 Johann Pfeiffer Semiconductor memory with address decoding unit, and address loading method
US20090086551A1 (en) * 2007-10-01 2009-04-02 Elpida Memory, Inc. Semiconductor device
US20120026797A1 (en) * 2010-07-29 2012-02-02 Samsung Electronics Co., Ltd. NonVolatile Memory Devices, Methods Of Programming The Same, And Memory Systems Including The Same
US20130163353A1 (en) * 2011-12-26 2013-06-27 Elpida Memory, Inc. Semiconductor device having odt function
US20140219036A1 (en) * 2013-02-04 2014-08-07 Samsung Electronics Co., Ltd. Equalizer and semiconductor memory device including the same
US20140328122A1 (en) * 2013-05-06 2014-11-06 International Business Machines Corporation Reduced stress high voltage word line driver
US20150003172A1 (en) * 2013-06-26 2015-01-01 Sua KIM Memory module including buffer chip controlling refresh operation of memory devices
US20150085580A1 (en) * 2013-09-24 2015-03-26 Integrated Silicon Solution, Inc. Memory device with multiple cell write for a single input-output in a single write cycle
US20160284390A1 (en) * 2015-03-25 2016-09-29 Intel Corporation Method and apparatus for performing data operations within a memory device

Family Cites Families (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6311280B1 (en) * 1999-02-22 2001-10-30 Nband Communications Low-power memory system with incorporated vector processing
US6178479B1 (en) * 1999-02-22 2001-01-23 Nband Communications Cycle-skipping DRAM for power saving
US6799291B1 (en) * 2000-11-20 2004-09-28 International Business Machines Corporation Method and system for detecting a hard failure in a memory array
US7474557B2 (en) * 2001-06-29 2009-01-06 International Business Machines Corporation MRAM array and access method thereof
JP2006294216A (en) * 2005-03-15 2006-10-26 Renesas Technology Corp Semiconductor memory apparatus
US7945840B2 (en) * 2007-02-12 2011-05-17 Micron Technology, Inc. Memory array error correction apparatus, systems, and methods
US7492662B2 (en) * 2007-03-21 2009-02-17 International Business Machines Corporation Structure and method of implementing power savings during addressing of DRAM architectures
US8072256B2 (en) * 2007-09-14 2011-12-06 Mosaid Technologies Incorporated Dynamic random access memory and boosted voltage producer therefor
US7941594B2 (en) * 2007-09-21 2011-05-10 Freescale Semiconductor, Inc. SDRAM sharing using a control surrogate
US9275720B2 (en) * 2010-12-30 2016-03-01 Kandou Labs, S.A. Differential vector storage for dynamic random access memory
US9146867B2 (en) * 2011-10-31 2015-09-29 Hewlett-Packard Development Company, L.P. Methods and apparatus to access memory using runtime characteristics
KR101656599B1 (en) * 2012-06-28 2016-09-09 휴렛 팩커드 엔터프라이즈 디벨롭먼트 엘피 Multi-level cell memory
CN104854698A (en) * 2012-10-31 2015-08-19 三重富士通半导体有限责任公司 Dram-type device with low variation transistor peripheral circuits, and related methods
US20140173170A1 (en) * 2012-12-14 2014-06-19 Hewlett-Packard Development Company, L.P. Multiple subarray memory access
WO2014120193A1 (en) * 2013-01-31 2014-08-07 Hewlett-Packard Development Company, L. P. Non-volatile multi-level-cell memory with decoupled bits for higher performance and energy efficiency

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20220137869A1 (en) * 2020-11-02 2022-05-05 Deepx Co., Ltd. System and memory for artificial neural network
US11922051B2 (en) 2020-11-02 2024-03-05 Deepx Co., Ltd. Memory controller, processor and system for artificial neural network
US11972137B2 (en) * 2020-11-02 2024-04-30 Deepx Co., Ltd. System and memory for artificial neural network (ANN) optimization using ANN data locality

Legal Events

Date Code Title Description
AS Assignment

Owner name: INTEL CORPORATION, CALIFORNIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:AKIN, BERKIN;TOMISHIMA, SHIGEKI;REEL/FRAME:041257/0297

Effective date: 20161216

STCF Information on status: patent grant

Free format text: PATENTED CASE

MAFP Maintenance fee payment

Free format text: PAYMENT OF MAINTENANCE FEE, 4TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1551); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

Year of fee payment: 4

AS Assignment

Owner name: TAHOE RESEARCH, LTD., IRELAND

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:INTEL CORPORATION;REEL/FRAME:061175/0176

Effective date: 20220718