US20150149733A1 - Supporting speculative modification in a data cache - Google Patents
- Publication number
- US20150149733A1 (U.S. application Ser. No. 13/007,015)
- Authority
- US
- United States
- Prior art keywords
- cache
- state
- speculative
- cache line
- buffer
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Abandoned
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F12/00—Accessing, addressing or allocating within memory systems or architectures
- G06F12/02—Addressing or allocation; Relocation
- G06F12/08—Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
- G06F12/0802—Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
- G06F12/0804—Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches with main memory updating
- G06F12/0806—Multiuser, multiprocessor or multiprocessing cache systems
- G06F12/0815—Cache consistency protocols
- G06F12/0891—Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches using clearing, invalidating or resetting means
- G06F2212/00—Indexing scheme relating to accessing, addressing or allocation within memory systems or architectures
- G06F2212/60—Details of cache memory
- G06F2212/603—Details of cache memory of operating mode, e.g. cache mode or local memory mode
Description
- This application is a Continuation of and claims priority to U.S. patent application Ser. No. 11/807,629, filed on May 29, 2007, which is a Continuation of and claims priority to U.S. patent application Ser. No. 10/622,028, filed on Jul. 16, 2003, both of which are hereby incorporated by reference in their entirety.
- Embodiments generally relate to data caches. More particularly, embodiments relate to the field of supporting speculative modification in a data cache.
- A data cache interacts with a processor to increase system performance. However, if the processor is speculatively executing instructions, a traditional data cache is unable to properly deal with speculative modifications.
- A method and system for supporting speculative modification in a data cache are provided and described.
- The accompanying drawings, which are incorporated in and form a part of this specification, illustrate embodiments and, together with the description, serve to explain the principles of the disclosure.
- FIG. 1 illustrates a system in accordance with a first embodiment.
- FIG. 2 illustrates a state diagram of a data cache in accordance with a first embodiment.
- FIG. 3 illustrates a system in accordance with a second embodiment.
- FIG. 4 illustrates a first state diagram of a speculative cache buffer in accordance with a second embodiment.
- FIG. 5 illustrates a second state diagram of a speculative cache buffer in accordance with a second embodiment.
- Reference will now be made in detail to embodiments, examples of which are illustrated in the accompanying drawings. While the disclosure will be described in conjunction with these embodiments, it will be understood that they are not intended to limit the disclosure to these embodiments. On the contrary, the disclosure is intended to cover alternatives, modifications, and equivalents, which may be included within the spirit and scope of the disclosure as defined by the appended claims. Furthermore, in the following detailed description, numerous specific details are set forth in order to provide a thorough understanding. However, it will be recognized by one of ordinary skill in the art that embodiments may be practiced without these specific details.
- FIG. 1 illustrates a system 100 in accordance with a first embodiment. As illustrated in FIG. 1, the system 100 includes a processor 10 and a data cache 20.
- The processor 10 is able to speculatively execute instructions. If the processor 10 speculatively executes instructions to a particular instruction boundary without generating errors, the speculative store operations to the data cache 20 can be made permanent with a commit operation. However, if errors occur before reaching the particular instruction boundary, the speculative store operations to the data cache 20 have to be undone with a rollback operation.
- The data cache 20 includes a plurality of cache lines 25. Each cache line includes a state indicator 27 for indicating any one of a plurality of states. The plurality of states includes an invalid state, a valid state, a dirty state, and a speculative state. The invalid state indicates that the respective cache line is not in use. The valid state indicates that the respective cache line holds clean data. The dirty state indicates that the respective cache line holds dirty data (that is, more recent data than other memory components such as an L2 data cache, main memory, etc.). The speculative state keeps track of speculative modifications to the data in the respective cache line: it enables a speculative modification to be made permanent in response to a commit operation and to be undone in response to a rollback operation. Cache lines having the speculative state cannot be drained to other memory components such as an L2 data cache, main memory, etc.
- FIG. 2 illustrates a state diagram of a data cache in accordance with a first embodiment. As described above, a cache line can have an invalid state I, a valid state V, a dirty state D, or a speculative state S. (For clarity, state transitions from the V, D, and S states to the I state, corresponding to the traditional operation of the data cache evicting a cache line, have been omitted from the figure.)
- Invalid State I
- Assuming the cache line is in the invalid state I, there are several possibilities for this cache line. If a non-speculative store is performed by the processor 10 (FIG. 1), the cache line moves to the dirty state D. If data is loaded from memory components such as an L2 data cache, main memory, etc., the cache line moves to the valid state V, where the data is clean (the same version as in those memory components). If a speculative store is performed by the processor 10 (FIG. 1), the cache line moves to the speculative state S.
- Valid State V
- Assuming the cache line is in the valid state V, there are two possibilities for this cache line. If a non-speculative store is performed by the processor 10 (FIG. 1), the cache line moves to the dirty state D. If a speculative store is performed by the processor 10 (FIG. 1), the cache line moves to the speculative state S.
- Dirty State D
- Assuming the cache line is in the dirty state D, a speculative store by the processor 10 (FIG. 1) to this cache line causes the cache line to first be written back to a memory component such as an L2 data cache, main memory, etc., thus preserving the cache line data as it was before the speculative modification. Then the speculative store is performed, moving the cache line to the speculative state S.
- Speculative State S
- Assuming the cache line is in the speculative state S, there are two possibilities for this cache line. If a commit operation is performed, the cache line moves to the dirty state D. If a rollback operation is performed, the cache line moves to the invalid state I.
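- The transitions of FIG. 2 can be summarized as a small state machine. The sketch below is a simplified software illustration, not the patent's hardware implementation; the names `CacheLine`, `store`, `commit`, and `rollback` are hypothetical:

```python
from enum import Enum

class State(Enum):
    I = "invalid"
    V = "valid"
    D = "dirty"
    S = "speculative"

class CacheLine:
    """Per-line state machine following FIG. 2 (simplified sketch)."""

    def __init__(self):
        self.state = State.I
        self.writebacks = 0  # counts drains to L2 / main memory

    def load(self):
        # Loading clean data from L2 / main memory: I -> V
        if self.state == State.I:
            self.state = State.V

    def store(self, speculative: bool):
        if speculative:
            if self.state == State.D:
                # The dirty data is written back first, preserving the
                # pre-speculation version in L2 / main memory.
                self.writebacks += 1
            self.state = State.S
        else:
            self.state = State.D

    def commit(self):
        # A speculative modification becomes permanent: S -> D
        if self.state == State.S:
            self.state = State.D

    def rollback(self):
        # A speculative modification is undone: S -> I
        if self.state == State.S:
            self.state = State.I
```

For example, a dirty line that receives a speculative store is written back once and ends up in state S; a subsequent rollback then invalidates it, leaving the pre-speculation version safely in L2 / main memory.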
- FIG. 3 illustrates a system 300 in accordance with a second embodiment. The system 300 includes a processor 10, a data cache 20, and a speculative cache buffer 50. The discussion above with respect to the processor 10 and the data cache 20 is equally applicable to FIG. 3.
- The speculative cache buffer 50 receives cache lines which have the speculative state S and are evicted or drained from the data cache 20. Hence, the data cache 20 can send cache lines having the speculative state to the speculative cache buffer 50 and retrieve them when necessary.
- Moreover, the speculative cache buffer 50 has a plurality of cache lines 55. Each cache line 55 includes a state indicator 57 for indicating any one of a plurality of states. The plurality of states includes an invalid state, a dirty state, and a speculative state. In one embodiment, the speculative cache buffer 50 is fully associative.
- The data cache 20 can drain cache lines that are in the dirty state D or the speculative state S to the speculative cache buffer 50. Moreover, the speculative cache buffer 50 can drain cache lines that are in the dirty state D to a memory component such as an L2 data cache, main memory, etc.
- FIG. 4 illustrates a first state diagram of a speculative cache buffer in accordance with a second embodiment. As described above, a cache line can have an invalid state I, a dirty state D, or a speculative state S.
- Invalid State I
- Assuming the cache line is in the invalid state I, there are two possibilities for this cache line. If the data cache 20 evicts a cache line having the dirty state D, the cache line moves to the dirty state D. If the data cache 20 evicts a cache line having the speculative state S, the cache line moves to the speculative state S.
- Dirty State D
- Assuming the cache line is in the dirty state D, there are two possibilities for this cache line. If the speculative cache buffer 50 drains the cache line to a memory component such as an L2 data cache, main memory, etc., the cache line moves to the invalid state I. Likewise, if the data cache requests the cache line back, the line moves to the invalid state I in the speculative cache buffer.
- Speculative State S
- Assuming the cache line is in the speculative state S, there are several possibilities for this cache line. If a commit operation is performed, the cache line moves to the dirty state D. If a rollback operation is performed, the cache line moves to the invalid state I. If the data cache requests the cache line back, the line moves to the invalid state I in the speculative cache buffer.
- It is possible that multiple versions of a cache line in the dirty state may exist in the speculative cache buffer 50. For instance, the data cache 20 may drain a cache line having the dirty state to the speculative cache buffer 50 because a speculative store has to be performed to that cache line in the data cache 20. If the cache line having the speculative state is later drained to the speculative cache buffer 50 and a commit operation is performed, the speculative cache buffer 50 would then hold two cache lines with different versions of the data, whereas only one version of the data needs to be drained to a memory component such as an L2 data cache, main memory, etc.
- In an alternate embodiment of the speculative cache buffer 50, the plurality of states also includes a commit-kill state, in addition to the invalid state, the dirty state, and the speculative state. The commit-kill state indicates that the data cache 20 has evicted the respective cache line having the dirty state in response to a speculative modification operation (or speculative store) to the respective cache line in the data cache 20. The commit-kill state reduces the number of copies of a cache line in the dirty state and saves bandwidth in the case of a commit operation, as detailed below.
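- The hazard that motivates the commit-kill state can be seen in a short sketch of the simpler FIG. 4 buffer, which has no commit-kill state. The entry layout and names here are illustrative, not the patent's implementation:

```python
# Minimal model of the FIG. 4 buffer: each entry is a dict holding an
# address, a data version tag, and a state ("D" dirty or "S" speculative).
buffer = []

# 1. A speculative store hits a dirty line in the data cache, so the
#    dirty (pre-speculation) version is drained to the buffer first.
buffer.append({"addr": 0x40, "version": "old", "state": "D"})

# 2. The data cache later evicts the speculatively modified line.
buffer.append({"addr": 0x40, "version": "new", "state": "S"})

# 3. A commit operation makes every speculative entry dirty: S -> D.
for entry in buffer:
    if entry["state"] == "S":
        entry["state"] = "D"

# The buffer now holds two dirty versions of the same address, although
# only the committed ("new") version needs to reach L2 / main memory.
dirty_copies = [e for e in buffer if e["addr"] == 0x40 and e["state"] == "D"]
assert len(dirty_copies) == 2
```

Draining both dirty copies wastes bandwidth; marking the pre-speculation copy commit-kill instead lets a commit discard it outright, as the FIG. 5 state diagram below describes.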
- FIG. 5 illustrates a second state diagram of a speculative cache buffer in accordance with a second embodiment. As described above, a cache line can have an invalid state I, a dirty state D, a commit-kill state K, or a speculative state S.
- Invalid State I
- Assuming the cache line is in the invalid state I, there are several possibilities for this cache line. If the data cache 20 evicts a cache line having the dirty state D, but not due to a speculative store operation, the cache line moves to the dirty state D. If the data cache 20 evicts a cache line having the speculative state S, the cache line moves to the speculative state S. If the data cache 20 evicts a cache line having the dirty state D in response to a speculative store operation to that cache line, the cache line moves to the commit-kill state K.
- Dirty State D
- Assuming the cache line is in the dirty state D, there are two possibilities for this cache line. If the speculative cache buffer 50 drains the cache line to a memory component such as an L2 data cache, main memory, etc., the cache line moves to the invalid state I. Likewise, if the data cache requests the cache line back, the line moves to the invalid state I in the speculative cache buffer.
- Speculative State S
- Assuming the cache line is in the speculative state S, there are several possibilities for this cache line. If a commit operation is performed, the cache line moves to the dirty state D. If a rollback operation is performed, the cache line moves to the invalid state I. If the data cache requests the cache line back, the line moves to the invalid state I in the speculative cache buffer.
- Commit-Kill State K
- Assuming the cache line is in the commit-kill state K, there are several possibilities for this cache line. If a commit operation is performed, the cache line moves to the invalid state I. If a rollback operation is performed, the cache line moves to the dirty state D. If the speculative cache buffer 50 drains the cache line to a memory component such as an L2 data cache, main memory, etc., the cache line moves to the invalid state I.
- The foregoing descriptions of specific embodiments have been presented for purposes of illustration and description. They are not intended to be exhaustive or to limit the disclosure to the precise forms disclosed; many modifications and variations are possible in light of the above teaching. The embodiments were chosen and described in order to best explain the principles of the disclosure and its practical application, and thereby to enable others skilled in the art to best utilize the disclosure and various embodiments with various modifications as are suited to the particular use contemplated. It is intended that the scope of the disclosure be defined by the claims appended hereto and their equivalents.
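- The FIG. 5 transitions, including the commit-kill state, can likewise be sketched as a per-entry state machine. This is a simplified software illustration under assumed names (`BufferLine`, `accept_eviction`, `drain`), not the patent's hardware design:

```python
from enum import Enum

class BufState(Enum):
    I = "invalid"
    D = "dirty"
    K = "commit-kill"
    S = "speculative"

class BufferLine:
    """Per-entry state machine following FIG. 5 (simplified sketch)."""

    def __init__(self):
        self.state = BufState.I

    def accept_eviction(self, line_state: str, due_to_speculative_store: bool = False):
        # The data cache evicts a line into this (invalid) buffer entry.
        if self.state != BufState.I:
            raise ValueError("buffer entry already in use")
        if line_state == "D":
            # A dirty line displaced by a speculative store is marked
            # commit-kill; an ordinary dirty eviction stays dirty.
            self.state = BufState.K if due_to_speculative_store else BufState.D
        elif line_state == "S":
            self.state = BufState.S

    def commit(self):
        if self.state == BufState.S:
            self.state = BufState.D   # committed data becomes dirty
        elif self.state == BufState.K:
            self.state = BufState.I   # stale pre-speculation copy is killed

    def rollback(self):
        if self.state == BufState.S:
            self.state = BufState.I   # speculative data is discarded
        elif self.state == BufState.K:
            self.state = BufState.D   # pre-speculation copy becomes the live dirty copy

    def drain(self):
        # Draining to L2 / main memory is allowed for D and K entries.
        if self.state in (BufState.D, BufState.K):
            self.state = BufState.I
```

On a commit, the pre-speculation copy (state K) is invalidated while the committed copy (state S) becomes the single dirty version, so only one version is ever drained, which is exactly the bandwidth saving the commit-kill state provides.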
Claims (20)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US13/007,015 US20150149733A1 (en) | 2003-07-16 | 2011-01-14 | Supporting speculative modification in a data cache |
Applications Claiming Priority (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US10/622,028 US7225299B1 (en) | 2003-07-16 | 2003-07-16 | Supporting speculative modification in a data cache |
US11/807,629 US7873793B1 (en) | 2003-07-16 | 2007-05-29 | Supporting speculative modification in a data cache |
US13/007,015 US20150149733A1 (en) | 2003-07-16 | 2011-01-14 | Supporting speculative modification in a data cache |
Related Parent Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US11/807,629 Continuation US7873793B1 (en) | 2003-07-16 | 2007-05-29 | Supporting speculative modification in a data cache |
Publications (1)
Publication Number | Publication Date |
---|---|
US20150149733A1 true US20150149733A1 (en) | 2015-05-28 |
Family
ID=38056865
Family Applications (3)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US10/622,028 Expired - Lifetime US7225299B1 (en) | 2003-07-16 | 2003-07-16 | Supporting speculative modification in a data cache |
US11/807,629 Expired - Fee Related US7873793B1 (en) | 2003-07-16 | 2007-05-29 | Supporting speculative modification in a data cache |
US13/007,015 Abandoned US20150149733A1 (en) | 2003-07-16 | 2011-01-14 | Supporting speculative modification in a data cache |
Family Applications Before (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US10/622,028 Expired - Lifetime US7225299B1 (en) | 2003-07-16 | 2003-07-16 | Supporting speculative modification in a data cache |
US11/807,629 Expired - Fee Related US7873793B1 (en) | 2003-07-16 | 2007-05-29 | Supporting speculative modification in a data cache |
Country Status (1)
Country | Link |
---|---|
US (3) | US7225299B1 (en) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
GB2572968A (en) * | 2018-04-17 | 2019-10-23 | Advanced Risc Mach Ltd | Tracking speculative data caching |
JP2021510434A (en) * | 2018-01-10 | 2021-04-22 | エイアールエム リミテッド | Speculative cache storage |
US11119780B2 (en) | 2018-04-30 | 2021-09-14 | Hewlett Packard Enterprise Development Lp | Side cache |
Families Citing this family (26)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7225299B1 (en) * | 2003-07-16 | 2007-05-29 | Transmeta Corporation | Supporting speculative modification in a data cache |
US7149851B1 (en) | 2003-08-21 | 2006-12-12 | Transmeta Corporation | Method and system for conservatively managing store capacity available to a processor issuing stores |
US7478226B1 (en) * | 2006-09-29 | 2009-01-13 | Transmeta Corporation | Processing bypass directory tracking system and method |
US7774583B1 (en) | 2006-09-29 | 2010-08-10 | Parag Gupta | Processing bypass register file system and method |
US8898401B2 (en) * | 2008-11-07 | 2014-11-25 | Oracle America, Inc. | Methods and apparatuses for improving speculation success in processors |
US8806145B2 (en) * | 2008-11-07 | 2014-08-12 | Oracle America, Inc. | Methods and apparatuses for improving speculation success in processors |
US9256514B2 (en) | 2009-02-19 | 2016-02-09 | Nvidia Corporation | Debugging and perfomance analysis of applications |
US10146545B2 (en) | 2012-03-13 | 2018-12-04 | Nvidia Corporation | Translation address cache for a microprocessor |
US9880846B2 (en) | 2012-04-11 | 2018-01-30 | Nvidia Corporation | Improving hit rate of code translation redirection table with replacement strategy based on usage history table of evicted entries |
US9875105B2 (en) | 2012-05-03 | 2018-01-23 | Nvidia Corporation | Checkpointed buffer for re-entry from runahead |
US10241810B2 (en) | 2012-05-18 | 2019-03-26 | Nvidia Corporation | Instruction-optimizing processor with branch-count table in hardware |
US9411595B2 (en) | 2012-05-31 | 2016-08-09 | Nvidia Corporation | Multi-threaded transactional memory coherence |
US9645929B2 (en) * | 2012-09-14 | 2017-05-09 | Nvidia Corporation | Speculative permission acquisition for shared memory |
US10001996B2 (en) | 2012-10-26 | 2018-06-19 | Nvidia Corporation | Selective poisoning of data during runahead |
US9740553B2 (en) | 2012-11-14 | 2017-08-22 | Nvidia Corporation | Managing potentially invalid results during runahead |
US9632976B2 (en) | 2012-12-07 | 2017-04-25 | Nvidia Corporation | Lazy runahead operation for a microprocessor |
US9569214B2 (en) | 2012-12-27 | 2017-02-14 | Nvidia Corporation | Execution pipeline data forwarding |
US20140189310A1 (en) | 2012-12-27 | 2014-07-03 | Nvidia Corporation | Fault detection in instruction translations |
US9823931B2 (en) | 2012-12-28 | 2017-11-21 | Nvidia Corporation | Queued instruction re-dispatch after runahead |
US10108424B2 (en) | 2013-03-14 | 2018-10-23 | Nvidia Corporation | Profiling code portions to generate translations |
US9547602B2 (en) | 2013-03-14 | 2017-01-17 | Nvidia Corporation | Translation lookaside buffer entry systems and methods |
US9477575B2 (en) | 2013-06-12 | 2016-10-25 | Nvidia Corporation | Method and system for implementing a multi-threaded API stream replay |
US9582280B2 (en) | 2013-07-18 | 2017-02-28 | Nvidia Corporation | Branching to alternate code based on runahead determination |
US9569385B2 (en) | 2013-09-09 | 2017-02-14 | Nvidia Corporation | Memory transaction ordering |
US10657057B2 (en) * | 2018-04-04 | 2020-05-19 | Nxp B.V. | Secure speculative instruction execution in a data processing system |
GB2598784B (en) * | 2020-09-14 | 2022-11-16 | Advanced Risc Mach Ltd | Draining operation for draining dirty cache lines to persistent memory |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20030014602A1 (en) * | 2001-07-12 | 2003-01-16 | Nec Corporation | Cache memory control method and multi-processor system |
US20030182539A1 (en) * | 2002-03-20 | 2003-09-25 | International Business Machines Corporation | Storing execution results of mispredicted paths in a superscalar computer processor |
US7225299B1 (en) * | 2003-07-16 | 2007-05-29 | Transmeta Corporation | Supporting speculative modification in a data cache |
Family Cites Families (30)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5155831A (en) | 1989-04-24 | 1992-10-13 | International Business Machines Corporation | Data processing system with fast queue store interposed between store-through caches and a main memory |
US5428761A (en) | 1992-03-12 | 1995-06-27 | Digital Equipment Corporation | System for achieving atomic non-sequential multi-word operations in shared memory |
US5548735A (en) | 1993-09-15 | 1996-08-20 | International Business Machines Corporation | System and method for asynchronously processing store instructions to I/O space |
US5802574A (en) * | 1993-12-28 | 1998-09-01 | Intel Corporation | Method and apparatus for quickly modifying cache state |
US6006299A (en) | 1994-03-01 | 1999-12-21 | Intel Corporation | Apparatus and method for caching lock conditions in a multi-processor system |
US5634073A (en) | 1994-10-14 | 1997-05-27 | Compaq Computer Corporation | System having a plurality of posting queues associated with different types of write operations for selectively checking one queue based upon type of read operation |
US5901302A (en) | 1995-01-25 | 1999-05-04 | Advanced Micro Devices, Inc. | Superscalar microprocessor having symmetrical, fixed issue positions each configured to execute a particular subset of instructions |
US5838934A (en) | 1995-06-07 | 1998-11-17 | Texas Instruments Incorporated | Host port interface |
US5701432A (en) | 1995-10-13 | 1997-12-23 | Sun Microsystems, Inc. | Multi-threaded processing system having a cache that is commonly accessible to each thread |
US5838943A (en) | 1996-03-26 | 1998-11-17 | Advanced Micro Devices, Inc. | Apparatus for speculatively storing and restoring data to a cache memory |
US5974438A (en) | 1996-12-31 | 1999-10-26 | Compaq Computer Corporation | Scoreboard for cached multi-thread processes |
US6189074B1 (en) | 1997-03-19 | 2001-02-13 | Advanced Micro Devices, Inc. | Mechanism for storing system level attributes in a translation lookaside buffer |
US6658536B1 (en) | 1997-04-14 | 2003-12-02 | International Business Machines Corporation | Cache-coherency protocol with recently read state for extending cache horizontally |
US5930821A (en) | 1997-05-12 | 1999-07-27 | Integrated Device Technology, Inc. | Method and apparatus for shared cache lines in split data/code caches |
US5926645A (en) | 1997-07-22 | 1999-07-20 | International Business Machines Corporation | Method and system for enabling multiple store instruction completions in a processing system |
US6119205A (en) | 1997-12-22 | 2000-09-12 | Sun Microsystems, Inc. | Speculative cache line write backs to avoid hotspots |
US6263407B1 (en) | 1998-02-17 | 2001-07-17 | International Business Machines Corporation | Cache coherency protocol including a hovering (H) state having a precise mode and an imprecise mode |
US6625694B2 (en) | 1998-05-08 | 2003-09-23 | Fujitsu Ltd. | System and method for allocating a directory entry for use in multiprocessor-node data processing systems |
US6526480B1 (en) * | 1998-12-10 | 2003-02-25 | Fujitsu Limited | Cache apparatus and control method allowing speculative processing of data |
US6487639B1 (en) * | 1999-01-19 | 2002-11-26 | International Business Machines Corporation | Data cache miss lookaside buffer and method thereof |
US6460130B1 (en) | 1999-02-19 | 2002-10-01 | Advanced Micro Devices, Inc. | Detecting full conditions in a queue |
US6564301B1 (en) | 1999-07-06 | 2003-05-13 | Arm Limited | Management of caches in a data processing apparatus |
US6738864B2 (en) | 2000-08-21 | 2004-05-18 | Texas Instruments Incorporated | Level 2 cache architecture for multiprocessor with task_ID and resource_ID |
EP1182571B1 (en) | 2000-08-21 | 2011-01-26 | Texas Instruments Incorporated | TLB operations based on shared bit |
EP1182568A3 (en) | 2000-08-21 | 2004-07-21 | Texas Instruments Incorporated | TLB operation based on task-id |
US6725337B1 (en) | 2001-05-16 | 2004-04-20 | Advanced Micro Devices, Inc. | Method and system for speculatively invalidating lines in a cache |
US6877088B2 (en) | 2001-08-08 | 2005-04-05 | Sun Microsystems, Inc. | Methods and apparatus for controlling speculative execution of instructions based on a multiaccess memory condition |
US6775749B1 (en) | 2002-01-29 | 2004-08-10 | Advanced Micro Devices, Inc. | System and method for performing a speculative cache fill |
US6938130B2 (en) | 2003-02-13 | 2005-08-30 | Sun Microsystems, Inc. | Method and apparatus for delaying interfering accesses from other threads during transactional program execution |
US6976110B2 (en) | 2003-12-18 | 2005-12-13 | Freescale Semiconductor, Inc. | Method and apparatus for reducing interrupt latency by dynamic buffer sizing |
- 2003
  - 2003-07-16 US US10/622,028 patent/US7225299B1/en not_active Expired - Lifetime
- 2007
  - 2007-05-29 US US11/807,629 patent/US7873793B1/en not_active Expired - Fee Related
- 2011
  - 2011-01-14 US US13/007,015 patent/US20150149733A1/en not_active Abandoned
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20030014602A1 (en) * | 2001-07-12 | 2003-01-16 | Nec Corporation | Cache memory control method and multi-processor system |
US20030182539A1 (en) * | 2002-03-20 | 2003-09-25 | International Business Machines Corporation | Storing execution results of mispredicted paths in a superscalar computer processor |
US7225299B1 (en) * | 2003-07-16 | 2007-05-29 | Transmeta Corporation | Supporting speculative modification in a data cache |
US7873793B1 (en) * | 2003-07-16 | 2011-01-18 | Guillermo Rozas | Supporting speculative modification in a data cache |
Cited By (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2021510434A (en) * | 2018-01-10 | 2021-04-22 | エイアールエム リミテッド | Speculative cache storage |
US11461243B2 (en) | 2018-01-10 | 2022-10-04 | Arm Limited | Speculative cache storage region |
JP7228592B2 (en) | 2018-01-10 | 2023-02-24 | アーム・リミテッド | speculative cache storage |
GB2572968A (en) * | 2018-04-17 | 2019-10-23 | Advanced Risc Mach Ltd | Tracking speculative data caching |
WO2019202288A1 (en) * | 2018-04-17 | 2019-10-24 | Arm Limited | Tracking speculative data caching |
GB2572968B (en) * | 2018-04-17 | 2020-09-02 | Advanced Risc Mach Ltd | Tracking speculative data caching |
US11397584B2 (en) | 2018-04-17 | 2022-07-26 | Arm Limited | Tracking speculative data caching |
US11119780B2 (en) | 2018-04-30 | 2021-09-14 | Hewlett Packard Enterprise Development Lp | Side cache |
Also Published As
Publication number | Publication date |
---|---|
US7225299B1 (en) | 2007-05-29 |
US7873793B1 (en) | 2011-01-18 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US20150149733A1 (en) | Supporting speculative modification in a data cache | |
RU2212704C2 (en) | Shared cache structure for timing and non-timing commands | |
US6266744B1 (en) | Store to load forwarding using a dependency link file | |
US20180011748A1 (en) | Post-retire scheme for tracking tentative accesses during transactional execution | |
US8321634B2 (en) | System and method for performing memory operations in a computing system | |
US6981104B2 (en) | Method for conducting checkpointing within a writeback cache | |
US6636950B1 (en) | Computer architecture for shared memory access | |
US6704841B2 (en) | Method and apparatus for facilitating speculative stores in a multiprocessor system | |
US8838906B2 (en) | Evict on write, a management strategy for a prefetch unit and/or first level cache in a multiprocessor system with speculative execution | |
US6473837B1 (en) | Snoop resynchronization mechanism to preserve read ordering | |
US7698504B2 (en) | Cache line marking with shared timestamps | |
US6721855B2 (en) | Using an L2 directory to facilitate speculative loads in a multiprocessor system | |
US7461205B2 (en) | Performing useful computations while waiting for a line in a system with a software implemented cache | |
TWI533201B (en) | Cache control to reduce transaction roll back | |
US8943273B1 (en) | Method and apparatus for improving cache efficiency | |
US6718839B2 (en) | Method and apparatus for facilitating speculative loads in a multiprocessor system | |
US6473832B1 (en) | Load/store unit having pre-cache and post-cache queues for low latency load memory operations | |
US8601240B2 (en) | Selectively deferring load instructions after encountering a store instruction with an unknown destination address during speculative execution |
US8051247B1 (en) | Trace based deallocation of entries in a versioning cache circuit | |
US6427193B1 (en) | Deadlock avoidance using exponential backoff | |
US6415360B1 (en) | Minimizing self-modifying code checks for uncacheable memory types | |
US6766427B1 (en) | Method and apparatus for loading data from memory to a cache |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment |
Owner name: INTELLECTUAL VENTURES HOLDING 81 LLC, NEVADA Free format text: MERGER;ASSIGNOR:INTELLECTUAL VENTURE FUNDING LLC;REEL/FRAME:036711/0160 Effective date: 20150827 |
|
AS | Assignment |
Owner name: INTELLECTUAL VENTURES HOLDING 81 LLC, NEVADA Free format text: CORRECTIVE ASSIGNMENT TO CORRECT THE ASSIGNOR'S NAME PREVIOUSLY RECORDED AT REEL: 036711 FRAME: 0160. ASSIGNOR(S) HEREBY CONFIRMS THE MERGER;ASSIGNOR:INTELLECTUAL VENTURES FUNDING LLC;REEL/FRAME:036797/0356 Effective date: 20150827 |
|
STCB | Information on status: application discontinuation |
Free format text: ABANDONED -- AFTER EXAMINER'S ANSWER OR BOARD OF APPEALS DECISION |