WO2014206234A1 - Caching method and device - Google Patents

Caching method and device

Info

Publication number
WO2014206234A1
Authority
WO
WIPO (PCT)
Prior art keywords
memory page
allocated
replaced
cache
cached
Prior art date
Application number
PCT/CN2014/080174
Other languages
English (en)
Chinese (zh)
Inventor
董建波
张乐乐
李花芳
侯锐
Original Assignee
华为技术有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 华为技术有限公司 filed Critical 华为技术有限公司
Publication of WO2014206234A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F12/00 Accessing, addressing or allocating within memory systems or architectures
    • G06F12/02 Addressing or allocation; Relocation
    • G06F12/08 Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
    • G06F12/12 Replacement control
    • G06F12/121 Replacement control using replacement algorithms
    • G06F12/122 Replacement control using replacement algorithms of the least frequently used [LFU] type, e.g. with individual count value

Definitions

  • the present invention relates to the field of computers, and in particular, to a cache method and apparatus.
  • in order to provide large-capacity memory while effectively reducing memory power consumption, a data center usually builds its memory from phase-change memory (PRAM) and dynamic random access memory (DRAM).
  • PRAM phase-change memory
  • DRAM dynamic random access memory
  • the structure of the memory system constructed by the PRAM and the DRAM is usually a vertical hybrid structure.
  • the DRAM acts as a buffer of the PRAM, and the accessed memory page in the PRAM can be cached into the cache block in the DRAM that is pre-mapped for the memory page.
  • the higher the heat of the memory pages cached in a cache block, the higher the caching efficiency and effectiveness of that cache block.
  • the mapping between DRAM and PRAM usually uses a set-associative method, that is, the cache block in the DRAM to which a memory page is mapped is fixed in advance. Memory pages may therefore become excessively concentrated, for example, a single cache block may correspond to a plurality of memory pages.
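Under the set-associative scheme described above, the cache block a memory page maps to is a fixed function of its page number, so several PRAM pages can contend for one DRAM block. A minimal sketch of this placement (the block count and the modulo rule are illustrative assumptions, not the patent's exact mapping):

```python
# Illustrative set-associative placement: each PRAM page maps to exactly
# one DRAM cache block (set); many pages can share the same block.
NUM_CACHE_BLOCKS = 4      # assumed number of DRAM cache blocks (sets)

def mapped_cache_block(page_number: int) -> int:
    """Return the index of the cache block a page is mapped to."""
    return page_number % NUM_CACHE_BLOCKS

# Pages 3, 7 and 11 all contend for the same cache block:
print({p: mapped_cache_block(p) for p in (3, 7, 11)})  # {3: 3, 7: 3, 11: 3}
```

This contention is exactly the "excessive concentration" the patent addresses: with a fixed mapping, a few hot sets can fill up while others sit idle.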
  • the existing cache scheme is: if the cache block currently mapped by a memory page that needs to be cached has no free cache space, the memory page currently cached in that cache block is replaced with the memory page that needs to be cached.
  • the present invention provides a cache method and apparatus for solving the problem that the cache efficiency of a cache block is reduced due to excessive concentration of memory pages in the existing cache scheme.
  • the present invention provides a cache method, including: determining, according to the heat of each memory page, the memory page to be allocated that currently has the highest heat and is not cached; if there is no free cache space in any cache block, detecting whether a memory page to be replaced is cached in the cache blocks, where the heat of the memory page to be replaced is lower than the heat of the memory page to be allocated; and if so, replacing the memory page to be replaced with the memory page to be allocated.
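The claimed replacement step — evict a cached page only when its heat is below that of the page to be allocated, preferring the lowest-heat candidate — can be sketched roughly as follows; the data structures (a heat dictionary and a per-block page list) are illustrative assumptions:

```python
def choose_and_replace(heat, cached, to_allocate, capacity):
    """If the block is full, replace the lowest-heat cached page whose
    heat is below that of the page to be allocated."""
    if len(cached) < capacity:           # free cache space: just cache it
        cached.append(to_allocate)
        return None
    victim = min(cached, key=lambda p: heat[p])
    if heat[victim] < heat[to_allocate]:
        cached[cached.index(victim)] = to_allocate
        return victim                    # the evicted page
    return None                          # nothing cooler; cache unchanged

heat = {"A": 5, "B": 2, "C": 9}          # per-page heat (access counts)
block = ["A", "B"]                       # the (full) cache block's contents
evicted = choose_and_replace(heat, block, "C", capacity=2)
print(evicted, block)                    # B is the coolest page below C's heat
```

Note the guard `heat[victim] < heat[to_allocate]`: unlike the prior-art scheme, a hot cached page is never displaced by a cooler one.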
  • the detecting whether a memory page to be replaced is cached in the cache blocks includes: detecting whether a memory page to be replaced is cached in the cache block currently mapped by the memory page to be allocated; and the replacing includes: if a memory page to be replaced is cached in the cache block currently mapped by the memory page to be allocated, replacing that memory page to be replaced with the memory page to be allocated.
  • after the detecting, the method further includes: if no memory page to be replaced is cached in the cache block currently mapped by the memory page to be allocated, mapping the memory page to be allocated to another cache block by page migration, and performing again the step of detecting whether a memory page to be replaced is cached in the cache block currently mapped by the memory page to be allocated.
  • the method further includes: performing again the step of determining, according to the heat of each memory page, the currently hottest and uncached memory page to be allocated, until the heat of every currently cached memory page is not lower than the heat of the memory page to be allocated.
  • the replacing the memory page to be replaced with the memory page to be allocated includes: if there are multiple memory pages to be replaced, replacing the memory page with the lowest heat among them with the memory page to be allocated.
  • the method further includes: if the cache block currently mapped by the memory page to be allocated has free cache space, caching the memory page to be allocated into that cache block.
  • the method further includes: if the cache block currently mapped by the memory page to be allocated has no free cache space and another cache block does, mapping the memory page to be allocated, by page migration, to the cache block to which the free cache space belongs, and caching it there.
  • before the determining of the currently hottest and uncached memory page to be allocated, the method further includes: periodically counting and updating the heat of each memory page according to a preset period.
  • the present invention further provides a cache device, including: an obtaining module, configured to determine, according to the heat of each memory page, the memory page to be allocated that currently has the highest heat and is not cached; a detecting module, configured to detect, if there is no free cache space in any cache block, whether a memory page to be replaced is cached in the cache blocks, where the heat of the memory page to be replaced is lower than the heat of the memory page to be allocated; and a first processing module, configured to replace the memory page to be replaced with the memory page to be allocated if a memory page to be replaced is cached in the cache blocks.
  • the detecting module is specifically configured to: if there is no free cache space in any cache block, detect whether a memory page to be replaced is cached in the cache block currently mapped by the memory page to be allocated; and the first processing module is configured to: if a memory page to be replaced is cached in that cache block, replace it with the memory page to be allocated.
  • the first processing module is further configured to: if no memory page to be replaced is cached in the cache block currently mapped by the memory page to be allocated, map the memory page to be allocated to another cache block by page migration, and instruct the detecting module to perform again the step of detecting whether a memory page to be replaced is cached in the cache block currently mapped by the memory page to be allocated.
  • the first processing module is further configured to, after the memory page to be replaced is replaced with the memory page to be allocated, instruct the obtaining module to perform again the step of determining the currently hottest and uncached memory page according to the heat of each memory page, until the heat of every currently cached memory page is not lower than the heat of the memory page to be allocated.
  • the first processing module is specifically configured to: if there are multiple memory pages to be replaced, replace the memory page with the lowest heat among them with the memory page to be allocated.
  • the device further includes: a second processing module, configured to, after the memory page to be allocated is determined according to the heat of each memory page, cache the memory page to be allocated into the cache block it currently maps to, if that cache block has free cache space.
  • the device further includes: a third processing module, configured to, after the memory page to be allocated is determined according to the heat of each memory page, map the memory page to be allocated, by page migration, to the cache block to which free cache space belongs and cache it there, if the cache block currently mapped by the memory page to be allocated has no free cache space and another cache block does.
  • the device further includes: a statistics module, configured to periodically count and update the heat of each memory page according to a preset period.
  • with the cache method and device provided by the present invention, after the hottest uncached memory page to be allocated is determined, if there is no free cache space in any cache block, a cached memory page whose heat is lower than that of the memory page to be allocated is replaced with the memory page to be allocated, so that high-heat memory pages are cached and a reduction of the cache blocks' caching efficiency is avoided.
  • FIG. 1 is a schematic flowchart of a cache method according to Embodiment 1 of the present invention
  • FIG. 2 is a schematic flowchart of another cache method according to Embodiment 2 of the present invention
  • FIG. 3 is a schematic flowchart of still another cache method according to Embodiment 3 of the present invention
  • FIG. 4 is a schematic structural diagram of a cache device according to Embodiment 4 of the present invention
  • FIG. 5 is a schematic structural diagram of another cache device according to Embodiment 5 of the present invention
  • FIG. 1 is a schematic flowchart of a caching method according to Embodiment 1 of the present invention. As shown in FIG. 1, the method includes:
  • the method may further include: periodically counting and updating the heat of each memory page according to a preset period.
  • the heat of a memory page may be the number of reads and/or writes of the memory page within the preset period.
  • the heat of each memory page can be recorded in a heat table.
  • the memory controller can count the read and write activity of each memory page in the system, and establish and maintain the heat table according to these page read/write statistics.
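A heat table driven by per-page read/write counts and refreshed once per period might look like the sketch below; the class shape and the reset-per-period policy are assumptions, not the patent's exact bookkeeping:

```python
from collections import defaultdict

class HeatTable:
    """Tracks per-page heat as reads + writes within the current period."""
    def __init__(self):
        self.heat = defaultdict(int)     # published heat, per page
        self._counts = defaultdict(int)  # accesses seen this period

    def record_access(self, page):       # called on each read or write
        self._counts[page] += 1

    def end_of_period(self):             # timer-driven periodic refresh
        self.heat = dict(self._counts)   # publish and reset the counters
        self._counts = defaultdict(int)

ht = HeatTable()
for page in ["P1", "P1", "P2"]:
    ht.record_access(page)
ht.end_of_period()
print(ht.heat)   # {'P1': 2, 'P2': 1}
```

Resetting the counters each period makes the heat reflect recent access frequency rather than lifetime totals; a real controller might instead decay the counts.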
  • the method may further include: if the cache block currently mapped by the memory page to be allocated has free cache space, caching the memory page to be allocated into that cache block.
  • the method further includes: if the cache block currently mapped by the memory page to be allocated has no free cache space and another cache block does, mapping the memory page to be allocated, by page migration, to the cache block to which the free cache space belongs, and caching it there.
  • in this way, the memory page to be allocated can be cached whenever free cache space exists.
  • the method may include: detecting whether the memory page to be replaced is cached in a cache block currently mapped by the memory page to be allocated;
  • the replacing the memory page to be replaced with the memory page to be allocated may include: if the memory page to be replaced is cached in the cache block currently mapped by the memory page to be allocated, And replacing the memory page to be replaced with the memory page to be allocated.
  • the method may further include: if no memory page to be replaced is cached in the cache block currently mapped by the memory page to be allocated, mapping the memory page to be allocated to another cache block by page migration, and then performing again the step of detecting whether a memory page to be replaced is cached in the cache block currently mapped by the memory page to be allocated.
  • after the memory page to be allocated is mapped to another cache block by page migration, it is detected again whether the cache block it now maps to holds a memory page to be replaced; this improves the reliability of caching while avoiding a reduction of cache-block caching efficiency.
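The migrate-then-retry behaviour — check the currently mapped block first, and if it holds no cooler page, remap the page by migration and repeat the check on another block — can be approximated as follows (the block list and the order in which blocks are tried are illustrative assumptions):

```python
def place_with_migration(heat, blocks, page):
    """Try the page's mapped block first; if it holds no page cooler
    than `page`, migrate the mapping to the next block and re-check."""
    for block in blocks:                 # mapped block first, then others
        victims = [p for p in block if heat[p] < heat[page]]
        if victims:                      # a replaceable (cooler) page exists
            victim = min(victims, key=lambda p: heat[p])
            block[block.index(victim)] = page
            return victim
    return None                          # no block holds a cooler page

heat = {"A": 9, "B": 8, "C": 1, "X": 5}
blocks = [["A", "B"], ["C"]]             # X's mapped block is blocks[0]
victim = place_with_migration(heat, blocks, "X")
print(victim)                            # C: found after migrating past the full hot block
```

Here the first block only holds pages hotter than X, so the mapping "migrates" to the second block, where low-heat C is replaced.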
  • the replacing the memory page to be replaced with the memory page to be allocated may include: if there is one memory page to be replaced, replacing the memory page to be replaced with the to-be-allocated Memory page.
  • the replacing may further include: if there are multiple memory pages to be replaced, replacing the memory page with the lowest heat among them with the memory page to be allocated.
  • selecting the memory page with the lowest heat to be replaced with the memory page to be allocated can effectively improve the caching efficiency of the cache block.
  • with the cache method provided in this embodiment, after the hottest uncached memory page to be allocated is determined, if there is no free cache space in any cache block, a cached memory page whose heat is lower than that of the memory page to be allocated is replaced with the memory page to be allocated, so that high-heat memory pages are cached and a reduction of the cache blocks' caching efficiency is avoided.
  • FIG. 2 is a schematic flowchart of another cache method according to Embodiment 2 of the present invention. As shown in FIG. 2, the cache method according to the first embodiment may further include:
  • the memory page to be allocated that currently has the highest heat and is not cached is determined again, using the method described in Embodiment 1.
  • the determined page is then cached, and the loop continues until no currently uncached memory page is hotter than any currently cached memory page.
  • the specific method for determining whether a memory page has been cached may take multiple forms. For example, after a memory page is cached, a cached identifier may be added to that page's entry in the heat table.
  • the memory page that does not carry the cached identifier is a memory page that is not currently cached, which is not limited in this embodiment.
  • in the caching method provided in this embodiment, after a memory page to be allocated is cached, the hottest uncached memory page is determined again and cached in the same way, so that the heat of the cached memory pages is not lower than that of the uncached ones; this avoids reducing the caching efficiency of individual cache blocks and effectively ensures the overall caching efficiency of the cache blocks.
  • FIG. 3 is a schematic flowchart of still another caching method according to Embodiment 3 of the present invention. As shown in FIG. 3, the method includes:
  • with the cache method provided by this embodiment, after the hottest uncached memory page to be allocated is determined, if there is no free cache space in any cache block, the cached memory page whose heat is lowest and lower than that of the memory page to be allocated is replaced with the memory page to be allocated, so that high-heat memory pages are cached and a reduction of the cache blocks' caching efficiency is avoided.
  • the device includes: an obtaining module 41, a detecting module 42 and a first processing module 43;
  • the obtaining module 41 is configured to determine, according to the heat of each memory page, a page to be allocated that has the highest heat and is not cached;
  • the detecting module 42 is configured to detect, if there is no free cache space in any cache block, whether a memory page to be replaced is cached in the cache blocks, where the heat of the memory page to be replaced is lower than the heat of the memory page to be allocated;
  • the first processing module 43 is configured to replace the memory page to be replaced with the memory page to be allocated if the memory page to be replaced is cached in each cache block.
  • the method may further include: periodically counting and updating the heat of each memory page according to a preset period.
  • the device may further include:
  • a second processing module, configured to: after the currently hottest and uncached memory page to be allocated is determined according to the heat of each memory page, cache the memory page to be allocated into the cache block it currently maps to, if that cache block has free cache space.
  • the device may further include:
  • a third processing module, configured to: after the currently hottest and uncached memory page to be allocated is determined according to the heat of each memory page, map the memory page to be allocated, by page migration, to the cache block to which free cache space belongs and cache it there, if the cache block currently mapped by the memory page to be allocated has no free cache space and another cache block does.
  • in this way, the memory page to be allocated can be cached whenever any of the current cache blocks has free cache space.
  • the detecting module 42 may be specifically configured to: if there is no free cache space in each of the current cache blocks, detecting whether the memory page to be replaced is cached in the cache block currently mapped by the memory page to be allocated;
  • the first processing module 43 may be specifically configured to: if the memory page to be replaced is cached in the cache block currently mapped by the memory page to be allocated, replace the memory page to be replaced with the to-be-allocated Memory page.
  • when the detecting module 42 detects that the heat of every memory page cached in the cache block currently mapped by the memory page to be allocated is not lower than the heat of the memory page to be allocated, that is, no memory page to be replaced is cached in that cache block, the first processing module 43 is further configured to map the memory page to be allocated to another cache block by page migration, and to instruct the detecting module 42 to perform again the step of detecting whether a memory page to be replaced is cached in the cache block currently mapped by the memory page to be allocated.
  • the reliability of the cache is improved while avoiding the reduction of cache block cache efficiency.
  • the first processing module 43 may be specifically configured to replace the memory page to be replaced with the memory page to be allocated if there is a single memory page to be replaced.
  • the first processing module 43 is further configured to: if there are multiple memory pages to be replaced, replace the memory page with the lowest heat among the memory pages to be replaced with the memory page to be allocated.
  • the cache efficiency of the cache block can be effectively improved by the present embodiment.
  • the first processing module 43 is further configured to: after the memory page to be replaced is replaced with the memory page to be allocated, instruct the obtaining module 41 to perform again the step of determining, according to the heat of each memory page, the currently hottest and uncached memory page to be allocated, until the heat of every currently cached memory page is not lower than the heat of the memory page to be allocated.
  • that is, the obtaining module 41 is instructed to determine again the currently hottest and uncached memory page to be allocated, and the determined page is cached; the loop continues until no currently uncached memory page is hotter than any currently cached memory page.
  • with the cache device provided in this embodiment, after the hottest uncached memory page to be allocated is determined, if there is no free cache space in any cache block, a cached memory page whose heat is lower than that of the memory page to be allocated is replaced with the memory page to be allocated, so that high-heat memory pages are cached and a reduction of the cache blocks' caching efficiency is avoided.
  • FIG. 5 is a schematic structural diagram of another cache device according to Embodiment 5 of the present invention. As shown in FIG. 5, the device includes:
  • the memory 51 is used to store the program.
  • the program can include program code, the program code including computer operating instructions.
  • the memory 51 may include a high speed RAM memory and may also include a non-volatile memory such as at least one disk memory.
  • the processor 52 executes the program stored in the memory 51 and is configured to: determine, according to the heat of each memory page, the memory page to be allocated that is currently hottest and not cached; if there is no free cache space in any cache block, detect whether a memory page to be replaced, whose heat is lower than that of the memory page to be allocated, is cached in the cache blocks; and if so, replace the memory page to be replaced with the memory page to be allocated.
  • the processor 52 may be configured to determine, according to the heat of each memory page, a page to be allocated that is currently hottest and not cached; if there is no free cache space in each cache block, detecting the to-be-allocated memory Whether the memory page to be replaced is cached in the cache block currently mapped by the page; if yes, replacing the memory page to be replaced with the memory page to be allocated.
  • the processor 52 is further configured to: if no memory page to be replaced is cached in the cache block currently mapped by the memory page to be allocated, map the memory page to be allocated to another cache block by page migration, and perform again the step of detecting whether a memory page to be replaced is cached in the cache block currently mapped by the memory page to be allocated.
  • the processor 52 may be a central processing unit (CPU), an application-specific integrated circuit (ASIC), or one or more integrated circuits configured to implement the embodiments of the present invention.
  • CPU central processing unit
  • ASIC application specific integrated circuit
  • the processor 52 is further configured to: after replacing the memory page to be replaced with the memory page to be allocated, return to performing the step of determining, according to the heat of each memory page, the currently hottest and uncached memory page to be allocated, until the heat of every currently cached memory page is not lower than the heat of the memory page to be allocated.
  • the processor 52 is configured to: if there are multiple memory pages to be replaced, replace the memory page with the lowest heat among the memory pages to be replaced with the memory page to be allocated.
  • the processor 52 is further configured to: if the cache block currently mapped by the memory page to be allocated has a free cache space, the to-be-allocated The memory page is cached to the cache block currently mapped by the memory page to be allocated; or, if there is no free cache space in the cache block currently mapped by the memory page to be allocated, and there is free cache space in other cache blocks, The page migration maps the to-be-allocated memory page to the cache block to which the free cache space belongs, and caches.
  • the processor 52 is further configured to periodically count and update the heat of each memory page according to a preset period.
  • the device may further include: a communication interface 53 configured to acquire the heat of each memory page.
  • the bus may be an Industry Standard Architecture (ISA) bus, a Peripheral Component Interconnect (PCI) bus, or an Extended Industry Standard Architecture (EISA) bus.
  • ISA Industry Standard Architecture
  • PCI Peripheral Component Interconnect
  • EISA extended industry standard architecture
  • the bus can be divided into an address bus, a data bus, a control bus, and the like. For ease of representation, only one thick line is shown in Figure 5, but it does not mean that there is only one bus or one type of bus.
  • with the cache device provided in this embodiment, after the hottest uncached memory page to be allocated is determined, if there is no free cache space in any cache block, a cached memory page whose heat is lower than that of the memory page to be allocated is replaced with the memory page to be allocated, so that high-heat memory pages are cached and a reduction of the cache blocks' caching efficiency is avoided.
  • the aforementioned program can be stored in a computer readable storage medium.
  • the program, when executed, performs the steps of the above method embodiments; the foregoing storage medium includes any medium that can store program code, such as a ROM, a RAM, a magnetic disk, or an optical disc.
  • the invention is not limited thereto; although the invention has been described in detail with reference to the foregoing embodiments, those skilled in the art should understand that the technical solutions described in the foregoing embodiments may still be modified, or some or all of their technical features may be equivalently replaced; such modifications or replacements do not depart from the essence of the technical solutions of the embodiments of the present invention.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Memory System Of A Hierarchy Structure (AREA)

Abstract

The invention relates to a caching method and device. The method comprises the following steps: according to the access count of each memory page, determining a memory page to be allocated that has the highest access count and is not cached; if there is currently no free cache space in any cache block, detecting whether a memory page to be replaced is cached in the cache blocks, the access count of the memory page to be replaced being lower than that of the memory page to be allocated; and if so, replacing the memory page to be replaced with the memory page to be allocated.
PCT/CN2014/080174 2013-06-25 2014-06-18 Caching method and device WO2014206234A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201310257056.0A CN104252421A (zh) 2013-06-25 2013-06-25 Caching method and device
CN201310257056.0 2013-06-25

Publications (1)

Publication Number Publication Date
WO2014206234A1 true WO2014206234A1 (fr) 2014-12-31

Family

ID=52141039

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2014/080174 WO2014206234A1 (fr) 2013-06-25 2014-06-18 Caching method and device

Country Status (2)

Country Link
CN (1) CN104252421A (fr)
WO (1) WO2014206234A1 (fr)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104834609B (zh) * 2015-05-31 2017-12-22 上海交通大学 Multi-level caching method based on historical promotion/demotion frequency
WO2022021158A1 (fr) * 2020-07-29 2022-02-03 华为技术有限公司 Cache memory system, method and chip

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6397298B1 (en) * 1999-07-30 2002-05-28 International Business Machines Corporation Cache memory having a programmable cache replacement scheme
CN102063386A (zh) * 2010-12-17 2011-05-18 曙光信息产业(北京)有限公司 Cache management method for a single-carrier multi-target cache system
CN102253901A (zh) * 2011-07-13 2011-11-23 清华大学 Read/write-differentiated data storage replacement method based on phase-change memory
CN102521161A (zh) * 2011-11-21 2012-06-27 华为技术有限公司 Data caching method, device and server
CN103076992A (zh) * 2012-12-27 2013-05-01 杭州华为数字技术有限公司 Memory data buffering method and device

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR100787856B1 (ko) * 2006-11-29 2007-12-27 한양대학교 산학협력단 Page replacement method for a flash memory storage device
CN101727403A (zh) * 2008-10-15 2010-06-09 深圳市朗科科技股份有限公司 Data storage system, device and method
CN102156753B (zh) * 2011-04-29 2012-11-14 中国人民解放军国防科学技术大学 Data page caching method for a solid-state-drive file system
CN103019955B (zh) * 2011-09-28 2016-06-08 中国科学院上海微系统与信息技术研究所 Memory management method based on PCRAM main-memory applications

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6397298B1 (en) * 1999-07-30 2002-05-28 International Business Machines Corporation Cache memory having a programmable cache replacement scheme
CN102063386A (zh) * 2010-12-17 2011-05-18 曙光信息产业(北京)有限公司 Cache management method for a single-carrier multi-target cache system
CN102253901A (zh) * 2011-07-13 2011-11-23 清华大学 Read/write-differentiated data storage replacement method based on phase-change memory
CN102521161A (zh) * 2011-11-21 2012-06-27 华为技术有限公司 Data caching method, device and server
CN103076992A (zh) * 2012-12-27 2013-05-01 杭州华为数字技术有限公司 Memory data buffering method and device

Also Published As

Publication number Publication date
CN104252421A (zh) 2014-12-31

Similar Documents

Publication Publication Date Title
JP6431536B2 (ja) Last-level cache system and corresponding method
US9792220B2 (en) Microcontroller for memory management unit
US10593380B1 (en) Performance monitoring for storage-class memory
US10133677B2 (en) Opportunistic migration of memory pages in a unified virtual memory system
US10445243B2 (en) Fault buffer for resolving page faults in unified virtual memory system
WO2014190695A1 (fr) Memory system, memory access request processing method, and computer system
CN105183662B (zh) Distributed shared on-chip memory architecture without a cache coherence protocol
WO2011107046A2 (fr) Memory access monitoring device and method
WO2015010646A1 (fr) Hybrid memory data access method, module, processor and terminal device
US10114758B2 (en) Techniques for supporting for demand paging
US9639474B2 (en) Migration of peer-mapped memory pages
US11435952B2 (en) Memory system and control method controlling nonvolatile memory in accordance with command issued by processor
EP3534265A1 (fr) Memory access technique
JP2015035010A (ja) Memory system and information processing apparatus
CN102521179A (zh) Apparatus and method for implementing a DMA read operation
US10705977B2 (en) Method of dirty cache line eviction
US10762137B1 (en) Page table search engine
TWI696949B Direct memory access method, apparatus, dedicated computing chip and heterogeneous computing system
US20170300255A1 (en) Method and Apparatus for Detecting Transaction Conflict and Computer System
EP4060505A1 Near-data acceleration proximity techniques for a multi-core architecture
US20170199819A1 (en) Cache Directory Processing Method for Multi-Core Processor System, and Directory Controller
WO2014206234A1 (fr) Caching method and device
US10754789B1 (en) Address translation for storage class memory in a system that includes virtual machines
WO2016041156A1 (fr) CPU scheduling method and apparatus
US20150154107A1 (en) Non-volatile memory sector rotation

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 14817273

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 14817273

Country of ref document: EP

Kind code of ref document: A1