WO2014206234A1 - Caching method and apparatus - Google Patents

Caching method and apparatus

Info

Publication number
WO2014206234A1
WO2014206234A1 PCT/CN2014/080174 CN2014080174W
Authority
WO
WIPO (PCT)
Prior art keywords
memory page
allocated
replaced
cache
cached
Prior art date
Application number
PCT/CN2014/080174
Other languages
English (en)
French (fr)
Inventor
董建波
张乐乐
李花芳
侯锐
Original Assignee
华为技术有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 华为技术有限公司 filed Critical 华为技术有限公司
Publication of WO2014206234A1 publication Critical patent/WO2014206234A1/zh

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F12/00Accessing, addressing or allocating within memory systems or architectures
    • G06F12/02Addressing or allocation; Relocation
    • G06F12/08Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
    • G06F12/12Replacement control
    • G06F12/121Replacement control using replacement algorithms
    • G06F12/122Replacement control using replacement algorithms of the least frequently used [LFU] type, e.g. with individual count value

Definitions

  • the present invention relates to the field of computers, and in particular, to a cache method and apparatus.
  • In order to effectively reduce memory power consumption while providing large-capacity memory, data centers usually use phase-change memory (PRAM) and dynamic random access memory (DRAM) together to build the memory system.
  • PRAM phase-change memory
  • DRAM dynamic random access memory
  • the structure of the memory system constructed by the PRAM and the DRAM is usually a vertical hybrid structure.
  • In the vertical hybrid structure, the DRAM acts as a cache for the PRAM, and a memory page accessed in the PRAM can be cached into the cache block in the DRAM that is pre-mapped for that memory page.
  • For a given cache block, the higher the heat of the memory pages cached in it, the higher the caching efficiency and effect of that cache block.
  • The mapping between DRAM and PRAM is usually set-associative, that is, the cache block to which a memory page maps in the DRAM is fixed. Therefore, memory pages may be distributed too densely, for example, a single cache block may correspond to multiple memory pages.
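As a minimal illustration of the fixed mapping described above, the sketch below shows how a PRAM page number could be mapped to a DRAM cache block. The block count and the modulo rule are assumptions made for illustration only; they are not taken from the patent.

```python
# Illustrative only: a fixed page-to-block mapping, as assumed in the scheme above.
# NUM_CACHE_BLOCKS and the modulo rule are hypothetical choices, not from the patent.
NUM_CACHE_BLOCKS = 1024

def mapped_block(page_number: int) -> int:
    """Return the DRAM cache block a PRAM page maps to; many pages may share one block."""
    return page_number % NUM_CACHE_BLOCKS
```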
  • For this case, the existing caching scheme is: if the cache block currently mapped by a memory page that needs to be cached has no free cache space, the memory page currently cached in that cache block is replaced with the memory page that needs to be cached. However, the replaced memory page may actually be hotter than the newly cached memory page, which reduces the caching efficiency of that cache block.
  • The present invention provides a caching method and apparatus for solving the problem in the existing caching scheme that the caching efficiency of a cache block is reduced because memory pages are distributed too densely.
  • In a first aspect, the present invention provides a caching method, including: determining, according to the heat of each memory page, the memory page to be allocated that currently has the highest heat and is not cached; if there is no free cache space in the cache blocks, detecting whether a memory page to be replaced is cached in the cache blocks, where the heat of the memory page to be replaced is lower than the heat of the memory page to be allocated; and if so, replacing the memory page to be replaced with the memory page to be allocated.
  • In a first implementation of the first aspect, the detecting whether a memory page to be replaced is cached in the cache blocks includes: detecting whether the memory page to be replaced is cached in the cache block currently mapped by the memory page to be allocated; and the replacing the memory page to be replaced with the memory page to be allocated includes: if the memory page to be replaced is cached in the cache block currently mapped by the memory page to be allocated, replacing the memory page to be replaced with the memory page to be allocated.
  • In a second implementation, after the detecting whether a memory page to be replaced is cached in the cache block currently mapped by the memory page to be allocated, the method further includes: if the memory page to be replaced is not cached in the cache block currently mapped by the memory page to be allocated, mapping the memory page to be allocated to another cache block by page migration, and performing again the step of detecting whether a memory page to be replaced is cached in the cache block currently mapped by the memory page to be allocated.
  • In a third implementation, after the replacing, the method further includes: performing again the step of determining, according to the heat of each memory page, the memory page to be allocated that currently has the highest heat and is not cached, until the heat of every currently cached memory page is not lower than the heat of the memory page to be allocated.
  • In a fourth implementation, the replacing the memory page to be replaced with the memory page to be allocated includes: if there are multiple memory pages to be replaced, replacing the memory page with the lowest heat among the memory pages to be replaced with the memory page to be allocated.
  • In a fifth implementation, the method further includes: if the cache block currently mapped by the memory page to be allocated has free cache space, caching the memory page to be allocated into the cache block currently mapped by the memory page to be allocated.
  • In a sixth implementation, the method further includes: if the cache block currently mapped by the memory page to be allocated has no free cache space and another cache block has free cache space, mapping the memory page to be allocated to the cache block to which the free cache space belongs by page migration, and caching it there.
  • In a seventh implementation, before the determining the memory page to be allocated that currently has the highest heat and is not cached, the method further includes: periodically counting and updating the heat of each memory page according to a preset period.
  • In a second aspect, the present invention provides a caching apparatus, including: an obtaining module, configured to determine, according to the heat of each memory page, the memory page to be allocated that currently has the highest heat and is not cached; a detection module, configured to: if there is no free cache space in the cache blocks, detect whether a memory page to be replaced is cached in the cache blocks, where the heat of the memory page to be replaced is lower than the heat of the memory page to be allocated; and a first processing module, configured to: if a memory page to be replaced is cached in the cache blocks, replace the memory page to be replaced with the memory page to be allocated.
  • In a first implementation of the second aspect, the detection module is specifically configured to: if there is no free cache space in the cache blocks, detect whether the memory page to be replaced is cached in the cache block currently mapped by the memory page to be allocated; and the first processing module is specifically configured to: if the memory page to be replaced is cached in the cache block currently mapped by the memory page to be allocated, replace the memory page to be replaced with the memory page to be allocated.
  • In a second implementation, the first processing module is further configured to: if the memory page to be replaced is not cached in the cache block currently mapped by the memory page to be allocated, map the memory page to be allocated to another cache block by page migration, and instruct the detection module to perform again the step of detecting whether a memory page to be replaced is cached in the cache block currently mapped by the memory page to be allocated.
  • In a third implementation, the first processing module is further configured to: after the memory page to be replaced is replaced with the memory page to be allocated, instruct the obtaining module to perform the step of determining, according to the heat of each memory page, the memory page to be allocated that currently has the highest heat and is not cached, until the heat of every currently cached memory page is not lower than the heat of the memory page to be allocated.
  • In a fourth implementation, the first processing module is specifically configured to: if there are multiple memory pages to be replaced, replace the memory page with the lowest heat among the memory pages to be replaced with the memory page to be allocated.
  • In a fifth implementation, the apparatus further includes: a second processing module, configured to: after the current memory page to be allocated is determined according to the heat of each memory page, if the cache block currently mapped by the memory page to be allocated has free cache space, cache the memory page to be allocated into the cache block currently mapped by the memory page to be allocated.
  • In a sixth implementation, the apparatus further includes: a third processing module, configured to: after the memory page to be allocated that currently has the highest heat and is not cached is determined according to the heat of each memory page, if the cache block currently mapped by the memory page to be allocated has no free cache space and another cache block has free cache space, map the memory page to be allocated to the cache block to which the free cache space belongs by page migration, and cache it there.
  • In a seventh implementation, the apparatus further includes: a statistics module, configured to periodically count and update the heat of each memory page according to a preset period.
  • In the caching method and apparatus provided by the present invention, after the memory page to be allocated that has the highest heat and is not cached is determined, if there is no free cache space in the cache blocks, a cached memory page to be replaced whose heat is lower than that of the memory page to be allocated is replaced with the memory page to be allocated, so that high-heat memory pages are cached without reducing the caching efficiency of the cache blocks.
  • FIG. 1 is a schematic flowchart of a cache method according to Embodiment 1 of the present invention
  • FIG. 2 is a schematic flowchart of another cache method according to Embodiment 2 of the present invention
  • FIG. 3 is a schematic flowchart of still another cache method according to Embodiment 3 of the present invention
  • FIG. 4 is a schematic structural diagram of a cache device according to Embodiment 4 of the present invention
  • FIG. 5 is a schematic structural diagram of another cache device according to Embodiment 5 of the present invention
  • FIG. 1 is a schematic flowchart of a caching method according to Embodiment 1 of the present invention. As shown in FIG. 1, the method includes:
  • Specifically, before the memory page to be allocated is determined, the method may further include: periodically counting and updating the heat of each memory page according to a preset period.
  • In practical applications, the heat of a memory page may be the number of reads and/or writes of the memory page within the preset period.
  • Specifically, the heat of each memory page can be recorded in a heat table.
  • More specifically, a memory controller can be used to count the page read/write activity of each memory page in the system, and the heat table can be established and maintained according to that activity.
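A minimal sketch of such a heat table is shown below. It assumes that heat is the sum of read and write counts observed in the current period, that the per-period update simply restarts the count, and that a memory-controller-style hook reports each access; the class and method names are invented for illustration and are not part of the patent.

```python
from collections import defaultdict

class HeatTable:
    """Illustrative heat table: heat = reads + writes seen in the current period."""

    def __init__(self):
        self.heat = defaultdict(int)   # page number -> heat in the current period
        self.cached = set()            # pages that currently carry a "cached" flag

    def record_access(self, page: int, is_write: bool = False) -> None:
        # In a real system this would be driven by memory-controller statistics.
        self.heat[page] += 1

    def start_new_period(self) -> None:
        # Called on the preset period to count heat afresh.
        self.heat.clear()

    def hottest_uncached(self):
        """Return the uncached page with the highest heat, or None if none exists."""
        candidates = [p for p in self.heat if p not in self.cached]
        return max(candidates, key=lambda p: self.heat[p], default=None)
```

A periodic timer would call `start_new_period`, and the caching logic described below would repeatedly ask for `hottest_uncached` and set the cached flag once a page is placed.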
  • Optionally, after the memory page to be allocated is determined, if there is free cache space in the current cache blocks, the method may further include:
  • if the cache block currently mapped by the memory page to be allocated has free cache space, caching the memory page to be allocated into the cache block currently mapped by the memory page to be allocated.
  • Optionally, after the memory page to be allocated is determined, the method may further include:
  • if the cache block currently mapped by the memory page to be allocated has no free cache space and another cache block has free cache space, mapping the memory page to be allocated to the cache block to which the free cache space belongs by page migration, and caching it there.
  • Through the above two implementations, the memory page to be allocated can be cached whenever free cache space exists.
  • The detecting whether a memory page to be replaced is cached may include: detecting whether the memory page to be replaced is cached in the cache block currently mapped by the memory page to be allocated;
  • correspondingly, the replacing the memory page to be replaced with the memory page to be allocated may include: if the memory page to be replaced is cached in the cache block currently mapped by the memory page to be allocated, replacing the memory page to be replaced with the memory page to be allocated.
  • In the above implementation, another possible scenario after the detection is that every memory page cached in the cache block currently mapped by the memory page to be allocated is at least as hot as the memory page to be allocated; in that case, the method may further include:
  • if the memory page to be replaced is not cached in the cache block currently mapped by the memory page to be allocated, mapping the memory page to be allocated to another cache block by page migration;
  • and performing again the step of detecting whether a memory page to be replaced is cached in the cache block currently mapped by the memory page to be allocated.
  • In this way, when the memory page to be replaced is not cached in the cache block currently mapped by the memory page to be allocated, the memory page to be allocated is mapped to another cache block by page migration, and after the migration it is detected again whether the cache block now mapped by the memory page to be allocated caches a memory page to be replaced, which improves the reliability of caching while avoiding a reduction in the caching efficiency of the cache blocks.
  • The replacing the memory page to be replaced with the memory page to be allocated may include: if there is one memory page to be replaced, replacing that memory page to be replaced with the memory page to be allocated.
  • The replacing the memory page to be replaced with the memory page to be allocated may further include: if there are multiple memory pages to be replaced, replacing the memory page with the lowest heat among the memory pages to be replaced with the memory page to be allocated.
  • When there are multiple memory pages to be replaced, selecting the one with the lowest heat to be replaced by the memory page to be allocated can effectively improve the caching efficiency of the cache blocks.
  • In the caching method provided by this embodiment, after the memory page to be allocated that has the highest heat and is not cached is determined, if there is no free cache space in the cache blocks, a cached memory page to be replaced whose heat is lower than that of the memory page to be allocated is replaced with the memory page to be allocated, so that high-heat memory pages are cached without reducing the caching efficiency of the cache blocks.
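The placement and replacement decision of Embodiment 1 can be summarized roughly as in the sketch below. This is a hedged illustration, not the patented implementation: the `CacheBlock` structure, the per-block capacity, and the choice of the migration target are all assumptions made for this example.

```python
from dataclasses import dataclass, field

@dataclass
class CacheBlock:
    capacity: int = 4                        # assumed number of page slots per block
    pages: set = field(default_factory=set)  # page numbers currently cached here

def place_page(page: int, heat: dict, blocks: list, mapped_index: int) -> bool:
    """Try to cache `page` following the decision order of Embodiment 1.

    Returns True if the page ends up cached, False if it could not be cached
    without remapping it to another block (the retry handled by the full method).
    """
    block = blocks[mapped_index]

    # Free space in the currently mapped cache block: cache the page there.
    if len(block.pages) < block.capacity:
        block.pages.add(page)
        return True

    # No space in the mapped block but another block is free: page migration.
    for other in blocks:
        if other is not block and len(other.pages) < other.capacity:
            other.pages.add(page)
            return True

    # No free space anywhere: look for cached pages colder than this page in the
    # mapped block; if there are several, replace the one with the lowest heat.
    colder = [p for p in block.pages if heat.get(p, 0) < heat[page]]
    if colder:
        victim = min(colder, key=lambda p: heat.get(p, 0))
        block.pages.discard(victim)
        block.pages.add(page)
        return True

    # Every page in the mapped block is at least as hot: the method would remap
    # the page to another block by page migration and repeat the check there;
    # this sketch leaves that retry to the caller.
    return False
```

Under the full method, a `False` return would trigger a page migration of the memory page to be allocated to another cache block and a repeat of the same check, as described in the second implementation above.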
  • FIG. 2 is a schematic flowchart of another cache method according to Embodiment 2 of the present invention. As shown in FIG. 2, the cache method according to the first embodiment may further include:
  • After the caching of a given memory page to be allocated is completed, the memory page to be allocated that currently has the highest heat and is not cached is determined again and is cached using the related method in Embodiment 1, and this loop continues until the heat of every currently cached memory page is not lower than the heat of every currently uncached memory page.
  • There may be several ways to determine whether a given memory page has been cached. For example, after a memory page is cached, a cached flag may be added for that memory page in the heat table.
  • Correspondingly, a memory page that does not carry the cached flag is a memory page that is not currently cached, which is not limited in this embodiment.
  • In the caching method provided by this embodiment, after a memory page to be allocated is cached, the memory page to be allocated that currently has the highest heat and is not cached is determined again and cached using the caching method provided by this embodiment, so that no cached memory page is less hot than any uncached memory page. This avoids a reduction in the caching efficiency of the individual cache blocks and effectively guarantees the overall caching efficiency of the cache blocks.
  • FIG. 3 is a schematic flowchart of still another caching method according to Embodiment 3 of the present invention. As shown in FIG. 3, the method includes:
  • In the caching method provided by this embodiment, after the memory page to be allocated that has the highest heat and is not cached is determined, if there is no free cache space in the cache blocks, the cached memory page to be replaced whose heat is the lowest and lower than that of the memory page to be allocated is replaced with the memory page to be allocated, so that high-heat memory pages are cached without reducing the caching efficiency of the cache blocks.
  • FIG. 4 is a schematic structural diagram of a caching apparatus according to Embodiment 4 of the present invention. As shown in FIG. 4, the apparatus includes: an obtaining module 41, a detection module 42 and a first processing module 43;
  • the obtaining module 41 is configured to determine, according to the heat of each memory page, a page to be allocated that has the highest heat and is not cached;
  • the detection module 42 is configured to: if there is no free cache space in the cache blocks, detect whether a memory page to be replaced is cached in the cache blocks, where the heat of the memory page to be replaced is lower than the heat of the memory page to be allocated;
  • the first processing module 43 is configured to: if a memory page to be replaced is cached in the cache blocks, replace the memory page to be replaced with the memory page to be allocated.
  • the method may further include: periodically counting and updating the heat of each memory page according to a preset period.
  • the device may further include:
  • a second processing module, configured to: after the current memory page to be allocated is determined according to the heat of each memory page, if the cache block currently mapped by the memory page to be allocated has free cache space, cache the memory page to be allocated into the cache block currently mapped by the memory page to be allocated.
  • the device may further include:
  • a third processing module, configured to: after the memory page to be allocated that currently has the highest heat and is not cached is determined according to the heat of each memory page, if the cache block currently mapped by the memory page to be allocated has no free cache space and another cache block has free cache space, map the memory page to be allocated to the cache block to which the free cache space belongs by page migration, and cache it there.
  • Through the above two implementations, the memory page to be allocated can be cached when there is free cache space in the current cache blocks.
  • The detection module 42 may be specifically configured to: if there is no free cache space in the current cache blocks, detect whether the memory page to be replaced is cached in the cache block currently mapped by the memory page to be allocated;
  • correspondingly, the first processing module 43 may be specifically configured to: if the memory page to be replaced is cached in the cache block currently mapped by the memory page to be allocated, replace the memory page to be replaced with the memory page to be allocated.
  • Correspondingly, if the detection module 42 detects that the heat of every memory page cached in the cache block currently mapped by the memory page to be allocated is not lower than the heat of the memory page to be allocated, the first processing module 43 is further configured to: if the memory page to be replaced is not cached in the cache block currently mapped by the memory page to be allocated, map the memory page to be allocated to another cache block by page migration, and instruct the detection module 42 to perform again the step of detecting whether a memory page to be replaced is cached in the cache block currently mapped by the memory page to be allocated.
  • Through the above implementation, the reliability of caching is improved while avoiding a reduction in the caching efficiency of the cache blocks.
  • Further, the first processing module 43 may be specifically configured to: if there is one memory page to be replaced, replace that memory page to be replaced with the memory page to be allocated.
  • the first processing module 43 is further configured to: if there are multiple memory pages to be replaced, replace the memory page with the lowest heat among the memory pages to be replaced with the memory page to be allocated.
  • the cache efficiency of the cache block can be effectively improved by the present embodiment.
  • In any of the above implementations, the first processing module 43 is further configured to: after the memory page to be replaced is replaced with the memory page to be allocated, instruct the obtaining module 41 to perform the step of determining, according to the heat of each memory page, the memory page to be allocated that currently has the highest heat and is not cached, until the heat of every currently cached memory page is not lower than the heat of the memory page to be allocated.
  • Specifically, after the first processing module 43 completes the caching of a given memory page to be allocated, it instructs the obtaining module 41 to determine again the memory page to be allocated that currently has the highest heat and is not cached, the newly determined page is cached, and this loop continues until the heat of every currently cached memory page is not lower than the heat of every currently uncached memory page.
  • In the caching apparatus provided by this embodiment, after the memory page to be allocated that has the highest heat and is not cached is determined, if there is no free cache space in the cache blocks, a cached memory page to be replaced whose heat is lower than that of the memory page to be allocated is replaced with the memory page to be allocated, so that high-heat memory pages are cached without reducing the caching efficiency of the cache blocks.
  • FIG. 5 is a schematic structural diagram of another cache device according to Embodiment 5 of the present invention. As shown in FIG. 5, the device includes:
  • the memory 51 is used to store the program.
  • the program can include program code, the program code including computer operating instructions.
  • the memory 51 may include a high speed RAM memory and may also include a non-volatile memory such as at least one disk memory.
  • The processor 52 executes the program stored in the memory 51 and is configured to: determine, according to the heat of each memory page, the memory page to be allocated that currently has the highest heat and is not cached; if there is no free cache space in the cache blocks, detect whether a memory page to be replaced is cached in the cache blocks, where the heat of the memory page to be replaced is lower than the heat of the memory page to be allocated; and if so, replace the memory page to be replaced with the memory page to be allocated.
  • Specifically, the processor 52 may be configured to: determine, according to the heat of each memory page, the memory page to be allocated that currently has the highest heat and is not cached; if there is no free cache space in the cache blocks, detect whether the memory page to be replaced is cached in the cache block currently mapped by the memory page to be allocated; and if so, replace the memory page to be replaced with the memory page to be allocated.
  • The processor 52 is further configured to: if the memory page to be replaced is not cached in the cache block currently mapped by the memory page to be allocated, map the memory page to be allocated to another cache block by page migration, and perform again the step of detecting whether a memory page to be replaced is cached in the cache block currently mapped by the memory page to be allocated.
  • The processor 52 may be a central processing unit (CPU), or an application-specific integrated circuit (ASIC), or one or more integrated circuits configured to implement the embodiments of the present invention.
  • CPU central processing unit
  • ASIC application specific integrated circuit
  • The processor 52 is further configured to: after replacing the memory page to be replaced with the memory page to be allocated, return to perform the step of determining, according to the heat of each memory page, the memory page to be allocated that currently has the highest heat and is not cached, until the heat of every currently cached memory page is not lower than the heat of the memory page to be allocated.
  • the processor 52 is configured to: if there are multiple memory pages to be replaced, replace the memory page with the lowest heat among the memory pages to be replaced with the memory page to be allocated.
  • The processor 52 is further configured to: if the cache block currently mapped by the memory page to be allocated has free cache space, cache the memory page to be allocated into the cache block currently mapped by the memory page to be allocated; or, if the cache block currently mapped by the memory page to be allocated has no free cache space and another cache block has free cache space, map the memory page to be allocated to the cache block to which the free cache space belongs by page migration, and cache it there.
  • the processor 52 is further configured to periodically count and update the heat of each memory page according to a preset period.
  • The apparatus may further include: a communication interface 53, configured to acquire the heat of each memory page. In a specific implementation, if the memory 51, the processor 52 and the communication interface 53 are implemented independently, they may be connected to each other by a bus and communicate with each other.
  • The bus may be an Industry Standard Architecture (ISA) bus, a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like.
  • ISA Industry Standard Architecture
  • PCI Peripheral Component Interconnect
  • EISA extended industry standard architecture
  • the bus can be divided into an address bus, a data bus, a control bus, and the like. For ease of representation, only one thick line is shown in Figure 5, but it does not mean that there is only one bus or one type of bus.
  • In the caching apparatus provided by this embodiment, after the memory page to be allocated that has the highest heat and is not cached is determined, if there is no free cache space in the cache blocks, a cached memory page to be replaced whose heat is lower than that of the memory page to be allocated is replaced with the memory page to be allocated, so that high-heat memory pages are cached without reducing the caching efficiency of the cache blocks.
  • the aforementioned program can be stored in a computer readable storage medium.
  • When the program is executed, the steps of the above method embodiments are performed; and the foregoing storage medium includes any medium that can store program code, such as a ROM, a RAM, a magnetic disk, or an optical disk.
  • Finally, it should be noted that the foregoing embodiments are merely intended to describe the technical solutions of the present invention rather than to limit them. Although the present invention has been described in detail with reference to the foregoing embodiments, a person of ordinary skill in the art should understand that the technical solutions described in the foregoing embodiments may still be modified, or some or all of the technical features thereof may be equivalently substituted, and such modifications or substitutions do not cause the essence of the corresponding technical solutions to depart from the scope of the technical solutions of the embodiments of the present invention.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Memory System Of A Hierarchy Structure (AREA)

Abstract

A caching method and apparatus. The method includes: determining, according to the heat of each memory page, the memory page to be allocated that currently has the highest heat and is not cached; if there is no free cache space in the current cache blocks, detecting whether a memory page to be replaced is cached in the cache blocks, where the heat of the memory page to be replaced is lower than the heat of the memory page to be allocated; and if so, replacing the memory page to be replaced with the memory page to be allocated.

Description

Caching Method and Apparatus  This application claims priority to Chinese Patent Application No. 201310257056.0, filed with the Chinese Patent Office on June 25, 2013 and entitled "Caching Method and Apparatus", which is incorporated herein by reference in its entirety.
Technical Field
The present invention relates to the field of computers, and in particular, to a caching method and apparatus.
Background
At present, in order to effectively reduce memory power consumption while providing large-capacity memory, data centers usually use phase-change memory (PRAM) and dynamic random access memory (DRAM) together to build the memory system. Specifically, the memory system built from PRAM and DRAM usually has a vertical hybrid structure. In the vertical hybrid structure, the DRAM serves as a cache for the PRAM, and a memory page accessed in the PRAM can be cached into the cache block in the DRAM that is pre-mapped for that memory page. For a given cache block, the higher the heat of the memory pages cached in it, the higher the caching efficiency and effect of that cache block.
Specifically, the mapping between DRAM and PRAM is usually set-associative, that is, the cache block to which a memory page maps in the DRAM is fixed. As a result, memory pages may be distributed too densely, for example, a single cache block may correspond to multiple memory pages. For this case, the existing caching scheme is: if the cache block currently mapped by a memory page that needs to be cached has no free cache space, the memory page currently cached in that cache block is replaced with the memory page that currently needs to be cached.
However, in the above caching scheme, the replaced memory page may actually be hotter than the newly cached memory page, which reduces the caching efficiency of that cache block.
Summary of the Invention
The present invention provides a caching method and apparatus, which are used to solve the problem in the existing caching scheme that the caching efficiency of a cache block is reduced because memory pages are distributed too densely.
In a first aspect, the present invention provides a caching method, including: determining, according to the heat of each memory page, the memory page to be allocated that currently has the highest heat and is not cached; if there is currently no free cache space in the cache blocks, detecting whether a memory page to be replaced is cached in the cache blocks, where the heat of the memory page to be replaced is lower than the heat of the memory page to be allocated; and if so, replacing the memory page to be replaced with the memory page to be allocated.
According to the first aspect, in a first implementation of the first aspect, the detecting whether a memory page to be replaced is cached in the cache blocks includes: detecting whether the memory page to be replaced is cached in the cache block currently mapped by the memory page to be allocated; and the replacing the memory page to be replaced with the memory page to be allocated includes: if the memory page to be replaced is cached in the cache block currently mapped by the memory page to be allocated, replacing the memory page to be replaced with the memory page to be allocated.
According to the first implementation of the first aspect, in a second implementation of the first aspect, after the detecting whether a memory page to be replaced is cached in the cache block currently mapped by the memory page to be allocated, the method further includes: if the memory page to be replaced is not cached in the cache block currently mapped by the memory page to be allocated, mapping the memory page to be allocated to another cache block by page migration; and performing again the step of detecting whether a memory page to be replaced is cached in the cache block currently mapped by the memory page to be allocated.
According to the first aspect or either of the first two implementations of the first aspect, in a third implementation of the first aspect, after the replacing the memory page to be replaced with the memory page to be allocated, the method further includes: returning to perform the step of determining, according to the heat of each memory page, the memory page to be allocated that currently has the highest heat and is not cached, until the heat of every currently cached memory page is not lower than the heat of the memory page to be allocated.
According to the first aspect or any one of the first three implementations of the first aspect, in a fourth implementation of the first aspect, the replacing the memory page to be replaced with the memory page to be allocated includes: if there are multiple memory pages to be replaced, replacing the memory page with the lowest heat among the memory pages to be replaced with the memory page to be allocated.
According to the first aspect or any one of the first four implementations of the first aspect, in a fifth implementation of the first aspect, after the determining, according to the heat of each memory page, the memory page to be allocated that currently has the highest heat and is not cached, the method further includes: if the cache block currently mapped by the memory page to be allocated has free cache space, caching the memory page to be allocated into the cache block currently mapped by the memory page to be allocated.
According to the first aspect or any one of the first four implementations of the first aspect, in a sixth implementation of the first aspect, after the determining, according to the heat of each memory page, the memory page to be allocated that currently has the highest heat and is not cached, the method further includes: if the cache block currently mapped by the memory page to be allocated has no free cache space and another cache block has free cache space, mapping the memory page to be allocated to the cache block to which the free cache space belongs by page migration, and caching it there.
According to the first aspect or any one of the first six implementations of the first aspect, in a seventh implementation of the first aspect, before the determining the memory page to be allocated that currently has the highest heat and is not cached, the method further includes: periodically counting and updating the heat of each memory page according to a preset period.
In a second aspect, the present invention provides a caching apparatus, including: an obtaining module, configured to determine, according to the heat of each memory page, the memory page to be allocated that currently has the highest heat and is not cached; a detection module, configured to: if there is currently no free cache space in the cache blocks, detect whether a memory page to be replaced is cached in the cache blocks, where the heat of the memory page to be replaced is lower than the heat of the memory page to be allocated; and a first processing module, configured to: if a memory page to be replaced is cached in the cache blocks, replace the memory page to be replaced with the memory page to be allocated.
According to the second aspect, in a first implementation of the second aspect, the detection module is specifically configured to: if there is currently no free cache space in the cache blocks, detect whether the memory page to be replaced is cached in the cache block currently mapped by the memory page to be allocated; and the first processing module is specifically configured to: if the memory page to be replaced is cached in the cache block currently mapped by the memory page to be allocated, replace the memory page to be replaced with the memory page to be allocated.
According to the first implementation of the second aspect, in a second implementation of the second aspect, the first processing module is further configured to: if the memory page to be replaced is not cached in the cache block currently mapped by the memory page to be allocated, map the memory page to be allocated to another cache block by page migration, and instruct the detection module to perform again the step of detecting whether a memory page to be replaced is cached in the cache block currently mapped by the memory page to be allocated.
According to the second aspect or either of the first two implementations of the second aspect, in a third implementation of the second aspect, the first processing module is further configured to: after the memory page to be replaced is replaced with the memory page to be allocated, instruct the obtaining module to perform the step of determining, according to the heat of each memory page, the memory page to be allocated that currently has the highest heat and is not cached, until the heat of every currently cached memory page is not lower than the heat of the memory page to be allocated.
According to the second aspect or any one of the first three implementations of the second aspect, in a fourth implementation of the second aspect, the first processing module is specifically configured to: if there are multiple memory pages to be replaced, replace the memory page with the lowest heat among the memory pages to be replaced with the memory page to be allocated.
According to the second aspect or any one of the first four implementations of the second aspect, in a fifth implementation of the second aspect, the apparatus further includes: a second processing module, configured to: after the current memory page to be allocated is determined according to the heat of each memory page, if the cache block currently mapped by the memory page to be allocated has free cache space, cache the memory page to be allocated into the cache block currently mapped by the memory page to be allocated.
According to the second aspect or any one of the first four implementations of the second aspect, in a sixth implementation of the second aspect, the apparatus further includes: a third processing module, configured to: after the memory page to be allocated that currently has the highest heat and is not cached is determined according to the heat of each memory page, if the cache block currently mapped by the memory page to be allocated has no free cache space and another cache block has free cache space, map the memory page to be allocated to the cache block to which the free cache space belongs by page migration, and cache it there.
According to the second aspect or any one of the first six implementations of the second aspect, in a seventh implementation of the second aspect, the apparatus further includes: a statistics module, configured to periodically count and update the heat of each memory page according to a preset period.
In the caching method and apparatus provided by the present invention, after the memory page to be allocated that has the highest heat and is not cached is determined, if there is no free cache space in the cache blocks, a cached memory page to be replaced whose heat is lower than that of the memory page to be allocated is replaced with the memory page to be allocated, so that high-heat memory pages are cached without reducing the caching efficiency of the cache blocks.
Brief Description of the Drawings
To describe the technical solutions in the embodiments of the present invention or in the prior art more clearly, the accompanying drawings required for describing the embodiments or the prior art are briefly introduced below. Apparently, the accompanying drawings in the following description show some embodiments of the present invention, and a person of ordinary skill in the art may still derive other drawings from these accompanying drawings without creative efforts.
FIG. 1 is a schematic flowchart of a caching method according to Embodiment 1 of the present invention; FIG. 2 is a schematic flowchart of another caching method according to Embodiment 2 of the present invention; FIG. 3 is a schematic flowchart of still another caching method according to Embodiment 3 of the present invention; FIG. 4 is a schematic structural diagram of a caching apparatus according to Embodiment 4 of the present invention; FIG. 5 is a schematic structural diagram of another caching apparatus according to Embodiment 5 of the present invention.
Detailed Description of the Embodiments
To make the objectives, technical solutions, and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the accompanying drawings in the embodiments of the present invention. Apparently, the described embodiments are some rather than all of the embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present invention without creative efforts shall fall within the protection scope of the present invention.
FIG. 1 is a schematic flowchart of a caching method according to Embodiment 1 of the present invention. As shown in FIG. 1, the method includes:
101. Determine, according to the heat of each memory page, the memory page to be allocated that currently has the highest heat and is not cached.
Specifically, before 101, the method may further include: periodically counting and updating the heat of each memory page according to a preset period.
In practical applications, the heat of a memory page may be the number of reads and/or writes of the memory page within the preset period. Specifically, the heat of each memory page may be recorded in a heat table. More specifically, a memory controller may be used to count the page read/write activity of each memory page in the system, and the heat table may be established and maintained according to that activity.
102. If there is currently no free cache space in the cache blocks, detect whether a memory page to be replaced is cached in the cache blocks, where the heat of the memory page to be replaced is lower than the heat of the memory page to be allocated.
Optionally, after 101, if there is free cache space in the current cache blocks, the method may further include:
if the cache block currently mapped by the memory page to be allocated has free cache space, caching the memory page to be allocated into the cache block currently mapped by the memory page to be allocated.
Optionally, after 101, the method may further include:
if the cache block currently mapped by the memory page to be allocated has no free cache space and another cache block has free cache space, mapping the memory page to be allocated to the cache block to which the free cache space belongs by page migration, and caching it there.
The specific method of page migration is not described here. Through the above two implementations, the memory page to be allocated can be cached whenever free cache space exists.
103. If so, replace the memory page to be replaced with the memory page to be allocated. The detecting, in 102, whether a memory page to be replaced is cached in the cache blocks may specifically include: detecting whether the memory page to be replaced is cached in the cache block currently mapped by the memory page to be allocated;
correspondingly, the replacing, in 103, the memory page to be replaced with the memory page to be allocated may specifically include: if the memory page to be replaced is cached in the cache block currently mapped by the memory page to be allocated, replacing the memory page to be replaced with the memory page to be allocated.
In the above implementation, after 102, besides 103, another possible scenario is that the heat of every memory page cached in the cache block currently mapped by the memory page to be allocated is not lower than the heat of the memory page to be allocated. In that case, after 102, the method may further include:
if the memory page to be replaced is not cached in the cache block currently mapped by the memory page to be allocated, mapping the memory page to be allocated to another cache block by page migration;
and performing again the step of detecting whether a memory page to be replaced is cached in the cache block currently mapped by the memory page to be allocated.
Specifically, through the above implementation, when the memory page to be replaced is not cached in the cache block currently mapped by the memory page to be allocated, the memory page to be allocated is mapped to another cache block by page migration, and after the migration it is detected again whether the cache block now mapped by the memory page to be allocated caches a memory page to be replaced. This improves the reliability of caching while avoiding a reduction in the caching efficiency of the cache blocks.
Optionally, the replacing the memory page to be replaced with the memory page to be allocated may specifically include: if there is one memory page to be replaced, replacing that memory page to be replaced with the memory page to be allocated.
Optionally, the replacing the memory page to be replaced with the memory page to be allocated may further specifically include: if there are multiple memory pages to be replaced, replacing the memory page with the lowest heat among the memory pages to be replaced with the memory page to be allocated.
In this implementation, when there are multiple memory pages to be replaced, the one with the lowest heat is selected to be replaced by the memory page to be allocated, which can effectively improve the caching efficiency of the cache blocks.
In the caching method provided by this embodiment, after the memory page to be allocated that has the highest heat and is not cached is determined, if there is no free cache space in the cache blocks, a cached memory page to be replaced whose heat is lower than that of the memory page to be allocated is replaced with the memory page to be allocated, so that high-heat memory pages are cached without reducing the caching efficiency of the cache blocks.
FIG. 2 is a schematic flowchart of another caching method according to Embodiment 2 of the present invention. As shown in FIG. 2, based on the caching method described in Embodiment 1, after 103 the method may further include:
201. Return to perform the step of determining, according to the heat of each memory page, the memory page to be allocated that currently has the highest heat and is not cached, until the heat of every currently cached memory page is not lower than the heat of the memory page to be allocated.
Specifically, in this embodiment, after the caching of a given memory page to be allocated is completed, the memory page to be allocated that currently has the highest heat and is not cached is determined again and is cached using the related method in Embodiment 1, and this loop continues until the heat of every currently cached memory page is not lower than the heat of every currently uncached memory page.
More specifically, there may be several ways to determine whether a given memory page has been cached. For example, after a memory page is cached, a cached flag may be added for that memory page in the heat table; correspondingly, a memory page that does not carry the cached flag is a memory page that is not currently cached. This embodiment does not limit this.
In the caching method provided by this embodiment, after a memory page to be allocated is cached, the memory page to be allocated that currently has the highest heat and is not cached is determined again and cached using the caching method provided by this embodiment, so that no cached memory page is less hot than any uncached memory page. This avoids a reduction in the caching efficiency of the individual cache blocks and effectively guarantees the overall caching efficiency of the cache blocks.
FIG. 3 is a schematic flowchart of still another caching method according to Embodiment 3 of the present invention. As shown in FIG. 3, the method includes:
301. Determine, according to the heat of each memory page, the memory page to be allocated that currently has the highest heat and is not cached;
302. Determine whether the heat of every currently cached memory page is not lower than the heat of the memory page to be allocated; if so, end; otherwise perform 303;
303. Detect whether there is free cache space in the current cache blocks; if so, perform 304, otherwise perform 305;
304. Cache the memory page to be allocated into the free cache space;
305. Detect whether a memory page to be replaced is cached in the cache block currently mapped by the memory page to be allocated; if so, perform 306, otherwise perform 307;
306. If there are multiple memory pages to be replaced, replace the memory page with the lowest heat among the memory pages to be replaced with the memory page to be allocated, and return to 301; 307. Map the memory page to be allocated to another cache block by page migration, and return to 305.
In the caching method provided by this embodiment, after the memory page to be allocated that has the highest heat and is not cached is determined, if there is no free cache space in the cache blocks, the cached memory page to be replaced whose heat is the lowest and lower than that of the memory page to be allocated is replaced with the memory page to be allocated, so that high-heat memory pages are cached without reducing the caching efficiency of the cache blocks.
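To make the flow of FIG. 3 concrete, the following self-contained sketch walks through steps 301 to 307 above. It is an illustration under stated assumptions, not the patented code: the data structures, the per-block capacity, and the round-robin choice of the migration target are choices made for this example only.

```python
def run_caching(heat, cached, blocks, mapped_of, capacity=4):
    """Illustrative driver for steps 301-307 of FIG. 3 (not the patented code).

    heat      -- dict: page number -> heat counted in the current period
    cached    -- set of page numbers that currently carry the "cached" flag
    blocks    -- list of sets; each set holds the pages cached in one cache block
    mapped_of -- function: page number -> index of its currently mapped cache block
    capacity  -- assumed number of page slots per cache block
    """
    while True:
        # 301: determine the hottest memory page that is not cached yet.
        uncached = [p for p in heat if p not in cached]
        if not uncached:
            return
        page = max(uncached, key=lambda p: heat[p])

        # 302: end once every cached page is at least as hot as this page.
        if cached and min(heat.get(p, 0) for p in cached) >= heat[page]:
            return

        bidx = mapped_of(page)
        while True:
            # 303/304: if there is free cache space (the mapped block first,
            # otherwise any other block, via page migration), cache the page.
            if len(blocks[bidx]) < capacity:
                free_idx = bidx
            else:
                free_idx = next((i for i, b in enumerate(blocks) if len(b) < capacity), None)
            if free_idx is not None:
                blocks[free_idx].add(page)
                cached.add(page)
                break

            # 305/306: if the mapped block holds pages colder than this page,
            # replace the coldest of them with this page, then return to 301.
            block = blocks[bidx]
            colder = [p for p in block if heat.get(p, 0) < heat[page]]
            if colder:
                victim = min(colder, key=lambda p: heat.get(p, 0))
                block.discard(victim)
                cached.discard(victim)
                block.add(page)
                cached.add(page)
                break

            # 307: otherwise remap the page to another cache block by page
            # migration (here simply the next block) and repeat the check at 305.
            bidx = (bidx + 1) % len(blocks)
```

When the loop ends, the condition of step 302 holds: every cached page is at least as hot as every page left uncached, which is the property Embodiments 2 and 3 aim for.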
FIG. 4 is a schematic structural diagram of a caching apparatus according to Embodiment 4 of the present invention. As shown in FIG. 4, the apparatus includes: an obtaining module 41, a detection module 42 and a first processing module 43, where
the obtaining module 41 is configured to determine, according to the heat of each memory page, the memory page to be allocated that currently has the highest heat and is not cached;
the detection module 42 is configured to: if there is currently no free cache space in the cache blocks, detect whether a memory page to be replaced is cached in the cache blocks, where the heat of the memory page to be replaced is lower than the heat of the memory page to be allocated; and
the first processing module 43 is configured to: if a memory page to be replaced is cached in the cache blocks, replace the memory page to be replaced with the memory page to be allocated.
Specifically, before 101, the method may further include: periodically counting and updating the heat of each memory page according to a preset period.
More specifically, after the memory page to be allocated is determined, if there is free cache space in the current cache blocks, the apparatus may further include:
a second processing module, configured to: after the current memory page to be allocated is determined according to the heat of each memory page, if the cache block currently mapped by the memory page to be allocated has free cache space, cache the memory page to be allocated into the cache block currently mapped by the memory page to be allocated.
Optionally, the apparatus may further include:
a third processing module, configured to: after the memory page to be allocated that currently has the highest heat and is not cached is determined according to the heat of each memory page, if the cache block currently mapped by the memory page to be allocated has no free cache space and another cache block has free cache space, map the memory page to be allocated to the cache block to which the free cache space belongs by page migration, and cache it there.
Through the above two implementations, the memory page to be allocated can be cached when there is free cache space in the current cache blocks.
The detection module 42 may be specifically configured to: if there is currently no free cache space in the cache blocks, detect whether the memory page to be replaced is cached in the cache block currently mapped by the memory page to be allocated;
correspondingly, the first processing module 43 may be specifically configured to: if the memory page to be replaced is cached in the cache block currently mapped by the memory page to be allocated, replace the memory page to be replaced with the memory page to be allocated.
Correspondingly, in the above implementation, if the detection module 42 detects that the heat of every memory page cached in the cache block currently mapped by the memory page to be allocated is not lower than the heat of the memory page to be allocated, the first processing module 43 is further configured to: if the memory page to be replaced is not cached in the cache block currently mapped by the memory page to be allocated, map the memory page to be allocated to another cache block by page migration, and instruct the detection module 42 to perform again the step of detecting whether a memory page to be replaced is cached in the cache block currently mapped by the memory page to be allocated.
Through the above implementation, the reliability of caching is improved while avoiding a reduction in the caching efficiency of the cache blocks.
Further, the first processing module 43 may be specifically configured to: if there is one memory page to be replaced, replace that memory page to be replaced with the memory page to be allocated.
The first processing module 43 may also be specifically configured to: if there are multiple memory pages to be replaced, replace the memory page with the lowest heat among the memory pages to be replaced with the memory page to be allocated. This implementation can effectively improve the caching efficiency of the cache blocks.
Optionally, in any of the above implementations, the first processing module 43 is further configured to: after the memory page to be replaced is replaced with the memory page to be allocated, instruct the obtaining module 41 to perform the step of determining, according to the heat of each memory page, the memory page to be allocated that currently has the highest heat and is not cached, until the heat of every currently cached memory page is not lower than the heat of the memory page to be allocated.
Specifically, in this embodiment, after the first processing module 43 completes the caching of a given memory page to be allocated, it instructs the obtaining module 41 to determine again the memory page to be allocated that currently has the highest heat and is not cached and to cache the newly determined page, and this loop continues until the heat of every currently cached memory page is not lower than the heat of every currently uncached memory page.
In the caching apparatus provided by this embodiment, after the memory page to be allocated that has the highest heat and is not cached is determined, if there is no free cache space in the cache blocks, a cached memory page to be replaced whose heat is lower than that of the memory page to be allocated is replaced with the memory page to be allocated, so that high-heat memory pages are cached without reducing the caching efficiency of the cache blocks.
FIG. 5 is a schematic structural diagram of another caching apparatus according to Embodiment 5 of the present invention. As shown in FIG. 5, the apparatus includes:
a memory 51, configured to store a program. Specifically, the program may include program code, and the program code includes computer operation instructions. The memory 51 may include a high-speed RAM memory, and may also include a non-volatile memory, for example, at least one disk memory; and
a processor 52, which executes the program stored in the memory 51 and is configured to: determine, according to the heat of each memory page, the memory page to be allocated that currently has the highest heat and is not cached; if there is currently no free cache space in the cache blocks, detect whether a memory page to be replaced is cached in the cache blocks, where the heat of the memory page to be replaced is lower than the heat of the memory page to be allocated; and if so, replace the memory page to be replaced with the memory page to be allocated.
Specifically, the processor 52 may be configured to: determine, according to the heat of each memory page, the memory page to be allocated that currently has the highest heat and is not cached; if there is currently no free cache space in the cache blocks, detect whether the memory page to be replaced is cached in the cache block currently mapped by the memory page to be allocated; and if so, replace the memory page to be replaced with the memory page to be allocated.
Optionally, based on the program stored in the memory 51, in the above implementation, the processor 52 is further configured to: if the memory page to be replaced is not cached in the cache block currently mapped by the memory page to be allocated, map the memory page to be allocated to another cache block by page migration, and perform again the step of detecting whether a memory page to be replaced is cached in the cache block currently mapped by the memory page to be allocated.
The processor 52 may be a central processing unit (CPU), or an application-specific integrated circuit (ASIC), or one or more integrated circuits configured to implement the embodiments of the present invention.
Optionally, based on the program stored in the memory 51, in any of the above implementations, the processor 52 is further configured to: after replacing the memory page to be replaced with the memory page to be allocated, return to perform the step of determining, according to the heat of each memory page, the memory page to be allocated that currently has the highest heat and is not cached, until the heat of every currently cached memory page is not lower than the heat of the memory page to be allocated.
Optionally, the processor 52 is specifically configured to: if there are multiple memory pages to be replaced, replace the memory page with the lowest heat among the memory pages to be replaced with the memory page to be allocated.
Optionally, based on the program stored in the memory 51, in any of the above implementations, the processor 52 is further configured to: if the cache block currently mapped by the memory page to be allocated has free cache space, cache the memory page to be allocated into the cache block currently mapped by the memory page to be allocated; or, if the cache block currently mapped by the memory page to be allocated has no free cache space and another cache block has free cache space, map the memory page to be allocated to the cache block to which the free cache space belongs by page migration, and cache it there.
Through this implementation, the memory page to be allocated can be cached whenever a cache block has free cache space.
Specifically, based on the program stored in the memory 51, in any of the above implementations, the processor 52 is further configured to periodically count and update the heat of each memory page according to a preset period.
Optionally, the apparatus may further include: a communication interface 53, configured to acquire the heat of each memory page. In a specific implementation, if the memory 51, the processor 52 and the communication interface 53 are implemented independently, the memory 51, the processor 52 and the communication interface 53 may be connected to each other by a bus and communicate with each other. The bus may be an Industry Standard Architecture (ISA) bus, a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The bus may be divided into an address bus, a data bus, a control bus, and so on. For ease of representation, only one thick line is used in FIG. 5, but this does not mean that there is only one bus or only one type of bus.
In the caching apparatus provided by this embodiment, after the memory page to be allocated that has the highest heat and is not cached is determined, if there is no free cache space in the cache blocks, a cached memory page to be replaced whose heat is lower than that of the memory page to be allocated is replaced with the memory page to be allocated, so that high-heat memory pages are cached without reducing the caching efficiency of the cache blocks.
It may be clearly understood by a person skilled in the art that, for convenience and brevity of description, for the specific working process of the apparatus described above, reference may be made to the corresponding process in the foregoing method embodiments, and details are not described herein again.
A person of ordinary skill in the art may understand that all or some of the steps of the foregoing method embodiments may be implemented by a program instructing relevant hardware. The foregoing program may be stored in a computer-readable storage medium. When the program is executed, the steps of the foregoing method embodiments are performed; and the foregoing storage medium includes any medium that can store program code, such as a ROM, a RAM, a magnetic disk, or an optical disc. Finally, it should be noted that the foregoing embodiments are merely intended to describe the technical solutions of the present invention rather than to limit them. Although the present invention has been described in detail with reference to the foregoing embodiments, a person of ordinary skill in the art should understand that the technical solutions described in the foregoing embodiments may still be modified, or some or all of the technical features thereof may be equivalently substituted, and such modifications or substitutions do not cause the essence of the corresponding technical solutions to depart from the scope of the technical solutions of the embodiments of the present invention.

Claims

Claims
1. A caching method, comprising:
determining, according to the heat of each memory page, the memory page to be allocated that currently has the highest heat and is not cached;
if there is currently no free cache space in the cache blocks, detecting whether a memory page to be replaced is cached in the cache blocks, wherein the heat of the memory page to be replaced is lower than the heat of the memory page to be allocated; and
if so, replacing the memory page to be replaced with the memory page to be allocated.
2. The method according to claim 1, wherein the detecting whether a memory page to be replaced is cached in the cache blocks comprises:
detecting whether the memory page to be replaced is cached in the cache block currently mapped by the memory page to be allocated; and
the replacing the memory page to be replaced with the memory page to be allocated comprises: if the memory page to be replaced is cached in the cache block currently mapped by the memory page to be allocated, replacing the memory page to be replaced with the memory page to be allocated.
3. The method according to claim 2, wherein after the detecting whether a memory page to be replaced is cached in the cache block currently mapped by the memory page to be allocated, the method further comprises: if the memory page to be replaced is not cached in the cache block currently mapped by the memory page to be allocated, mapping the memory page to be allocated to another cache block by page migration; and
performing again the step of detecting whether a memory page to be replaced is cached in the cache block currently mapped by the memory page to be allocated.
4. The method according to any one of claims 1 to 3, wherein after the replacing the memory page to be replaced with the memory page to be allocated, the method further comprises:
returning to perform the step of determining, according to the heat of each memory page, the memory page to be allocated that currently has the highest heat and is not cached, until the heat of every currently cached memory page is not lower than the heat of the memory page to be allocated.
5. The method according to any one of claims 1 to 4, wherein the replacing the memory page to be replaced with the memory page to be allocated comprises: if there are multiple memory pages to be replaced, replacing the memory page with the lowest heat among the memory pages to be replaced with the memory page to be allocated.
6. The method according to any one of claims 1 to 5, wherein after the determining, according to the heat of each memory page, the memory page to be allocated that currently has the highest heat and is not cached, the method further comprises: if the cache block currently mapped by the memory page to be allocated has free cache space, caching the memory page to be allocated into the cache block currently mapped by the memory page to be allocated.
7. The method according to any one of claims 1 to 5, wherein after the determining, according to the heat of each memory page, the memory page to be allocated that currently has the highest heat and is not cached, the method further comprises: if the cache block currently mapped by the memory page to be allocated has no free cache space and another cache block has free cache space, mapping the memory page to be allocated to the cache block to which the free cache space belongs by page migration, and caching it there.
8. The method according to any one of claims 1 to 7, wherein before the determining the memory page to be allocated that currently has the highest heat and is not cached, the method further comprises:
periodically counting and updating the heat of each memory page according to a preset period.
9. A caching apparatus, comprising: an obtaining module, configured to determine, according to the heat of each memory page, the memory page to be allocated that currently has the highest heat and is not cached;
a detection module, configured to: if there is currently no free cache space in the cache blocks, detect whether a memory page to be replaced is cached in the cache blocks, wherein the heat of the memory page to be replaced is lower than the heat of the memory page to be allocated; and
a first processing module, configured to: if a memory page to be replaced is cached in the cache blocks, replace the memory page to be replaced with the memory page to be allocated.
10. The apparatus according to claim 9, wherein the detection module is specifically configured to: if there is currently no free cache space in the cache blocks, detect whether the memory page to be replaced is cached in the cache block currently mapped by the memory page to be allocated; and the first processing module is specifically configured to: if the memory page to be replaced is cached in the cache block currently mapped by the memory page to be allocated, replace the memory page to be replaced with the memory page to be allocated.
11. The apparatus according to claim 10, wherein the first processing module is further configured to: if the memory page to be replaced is not cached in the cache block currently mapped by the memory page to be allocated, map the memory page to be allocated to another cache block by page migration, and instruct the detection module to perform again the step of detecting whether a memory page to be replaced is cached in the cache block currently mapped by the memory page to be allocated.
12. The apparatus according to any one of claims 9 to 11, wherein
the first processing module is further configured to: after the memory page to be replaced is replaced with the memory page to be allocated, instruct the obtaining module to perform the step of determining, according to the heat of each memory page, the memory page to be allocated that currently has the highest heat and is not cached, until the heat of every currently cached memory page is not lower than the heat of the memory page to be allocated.
13. The apparatus according to any one of claims 9 to 12, wherein the first processing module is specifically configured to: if there are multiple memory pages to be replaced, replace the memory page with the lowest heat among the memory pages to be replaced with the memory page to be allocated.
14. The apparatus according to any one of claims 9 to 13, wherein the apparatus further comprises:
a second processing module, configured to: after the current memory page to be allocated is determined according to the heat of each memory page, if the cache block currently mapped by the memory page to be allocated has free cache space, cache the memory page to be allocated into the cache block currently mapped by the memory page to be allocated.
15. The apparatus according to any one of claims 9 to 13, wherein the apparatus further comprises:
a third processing module, configured to: after the memory page to be allocated that currently has the highest heat and is not cached is determined according to the heat of each memory page, if the cache block currently mapped by the memory page to be allocated has no free cache space and another cache block has free cache space, map the memory page to be allocated to the cache block to which the free cache space belongs by page migration, and cache it there.
16. The apparatus according to any one of claims 9 to 15, wherein the apparatus further comprises:
a statistics module, configured to periodically count and update the heat of each memory page according to a preset period.
PCT/CN2014/080174 2013-06-25 2014-06-18 缓存方法及装置 WO2014206234A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201310257056.0 2013-06-25
CN201310257056.0A CN104252421A (zh) 2013-06-25 2013-06-25 缓存方法及装置

Publications (1)

Publication Number Publication Date
WO2014206234A1 true WO2014206234A1 (zh) 2014-12-31

Family

ID=52141039

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2014/080174 WO2014206234A1 (zh) 2013-06-25 2014-06-18 缓存方法及装置

Country Status (2)

Country Link
CN (1) CN104252421A (zh)
WO (1) WO2014206234A1 (zh)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104834609B (zh) * 2015-05-31 2017-12-22 上海交通大学 基于历史升降级频率的多级缓存方法
WO2022021158A1 (zh) * 2020-07-29 2022-02-03 华为技术有限公司 缓存***、方法和芯片

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6397298B1 (en) * 1999-07-30 2002-05-28 International Business Machines Corporation Cache memory having a programmable cache replacement scheme
CN102063386A (zh) * 2010-12-17 2011-05-18 曙光信息产业(北京)有限公司 一种单载体多目标的缓存***的缓存管理方法
CN102253901A (zh) * 2011-07-13 2011-11-23 清华大学 一种基于相变内存的读写区分数据存储替换方法
CN102521161A (zh) * 2011-11-21 2012-06-27 华为技术有限公司 一种数据的缓存方法、装置和服务器
CN103076992A (zh) * 2012-12-27 2013-05-01 杭州华为数字技术有限公司 一种内存数据缓冲方法及装置

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR100787856B1 (ko) * 2006-11-29 2007-12-27 한양대학교 산학협력단 플래시 메모리 저장장치의 페이지 교체 방법
CN101727403A (zh) * 2008-10-15 2010-06-09 深圳市朗科科技股份有限公司 数据存储***、设备及方法
CN102156753B (zh) * 2011-04-29 2012-11-14 中国人民解放军国防科学技术大学 面向固态硬盘文件***的数据页缓存方法
CN103019955B (zh) * 2011-09-28 2016-06-08 中国科学院上海微***与信息技术研究所 基于pcram主存应用的内存管理方法

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6397298B1 (en) * 1999-07-30 2002-05-28 International Business Machines Corporation Cache memory having a programmable cache replacement scheme
CN102063386A (zh) * 2010-12-17 2011-05-18 曙光信息产业(北京)有限公司 一种单载体多目标的缓存***的缓存管理方法
CN102253901A (zh) * 2011-07-13 2011-11-23 清华大学 一种基于相变内存的读写区分数据存储替换方法
CN102521161A (zh) * 2011-11-21 2012-06-27 华为技术有限公司 一种数据的缓存方法、装置和服务器
CN103076992A (zh) * 2012-12-27 2013-05-01 杭州华为数字技术有限公司 一种内存数据缓冲方法及装置

Also Published As

Publication number Publication date
CN104252421A (zh) 2014-12-31

Similar Documents

Publication Publication Date Title
JP6431536B2 (ja) 最終レベルキャッシュシステム及び対応する方法
US9792220B2 (en) Microcontroller for memory management unit
US10593380B1 (en) Performance monitoring for storage-class memory
US10133677B2 (en) Opportunistic migration of memory pages in a unified virtual memory system
US10445243B2 (en) Fault buffer for resolving page faults in unified virtual memory system
WO2014190695A1 (zh) 一种内存***、内存访问请求的处理方法和计算机***
CN105183662B (zh) 一种无cache一致性协议的分布式共享片上存储架构
US10762137B1 (en) Page table search engine
WO2011107046A2 (zh) 内存访问监测方法和装置
US9639474B2 (en) Migration of peer-mapped memory pages
US10114758B2 (en) Techniques for supporting for demand paging
US11435952B2 (en) Memory system and control method controlling nonvolatile memory in accordance with command issued by processor
EP3534265A1 (en) Memory access technique
US10705977B2 (en) Method of dirty cache line eviction
JP2015035010A (ja) メモリシステムおよび情報処理装置
CN102521179A (zh) 一种dma读操作的实现装置和方法
TWI696949B (zh) 直接記憶體存取方法、裝置、專用計算晶片及異構計算系統
US10216634B2 (en) Cache directory processing method for multi-core processor system, and directory controller
US20170300255A1 (en) Method and Apparatus for Detecting Transaction Conflict and Computer System
EP4060505A1 (en) Techniques for near data acceleration for a multi-core architecture
WO2014206234A1 (zh) 缓存方法及装置
US10754789B1 (en) Address translation for storage class memory in a system that includes virtual machines
WO2016041156A1 (zh) Cpu调度的方法和装置
US20150154107A1 (en) Non-volatile memory sector rotation
US10579519B2 (en) Interleaved access of memory

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 14817273

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 14817273

Country of ref document: EP

Kind code of ref document: A1