CN109918131A - Instruction reading method based on a non-blocking instruction cache - Google Patents


Info

Publication number
CN109918131A
CN109918131A
Authority
CN
China
Prior art keywords
cache
cache line
data
fetching
sram
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910180780.5A
Other languages
Chinese (zh)
Other versions
CN109918131B (en)
Inventor
刘新华
王捷
梅平
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhongdian Haikang Wuxi Technology Co Ltd
Original Assignee
Zhongdian Haikang Wuxi Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhongdian Haikang Wuxi Technology Co Ltd filed Critical Zhongdian Haikang Wuxi Technology Co Ltd
Priority to CN201910180780.5A priority Critical patent/CN109918131B/en
Publication of CN109918131A publication Critical patent/CN109918131A/en
Application granted granted Critical
Publication of CN109918131B publication Critical patent/CN109918131B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Memory System Of A Hierarchy Structure (AREA)

Abstract

The present invention relates to the field of computer hardware, and specifically discloses an instruction reading method based on a non-blocking instruction cache, wherein an index tag register file is provided in the non-blocking instruction cache for storing index tags. The instruction reading method based on the non-blocking instruction cache includes: determining whether there is an instruction fetch request on the instruction bus; when there is a fetch request on the instruction bus, reading the index tags from the index tag register file; comparing the index tags with the address information in the fetch request; if an index tag matches the address information in the fetch request, indicating a cache hit, reading the instruction data from the data SRAM and returning it to the instruction bus; if no index tag matches the address information in the fetch request, indicating a cache miss, handling subsequent fetch requests according to the critical-word-first mode and the relationship between the fetch operation and the cache line. The instruction reading method based on a non-blocking instruction cache provided by the present invention significantly improves processor performance.

Description

Instruction reading method based on a non-blocking instruction cache
Technical field
The present invention relates to the field of computer hardware technology, and in particular to an instruction reading method based on a non-blocking instruction cache.
Background technique
With the rapid development of integrated-circuit fabrication processes, processor frequencies have in recent years increased at more than 40% per year, while memory speed has improved by only about 1% per year. The speed gap between processor and memory has therefore grown larger and larger, and memory access has become the bottleneck constraining processor performance. The cache, acting as a speed buffer between the processor and main memory, bridges this gap.
Current mainstream cache designs are mostly set-associative, consisting of a single-port index tag SRAM, a single-port data SRAM, and control logic. Based on the address on the instruction bus issued by the processor, the index value (TAG) read from the index tag SRAM is compared with the current address. If they match, it is a cache hit, and the instruction data is read from the data SRAM and returned to the instruction bus. If they do not match, it is a cache miss: a full cache line is read from main memory and, according to the LRU (Least Recently Used) algorithm, the cache line data is backfilled into one way of the data SRAM and the index tag SRAM is updated, after which the next instruction pipeline stage proceeds.
In current designs, the next pipeline stage cannot proceed until all data of a cache line has returned and been backfilled into the data SRAM, even though the current operation usually needs only part of the data in the cache line. This blocks the processor's next instruction pipeline stage and reduces processor performance.
When the index tags are stored in SRAM, the SRAM read data is output one clock later, so the comparison result is likewise delayed by one clock. If the comparison result is a hit, the corresponding instruction data is then read from the data SRAM and returned to the processor's instruction bus. Reading the index value from the index SRAM and reading the instruction data from the data SRAM are thus performed serially, and an instruction fetch must wait at least one extra clock cycle, reducing processor performance.
Summary of the invention
The present invention aims to solve at least one of the technical problems in the prior art by providing an instruction reading method based on a non-blocking instruction cache.
As one aspect of the present invention, an instruction reading method based on a non-blocking instruction cache is provided, wherein an index tag register file is provided in the non-blocking instruction cache for storing index tags. The instruction reading method based on the non-blocking instruction cache includes:
determining whether there is an instruction fetch request on the instruction bus;
when there is a fetch request on the instruction bus, reading the index tags from the index tag register file;
comparing the index tags with the address information in the fetch request;
if an index tag matches the address information in the fetch request, indicating a cache hit, reading the instruction data from the data SRAM and returning it to the instruction bus;
if no index tag matches the address information in the fetch request, indicating a cache miss, handling subsequent fetch requests according to the critical-word-first mode and the relationship between the fetch operation and the cache line.
Preferably, handling subsequent fetch requests according to the critical-word-first mode and the relationship between the fetch operation and the cache line includes:
initiating to main memory a cache line access request containing the critical word;
determining whether the critical word has returned;
if the critical word has returned, storing the critical word in the cache line register buffer and simultaneously backfilling it into the data SRAM;
handling subsequent fetch requests according to the relationship between the fetch operation and the cache line.
Preferably, if the critical word has not returned, the method returns to continue determining whether the critical word has returned.
Preferably, the relationship between the fetch operation and the cache line includes: the fetch operation targeting the same cache line in the cache; the fetch operation hitting another cache line in the cache; and the fetch operation targeting another cache line and missing.
Preferably, handling subsequent fetch requests according to the relationship between the fetch operation and the cache line includes:
determining whether the fetch operation hits another cache line in the cache;
if the fetch operation hits another cache line in the cache, reading the requested data from the data SRAM while suspending the operation of backfilling the returned cache line data into the data SRAM;
if the fetch operation targets another cache line and misses, suspending the read of the data SRAM and backfilling the returned cache line data into the data SRAM;
if the fetch operation targets the same cache line in the cache, determining that the fetch operation targets the same cache line, obtaining the data from the cache line register buffer, and backfilling the returned cache line data into the data SRAM.
Preferably, handling subsequent fetch requests according to the relationship between the fetch operation and the cache line further includes, after the step of suspending the read of the data SRAM and backfilling the returned cache line data into the data SRAM:
determining whether all of the cache line data has been read from main memory;
if all of the cache line data has been read from main memory, determining whether any cache line data has not been backfilled into the data SRAM;
if some cache line data has not been backfilled into the data SRAM, backfilling that cache line data into the data SRAM.
Preferably, if no cache line data remains to be backfilled into the data SRAM, the method returns to the step of initiating to main memory a cache line access request containing the critical word.
Preferably, if the cache line data has not all been read from main memory, the method returns to the step of suspending the read of the data SRAM and backfilling the returned cache line data into the data SRAM.
Preferably, handling subsequent fetch requests according to the relationship between the fetch operation and the cache line further includes: after completion of the step of reading from the data SRAM while suspending the backfill of the returned cache line data into the data SRAM, and after completion of the step of obtaining data from the cache line register buffer while backfilling the returned cache line data into the data SRAM, returning to the step of determining whether the fetch operation hits another cache line.
Preferably, the non-blocking instruction cache further includes an LRU algorithm module, a cache control logic module, a data SRAM, and a cache line register buffer. The LRU algorithm module is communicatively connected to the index tag register file and the cache control logic module; the index tag register file is communicatively connected to the cache control logic module; the data SRAM is communicatively connected to the cache control logic module; and the cache line register buffer is communicatively connected to the data SRAM.
The instruction reading method based on a non-blocking instruction cache provided by the present invention uses the critical-word-first technique and adds a cache line register buffer, solving the problem that, on a cache miss, the next instruction pipeline stage could only proceed after the entire cache line had returned and been backfilled into the data SRAM. It thereby realizes a non-blocking instruction cache and significantly improves processor performance.
Detailed description of the invention
The drawings are intended to provide a further understanding of the invention and constitute a part of the specification. Together with the following detailed description, they serve to explain the invention, but do not limit it. In the drawings:
Fig. 1 is a flowchart of the instruction reading method based on a non-blocking instruction cache provided by the present invention.
Fig. 2 is a flowchart of a specific embodiment of the instruction reading method based on a non-blocking instruction cache provided by the present invention.
Fig. 3 is a structural block diagram of the non-blocking instruction cache provided by the present invention.
Specific embodiment
The preferred embodiments are described in detail below in conjunction with the drawings. It should be understood that the specific embodiments described here merely illustrate and explain the present invention and are not intended to limit it.
As one aspect of the present invention, an instruction reading method based on a non-blocking instruction cache is provided, wherein an index tag register file is provided in the non-blocking instruction cache for storing index tags. As shown in Fig. 1, the instruction reading method based on the non-blocking instruction cache includes:
S110: determining whether there is an instruction fetch request on the instruction bus;
S120: when there is a fetch request on the instruction bus, reading the index tags from the index tag register file;
S130: comparing the index tags with the address information in the fetch request;
S140: if an index tag matches the address information in the fetch request, indicating a cache hit, reading the instruction data from the data SRAM and returning it to the instruction bus;
S150: if no index tag matches the address information in the fetch request, indicating a cache miss, handling subsequent fetch requests according to the critical-word-first mode and the relationship between the fetch operation and the cache line.
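The hit/miss decision in steps S110~S150 can be sketched in software. The following Python model is purely illustrative — the class name, geometry defaults, and return conventions are assumptions for the sketch, not the patent's hardware:

```python
class NonBlockingICache:
    """Minimal model of the index tag register file lookup (steps S110-S150)."""

    def __init__(self, num_sets=4, num_ways=4):
        # One (tag, valid) entry per set and way, mirroring the register file.
        self.tags = [[(None, False)] * num_ways for _ in range(num_sets)]

    def lookup(self, set_index, tag):
        """Return the hit way on a match (S140), or None on a miss (S150)."""
        for way, (stored_tag, valid) in enumerate(self.tags[set_index]):
            if valid and stored_tag == tag:
                return way   # hit: instruction data is read from the data SRAM
        return None          # miss: fall through to critical-word-first handling


cache = NonBlockingICache()
cache.tags[1][2] = (0x1234, True)   # pretend a prior refill filled set 1, way 2
assert cache.lookup(1, 0x1234) == 2
assert cache.lookup(1, 0x9999) is None
```

Because the tags live in registers rather than SRAM, this comparison completes in the same cycle as the request, which is the latency advantage the description emphasizes.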
The instruction reading method based on a non-blocking instruction cache provided by the present invention uses the critical-word-first technique and adds a cache line register buffer, solving the problem that, on a cache miss, the next instruction pipeline stage could only proceed after the entire cache line had returned and been backfilled into the data SRAM. It thereby realizes a non-blocking instruction cache and significantly improves processor performance.
With reference to Fig. 2, how the present invention handles subsequent fetch requests according to the critical-word-first mode and the relationship between the fetch operation and the cache line is described in detail below.
It should be noted that as shown in figure 3, being additionally provided with lru algorithm module, cache in the non-obstruction command cache Control logic module, the register buffers data SRAM and cache line, the lru algorithm module are marked with the index respectively Will register group and cache control logic module communication connection, the index marker register group and the cache are controlled Logic module communication connection processed, the data SRAM and the cache control logic module communicate to connect, the cache line Register buffers and the data SRAM are communicated to connect.Non- obstruction command cache, 128 word(512 bytes of capacity), it adopts It is connected with 4 group of 4 tunnel group, 8 words of each cache line (32 byte), using single-port SRAM as data storage, index Mark stores in the register bank, the register buffers cache line.
LRU algorithm module: implements a least-recently-used policy. Each way of each set has its own LRU count value. On a cache miss, the way whose count value is 0 is replaced; that way's count becomes the maximum value, and every other way's count is decremented by 1. On a cache hit, if the hit way's count is already the maximum, all counts remain unchanged; otherwise, every way whose count is greater than the hit way's count is decremented by 1, and the hit way's count becomes the maximum.
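The counter scheme just described can be modeled directly. This is an illustrative Python sketch of the stated policy; the function names and list-based representation are invented for the example:

```python
def lru_touch(counts, way):
    """Update per-way LRU counters for a hit on `way`, per the described policy."""
    max_val = len(counts) - 1
    old = counts[way]
    if old == max_val:
        return list(counts)          # already most recently used: no change
    new = [c - 1 if c > old else c for c in counts]
    new[way] = max_val               # hit way becomes most recently used
    return new


def lru_replace(counts):
    """On a miss, pick the victim way (count 0) and rotate all counters."""
    victim = counts.index(0)
    new = [c - 1 for c in counts]
    new[victim] = len(counts) - 1    # replaced way becomes most recently used
    return victim, new


assert lru_touch([0, 1, 2, 3], 3) == [0, 1, 2, 3]
assert lru_touch([0, 1, 2, 3], 1) == [0, 3, 1, 2]
assert lru_replace([0, 1, 2, 3]) == (0, [3, 0, 1, 2])
```

Note the invariant the scheme maintains: the counts of the four ways are always a permutation of 0..3, so the victim (count 0) is unique and no comparator tree is needed.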
Index tag register file: one register per way per set, holding the address information (TAG) of the buffered cache line and a valid flag.
Data SRAM: a single-port SRAM holding the cached copy of main memory data.
Cache line register buffer: when a cache miss occurs, the cache line data read from main memory is buffered here.
Cache control logic: based on the comparison result, generates the read/write controls for the data SRAM and the control signals for the main memory interface.
Specifically, handling subsequent fetch requests according to the critical-word-first mode and the relationship between the fetch operation and the cache line includes:
initiating to main memory a cache line access request containing the critical word;
determining whether the critical word has returned;
if the critical word has returned, storing the critical word in the cache line register buffer and simultaneously backfilling it into the data SRAM;
handling subsequent fetch requests according to the relationship between the fetch operation and the cache line.
More specifically, if the critical word has not returned, the method returns to continue determining whether the critical word has returned.
It should be understood that, when there is a fetch request on the instruction bus, bits 6~5 of the address on the current instruction bus are used to look up the index tag register file, and the TAG is compared with bits 31~7 of the fetch address. On a hit, the instruction data is read from the data SRAM and returned to the instruction bus, while the LRU algorithm module updates its count values. On a miss, the cache control logic module initiates an access request to main memory, requesting the needed word (the critical word) first; once the critical word has been read from main memory and returned to the instruction bus, the processor can enter the next pipeline stage.
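The address split described above can be sketched as follows. The set-index and TAG fields (bits 6~5 and 31~7) are stated in the text; the word-in-line field (bits 4~2) is inferred from the stated 8-word (32-byte) line size, so treat it as an assumption of the sketch:

```python
def decode(addr):
    """Split a 32-bit fetch address: word-in-line [4:2], set index [6:5], TAG [31:7]."""
    word_in_line = (addr >> 2) & 0x7   # 8 words per line (inferred field)
    set_index = (addr >> 5) & 0x3      # 4 sets, per bits 6~5 in the text
    tag = addr >> 7                    # bits 31~7, per the text
    return tag, set_index, word_in_line


assert decode(0x0) == (0, 0, 0)
assert decode(0x1234) == (36, 1, 5)
```

The critical-word-first request then asks main memory for `word_in_line` first and wraps around the rest of the line, which is what lets the processor resume before the full 8-word line arrives.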
It should be noted that the relationship between the fetch operation and the cache line includes: the fetch operation targeting the same cache line in the cache; the fetch operation hitting another cache line in the cache; and the fetch operation targeting another cache line and missing.
Specifically, handling subsequent fetch requests according to the relationship between the fetch operation and the cache line includes:
determining whether the fetch operation hits another cache line in the cache;
if the fetch operation hits another cache line in the cache, reading the requested data from the data SRAM while suspending the operation of backfilling the returned cache line data into the data SRAM;
if the fetch operation targets another cache line and misses, suspending the read of the data SRAM and backfilling the returned cache line data into the data SRAM;
if the fetch operation targets the same cache line in the cache, determining that the fetch operation targets the same cache line, obtaining the data from the cache line register buffer, and backfilling the returned cache line data into the data SRAM.
It should be understood that, after the critical word returns, three cases can arise. Case 1: a subsequent fetch operation hits another cache line; the fetch result is returned normally, unaffected by the cache line currently being returned. Case 2: a subsequent fetch operation targets another cache line and misses; the subsequent fetch is blocked and must wait until the cache line of the missed fetch has fully returned and been backfilled into the data SRAM before it can continue. Case 3: a subsequent fetch operation targets the same cache line; the data is obtained from the cache line register buffer.
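The three-case dispatch above can be summarized in a small illustrative function; the function name and return strings are invented for the sketch:

```python
def handle_fetch_during_refill(fetch_line, refill_line, hits_cache):
    """Dispatch a fetch that arrives while `refill_line` is still streaming
    back from main memory, per the three cases described in the text."""
    if fetch_line == refill_line:
        return "serve-from-line-buffer"   # case 3: same line, read the register buffer
    if hits_cache:
        return "serve-from-data-sram"     # case 1: hit in another line, unaffected
    return "stall-until-backfill"         # case 2: miss in another line, block


assert handle_fetch_during_refill(5, 5, False) == "serve-from-line-buffer"
assert handle_fetch_during_refill(3, 5, True) == "serve-from-data-sram"
assert handle_fetch_during_refill(3, 5, False) == "stall-until-backfill"
```

Only case 2 stalls the pipeline; cases 1 and 3 are exactly what make the cache "non-blocking" relative to the prior-art design described in the background.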
Specifically, handling subsequent fetch requests according to the relationship between the fetch operation and the cache line further includes, after the step of suspending the read of the data SRAM and backfilling the returned cache line data into the data SRAM:
determining whether all of the cache line data has been read from main memory;
if all of the cache line data has been read from main memory, determining whether any cache line data has not been backfilled into the data SRAM;
if some cache line data has not been backfilled into the data SRAM, backfilling that cache line data into the data SRAM.
More specifically, if no cache line data remains to be backfilled into the data SRAM, the method returns to the step of initiating to main memory a cache line access request containing the critical word.
If the cache line data has not all been read from main memory, the method returns to the step of suspending the read of the data SRAM and backfilling the returned cache line data into the data SRAM.
More specifically, handling subsequent fetch requests according to the relationship between the fetch operation and the cache line further includes: after completion of the step of reading from the data SRAM while suspending the backfill of the returned cache line data into the data SRAM, and after completion of the step of obtaining data from the cache line register buffer while backfilling the returned cache line data into the data SRAM, returning to the step of determining whether the fetch operation hits another cache line.
It should be noted that, while the cache line data is returning to the cache line register buffer, in cases 2 and 3 above the data is backfilled into the data SRAM according to the LRU algorithm. If case 1 occurs (a subsequent fetch by the processor hits another cache line), a read of the data SRAM must be performed; because the data SRAM is single-port, it cannot be read and written simultaneously, so the SRAM read takes priority and the instruction data not yet backfilled into the data SRAM is recorded. On the next cache miss, the not-yet-backfilled data in the cache line register buffer is first written back to the data SRAM, and only then is a new wrap-burst access request initiated to main memory. This saves backfill time and improves processor performance.
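The deferred-backfill bookkeeping just described — demand reads win the single-port SRAM, and un-backfilled words are recorded and written back later — can be sketched as follows. This is an illustrative software model, not the patent's RTL; the class and method names are invented:

```python
class LineBuffer:
    """Tracks which words of the refilling line still need backfill to data SRAM."""

    def __init__(self, words_per_line=8):
        self.data = [None] * words_per_line       # words received from main memory
        self.backfilled = [False] * words_per_line

    def receive(self, idx, word):
        """A word of the line arrives from main memory into the buffer."""
        self.data[idx] = word

    def backfill_step(self, sram_busy):
        """Write one pending word to the data SRAM, unless a demand read has
        priority this cycle (the SRAM is single-port)."""
        if sram_busy:                 # case 1: a fetch is reading the SRAM, defer
            return None
        for i, done in enumerate(self.backfilled):
            if self.data[i] is not None and not done:
                self.backfilled[i] = True
                return i              # word index written back this cycle
        return None                   # nothing pending


lb = LineBuffer()
lb.receive(0, 0xAA)
assert lb.backfill_step(sram_busy=True) is None    # read wins, backfill deferred
assert lb.backfill_step(sram_busy=False) == 0      # deferred word written later
assert lb.backfill_step(sram_busy=False) is None   # nothing left pending
```

On the next miss, any indices still marked un-backfilled would be drained to the data SRAM before the new wrap-burst request is issued, matching the flow in the text.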
The index tags are updated only after the cache line data corresponding to the previous missed fetch operation has been fully backfilled into the data SRAM, at the time of the next cache miss.
Therefore, the instruction reading method based on a non-blocking instruction cache provided by the present invention reduces the cache miss penalty at very small implementation cost by using the critical-word-first technique, improving processor performance. While the cache line data is streaming into the cache line register buffer, the data is backfilled into the data SRAM concurrently, rather than waiting until the next miss to backfill the buffered line, which improves processor performance with little logic overhead. Storing the index (TAG) values in a register file, rather than in SRAM as in traditional designs, makes the comparison result available one clock earlier, which also improves performance.
It is to be understood that the above embodiments are merely exemplary implementations adopted to illustrate the principle of the present invention, and the present invention is not limited thereto. Those skilled in the art can make various changes and improvements without departing from the spirit and essence of the present invention, and such changes and improvements are also regarded as falling within the protection scope of the present invention.

Claims (10)

1. An instruction reading method based on a non-blocking instruction cache, characterized in that an index tag register file is provided in the non-blocking instruction cache for storing index tags, and the instruction reading method based on the non-blocking instruction cache comprises:
determining whether there is an instruction fetch request on the instruction bus;
when there is a fetch request on the instruction bus, reading the index tags from the index tag register file;
comparing the index tags with the address information in the fetch request;
if an index tag matches the address information in the fetch request, indicating a cache hit, reading the instruction data from the data SRAM and returning it to the instruction bus;
if no index tag matches the address information in the fetch request, indicating a cache miss, handling subsequent fetch requests according to the critical-word-first mode and the relationship between the fetch operation and the cache line.
2. The instruction reading method based on a non-blocking instruction cache according to claim 1, characterized in that handling subsequent fetch requests according to the critical-word-first mode and the relationship between the fetch operation and the cache line comprises:
initiating to main memory a cache line access request containing the critical word;
determining whether the critical word has returned;
if the critical word has returned, storing the critical word in the cache line register buffer and simultaneously backfilling it into the data SRAM;
handling subsequent fetch requests according to the relationship between the fetch operation and the cache line.
3. The instruction reading method based on a non-blocking instruction cache according to claim 2, characterized in that, if the critical word has not returned, the method returns to continue determining whether the critical word has returned.
4. The instruction reading method based on a non-blocking instruction cache according to claim 2, characterized in that the relationship between the fetch operation and the cache line comprises: the fetch operation targeting the same cache line in the cache; the fetch operation hitting another cache line in the cache; and the fetch operation targeting another cache line and missing.
5. The instruction reading method based on a non-blocking instruction cache according to claim 4, characterized in that handling subsequent fetch requests according to the relationship between the fetch operation and the cache line comprises:
determining whether the fetch operation hits another cache line in the cache;
if the fetch operation hits another cache line in the cache, reading the requested data from the data SRAM while suspending the operation of backfilling the returned cache line data into the data SRAM;
if the fetch operation targets another cache line and misses, suspending the read of the data SRAM and backfilling the returned cache line data into the data SRAM;
if the fetch operation targets the same cache line in the cache, determining that the fetch operation targets the same cache line, obtaining the data from the cache line register buffer, and backfilling the returned cache line data into the data SRAM.
6. The instruction reading method based on a non-blocking instruction cache according to claim 5, characterized in that handling subsequent fetch requests according to the relationship between the fetch operation and the cache line further comprises, after the step of suspending the read of the data SRAM and backfilling the returned cache line data into the data SRAM:
determining whether all of the cache line data has been read from main memory;
if all of the cache line data has been read from main memory, determining whether any cache line data has not been backfilled into the data SRAM;
if some cache line data has not been backfilled into the data SRAM, backfilling that cache line data into the data SRAM.
7. The instruction reading method based on a non-blocking instruction cache according to claim 6, characterized in that, if no cache line data remains to be backfilled into the data SRAM, the method returns to the step of initiating to main memory a cache line access request containing the critical word.
8. The instruction reading method based on a non-blocking instruction cache according to claim 6, characterized in that, if the cache line data has not all been read from main memory, the method returns to the step of suspending the read of the data SRAM and backfilling the returned cache line data into the data SRAM.
9. The instruction reading method based on a non-blocking instruction cache according to claim 5, characterized in that handling subsequent fetch requests according to the relationship between the fetch operation and the cache line further comprises: after completion of the step of reading from the data SRAM while suspending the backfill of the returned cache line data into the data SRAM, and after completion of the step of obtaining data from the cache line register buffer while backfilling the returned cache line data into the data SRAM, returning to the step of determining whether the fetch operation hits another cache line.
10. The instruction reading method based on a non-blocking instruction cache according to claim 1, characterized in that the non-blocking instruction cache further comprises an LRU algorithm module, a cache control logic module, a data SRAM, and a cache line register buffer; the LRU algorithm module is communicatively connected to the index tag register file and the cache control logic module; the index tag register file is communicatively connected to the cache control logic module; the data SRAM is communicatively connected to the cache control logic module; and the cache line register buffer is communicatively connected to the data SRAM.
CN201910180780.5A 2019-03-11 2019-03-11 Instruction reading method based on non-blocking instruction cache Active CN109918131B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910180780.5A CN109918131B (en) 2019-03-11 2019-03-11 Instruction reading method based on non-blocking instruction cache


Publications (2)

Publication Number Publication Date
CN109918131A (en) 2019-06-21
CN109918131B (en) 2021-04-30

Family

ID=66964166

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910180780.5A Active CN109918131B (en) 2019-03-11 2019-03-11 Instruction reading method based on non-blocking instruction cache

Country Status (1)

Country Link
CN (1) CN109918131B (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110515660A (en) * 2019-08-28 2019-11-29 中国人民解放军国防科技大学 Method and device for accelerating execution of atomic instruction
CN111142941A (en) * 2019-11-27 2020-05-12 核芯互联科技(青岛)有限公司 Non-blocking cache miss processing method and device
CN111414321A (en) * 2020-02-24 2020-07-14 中国农业大学 Cache protection method and device based on dynamic mapping mechanism
CN112711383A (en) * 2020-12-30 2021-04-27 浙江大学 Non-volatile storage reading acceleration method for power chip
CN113204370A (en) * 2021-03-16 2021-08-03 南京英锐创电子科技有限公司 Instruction caching method and device

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1499382A (en) * 2002-11-05 2004-05-26 华为技术有限公司 Method for efficiently implementing a cache in a redundant array of inexpensive disks
US20140351554A1 (en) * 2007-06-01 2014-11-27 Intel Corporation Linear to physical address translation with support for page attributes
CN103399824A (en) * 2013-07-17 2013-11-20 北京航空航天大学 Method and device for holding cache miss states of caches in processor of computer
CN103593306A (en) * 2013-11-15 2014-02-19 浪潮电子信息产业股份有限公司 Design method for Cache control unit of protocol processor
CN104809179A (en) * 2015-04-16 2015-07-29 华为技术有限公司 Device and method for accessing Hash table
US20180095886A1 (en) * 2016-09-30 2018-04-05 Fujitsu Limited Arithmetic processing device, information processing apparatus, and method for controlling arithmetic processing device

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
误海旋: "Analysis of patent technology for caches between host and memory", 《河南科技》 (Henan Science and Technology) *

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110515660A (en) * 2019-08-28 2019-11-29 中国人民解放军国防科技大学 Method and device for accelerating execution of atomic instruction
CN111142941A (en) * 2019-11-27 2020-05-12 核芯互联科技(青岛)有限公司 Non-blocking cache miss processing method and device
CN111414321A (en) * 2020-02-24 2020-07-14 中国农业大学 Cache protection method and device based on dynamic mapping mechanism
CN111414321B (en) * 2020-02-24 2022-07-15 中国农业大学 Cache protection method and device based on dynamic mapping mechanism
CN112711383A (en) * 2020-12-30 2021-04-27 浙江大学 Non-volatile storage reading acceleration method for power chip
CN112711383B (en) * 2020-12-30 2022-08-26 浙江大学 Non-volatile storage reading acceleration method for power chip
CN113204370A (en) * 2021-03-16 2021-08-03 南京英锐创电子科技有限公司 Instruction caching method and device

Also Published As

Publication number Publication date
CN109918131B (en) 2021-04-30

Similar Documents

Publication Publication Date Title
CN109918131A (en) A kind of instruction read method based on non-obstruction command cache
US20240078190A1 (en) Write merging on stores with different privilege levels
US8977819B2 (en) Prefetch stream filter with FIFO allocation and stream direction prediction
US5634027A (en) Cache memory system for multiple processors with collectively arranged cache tag memories
US6446171B1 (en) Method and apparatus for tracking and update of LRU algorithm using vectors
CN103076992B (en) A kind of internal storage data way to play for time and device
CN101918925B (en) Second chance replacement mechanism for a highly associative cache memory of a processor
KR20000052480A (en) System and method for cache process
US8621152B1 (en) Transparent level 2 cache that uses independent tag and valid random access memory arrays for cache access
US8880847B2 (en) Multistream prefetch buffer
CN109891397A (en) Device and method for the operating system cache memory in solid-state device
JPH07311711A (en) Data processor and its operating method as well as operatingmethod of memory cache
US7454575B2 (en) Cache memory and its controlling method
CN115617712A (en) LRU replacement algorithm based on set associative Cache
EP1467284A2 (en) Data memory cache unit and data memory cache system
CN107562806B (en) Self-adaptive sensing acceleration method and system of hybrid memory file system
US20050188158A1 (en) Cache memory with improved replacement policy
US7555610B2 (en) Cache memory and control method thereof
US7010649B2 (en) Performance of a cache by including a tag that stores an indication of a previously requested address by the processor not stored in the cache
CN107180118A (en) A kind of file system cache data managing method and device
CN111124297B (en) Performance improving method for stacked DRAM cache
JP6224684B2 (en) Store merge apparatus, information processing apparatus, store control method, and computer program
US8214597B2 (en) Cache tentative read buffer
US7805572B2 (en) Cache pollution avoidance
CN100489813C (en) Method for selective prefetch withdraw

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant