CN117687936A - Method, device, equipment and storage medium for improving cache hit rate - Google Patents


Info

Publication number
CN117687936A
Authority
CN
China
Prior art keywords
way
cache
cache line
reserved
target
Prior art date
Legal status
Pending
Application number
CN202311515487.2A
Other languages
Chinese (zh)
Inventor
汪磊
Current Assignee
Hangzhou Hongjun Microelectronics Technology Co ltd
Original Assignee
Hangzhou Hongjun Microelectronics Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Hangzhou Hongjun Microelectronics Technology Co ltd filed Critical Hangzhou Hongjun Microelectronics Technology Co ltd
Priority to CN202311515487.2A priority Critical patent/CN117687936A/en
Publication of CN117687936A publication Critical patent/CN117687936A/en
Pending legal-status Critical Current

Landscapes

  • Memory System Of A Hierarchy Structure (AREA)

Abstract

The invention discloses a method, an apparatus, a device and a storage medium for improving the Cache hit rate, applied to a Cache structure that uses a set-associative mapping. The Cache structure is provided with a plurality of sets, each set includes a plurality of ways, the ways in each set comprise regular ways and reserved ways, and the reserved ways store, in a first-in first-out manner, the Cache lines evicted from the regular ways.

Description

Method, device, equipment and storage medium for improving cache hit rate
Technical Field
The present invention relates to the field of computer technologies, and in particular, to a method, an apparatus, a device, and a storage medium for improving a cache hit rate.
Background
At present, server chips are generally designed for large data centers, where performance and energy efficiency are the most important metrics. Improving and optimizing performance is of particular significance for running high-performance computing tasks on a server chip. A cache (Cache) is a high-speed memory in a computer system that temporarily stores data read from the slower main memory in order to speed up the processor's access to that data.
A set-associative cache is a common cache structure that divides the total cache capacity into a plurality of sets, each set containing a plurality of cache lines. Each cache line can store a block of data (e.g., the contents of a memory address) together with some control information (e.g., tag information, valid bits, etc.). A data block is mapped to a particular set by its address information, but within that set existing schemes generally place it randomly: if a vacancy exists, a vacant way is selected at random. If the set has no vacancy, a cache line must be selected for replacement, and the evicted line is written back to the downstream main memory. However, the line that has just been evicted may well be needed by the very next access, which then has to fetch the data from main memory again and evict yet another line. This greatly increases access latency, reduces the cache hit rate, and degrades the performance of the server chip.
Therefore, a method for improving the cache hit rate is needed in order to optimize the performance of the server chip.
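For concreteness, set selection can be sketched in Python. This is a minimal illustration under assumed parameters: the 64-byte line size and the function name `decompose` are not from the patent; only the 4-set count matches the later fig. 3 example.

```python
# Hypothetical parameters: 64 B cache lines, 4 sets (as in fig. 3).
LINE_SIZE = 64
NUM_SETS = 4

def decompose(address):
    """Split an address into (tag, set_index, offset) for set-associative lookup."""
    offset = address % LINE_SIZE        # byte position within the cache line
    block = address // LINE_SIZE        # which memory block this address belongs to
    set_index = block % NUM_SETS        # the set the block is mapped to
    tag = block // NUM_SETS             # identifies the block within its set
    return tag, set_index, offset

# Addresses one line apart land in consecutive sets; addresses
# NUM_SETS * LINE_SIZE apart collide in the same set and compete for its ways.
print(decompose(0))      # (0, 0, 0)
print(decompose(64))     # (0, 1, 0)
print(decompose(256))    # (1, 0, 0) -- same set as address 0
```

Blocks that collide in a set are exactly the ones subject to the replacement problem described above.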
Disclosure of Invention
The main object of the present invention is to provide a method, an apparatus, a device and a storage medium for improving the cache hit rate, aiming to solve the technical problem in the prior art of how to improve the cache hit rate and thereby optimize the performance of a server chip.
To achieve the above object, the present invention provides a method for improving a Cache hit rate. The method is applied to a Cache structure that uses a set-associative mapping; the Cache structure is provided with a plurality of sets, each set includes a plurality of ways, the ways in each set comprise two types, regular ways and reserved ways, and the reserved ways store, in a first-in first-out manner, the Cache lines evicted from the regular ways. The method includes the following steps:
when a lookup request is received, determining, according to the lookup request, a target set of a target Cache line in the Cache structure;
if the target cache line is not in the regular ways of the target set, the target cache line must be in the reserved ways of the target set; judging whether the regular ways have a vacancy;
if the regular ways have no vacancy, selecting any one of the regular ways as a hit way in a preset manner, placing the target cache line from the reserved way into the hit way, and returning the target cache line;
and, according to the position of the target cache line in the reserved ways, placing the original cache line of the hit way into the reserved ways.
Optionally, the method further comprises:
when a new Cache line needs to be placed into a target set in the Cache structure, judging whether the regular ways in the target set have a vacancy;
if the regular ways in the target set have no vacancy, selecting any one of the regular ways as a replacement way in the preset manner, and placing the new cache line into the replacement way;
if the reserved ways have no vacancy, placing, in the first-in first-out manner, the original cache line of the replacement way into the last of the reserved ways; and shifting the cache line held by each subsequent reserved way into the reserved way in front of it, until the original cache line in the frontmost reserved way is written back to main memory.
Optionally, the step of determining, when a lookup request is received, a target set of a target Cache line in the Cache structure according to the lookup request includes:
when a lookup request is received, parsing the lookup request to obtain a parsing result;
judging, according to the parsing result, whether the target Cache line is in the Cache structure;
if the target Cache line is in the Cache structure, determining the target set of the target Cache line in the Cache structure;
and, if the target Cache line is not in the Cache structure, forwarding the lookup request to the next-level Cache or to main memory.
Optionally, after the step of: if the target cache line is not in the regular ways of the target set, the target cache line must be in the reserved ways of the target set, and judging whether the regular ways have a vacancy, the method further includes:
if the regular ways have a vacancy, selecting any one of the vacant regular ways as a hit way in the preset manner;
and placing the target cache line from the reserved way into the hit way, and returning the target cache line.
Optionally, the step of placing, according to the position of the target cache line in the reserved ways, the original cache line of the hit way into the reserved ways includes:
if the target cache line was stored in the last of the reserved ways, placing the original cache line of the hit way into the last of the reserved ways;
and, if the target cache line was not stored in the last of the reserved ways, placing the original cache line of the hit way into the last of the reserved ways, and moving the cache line originally held by the last reserved way into the reserved way in front of it.
Optionally, after the step of judging, when a new Cache line needs to be placed into a target set in the Cache structure, whether the regular ways in the target set have a vacancy, the method further includes:
if the regular ways in the target set have a vacancy, selecting any one of the vacant regular ways as a replacement way in the preset manner;
and placing the new cache line into the replacement way.
Optionally, after the step of: if the regular ways in the target set have no vacancy, selecting any one of the regular ways as a replacement way in the preset manner, and placing the new cache line into the replacement way, the method further includes:
and, if the reserved ways have a vacancy, placing the original cache line of the replacement way into the frontmost vacant reserved way.
In addition, to achieve the above object, the present invention further provides an apparatus for improving a Cache hit rate. The apparatus includes a Cache structure that uses a set-associative mapping; the Cache structure is provided with a plurality of sets, each set includes a plurality of ways, the ways in each set comprise two types, regular ways and reserved ways, and the reserved ways store, in a first-in first-out manner, the Cache lines evicted from the regular ways. The apparatus includes:
the determining module, configured to determine, when a lookup request is received, a target set of a target Cache line in the Cache structure according to the lookup request;
the judging module, configured to determine, if the target cache line is not in the regular ways of the target set, that the target cache line is in the reserved ways of the target set, and to judge whether the regular ways have a vacancy;
the replacing module, configured to select, if the regular ways have no vacancy, any one of the regular ways as a hit way in a preset manner, place the target cache line from the reserved way into the hit way, and return the target cache line;
and the storing module, configured to place, according to the position of the target cache line in the reserved ways, the original cache line of the hit way into the reserved ways.
In addition, to achieve the above object, the present invention further provides a device for improving a cache hit rate, the device comprising: a memory, a processor, and a program for improving the cache hit rate that is stored on the memory and executable on the processor, the program being configured to implement the steps of the method for improving the cache hit rate described above.
In addition, in order to achieve the above object, the present invention also proposes a storage medium having stored thereon a program for increasing the cache hit rate, which when executed by a processor, implements the steps of the method for increasing the cache hit rate as described above.
The invention is applied to a Cache structure that uses a set-associative mapping, wherein the Cache structure is provided with a plurality of sets, each set includes a plurality of ways, the ways in each set comprise two types, regular ways and reserved ways, and the reserved ways store, in a first-in first-out manner, the Cache lines evicted from the regular ways. When a lookup request is received, a target set of a target Cache line in the Cache structure is determined according to the lookup request; if the target cache line is not in the regular ways of the target set, it must be in the reserved ways of the target set, and whether the regular ways have a vacancy is judged; if the regular ways have no vacancy, any one of the regular ways is selected as a hit way in a preset manner, the target cache line in the reserved way is placed into the hit way, and the target cache line is returned; and, according to the position of the target cache line in the reserved ways, the original cache line of the hit way is placed into the reserved ways. Compared with the prior art, the invention divides the way structure of the original set-associative Cache into regular ways and reserved ways, with the reserved ways storing, first-in first-out, the Cache lines evicted from the regular ways. This effectively avoids mistakenly selecting a Cache line that will still be requested many times in the future and kicking it straight out of the Cache, thereby improving the Cache hit rate and, in turn, the performance of the server chip.
Drawings
FIG. 1 is a schematic diagram of a device for improving cache hit rate in a hardware running environment according to an embodiment of the present invention;
FIG. 2 is a flowchart illustrating a method for improving cache hit rate according to a first embodiment of the present invention;
FIG. 3 is a schematic diagram of a Cache architecture employing a reserved way architecture based on a 4-set 8-way set associative mapping;
FIG. 4 is a flow chart of searching a target cache line in the method for improving the cache hit rate according to the present invention;
FIG. 5 is a flowchart illustrating a method for improving cache hit rate according to a second embodiment of the present invention;
FIG. 6 is a flow chart of a method for increasing cache hit rate according to the present invention, wherein a new cache line is placed in the cache;
FIG. 7 is a block diagram illustrating a first embodiment of an apparatus for improving cache hit rate according to the present invention.
The achievement of the objects, functional features and advantages of the present invention will be further described with reference to the accompanying drawings, in conjunction with the embodiments.
Detailed Description
It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the invention.
Referring to fig. 1, fig. 1 is a schematic diagram of an apparatus for improving cache hit rate in a hardware running environment according to an embodiment of the present invention.
As shown in fig. 1, the apparatus for improving cache hit rate may include: a processor 1001, such as a central processing unit (Central Processing Unit, CPU), a communication bus 1002, a user interface 1003, a network interface 1004, a memory 1005. Wherein the communication bus 1002 is used to enable connected communication between these components. The user interface 1003 may include a Display, an input unit such as a Keyboard (Keyboard), and the optional user interface 1003 may further include a standard wired interface, a wireless interface. The network interface 1004 may optionally include a standard wired interface, a Wireless interface (e.g., a Wireless-Fidelity (WI-FI) interface). The Memory 1005 may be a high-speed random access Memory (Random Access Memory, RAM) or a stable nonvolatile Memory (NVM), such as a disk Memory. The memory 1005 may also optionally be a storage device separate from the processor 1001 described above.
It will be appreciated by those skilled in the art that the structure shown in fig. 1 does not constitute a limitation of the device for improving the cache hit rate, which may include more or fewer components than illustrated, may combine certain components, or may arrange the components differently.
As shown in fig. 1, an operating system, a network communication module, a user interface module, and a program for improving a cache hit rate may be included in the memory 1005 as one type of storage medium.
In the device for improving cache hit rate shown in fig. 1, the network interface 1004 is mainly used for data communication with a network server; the user interface 1003 is mainly used for data interaction with a user; the processor 1001 and the memory 1005 in the device for improving the cache hit rate may be provided in the device for improving the cache hit rate, where the device for improving the cache hit rate invokes a program for improving the cache hit rate stored in the memory 1005 through the processor 1001, and executes the method for improving the cache hit rate provided by the embodiment of the present invention.
An embodiment of the present invention provides a method for improving cache hit rate, and referring to fig. 2, fig. 2 is a flowchart of a first embodiment of the method for improving cache hit rate of the present invention.
In this embodiment, the method for improving the Cache hit rate is applied to a Cache structure adopting a group associative mapping relationship, where the Cache structure is provided with a plurality of groups, each group includes a plurality of ways, the ways in each group include two types of regular ways and reserved ways, and the reserved ways store Cache lines replaced from the regular ways based on a first-in first-out manner, and the method includes the following steps:
step S10: and when a search request is received, determining a target group of a target Cache line in the Cache structure according to the read-write request.
It should be noted that, the execution body of the embodiment may be a computing service device having functions of data processing, network communication and program running, such as a server, a tablet computer, a personal computer, or an electronic device capable of implementing the above functions, a device for improving a cache hit rate, or the like. The present embodiment and the following embodiments will be described by way of example using an apparatus for improving cache hit rate.
It should be understood that the target cache line may be data information searched by the user, or may be address information searched by the user, which is not limited in this embodiment.
It should be explained that the reserved-way scheme adopted by the present invention is built on top of the set-associative mapping. Referring to fig. 3, fig. 3 is a schematic diagram of a Cache structure that adds a reserved-way structure to a 4-set, 8-way set-associative mapping; this structure is used as the example for this embodiment and the following embodiments. More complex structures, such as 4 sets of 16 ways or 8 sets of 8 ways, are possible in a specific implementation, but the idea is the same.
The Cache structure in fig. 3 includes 4 groups, namely group 0, group 1, group 2 and group 3, each group includes 8 ways, that is, each group includes 8 Cache lines, wherein 2 ways are reserved ways, and the other 6 ways are regular ways. The group 0 comprises a CacheLine0, a … … CacheLine5, a CacheLine6 and a CacheLine7, wherein the CacheLine6 and the CacheLine7 are reserved paths 0 and 1, and the rest are conventional paths; the group 1 comprises a CacheLine8, a … … CacheLine13, a CacheLine14 and a CacheLine15, wherein the CacheLine14 and the CacheLine15 are reserved paths 0 and 1, and the rest are conventional paths; the group 2 comprises a CacheLine16, a … … CacheLine21, a CacheLine22 and a CacheLine23, wherein the CacheLine22 and the CacheLine23 are reserved paths 0 and 1, and the rest are conventional paths; the group 3 contains a CacheLine24, … …, a CacheLine29, a CacheLine30 and a CacheLine31, wherein the CacheLine30 and the CacheLine31 are a reserved path 0 and a reserved path 1, and the rest are conventional paths.
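This organization can be sketched as a small Python model. The sketch is illustrative only; `make_cache` and the field names are hypothetical and not part of the patent.

```python
# Model of the fig. 3 organization: 4 sets of 8 ways, where the last two
# ways of each set serve as reserved way 0 and reserved way 1 (a 2-entry FIFO).
NUM_SETS = 4
REGULAR_WAYS = 6
RESERVED_WAYS = 2

def make_cache():
    # Each way holds a cache-line identifier, or None for a vacancy.
    return [{"regular": [None] * REGULAR_WAYS,
             "reserved": [None] * RESERVED_WAYS}
            for _ in range(NUM_SETS)]

cache = make_cache()
# Set 0 corresponds to CacheLine0..7, set 1 to CacheLine8..15, and so on;
# within each set the last two lines are the reserved ways.
print(len(cache))                  # 4 sets
print(len(cache[0]["regular"]))    # 6 regular ways per set
print(len(cache[0]["reserved"]))   # 2 reserved ways per set
```

The later lookup and insertion flows operate on one such per-set structure at a time.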
In a specific implementation, when a search request is received, the search request can be analyzed to obtain an analysis result; judging whether the target Cache line is in the Cache structure according to the analysis result; if the target Cache line is in the Cache structure, determining a target group of the target Cache line in the Cache structure; and if the target Cache line is not in the Cache structure, forwarding the search request to a next-level Cache or main memory.
It should be appreciated that the above parsing result may be the address information of the target Cache line (e.g., its set number within the Cache structure, etc.).
Step S20: if the target cache line is not in the regular ways of the target set, the target cache line must be in the reserved ways of the target set; judge whether the regular ways have a vacancy.
It should be appreciated that since the target cache line is in the target set, the target set includes regular ways and reserved ways, when the target cache line is not in the regular ways of the target set, the target cache line must be in the reserved ways of the target set.
Step S30: if the regular ways do not have the vacancy, any one of the regular ways is selected as a hit way based on a preset mode, the target cache line in the reserved way is put into the hit way, and the target cache line is returned.
If the regular ways have a vacancy, any one of the vacant regular ways is selected as a hit way in a preset manner; the target cache line in the reserved way is placed into the hit way, and the target cache line is returned.
It should be noted that the above-mentioned preset mode may be a random selection mode or other selection modes, which is not limited in this embodiment.
Step S40: and according to the position of the target cache line in the reserved path, placing the original cache line corresponding to the hit path into the reserved path.
It should be understood that if the target cache line is stored in the last way of the reserved ways, the original cache line corresponding to the hit way is put in the last way of the reserved ways; and if the target cache line is not stored in the last way in the reserved ways, placing the original cache line corresponding to the hit way in the last way in the reserved ways, and placing the original cache line corresponding to the last way in the reserved way before the last way.
For example, referring to fig. 4, fig. 4 is a schematic flow chart of searching for a target cache line in the method for improving the cache hit rate according to the present invention. Taking the Cache structure with reserved ways based on the 4-set, 8-way set-associative mapping as an example: when a lookup request is received, the Cache is searched and it is judged whether the Cache hits; if the Cache misses (no), the request is forwarded to the next-level Cache or looked up in main memory. If the target cache line hits in the Cache (yes), its target set in the Cache is determined, and it is judged whether it hits in a regular way of the target set. If it hits in a regular way (yes), the hit cache line is returned to the requester. If it does not hit in a regular way (no), it has hit in a reserved way of the target set, and it is judged whether the regular ways of the target set have a vacancy. If they do (yes), a vacancy is selected at random, the hit cache line is placed into that regular way, and the hit cache line is simultaneously returned to the requester. If they do not (no), one of the regular ways is selected at random, the hit cache line is placed into that position, the hit cache line is simultaneously returned to the requester, and the cache line originally occupying the selected regular way is processed in the next step: it is judged whether the original hit occurred in reserved way 0. If the hit was not in reserved way 0 (no), the displaced cache line from the regular way is placed into reserved way 1. If the hit was in reserved way 0 (yes), the displaced cache line is placed into reserved way 1, and at the same time the cache line originally at reserved way 1 is moved into reserved way 0.
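The lookup flow above can be sketched in Python. This is an illustrative model under assumptions: cache lines are reduced to bare tags, `lookup` and the per-set layout are hypothetical names, and the patent's "preset manner" is modeled as random selection.

```python
import random

def lookup(cache_set, tag):
    """Sketch of the fig. 4 flow within the target set: a regular-way hit
    returns directly; a reserved-way hit promotes the line into a regular
    way; anything else is a miss forwarded to the next level / main memory."""
    regular, reserved = cache_set["regular"], cache_set["reserved"]
    if tag in regular:                        # hit in a regular way
        return "regular_hit"
    if tag not in reserved:                   # not in this set at all
        return "miss"
    pos = reserved.index(tag)                 # hit in reserved way 0 or 1
    reserved[pos] = None
    vacancies = [i for i, t in enumerate(regular) if t is None]
    if vacancies:                             # promote into a random vacancy
        regular[random.choice(vacancies)] = tag
        return "reserved_hit"
    victim = random.randrange(len(regular))   # no vacancy: displace a line
    displaced, regular[victim] = regular[victim], tag
    if pos == 0:                              # hit was in reserved way 0:
        reserved[0] = reserved[1]             # shift reserved way 1 forward
    reserved[1] = displaced                   # displaced line joins at the back
    return "reserved_hit"

s = {"regular": ["a", "b", "c", "d", "e", "f"], "reserved": ["g", "h"]}
print(lookup(s, "a"))   # regular_hit
print(lookup(s, "z"))   # miss
print(lookup(s, "g"))   # reserved_hit: "g" promoted, the displaced
                        # regular line enters the reserved-way FIFO
```

A reserved-way hit thus never goes to main memory: the line is simply swapped back into the regular ways, which is the point of the scheme.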
This embodiment is applied to a Cache structure that uses a set-associative mapping; the Cache structure is provided with a plurality of sets, each set includes a plurality of ways, the ways in each set comprise regular ways and reserved ways, and the reserved ways store, in a first-in first-out manner, the Cache lines evicted from the regular ways. When a lookup request is received, a target set of a target Cache line in the Cache structure is determined according to the lookup request; if the target cache line is not in the regular ways of the target set, it must be in the reserved ways of the target set, and whether the regular ways have a vacancy is judged; if the regular ways have no vacancy, any one of the regular ways is selected as a hit way in a preset manner, the target cache line in the reserved way is placed into the hit way, and the target cache line is returned; and, according to the position of the target cache line in the reserved ways, the original cache line of the hit way is placed into the reserved ways. Compared with the prior art, this embodiment divides the way structure of the original set-associative Cache into regular ways and reserved ways, with the reserved ways storing, first-in first-out, the Cache lines evicted from the regular ways. This effectively avoids mistakenly selecting a Cache line that will still be requested many times in the future and kicking it straight out of the Cache, thereby improving the Cache hit rate and, in turn, the performance of the server chip.
Referring to fig. 5, fig. 5 is a flowchart illustrating a method for improving cache hit rate according to a second embodiment of the present invention.
Based on the first embodiment, in this embodiment, the method for improving the cache hit rate further includes:
step S50: when a new Cache line needs to be put into a target group in the Cache structure, judging whether a vacancy exists in the conventional way in the target group.
If the regular ways in the target set have a vacancy, any one of the vacant regular ways is selected as a replacement way in the preset manner, and the new cache line is placed into the replacement way.
Step S60: and if the regular ways in the target group do not have empty spaces, selecting any one of the regular ways as a replacement way based on the preset mode, and placing the new cache line into the replacement way.
It should be explained that, if the reserved ways have a vacancy, the original cache line of the replacement way is placed into the frontmost vacant reserved way.
Step S70: if the reserved ways have no vacancy, the original cache line of the replacement way is placed, in the first-in first-out manner, into the last of the reserved ways; and the cache line held by each subsequent reserved way is shifted into the reserved way in front of it, until the original cache line in the frontmost reserved way is written back to main memory.
For example, referring to fig. 6, fig. 6 is a schematic flow chart of placing a new cache line into the cache in the method for improving the cache hit rate according to the present invention. When a new Cache line needs to be placed into the Cache, its target set in the Cache structure is determined, and it is judged whether the regular ways in the target set have a vacancy. If they do (yes), one of the vacant ways is selected at random and the new cache line is placed there. If they do not (no), one of the regular ways is selected at random, the new cache line is placed into it, and the replaced cache line is processed in the next step: it is judged whether the reserved ways of the target set have a vacancy. If they do (yes), the replaced cache line is placed into reserved way 0 if reserved way 0 is vacant, and otherwise into reserved way 1. If they do not (no), the replaced cache line is placed into reserved way 1, the cache line originally at reserved way 1 is moved into reserved way 0, and the cache line originally at reserved way 0 is written back to main memory.
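The insertion flow above can likewise be sketched in Python. Illustrative only: `insert_line`, the per-set layout, and the `main_memory` list standing in for write-back are assumptions of this sketch, and random selection again models the patent's "preset manner".

```python
import random

def insert_line(cache_set, new_tag, main_memory):
    """Sketch of the fig. 6 flow: place a new line into a regular way; on
    displacement, push the victim through the 2-entry reserved-way FIFO,
    writing the oldest reserved line back to main memory when it is full."""
    regular, reserved = cache_set["regular"], cache_set["reserved"]
    vacancies = [i for i, t in enumerate(regular) if t is None]
    if vacancies:                             # vacancy: random placement
        regular[random.choice(vacancies)] = new_tag
        return
    victim = random.randrange(len(regular))   # no vacancy: random replacement
    displaced, regular[victim] = regular[victim], new_tag
    if reserved[0] is None:                   # reserved way 0 is vacant
        reserved[0] = displaced
    elif reserved[1] is None:                 # reserved way 1 is vacant
        reserved[1] = displaced
    else:                                     # FIFO full: write back the oldest
        main_memory.append(reserved[0])       # reserved way 0 goes to main memory
        reserved[0] = reserved[1]             # shift reserved way 1 forward
        reserved[1] = displaced               # newest entry at the back

memory = []
s = {"regular": ["a", "b", "c", "d", "e", "f"], "reserved": ["g", "h"]}
insert_line(s, "x", memory)
print("x" in s["regular"])   # True: the new line occupies a regular way
print(memory)                # ['g']: the oldest reserved line was written back
print(s["reserved"][0])      # h: reserved way 1 shifted forward
```

Note that a displaced regular line always gets a second stay in the reserved ways before it can reach main memory, which is what protects recently evicted lines.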
This embodiment is applied to a Cache structure that uses a set-associative mapping; the Cache structure is provided with a plurality of sets, each set includes a plurality of ways, the ways in each set comprise regular ways and reserved ways, and the reserved ways store, in a first-in first-out manner, the Cache lines evicted from the regular ways. When a new Cache line needs to be placed into a target set in the Cache structure, it is judged whether the regular ways in the target set have a vacancy; if the regular ways in the target set have no vacancy, any one of the regular ways is selected as a replacement way in the preset manner, and the new cache line is placed into the replacement way; if the reserved ways have no vacancy, the original cache line of the replacement way is placed, in the first-in first-out manner, into the last of the reserved ways, and the cache line held by each subsequent reserved way is shifted into the reserved way in front of it, until the original cache line in the frontmost reserved way is written back to main memory. Compared with the prior art, a line displaced from a regular way is not evicted directly but is placed into the reserved ways, which operate first-in first-out; only when the reserved ways are full is the cache line that entered the reserved-way structure first kicked out. This also simplifies the replacement selector: for example, in the 4-set, 8-way set-associative structure, the selector chooses among 6 ways instead of 8. This reduces the difficulty of timing closure when the server chip is implemented, helps the server chip run at a higher frequency, and thus further improves its performance.
In addition, the embodiment of the invention also provides a storage medium, wherein the storage medium stores a program for improving the cache hit rate, and the program for improving the cache hit rate realizes the steps of the method for improving the cache hit rate when being executed by a processor.
Referring to fig. 7, fig. 7 is a block diagram illustrating a first embodiment of an apparatus for improving cache hit rate according to the present invention.
As shown in fig. 7, a device for improving Cache hit rate according to an embodiment of the present invention includes a Cache structure adopting a set associative mapping relationship, where the Cache structure is provided with a plurality of sets, each set includes a plurality of ways, and ways in each set include two types of regular ways and reserved ways, and the reserved ways store Cache lines replaced from the regular ways based on a first-in first-out manner, and the device includes: a determining module 701, a judging module 702, a replacing module 703 and a storing module 704.
The determining module 701 is configured to determine, when a lookup request is received, a target group of a target Cache line in the Cache structure according to the read-write request.
The judging module 702 is configured to, if the target cache line is not in a regular way of the target set, determine that the target cache line is in a reserved way of the target set, and judge whether the regular ways have a vacancy.
The replacing module 703 is configured to, if the regular ways have no vacancy, select any one of the regular ways as a hit way in a preset manner, place the target cache line from the reserved way into the hit way, and return the target cache line.
The storing module 704 is configured to place the original cache line corresponding to the hit way into the reserved ways according to the position of the target cache line in the reserved ways.
The determining module 701 is further configured to parse the lookup request to obtain a parsing result when the lookup request is received; judge, according to the parsing result, whether the target Cache line is in the Cache structure; if the target Cache line is in the Cache structure, determine the target set of the target Cache line in the Cache structure; and if the target Cache line is not in the Cache structure, forward the lookup request to the next-level Cache or main memory.
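The parsing step performed by the determining module can be illustrated with a minimal address decomposition. The field layout here (64-byte lines, 4 sets) and the function name are assumptions for illustration only, not details from the embodiment:

```python
def decode_address(addr, num_sets=4, line_bytes=64):
    """Split an address into (tag, set index, offset) for a set-associative
    Cache. Line size and set count are illustrative assumptions."""
    offset = addr % line_bytes                    # byte position within the line
    set_index = (addr // line_bytes) % num_sets   # selects the target set
    tag = addr // (line_bytes * num_sets)         # compared against stored tags
    return tag, set_index, offset
```

The set index names the target set; the tag is then compared against both the regular ways and the reserved ways of that set to decide hit or miss.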
The judging module 702 is further configured to select, if the regular ways have a vacancy, any one of the vacancies of the regular ways as the hit way in a preset manner; place the target cache line from the reserved way into the hit way; and return the target cache line.
The storing module 704 is further configured to: if the target cache line was stored in the last of the reserved ways, place the original cache line corresponding to the hit way into the last of the reserved ways; and if the target cache line was not stored in the last of the reserved ways, place the original cache line corresponding to the hit way into the last of the reserved ways, and move the original cache line corresponding to the last way into the reserved way preceding it.
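The exchange between the hit way and the reserved ways on a reserved-way hit can be sketched as follows. This is an illustrative function under assumptions: choosing way 0 stands in for the "preset manner", and the names are not from the embodiment.

```python
def hit_in_reserved(regular, reserved, target):
    """Promote `target` from the reserved ways into a hit way, and demote the
    hit way's original line to the last (newest) reserved position."""
    pos = reserved.index(target)
    # Prefer a vacant regular way as the hit way; otherwise way 0
    # (standing in for the "preset manner").
    hit = regular.index(None) if None in regular else 0
    victim = regular[hit]
    regular[hit] = target
    # Remove the promoted line; lines behind it shift toward the front,
    # preserving first-in first-out order among the remaining lines.
    del reserved[pos]
    if victim is not None:
        # The demoted line becomes the newest entry in the reserved ways.
        reserved.append(victim)
    return target
```

When the target was already in the last reserved way, this reduces to a simple swap with the hit way, matching the first branch of the storing module's behavior.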
This embodiment is applied to a Cache structure that adopts a set-associative mapping relationship. The Cache structure is provided with a plurality of sets, each set comprises a plurality of ways, the ways in each set are of two types, regular ways and reserved ways, and the reserved ways store Cache lines replaced out of the regular ways in a first-in first-out manner. When a lookup request is received, the target set of the target Cache line in the Cache structure is determined according to the lookup request. If the target cache line is not in a regular way of the target set, the target cache line is in a reserved way of the target set, and it is judged whether the regular ways have a vacancy. If the regular ways have no vacancy, any one of the regular ways is selected as a hit way in a preset manner, the target cache line in the reserved way is placed into the hit way, and the target cache line is returned. Finally, according to the position of the target cache line in the reserved ways, the original cache line corresponding to the hit way is placed back into the reserved ways. Compared with the prior art, this embodiment divides the way structure of the original set-associative Cache into two parts, regular ways and reserved ways, with the reserved ways storing cache lines replaced out of the regular ways in a first-in first-out manner. This effectively avoids mistakenly kicking a cache line that will still be requested many times out of the Cache, improves the Cache hit rate, and thus effectively improves server chip performance.
Based on the first embodiment of the device for improving the cache hit rate of the present invention, a second embodiment of the device for improving the cache hit rate of the present invention is provided.
In this embodiment, the apparatus for improving the cache hit rate further includes:
The vacancy judging module is configured to judge, when a new Cache line needs to be placed into a target set in the Cache structure, whether the regular ways in the target set have a vacancy.
The first replacing module is configured to, if the regular ways in the target set have no vacancy, select any one of the regular ways as a replacement way in the preset manner, and place the new cache line into the replacement way.
The second replacing module is configured to, if the reserved ways have no vacancy, place the original cache line corresponding to the replacement way into the last of the reserved ways in the first-in first-out manner, store the original cache line corresponding to each later reserved way in turn into the reserved way preceding it, and write the original cache line corresponding to the frontmost reserved way back to main memory.
The vacancy judging module is further configured to, if the regular ways in the target set have a vacancy, select any one of the vacancies of the regular ways as the replacement way in the preset manner, and place the new cache line into the replacement way.
The first replacing module is further configured to, if the reserved ways have a vacancy, place the original cache line corresponding to the replacement way into the frontmost vacant way in the reserved ways.
The device of this embodiment is applied to a Cache structure that adopts a set-associative mapping relationship. The Cache structure is provided with a plurality of sets, each set comprises a plurality of ways, the ways in each set are of two types, regular ways and reserved ways, and the reserved ways store Cache lines replaced out of the regular ways in a first-in first-out manner. When a new Cache line needs to be placed into a target set in the Cache structure, it is first judged whether the regular ways in the target set have a vacancy. If the regular ways in the target set have no vacancy, any one of the regular ways is selected as a replacement way in a preset manner, and the new cache line is placed into the replacement way. If the reserved ways also have no vacancy, then, following the first-in first-out manner, the original cache line corresponding to the replacement way is placed into the last of the reserved ways, the original cache line corresponding to each later reserved way is in turn stored into the reserved way preceding it, and the original cache line corresponding to the frontmost reserved way is written back to main memory. Compared with the prior art, a cache line evicted from a regular way is not discarded directly but is placed into the reserved ways; the reserved ways operate first-in first-out, and when they are full the cache line that entered the reserved-way structure first is kicked out. This simplifies the replacement selector, since replacement only chooses among the regular ways: in a 4-set 8-way set-associative mapping structure, for example, the selector is reduced from 8 inputs to 6. This lowers the difficulty of timing convergence when implementing a server chip, helps the server chip run at a higher frequency, and thus effectively improves server chip performance.
For other embodiments or specific implementations of the apparatus for improving cache hit rate of the present invention, reference may be made to the above method embodiments; details are not repeated here.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or system that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or system. Without further limitation, an element defined by the phrase "comprising one … …" does not exclude the presence of other like elements in a process, method, article, or system that comprises the element.
The foregoing embodiment numbers of the present invention are merely for the purpose of description, and do not represent the advantages or disadvantages of the embodiments.
From the above description of the embodiments, it will be clear to those skilled in the art that the above-described embodiment method may be implemented by means of software plus a necessary general hardware platform, but of course may also be implemented by means of hardware, but in many cases the former is a preferred embodiment. Based on such understanding, the technical solution of the present invention may be embodied essentially or in a part contributing to the prior art in the form of a software product stored in a storage medium (e.g. read-only memory/random-access memory, magnetic disk, optical disk), comprising instructions for causing a terminal device (which may be a mobile phone, a computer, a server, an air conditioner, or a network device, etc.) to perform the method according to the embodiments of the present invention.
The foregoing description is only of the preferred embodiments of the present invention, and is not intended to limit the scope of the invention, but rather is intended to cover any equivalents of the structures or equivalent processes disclosed herein or in the alternative, which may be employed directly or indirectly in other related arts.

Claims (10)

1. A method for improving Cache hit rate, applied to a Cache structure adopting a group association mapping relation, wherein the Cache structure is provided with a plurality of groups, each group comprises a plurality of ways, the ways in each group comprise two types, regular ways and reserved ways, and the reserved ways store Cache lines replaced from the regular ways in a first-in first-out manner, the method comprising the following steps:
determining, when a lookup request is received, a target group of a target Cache line in the Cache structure according to the lookup request;
if the target cache line is not in a regular way of the target group, determining that the target cache line is in a reserved way of the target group, and judging whether the regular ways have a vacancy;
if the regular ways have no vacancy, selecting any one of the regular ways as a hit way in a preset manner, placing the target cache line in the reserved way into the hit way, and returning the target cache line;
and placing, according to the position of the target cache line in the reserved ways, the original cache line corresponding to the hit way into the reserved ways.
2. The method for increasing cache hit rate according to claim 1, wherein the method further comprises:
when a new Cache line needs to be placed into a target group in the Cache structure, judging whether the regular ways in the target group have a vacancy;
if the regular ways in the target group have no vacancy, selecting any one of the regular ways as a replacement way in the preset manner, and placing the new cache line into the replacement way;
and if the reserved ways have no vacancy, placing, in the first-in first-out manner, the original cache line corresponding to the replacement way into the last of the reserved ways; storing the original cache line corresponding to each later reserved way in turn into the reserved way preceding it; and writing the original cache line corresponding to the frontmost reserved way back to main memory.
3. The method for improving cache hit rate according to claim 1, wherein the step of determining, when a lookup request is received, a target group of a target Cache line in the Cache structure according to the lookup request comprises:

when the lookup request is received, parsing the lookup request to obtain a parsing result;

judging, according to the parsing result, whether the target Cache line is in the Cache structure;

if the target Cache line is in the Cache structure, determining the target group of the target Cache line in the Cache structure;

and if the target Cache line is not in the Cache structure, forwarding the lookup request to a next-level Cache or main memory.
4. The method for improving cache hit rate according to claim 1, wherein after the step of, if the target cache line is not in a regular way of the target group, determining that the target cache line is in a reserved way of the target group and judging whether the regular ways have a vacancy, the method further comprises:
if the regular ways have a vacancy, selecting any one of the vacancies of the regular ways as the hit way in a preset manner;

and placing the target cache line in the reserved way into the hit way, and returning the target cache line.
5. The method for improving cache hit rate according to claim 1, wherein the step of placing, according to the position of the target cache line in the reserved ways, the original cache line corresponding to the hit way into the reserved ways comprises:

if the target cache line is stored in the last of the reserved ways, placing the original cache line corresponding to the hit way into the last of the reserved ways;

and if the target cache line is not stored in the last of the reserved ways, placing the original cache line corresponding to the hit way into the last of the reserved ways, and moving the original cache line corresponding to the last way into the reserved way preceding it.
6. The method for improving cache hit rate according to claim 2, wherein after the step of judging, when a new Cache line needs to be placed into a target group in the Cache structure, whether the regular ways in the target group have a vacancy, the method further comprises:
if the regular ways in the target group have a vacancy, selecting any one of the vacancies of the regular ways as the replacement way in the preset manner;

and placing the new cache line into the replacement way.
7. The method for improving cache hit rate according to claim 2, wherein after the step of, if the regular ways in the target group have no vacancy, selecting any one of the regular ways as a replacement way in the preset manner and placing the new cache line into the replacement way, the method further comprises:
if the reserved ways have a vacancy, placing the original cache line corresponding to the replacement way into the frontmost vacant way in the reserved ways.
8. A device for improving Cache hit rate, comprising a Cache structure adopting a group association mapping relation, wherein the Cache structure is provided with a plurality of groups, each group comprises a plurality of ways, the ways in each group comprise regular ways and reserved ways, and the reserved ways store Cache lines replaced from the regular ways in a first-in first-out manner, the device comprising:
a determining module, configured to determine, when a lookup request is received, a target group of a target Cache line in the Cache structure according to the lookup request;
a judging module, configured to, if the target cache line is not in a regular way of the target group, determine that the target cache line is in a reserved way of the target group and judge whether the regular ways have a vacancy;
a replacing module, configured to, if the regular ways have no vacancy, select any one of the regular ways as a hit way in a preset manner, place the target cache line in the reserved way into the hit way, and return the target cache line;
and the storage module is used for placing the original cache line corresponding to the hit path into the reserved path according to the position of the target cache line in the reserved path.
9. An apparatus for increasing cache hit rate, the apparatus comprising: a memory, a processor and a cache hit rate increasing program stored on the memory and executable on the processor, the cache hit rate increasing program being configured to implement the steps of the method of increasing cache hit rate as claimed in any one of claims 1 to 7.
10. A storage medium having stored thereon a program for improving the cache hit rate, wherein the program for improving the cache hit rate, when executed by a processor, implements the steps of the method for improving the cache hit rate according to any one of claims 1 to 7.
