CN113791989B - Cache-based cache data processing method, storage medium and chip


Info

Publication number: CN113791989B
Authority: CN (China)
Prior art keywords: cache, address, queue, cache line
Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis)
Application number: CN202111081748.5A
Other languages: Chinese (zh)
Other versions: CN113791989A
Inventors: 谢林庭, 卢知伯
Current assignee: Shenzhen Zhongke Lanxun Technology Co., Ltd. (the listed assignees may be inaccurate)
Application filed by Shenzhen Zhongke Lanxun Technology Co., Ltd.; priority to CN202111081748.5A; granted and published as CN113791989B.


Classifications

    • G06F12/0877 Cache access modes
    • G06F12/0895 Caches characterised by their organisation or structure of parts of caches, e.g. directory or tag array
    • G06F12/121 Replacement control using replacement algorithms
    • G06F2212/1016 Performance improvement
    • G06F2212/1024 Latency reduction
    • Y02D10/00 Energy efficient computing, e.g. low power processors, power management or thermal management


Abstract

The invention relates to the technical field of data caching and discloses a cache-based cache data processing method, a storage medium, and a chip. The cache-based cache data processing method comprises the following steps: traversing a first address lookup table of a first cache queue; determining cache lines meeting the memory address continuity condition as candidate cache lines according to the memory block starting address of each cache line; and processing the cache data of each candidate cache line according to its cache length and a preset length threshold. By combining memory-address continuity with the cache length of each candidate cache line, the embodiment keeps the cache data of several address-consecutive cache lines from being evicted too readily, and so avoids the extra loading time that would otherwise be spent reloading those evicted address-consecutive lines on the next access.

Description

Cache-based cache data processing method, storage medium and chip
Technical Field
The present invention relates to the field of data caching technologies, and in particular to a cache-based cache data processing method, a storage medium, and a chip.
Background
Cache technology is one of the core technologies of modern processor design; it effectively bridges the mismatch between processing speed and memory speed. The cache holds copies of memory data (cache data). When a master device accesses the memory, the access is redirected to the cache and the cache data is fetched from there. When the cache's storage space is full, cache data already loaded into the cache must be evicted.
The prior art provides various cache eviction algorithms. However, existing eviction algorithms all judge the caching condition of a single cache line in isolation (its access frequency, number of accesses within a specified period, write time, and so on) and decide from that condition alone whether to evict that line's cache data.
In application scenarios where the master device's accesses to the memory are highly sequential, existing eviction algorithms readily evict the cache data of one or more memory blocks whose start addresses are consecutive, yet loading the cache data of several such address-consecutive memory blocks from the memory into the cache takes considerable time. Once that data has been evicted and the master device needs it again, the cache controller must spend that loading time anew, which reduces data access efficiency.
Disclosure of Invention
An object of the embodiments of the present invention is to provide a cache-based cache data processing method, a storage medium, and a chip, so as to address the above technical defects in the prior art.
In a first aspect, an embodiment of the present invention provides a cache data processing method based on cache, including:
traversing a first address lookup table of a first cache queue, wherein the first cache queue comprises a plurality of cache lines, the cache length of each cache line is variable, each cache line is used for storing cache data of a mapping memory, and the first address lookup table comprises a memory block starting address corresponding to each cache line;
determining a cache line meeting the memory address continuity condition as a candidate cache line according to the memory block starting address of each cache line;
and processing the cache data of each candidate cache line according to the cache length of each candidate cache line and a preset length threshold value.
Optionally, the processing the cache data of each candidate cache line according to the cache length of each candidate cache line and a preset length threshold value includes:
starting from the initial candidate cache line, accumulating the cache length of each candidate cache line according to the address continuous sequence to obtain a length result after each accumulation;
judging whether the length result after each accumulation is larger than a preset length threshold value or not;
if yes, taking the candidate cache line participating in the accumulation process as a target cache line, and processing cache data of each target cache line;
if not, continuously accumulating the buffer length of each candidate buffer line according to the address continuous sequence.
Optionally, the processing the cache data of each target cache line includes:
acquiring a second cache queue and a second address lookup table;
and transferring all the cache data of the target cache line to the reference cache line in the second cache queue, and updating the first address lookup table and the second address lookup table.
Optionally, before traversing the first address lookup table of the first cache queue, the method further comprises:
determining a target cache queue according to the cache length of the cache data to be loaded and a preset length threshold;
and mapping the cache data to be loaded to a corresponding cache line of the target cache queue.
Optionally, the determining the target cache queue according to the cache length of the cache data to be loaded and the preset length threshold includes:
judging whether the cache length of the cache data to be loaded is greater than or equal to a preset length threshold value;
if yes, selecting the second cache queue as a target cache queue;
and if not, selecting the first cache queue as a target cache queue.
Optionally, determining, according to the starting address of the memory block of each cache line, the cache line meeting the continuous condition of the memory address as the candidate cache line includes:
calculating the end address of the memory block of each cache line according to the start address of the memory block of each cache line and the cache length;
and if the starting address of the memory block of one cache line and the ending address of the memory block of another cache line in the first cache queue are continuous, determining the one cache line and the other cache line as candidate cache lines.
Optionally, before traversing the first address lookup table of the first cache queue, the method further comprises:
detecting whether the first cache queue loads new cache data or not;
if yes, proceeding to the step of traversing the first address lookup table of the first cache queue;
if not, maintaining the cache state of the first cache queue.
In a second aspect, an embodiment of the present invention provides a storage medium storing computer executable instructions for causing an electronic device to execute the cache-based cache data processing method described above.
In a third aspect, an embodiment of the present invention provides a chip, including:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the cache-based cache data processing method described above.
In a fourth aspect, an embodiment of the present invention provides a cache controller, including:
the cache storage module comprises at least one cache queue, each cache queue comprises a plurality of cache lines, the cache length of each cache line is variable, and each cache line is used for storing cache data of a mapping memory;
the hit judgment module is used for judging, according to the access request sent by the master device, whether a corresponding cache line in the cache storage module is hit; if yes, controlling the cache storage module to exchange cache data with the master device, and if not, generating a loading command;
the cache line loading module is used for accessing the memory according to the loading command;
and the cache line updating module is used for updating the corresponding cache line of the cache storage module under the control of the cache line loading module.
In the cache-based cache data processing method above, a first address lookup table of a first cache queue is traversed; the first cache queue comprises a plurality of cache lines of variable cache length, each cache line stores cache data mapped from the memory, and the first address lookup table records the memory block starting address of each cache line. Cache lines satisfying the address continuity condition are determined as candidate cache lines according to those starting addresses, and the cache data of each candidate cache line is processed according to its cache length and a preset length threshold. By combining memory-address continuity with the cache length of each candidate cache line, the embodiment keeps the cache data of several address-consecutive cache lines from being evicted too readily, and so avoids the extra loading time that would otherwise be spent reloading those evicted address-consecutive lines on the next access.
Drawings
One or more embodiments are illustrated by way of example, and not limitation, in the figures of the accompanying drawings, in which like reference numerals indicate similar elements; unless otherwise indicated, the figures are not to be taken as limiting.
FIG. 1 is a schematic diagram of a cache system according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of a cache system according to another embodiment of the present invention;
FIG. 3 is a first schematic diagram of memory mapping to a first cache queue and a second cache queue according to an embodiment of the present invention;
FIG. 4 is a schematic flow chart of a cache-based cache data processing method according to an embodiment of the present invention;
FIG. 5a is a schematic flow chart of S42 shown in FIG. 4;
FIG. 5b is a schematic flow chart of S43 shown in FIG. 4;
FIG. 5c is a schematic flow chart of S433 shown in FIG. 5b;
FIG. 6 is a second schematic diagram of memory mapping to a first cache queue and a second cache queue according to an embodiment of the present invention;
FIG. 7a is a flowchart illustrating a cache-based cache data processing method according to another embodiment of the present invention;
FIG. 7b is a flowchart illustrating a cache-based cache data processing method according to another embodiment of the present invention;
FIG. 8a is a schematic diagram illustrating data transfer between a first cache queue and a second cache queue according to an embodiment of the present invention;
FIG. 8b is a schematic diagram illustrating data transfer among a first cache queue, a second cache queue, and a third cache queue according to an embodiment of the present invention;
FIG. 8c is a schematic diagram illustrating data transfer among a first cache queue, a second cache queue, a third cache queue, and a fourth cache queue according to an embodiment of the present invention;
fig. 9 is a schematic circuit diagram of a chip according to an embodiment of the present invention.
Detailed Description
The present invention will be described in further detail below with reference to the drawings and embodiments, in order to make the objects, technical solutions, and advantages of the present invention clearer. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the invention. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present invention without creative effort fall within the protection scope of the invention.
It should be noted that, where no conflict arises, the features of the embodiments of the present invention may be combined with one another, and such combinations fall within the protection scope of the present invention. In addition, although functional modules are divided in the device schematic diagrams and a logical order is shown in the flowcharts, in some cases the steps may be performed in an order different from the module division or the order shown. Furthermore, the words "first", "second", "third", and the like used herein do not limit the order of data or execution, but merely distinguish identical or similar items having substantially the same function and effect.
Referring to FIG. 1, a cache system 100 includes a master device 11, a cache controller 12, and a memory 13; the cache controller 12 is electrically connected to the master device 11 and the memory 13, respectively.
The master device 11 runs software programs and needs to fetch cache data from the memory 13. When the master device 11 accesses the memory 13, the access is redirected to the cache controller 12. If the memory address of a corresponding cache line in the cache controller 12 matches the address the master device 11 is accessing, the cache controller 12 hits, and the master device 11 reads the cache data directly from that cache line. If no cache line matches, the cache controller 12 misses; it then sends an access request to the memory 13 and loads from it cache data equal in size to the cache line length, so that the master device 11 can fetch the cache data from the cache controller 12.
In some embodiments, the master device 11 may be any suitable type of device, for example an electronic device such as an earphone or a camera module. It will be appreciated that, referring to FIG. 2, there may be a plurality of master devices 11, and the plurality of master devices 11 may access the memory 13 simultaneously.
In some embodiments, referring to FIG. 1, the cache controller 12 includes a cache storage module 121, a hit determination module 122, a cache line loading module 123, and a cache line updating module 124.
The cache storage module 121 includes at least one cache queue and an address lookup table corresponding to each cache queue. Each cache queue includes a plurality of cache lines, and each cache line stores cache data mapped from a corresponding memory block of the memory 13. The address lookup table is used by the hit determination module 122 for address comparison when deciding whether a cache line is hit.
The address lookup tables of different cache queues may be constructed in the form of corresponding data structures, and in some embodiments, the cache queues include a first cache queue, the address lookup table includes a first address lookup table, and the first cache queue includes a plurality of cache lines, where the cache lengths of the respective cache lines in the first cache queue may be the same or different.
The first address lookup table includes a memory block starting address, a cache starting address, and valid bit data for each cache line. The memory may be divided into memory blocks of different byte lengths; the memory block starting address is the first memory address of the block mapped to the cache line, and the cache starting address is the first cache address the block occupies in the cache queue after mapping. The valid bit data contains one valid bit per byte of cache data, each bit representing the validity of that byte: a valid bit of 0 marks the corresponding byte of cache data as valid, and a valid bit of 1 marks it as invalid.
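To make the table layout concrete, the sketch below models one lookup-table entry in C. It is only an illustration of the description above: the type and field names (lookup_entry_t, mem_start, cache_start, and so on) are ours rather than the patent's, MAX_LINE_BYTES is an assumed bound, and a real controller would pack the valid bits into registers rather than spend a byte per flag.

```c
#include <stdint.h>

#define MAX_LINE_BYTES 512  /* assumed upper bound on one cache line's length */

/* One entry of the first address lookup table, as described above
 * (illustrative field names; one valid flag per cached byte). */
typedef struct {
    uint32_t mem_start;             /* memory block starting address of the mapped block */
    uint32_t cache_start;           /* first cache address of the line in its queue */
    uint32_t length;                /* cache length of the line, in bytes */
    uint8_t  valid[MAX_LINE_BYTES]; /* 0 = byte valid, 1 = byte invalid */
} lookup_entry_t;
```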
Referring to FIG. 3, to map memory data of the memory 13 to the first cache queue 31, the memory 13 may be divided into a plurality of memory blocks and the relevant blocks mapped to the first cache queue 31. For example, the memory at addresses 0-29 may be taken as memory block A0, with a data length of 30 bytes; the memory at addresses 300-419 as memory block A1, with a data length of 120 bytes; the memory at addresses 800-819 as memory block A2, with a data length of 20 bytes; the memory at addresses 820-879 as memory block A3, with a data length of 60 bytes; the memory at addresses 880-1019 as memory block A4, with a data length of 140 bytes; and the memory at addresses 1200-1249 as memory block A5, with a data length of 50 bytes.
It will be appreciated that some memory data in the memory 13 shown in FIG. 3 has not yet been mapped into a cache queue; for visual clarity, the memory blocks of the memory 13 whose data is not yet mapped into a cache queue are drawn empty.
As shown in FIG. 3, in the first address lookup table 32, the cache length of the 0th cache line of the first cache queue 31 is 120 bytes, that of the 1st cache line is 30 bytes, that of the 2nd cache line is 20 bytes, that of the 3rd cache line is 60 bytes, that of the 4th cache line is 140 bytes, and that of the 5th cache line is 50 bytes; the cache lengths of the cache lines in the first cache queue may therefore differ.
As shown in fig. 3, the cache data of the memory block A1 is mapped to the 0 th cache line of the first cache queue 31, and the first memory address of the memory block A1 is "300", so the memory address "300" is the memory block start address of the memory block A1. Similarly, the cache data of the memory block A0 is mapped to the 1 st cache line, and the first memory address of the memory block A0 is "0", so the memory address "0" is the memory block start address of the memory block A0, and so on.
As shown in fig. 3, after the memory block A1 is mapped to the 0 th cache line of the first cache queue 31, its first cache address in the first cache queue 31 is "0". After the memory block A0 is mapped to the 1 st cache line of the first cache queue 31, its first cache address in the first cache queue 31 is "120". Similarly, after the memory block A2 is mapped to the 2 nd cache line of the first cache queue 31, its first cache address in the first cache queue 31 is "150".
As shown in FIG. 3, the cache length of the 0th cache line of the first cache queue 31 is 120 bytes, and all 120 bytes of its cache data are valid, so its valid bit data is 120 "0"s. Each valid bit represents the validity of one byte of cache data, and a valid bit of "0" marks that byte as valid; 120 "0"s therefore indicate that all 120 bytes of cache data in the 0th cache line are valid.
In some embodiments, the cache queue further includes a second cache queue, and the address lookup table further includes a second address lookup table. The second cache queue includes a plurality of cache lines and is used to store cache data whose cache length reaches the preset length threshold; the cache lengths of the cache lines in the second cache queue may be the same or different. The second address lookup table likewise includes a memory block starting address, a cache starting address, and valid bit data, whose meanings are as described above and are not repeated here.
With continued reference to FIG. 3, to map memory data of the memory 13 to the second cache queue 33, the memory 13 may be divided into a plurality of memory blocks and the relevant blocks mapped to the second cache queue 33. For example, the memory at addresses 30-239 may be taken as memory block B0, with a data length of 210 bytes; the memory at addresses 420-719 as memory block B1, with a data length of 300 bytes; and the memory at addresses 1250-1649 as memory block B2, with a data length of 400 bytes.
As shown in FIG. 3, in the second address lookup table 34, the cache length of the 0th cache line of the second cache queue 33 is 210 bytes, that of the 1st cache line is 400 bytes, and that of the 2nd cache line is 300 bytes; the cache lengths of the cache lines in the second cache queue may therefore differ.
As shown in fig. 3, the cache data of the memory block B0 is mapped to the 0 th cache line of the second cache queue 33, and the first memory address of the memory block B0 is "30", so the memory address "30" is the memory block start address of the memory block B0. Similarly, the cache data of the memory block B2 is mapped to the 1 st cache line of the second cache queue 33, and the first memory address of the memory block B2 is "1250", so the memory address "1250" is the memory block start address of the memory block B2. The cache data of the memory block B1 is mapped to the 2 nd cache line of the second cache queue 33, and the first memory address of the memory block B1 is "420", so the memory address "420" is the memory block start address of the memory block B1.
As shown in FIG. 3, after memory block B0 is mapped to the 0th cache line of the second cache queue 33, its first cache address in the second cache queue 33 is "1000". After memory block B2 is mapped to the 1st cache line of the second cache queue 33, its first cache address in the second cache queue 33 is "1210". Similarly, after memory block B1 is mapped to the 2nd cache line of the second cache queue 33, its first cache address in the second cache queue 33 is "1610".
As can be seen from FIG. 3, the first cache queue 31 may store cache data with a cache length below 200 bytes, while the second cache queue 33 may store cache data with a cache length of 200 bytes or more.
In some embodiments, the first cache queue 31, the first address lookup table 32, the second cache queue 33, and the second address lookup table 34 all use a circular round robin management scheme.
In some embodiments, the cache memory module 121 is a register set or RAM memory.
The hit determination module 122 is configured to traverse the address lookup table according to the access request to determine whether a cache line is hit; if so, it controls the cache storage module 121 to exchange cache data with the master device 11, and if not, it generates a load command.
In some embodiments, the access request carries a minimum memory request address and request memory size information, and the hit determination module 122 traverses the address lookup table to determine whether to hit the cache line according to the minimum memory request address and the request memory size information.
Specifically, the step of traversing the address lookup table according to the minimum memory request address and the request memory size information to determine whether a cache line is hit includes: adding the minimum memory request address and the request memory size information to obtain a maximum memory request address, and judging whether the minimum memory request address and the maximum memory request address both fall within a target address mapping range. Each cache line defines an address mapping range bounded by its memory block starting address and memory block ending address, where memory block ending address = memory block starting address + cache length - 1, and the target address mapping range is one of these per-line ranges. If the minimum and maximum memory request addresses both fall within the target address mapping range, the hit determination module 122 hits the cache line corresponding to that range; if either of them falls outside every address mapping range, the hit determination module 122 misses the cache lines of the address lookup table.
For example, the access request carries the minimum memory request address "300" and the request memory size information "30 bytes", and the hit determination module 122 adds them to obtain the maximum memory request address "330". The 0th cache line of the first cache queue 31 has a memory block starting address of "300" and a memory block ending address of "419", so the target address mapping range is [300,419]. Since the minimum memory request address "300" and the maximum memory request address "330" both fall within [300,419], the hit determination module 122 hits the 0th cache line of the first cache queue 31.
For another example, the access request carries the minimum memory request address "250" and the request memory size information "30 bytes", and the hit determination module 122 adds them to obtain the maximum memory request address "280". Since the maximum memory request address "280" falls within no address mapping range, the hit determination module 122 misses every cache line in the first cache queue 31 and the second cache queue 33.
For another example, the access request carries the minimum memory request address "280" and the request memory size information "30 bytes", and the hit determination module 122 adds them to obtain the maximum memory request address "310". Although "310" falls within the address mapping range [300,419], the minimum memory request address "280" does not, so the hit determination module 122 misses every cache line in the first cache queue 31 and the second cache queue 33.
For another example, the access request carries the minimum memory request address "430" and the request memory size information "60 bytes", and the hit determination module 122 adds them to obtain the maximum memory request address "490". Although "490" falls within no address mapping range of the first cache queue 31, it falls within the address mapping range [420,719] of the second cache queue 33, and the minimum memory request address "430" also falls within [420,719]; the hit determination module 122 therefore hits the 2nd cache line of the second cache queue 33.
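The hit test just illustrated can be condensed into a few lines. The sketch below reuses the hypothetical lookup_entry_t from earlier and follows the text's convention that the maximum memory request address is the minimum address plus the request size; it is a hedged sketch, not the patent's actual circuit logic.

```c
/* Returns 1 when both the minimum and maximum memory request addresses fall
 * within the line's address mapping range [mem_start, mem_start + length - 1]. */
int hits_line(const lookup_entry_t *e, uint32_t min_req, uint32_t req_size)
{
    uint32_t max_req = min_req + req_size;            /* maximum memory request address */
    uint32_t mem_end = e->mem_start + e->length - 1;  /* memory block ending address */
    return min_req >= e->mem_start && max_req <= mem_end;
}
```

With the figures above, a request (300, 30 bytes) against the line mapped at [300,419] yields max_req = 330 and hits, while (250, 30 bytes) yields max_req = 280 and matches no line.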
If the hit judgment module 122 hits a cache line in the first cache queue 31 or the second cache queue 33, it calculates the minimum and maximum cache request addresses from the cache starting address, memory block starting address, and request memory size information of the hit cache line, and controls the master device to read data from the cache storage module accordingly, where minimum cache request address = minimum memory request address - memory block starting address + cache starting address, and maximum cache request address = minimum cache request address + request memory size.
For example, the hit determination module 122 hits the 0th cache line of the first cache queue 31, where the access request carries the minimum memory request address "300" and the request memory size information "30 bytes". The hit determination module 122 queries the first address lookup table and obtains a cache starting address of "0" and a memory block starting address of "300" for the 0th cache line, so the minimum cache request address = 300 - 300 + 0 = 0 and the maximum cache request address = 0 + 30 = 30; the hit determination module 122 then controls the master device to read data from the cache storage module according to the minimum cache request address "0" and the maximum cache request address "30".
For another example, the hit determination module 122 hits the 2nd cache line of the second cache queue 33, where the access request carries the minimum memory request address "430" and the request memory size information "60 bytes". The hit determination module 122 queries the second address lookup table and obtains a cache starting address of "1610" and a memory block starting address of "420" for the 2nd cache line, so the minimum cache request address = 430 - 420 + 1610 = 1620 and the maximum cache request address = 1620 + 60 = 1680; the hit determination module 122 then controls the master device to read data from the cache storage module according to the minimum cache request address "1620" and the maximum cache request address "1680".
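The address translation in these two examples follows directly from the formulas above; a minimal sketch, again on the hypothetical lookup_entry_t:

```c
/* Converts a hit memory request into a cache address range:
 * min cache addr = min memory addr - memory block start + cache start,
 * max cache addr = min cache addr + request size. */
void to_cache_range(const lookup_entry_t *e, uint32_t min_req, uint32_t req_size,
                    uint32_t *cache_min, uint32_t *cache_max)
{
    *cache_min = min_req - e->mem_start + e->cache_start;
    *cache_max = *cache_min + req_size;
}
```

For the second example, (430, 60 bytes) against the line with mem_start 420 and cache_start 1610 gives the cache range [1620, 1680], matching the text.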
If the hit determination module 122 does not hit any cache line of the first cache queue 31 and the second cache queue 33, the hit determination module 122 transmits a load command to the cache line loading module 123, and the cache line loading module 123 is configured to access the memory 13 according to the load command.
The cache line update module 124 is configured to perform a line update on the corresponding cache line of the cache storage module 121, for example, load the cache data from the memory 13 to the first cache queue 31 or the second cache queue 33, and update the first address lookup table 32 or the second address lookup table 34 under the control of the cache line load module 123.
It is understood that the hit determination module 122, the cache line loading module 123 and the cache line updating module 124 may be chip design circuits with logic operation and memory functions.
An embodiment of the present invention provides a cache-based cache data processing method. Referring to FIG. 4, the cache-based cache data processing method S400 includes:
S41, traversing a first address lookup table of a first cache queue;
the first cache queue comprises a plurality of cache lines, the cache length of each cache line is variable, each cache line is used for storing cache data of the mapping memory, and the first address lookup table comprises a memory block starting address corresponding to each cache line.
S42, determining the cache line meeting the address continuous condition as a candidate cache line according to the initial address of the memory block of each cache line;
by way of example and not limitation, satisfying the address continuation condition refers to a memory block start address mapped to one memory block in the first cache queue being adjacent to a memory block end address of another memory block. For example, referring to fig. 3, the memory block A2 is mapped to the 2 nd cache line of the first cache queue 31, the memory block A2 has a memory block start address of "800" and a memory block end address of "819". The memory block A3 is mapped to the 3 rd cache line of the first cache queue 31, the memory block start address of the memory block A3 is "820", and the memory block end address is "879". The memory block A4 is mapped to the 4 th cache line of the first cache queue 31, the memory block start address of the memory block A4 is "880", and the memory block end address is "1019". The memory block ending address "819" of the memory block A2 is consecutive to the memory block starting address "820" of the memory block A3, and the memory block ending address "879" of the memory block A3 is consecutive to the memory block starting address "880" of the memory block A4, so that the cache line 2 of the first cache line 31, the cache line 3 of the first cache line 31, and the cache line 4 of the first cache line 31 are candidates.
S43, processing the cache data of each candidate cache line according to the cache length of each candidate cache line and a preset length threshold.
By combining memory-address continuity with the cache length of each candidate cache line when processing its cache data, this embodiment keeps the cache data of several address-consecutive cache lines from being evicted too readily, and so avoids the extra loading time that would otherwise be spent reloading those evicted address-consecutive lines on the next access.
In some embodiments, referring to FIG. 5a, S42 includes:
S421, calculating the ending address of the memory block of each cache line according to the starting address of the memory block of each cache line and the cache length;
S422, if the starting address of the memory block of one cache line and the ending address of the memory block of another cache line in the first cache queue are continuous, determining that the two cache lines are candidate cache lines;
S423, if the starting address of the memory block of one cache line and the ending address of the memory block of another cache line in the first cache queue are discontinuous, determining that the two cache lines are not candidate cache lines.
As described above, the hit determination module searches the first address lookup table. The memory block starting address of the 2nd cache line of the first cache queue 31 is "800" and its cache length is 20, so its memory block ending address = 800 + 20 - 1 = 819. The starting address of the 3rd cache line is "820" and its cache length is 60, so its ending address = 820 + 60 - 1 = 879. The starting address of the 4th cache line is "880" and its cache length is 140, so its ending address = 880 + 140 - 1 = 1019.
Since the memory block ending address "819" of memory block A2 is consecutive with the starting address "820" of memory block A3, and the ending address "879" of A3 is consecutive with the starting address "880" of A4, the 2nd, 3rd, and 4th cache lines of the first cache queue 31 are candidate cache lines.
For the 1st cache line of the first cache queue 31, its memory block starting address is "0" and its cache length is 30, so its memory block ending address = 0 + 30 - 1 = 29. Since the ending address "29" of the 1st cache line is not consecutive with the starting address "800" of the 2nd cache line, the 1st and 2nd cache lines are not candidate cache lines with respect to each other.
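Steps S421 to S423 amount to one comparison per pair of lines. A hedged sketch, with helper names of our choosing:

```c
/* Memory block ending address of a line (S421): start + length - 1. */
static uint32_t mem_block_end(const lookup_entry_t *e)
{
    return e->mem_start + e->length - 1;
}

/* S422/S423: line b continues line a when b's memory block starting address
 * immediately follows a's ending address, e.g. ending 819 followed by start 820. */
static int is_consecutive(const lookup_entry_t *a, const lookup_entry_t *b)
{
    return b->mem_start == mem_block_end(a) + 1;
}
```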
In some embodiments, referring to FIG. 5b, S43 includes:
S431, starting from the initial candidate cache line, accumulating the cache length of each candidate cache line in the address-consecutive order to obtain a length result after each accumulation;
S432, judging whether the length result after each accumulation is larger than a preset length threshold value;
S433, if so, taking the candidate cache lines participating in the accumulation process as target cache lines, and processing the cache data of each target cache line;
S434, if not, continuing to accumulate the cache length of each candidate cache line in the address-consecutive order.
By way of example and not limitation, the initial candidate cache line may be the 2nd cache line of the first cache queue 31 or the 4th cache line of the first cache queue 31, that is, the candidate cache line whose memory block starting address is the smallest or the largest among all candidate cache lines. The address-consecutive order is the order of memory block starting addresses, either ascending or descending; for example, the ascending order here is starting address "800" - starting address "820" - starting address "880".
When executing step S431, this embodiment starts, for example, from the 2nd cache line of the first cache queue 31 and, in the address-consecutive order "800-820-880", accumulates the cache length "20" of the 2nd cache line with the cache length "60" of the 3rd cache line, obtaining the length result after the first accumulation: 20 + 60 = 80.
When executing step S432, this embodiment judges whether the length result after the first accumulation is greater than a preset length threshold, which is user-defined, for example 200. Since the length result "80" is smaller than the threshold "200", step S434 is performed: the cache lengths of the candidate cache lines continue to be accumulated in the address-consecutive order.
This embodiment then accumulates the cache length "140" of the 4th cache line of the first cache queue 31 with the previous result, obtaining the length result after the second accumulation: 80 + 140 = 220.
When executing step S432 again, this embodiment judges whether the length result after the second accumulation is greater than the preset length threshold. Since "220" exceeds "200", step S433 is performed: the candidate cache lines that participated in the accumulation are taken as target cache lines, and their cache data is processed. The 2nd, 3rd, and 4th cache lines of the first cache queue 31 all participated in the accumulation, so all three are target cache lines.
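The accumulation of S431 to S434 can be sketched as a single pass over the candidates, assuming they are already sorted by memory block starting address; the function name and array representation are illustrative only:

```c
/* Walks the candidate lines in address-consecutive order, accumulating cache
 * lengths (S431) and comparing each running total with the threshold (S432).
 * Returns 1 and the number of target lines once the total exceeds the
 * threshold (S433); returns 0 if the candidates are exhausted first. */
int pick_targets(const lookup_entry_t *cand, int n_cand,
                 uint32_t threshold, int *n_targets)
{
    uint32_t total = 0;
    for (int i = 0; i < n_cand; i++) {
        total += cand[i].length;     /* length result after this accumulation */
        if (total > threshold) {
            *n_targets = i + 1;      /* candidates 0..i all participated */
            return 1;
        }
    }
    *n_targets = 0;
    return 0;
}
```

Fed the lengths 20, 60, 140 with threshold 200, the running totals are 20, 80, 220; the pass stops at 220 and reports three target lines, matching the walkthrough above.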
In some embodiments, when processing the cache data of each target cache line, referring to FIG. 5c, S433 includes:
S4331, acquiring a second cache queue and a second address lookup table;
S4332, transferring the cache data of all the target cache lines to the reference cache line in the second cache queue, and updating the first address lookup table and the second address lookup table.
In some embodiments, when updating the first address lookup table, all valid bit data corresponding to the target cache lines is set to the invalid state, that is, every valid bit of the target cache lines is set to 1. For example, referring to FIG. 6, for the 2nd, 3rd, and 4th cache lines of the first cache queue 61, the first address lookup table 62 records 20 "1"s as the valid bit data of the 2nd cache line, 60 "1"s for the 3rd cache line, and 140 "1"s for the 4th cache line.
In some embodiments, when updating the second address lookup table, the smallest memory address among all target cache lines is selected as the memory block starting address of the reference cache line, and the cache starting address of the reference cache line is calculated from the cache starting address and cache length of the nearest cache line, that is, the cache line logically adjacent to the reference cache line in the second address lookup table. The total cache length of all target cache lines is used as the cache length of the reference cache line.
Since the cache data of the reference cache line is valid, the embodiment determines the valid bit data of the reference cache line according to the cache length of the reference cache line.
For example, with continued reference to FIG. 6, the smallest memory address among the 2nd, 3rd, and 4th cache lines of the first cache queue 61 is "800", so the memory block starting address of the reference cache line S0 is "800". In the second address lookup table 64 of the second cache queue 63, the 2nd cache line of the second cache queue 63 is logically adjacent to the reference cache line S0, so it is the nearest cache line.
This embodiment adds the cache starting address "1610" of the 2nd cache line of the second cache queue 63 to its cache length "300" to obtain the cache starting address "1910" of the reference cache line S0.
The present embodiment uses the total cache length of all the target cache lines as the cache length of the reference cache line, that is, the cache length of the reference cache line=20+60+140=220.
Since the cache data of the reference cache line is valid, its valid bit data is 220 "0"s.
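Putting the two table updates together, a rough sketch of the merge, again on the hypothetical lookup_entry_t, assuming the targets are sorted by memory block starting address and the merged length fits within MAX_LINE_BYTES:

```c
#include <string.h>

/* Builds the reference cache line's entry from the target lines and
 * invalidates the targets in the first address lookup table. */
void merge_targets(lookup_entry_t *targets, int n,
                   const lookup_entry_t *nearest, lookup_entry_t *ref)
{
    ref->mem_start   = targets[0].mem_start;              /* smallest memory address */
    ref->cache_start = nearest->cache_start + nearest->length;
    ref->length      = 0;
    for (int i = 0; i < n; i++) {
        ref->length += targets[i].length;                 /* total cache length */
        memset(targets[i].valid, 1, targets[i].length);   /* 1 = invalid */
    }
    memset(ref->valid, 0, ref->length);                   /* merged data is valid */
}
```

With the figures above this yields mem_start 800, cache_start 1910, and length 220, as in FIG. 6.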
In general, referring to FIG. 3, if the master device's accesses to the memory are highly sequential, for example it must frequently and repeatedly access memory blocks A2, A3, and A4 in succession, the cache loads A2, A3, and A4 into separate cache lines of the first cache queue 31. If, after some processing, the first cache queue 31 evicts A2, A3, and A4 according to a cache eviction algorithm, then the next time the master device needs to access them the cache must spend considerable time reloading A2, A3, and A4 from the memory, which in highly sequential access patterns reduces data access efficiency.
Referring to FIG. 6, this embodiment transfers memory blocks A2, A3, and A4 from the first cache queue to a single cache line of the second cache queue as a whole. On one hand, the first cache queue regains room for shorter cache data accessed by the master device; on the other hand, by moving several memory blocks with consecutive memory addresses to the second cache queue in advance, the first cache queue's eviction algorithm is prevented from readily deleting those short, address-consecutive memory blocks, and the extra loading time that the next access would otherwise spend reloading the evicted address-consecutive lines is avoided, thereby improving data access efficiency.
In some embodiments, referring to FIG. 7a, before traversing the first address lookup table of the first cache queue, the cache-based cache data processing method S400 further includes:
S44, determining a target cache queue according to the cache length of the cache data to be loaded and a preset length threshold;
S45, mapping the cache data to be loaded into a corresponding cache line of the target cache queue.
In some embodiments, when executing S44, the embodiment judges whether the cache length of the cache data to be loaded is greater than or equal to the preset length threshold; if so, the second cache queue is selected as the target cache queue, and if not, the first cache queue is selected as the target cache queue.
For example, if the cache length of the cache data to be loaded is 120 and the preset length threshold is 200, the first cache queue is selected as the target cache queue because the cache length is smaller than the threshold, and the cache data to be loaded is mapped to a corresponding cache line of the first cache queue. Similarly, if the cache length of the cache data to be loaded is 250, the second cache queue is selected as the target cache queue because the cache length is greater than the threshold, and the cache data to be loaded is mapped into a corresponding cache line of the second cache queue.
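The routing rule of S44 is a one-line comparison; a sketch with illustrative names:

```c
typedef enum { FIRST_QUEUE, SECOND_QUEUE } queue_id_t;

/* Cache data whose length reaches the preset threshold goes to the second
 * cache queue; shorter data goes to the first. */
queue_id_t pick_queue(uint32_t load_length, uint32_t threshold)
{
    return (load_length >= threshold) ? SECOND_QUEUE : FIRST_QUEUE;
}
```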
In some embodiments, deferring the transfer of cache data from a cache line in the first cache queue to a cache line in the second cache queue can be preferable to transferring it immediately. In the immediate-transfer mode, once the data is moved, the storage space of the corresponding cache line in the first cache queue is vacated; meanwhile, although the second cache queue now holds one more cache line of data, it may evict the data of some cache line according to its eviction policy. The two queues together then hold one cache line of data fewer, and when the master device next accesses the cache, the lost data may be exactly the data it needs to fetch. The immediate-transfer mode can therefore lower the data hit rate and fails to use the cache space effectively.
Thus, in some embodiments, referring to FIG. 7b, prior to traversing the first address lookup table of the first cache queue, the cache-based cache data processing method S400 further comprises:
S46, detecting whether the first cache queue has loaded new cache data; if yes, proceeding to steps S41 and S47, and if not, maintaining the cache state of the first cache queue.
By adopting the method, the data hit rate is improved and the cache space is efficiently utilized.
For example, referring to FIG. 8a, when the above condition holds for the first cache queue 81, that is, new cache data is loaded into it, the first cache queue 81 transfers the corresponding cache data to a single cache line of the second cache queue 82 and receives the new cache data. The first cache queue 81 and the second cache queue 82 may also select a corresponding cache eviction algorithm to evict data, for example the LRU, LFU, FIFO, 2Q (two queues), or MQ (multiple queues) algorithm.
Referring to FIG. 8b, the first cache queue 81 and the second cache queue 82 may transfer cache data to be evicted to the third cache queue 83, and the third cache queue 83 may in turn select a corresponding cache eviction algorithm to evict data.
Referring to FIG. 8c, this embodiment transfers the cache data to be evicted in the third cache queue 83 to the fourth cache queue 84 according to a cache-length eviction algorithm. In addition, this embodiment evicts the cache data in the fourth cache queue 84 according to a preset eviction algorithm.
It should be noted that the steps in the foregoing embodiments do not necessarily follow a fixed order; those skilled in the art will understand from the description of the embodiments that, in different embodiments, the steps may be performed in different orders, in parallel, or interleaved.
Referring to FIG. 9, FIG. 9 is a schematic circuit diagram of a chip according to an embodiment of the invention. As shown in FIG. 9, the chip 900 includes one or more processors 91 and a memory 92; one processor 91 is taken as an example in FIG. 9.
The processor 91 and the memory 92 may be connected by a bus or other means; a bus connection is taken as an example in FIG. 9.
The memory 92 is used as a non-volatile computer readable storage medium for storing non-volatile software programs, non-volatile computer executable programs and modules, such as program instructions/modules corresponding to the cache-based cache data processing method in the embodiment of the present invention. The processor 91 implements the functions of the cache-based cache data processing method provided in the above method embodiment by running nonvolatile software programs, instructions, and modules stored in the memory 92.
Memory 92 may include high-speed random access memory, but may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device. In some embodiments, memory 92 may optionally include memory remotely located relative to processor 91, which may be connected to processor 91 via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The program instructions/modules are stored in the memory 92 that, when executed by the one or more processors 91, perform the cache-based cache data processing method of any of the method embodiments described above.
Embodiments of the present invention also provide a non-volatile computer storage medium storing computer executable instructions that are executed by one or more processors, such as the processor 91 in FIG. 9, to enable the one or more processors to perform the cache-based cache data processing method in any of the above-described method embodiments.
Embodiments of the present invention also provide a computer program product comprising a computer program stored on a non-volatile computer readable storage medium, the computer program comprising program instructions which, when executed by a chip, cause the chip to perform any of the cache-based cache data processing methods described in the claims.
The apparatus or device embodiments described above are merely illustrative: the unit modules described as separate components may or may not be physically separate, and the components shown as unit modules may or may not be physical units; they may be located in one place or distributed over multiple network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
From the above description of the embodiments, it will be apparent to those skilled in the art that the embodiments may be implemented by means of software plus a general-purpose hardware platform, or by hardware alone. Based on this understanding, the foregoing technical solution, in essence or in the part that contributes to the related art, may be embodied in the form of a software product stored in a computer-readable storage medium, such as a ROM/RAM, a magnetic disk, or an optical disk, and including several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform the method described in the respective embodiments or in some parts of the embodiments.
Finally, it should be noted that the above embodiments are only intended to illustrate the technical solution of the present invention, not to limit it. The technical features of the above embodiments, or of different embodiments, may be combined within the idea of the invention, and the steps may be implemented in any order; many other variations of the different aspects of the invention exist as described above, and they are not provided in detail for the sake of brevity. Although the invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art will understand that the technical solutions described in the foregoing embodiments may still be modified, or some of their technical features may be replaced by equivalents, and such modifications and substitutions do not depart from the spirit of the invention.

Claims (8)

1. A cache-based cache data processing method, characterized by comprising the following steps:
traversing a first address lookup table of a first cache queue, wherein the first cache queue comprises a plurality of cache lines, the cache length of each cache line is variable, each cache line is used for storing cache data mapped from a memory, and the first address lookup table comprises a memory block starting address corresponding to each cache line;
determining cache lines meeting a memory address continuity condition as candidate cache lines according to the memory block starting address of each cache line;
starting from the initial candidate cache line, accumulating the cache length of each candidate cache line in address-contiguous order to obtain a length result after each accumulation;
judging whether the length result after each accumulation is greater than a preset length threshold;
if yes, taking the candidate cache lines participating in the accumulation process as target cache lines, acquiring a second cache queue and a second address lookup table, transferring all cache data of the target cache lines to a reference cache line in the second cache queue, and updating the first address lookup table and the second address lookup table;
if not, continuing to accumulate the cache length of each candidate cache line in address-contiguous order.
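Purely as an informal illustration of the procedure recited above (a sketch under one reading of the claim, not the patented implementation), the following Python function groups address-contiguous cache lines and compares the accumulated cache length against the preset threshold; every identifier is hypothetical.

```python
def find_target_cache_lines(first_lookup_table, length_threshold):
    """first_lookup_table: list of (start_address, cache_length) pairs,
    one pair per cache line of the first cache queue.
    Returns groups of address-contiguous cache lines whose accumulated
    cache length exceeds the threshold; each group corresponds to the
    target cache lines transferred to a reference cache line in the
    second cache queue."""
    lines = sorted(first_lookup_table)      # traverse in address order
    targets, run, run_length = [], [], 0
    for start, length in lines:
        # Continuity condition: this line must begin where the previous ends.
        if run and start != run[-1][0] + run[-1][1]:
            run, run_length = [], 0         # continuity broken: restart the run
        run.append((start, length))
        run_length += length                # accumulate the cache lengths
        if run_length > length_threshold:   # compare with the preset threshold
            targets.append(run)             # these lines become target cache lines
            run, run_length = [], 0
    return targets
```

For instance, with cache lines [(0, 4), (4, 4), (8, 4)] and a threshold of 8, all three lines end up in one target group with an accumulated length of 12.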
2. The method of claim 1, further comprising, prior to traversing the first address lookup table of the first cache queue:
determining a target cache queue according to the cache length of the cache data to be loaded and a preset length threshold;
and mapping the cache data to be loaded to a corresponding cache line of the target cache queue.
3. The method of claim 2, wherein determining the target cache queue according to the cache length of the cache data to be loaded and the preset length threshold comprises:
judging whether the cache length of the cache data to be loaded is greater than or equal to the preset length threshold;
if yes, selecting the second cache queue as a target cache queue;
and if not, selecting the first cache queue as a target cache queue.
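Informally, the selection recited in claim 3 reduces to a single comparison, as in this hypothetical Python sketch:

```python
def select_target_queue(data_length, length_threshold, first_queue, second_queue):
    # Data at or above the threshold maps to the second (long-line) queue;
    # shorter data is staged in the first queue for later merging.
    if data_length >= length_threshold:
        return second_queue
    return first_queue
```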
4. A method according to any one of claims 1 to 3, wherein said determining, as candidate cache lines, cache lines satisfying the memory address continuity condition based on the memory block starting address of each of said cache lines comprises:
calculating the memory block ending address of each cache line according to the memory block starting address and the cache length of each cache line;
and if the memory block starting address of one cache line is contiguous with the memory block ending address of another cache line in the first cache queue, determining both of those cache lines as candidate cache lines.
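As an informal reading of claim 4's test (assuming an exclusive ending address, i.e. end = start + length, a convention the claim itself does not fix), with hypothetical names:

```python
def block_end_address(start_address, cache_length):
    # Exclusive ending address of a cache line's memory block
    # (an assumed convention; the claim only says it is computed
    # from the starting address and the cache length).
    return start_address + cache_length

def are_candidates(line_a, line_b):
    """Each line is a (start_address, cache_length) pair. Both lines
    are candidate cache lines when one begins exactly where the
    other ends."""
    return (line_b[0] == block_end_address(*line_a)
            or line_a[0] == block_end_address(*line_b))
```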
5. A method according to any one of claims 1 to 3, further comprising, prior to traversing the first address lookup table of the first cache queue:
detecting whether the first cache queue has loaded new cache data;
if yes, entering the step of traversing the first address lookup table of the first cache queue;
if not, maintaining the cache state of the first cache queue.
6. A storage medium storing computer-executable instructions for causing an electronic device to perform the cache-based cache data processing method of any one of claims 1 to 5.
7. A chip, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the cache-based cache data processing method of any one of claims 1 to 5.
8. A cache controller, comprising:
the cache storage module comprises at least one cache queue, each cache queue comprises a plurality of cache lines, the cache length of each cache line is variable, each cache line is used for storing cache data mapped from a memory, and a first address lookup table comprises a memory block starting address corresponding to each cache line;
the hit judging module is used for judging, according to an access request sent by a master device, whether a corresponding cache line in the cache storage module is hit, controlling the cache storage module to interact with the master device if yes, and generating a loading command if not; the hit judging module is further used for determining cache lines meeting the memory address continuity condition as candidate cache lines according to the memory block starting address of each cache line, accumulating, starting from the initial candidate cache line, the cache length of each candidate cache line in address-contiguous order to obtain a length result after each accumulation, judging whether the length result after each accumulation is greater than a preset length threshold, if yes, taking the candidate cache lines participating in the accumulation process as target cache lines, acquiring a second cache queue and a second address lookup table, transferring all cache data of the target cache lines to a reference cache line in the second cache queue, and updating the first address lookup table and the second address lookup table, and if not, continuing to accumulate the cache length of each candidate cache line in address-contiguous order;
the cache line loading module is used for accessing the memory according to the loading command;
and the cache line updating module is used for updating the corresponding cache line of the cache storage module under the control of the cache line loading module.
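As a loose software analogue of the hit judging module recited above (real hardware would use tag comparison logic rather than a loop; all names here are hypothetical):

```python
def judge_hit(request_address, address_lookup_table):
    """address_lookup_table: dict mapping each cache line's memory block
    starting address to its cache length. Returns the starting address
    of the hit cache line, or None, in which case the controller would
    generate a loading command for the cache line loading module."""
    for start, length in address_lookup_table.items():
        if start <= request_address < start + length:
            return start
    return None
```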
CN202111081748.5A 2021-09-15 2021-09-15 Cache-based cache data processing method, storage medium and chip Active CN113791989B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111081748.5A CN113791989B (en) 2021-09-15 2021-09-15 Cache-based cache data processing method, storage medium and chip

Publications (2)

Publication Number Publication Date
CN113791989A (en) 2021-12-14
CN113791989B (en) 2023-07-14

Family

ID=78878470

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111081748.5A Active CN113791989B (en) 2021-09-15 2021-09-15 Cache-based cache data processing method, storage medium and chip

Country Status (1)

Country Link
CN (1) CN113791989B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115809208B (en) * 2023-01-19 2023-07-21 北京象帝先计算技术有限公司 Cache data refreshing method and device, graphics processing system and electronic equipment

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101826056A (en) * 2009-02-20 2010-09-08 ARM Ltd Data processing equipment and method
CN103761052A (en) * 2013-12-28 2014-04-30 华为技术有限公司 Method for managing cache and storage device
WO2019127104A1 (en) * 2017-12-27 2019-07-04 华为技术有限公司 Method for resource adjustment in cache, data access method and device
CN110119487A (en) * 2019-04-15 2019-08-13 华南理工大学 A kind of buffering updating method suitable for divergence data
CN111290974A (en) * 2018-12-07 2020-06-16 北京忆恒创源科技有限公司 Cache elimination method for storage device and storage device
CN111367833A (en) * 2020-03-31 2020-07-03 中国建设银行股份有限公司 Data caching method and device, computer equipment and readable storage medium
WO2020199061A1 (en) * 2019-03-30 2020-10-08 华为技术有限公司 Processing method and apparatus, and related device
CN113138944A (en) * 2020-01-20 2021-07-20 华为技术有限公司 Data caching method and related product
CN113342709A (en) * 2021-06-04 2021-09-03 海光信息技术股份有限公司 Method for accessing data in a multiprocessor system and multiprocessor system

Also Published As

Publication number Publication date
CN113791989A (en) 2021-12-14

Similar Documents

Publication Publication Date Title
US10963387B2 (en) Methods of cache preloading on a partition or a context switch
US10558577B2 (en) Managing memory access requests with prefetch for streams
US9361236B2 (en) Handling write requests for a data array
CN105740164B (en) Multi-core processor supporting cache consistency, reading and writing method, device and equipment
US8095734B2 (en) Managing cache line allocations for multiple issue processors
US10725923B1 (en) Cache access detection and prediction
JP4451717B2 (en) Information processing apparatus and information processing method
US8656119B2 (en) Storage system, control program and storage system control method
JP2004038345A (en) Prefetch control device, information processor, and prefetch control process
JP2018005395A (en) Arithmetic processing device, information processing device and method for controlling arithmetic processing device
CN113760787B (en) Multi-level cache data push system, method, apparatus, and computer medium
US20210064545A1 (en) Home agent based cache transfer acceleration scheme
KR100987996B1 (en) Memory access control apparatus and memory access control method
CN113791989B (en) Cache-based cache data processing method, storage medium and chip
CN114217861A (en) Data processing method and device, electronic device and storage medium
US8250304B2 (en) Cache memory device and system with set and group limited priority and casting management of I/O type data injection
CN109669881B (en) Computing method based on Cache space reservation algorithm
US20130339624A1 (en) Processor, information processing device, and control method for processor
US20170046262A1 (en) Arithmetic processing device and method for controlling arithmetic processing device
US10545875B2 (en) Tag accelerator for low latency DRAM cache
JP2013041414A (en) Storage control system and method, and replacement system and method
JP4742432B2 (en) Memory system
JP2001249846A (en) Cache memory device and data processing system
US8484423B2 (en) Method and apparatus for controlling cache using transaction flags
JP7311959B2 (en) Data storage for multiple data types

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant