CN117539796A - Electronic device and buffer memory management method - Google Patents

Electronic device and buffer memory management method

Info

Publication number
CN117539796A
Authority
CN
China
Prior art keywords
cache
target
pool
blocks
size
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202410026373.XA
Other languages
Chinese (zh)
Other versions
CN117539796B (en)
Inventor
钟威
朱凯迪
王志
吴宗霖
朱启傲
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hosin Global Electronics Co Ltd
Original Assignee
Hosin Global Electronics Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hosin Global Electronics Co Ltd filed Critical Hosin Global Electronics Co Ltd
Priority to CN202410026373.XA priority Critical patent/CN117539796B/en
Publication of CN117539796A publication Critical patent/CN117539796A/en
Application granted granted Critical
Publication of CN117539796B publication Critical patent/CN117539796B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F12/00Accessing, addressing or allocating within memory systems or architectures
    • G06F12/02Addressing or allocation; Relocation
    • G06F12/0223User address space allocation, e.g. contiguous or non contiguous base addressing
    • G06F12/023Free address space management
    • G06F12/0238Memory management in non-volatile memory, e.g. resistive RAM or ferroelectric memory
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F12/00Accessing, addressing or allocating within memory systems or architectures
    • G06F12/02Addressing or allocation; Relocation
    • G06F12/08Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
    • G06F12/0802Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
    • G06F12/0893Caches characterised by their organisation or structure

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Memory System Of A Hierarchy Structure (AREA)

Abstract

An electronic device and a buffer memory management method, the buffer memory management method includes: dividing a storage space of a buffer memory into a plurality of buffer pools, wherein each buffer pool comprises a plurality of buffer blocks, the standard space sizes of the buffer blocks belonging to the same buffer pool are the same, and the standard space sizes of the buffer blocks belonging to different buffer pools are different; sequencing the plurality of cache pools from small to large according to the standard space sizes corresponding to the cache pools; selecting a target cache pool from the plurality of cache pools according to the target data size, a plurality of standard space sizes corresponding to the plurality of cache pools and the arrangement sequence of the plurality of cache pools, wherein the target data size is not larger than the target standard space size of the target cache pool; and storing the target cache data into a blank target cache block of the target cache pool.

Description

Electronic device and buffer memory management method
Technical Field
The present invention relates to a memory management technology, and more particularly, to a buffer memory management method for a buffer memory and an electronic device using the buffer memory.
Background
Random Access Memory (RAM) is an integral part of embedded systems. In conventional embedded software designs, each piece of software is given RAM at a fixed address and of a fixed size. This design is simple, but it lacks flexibility and performs poorly in some complex scenarios.
A similar problem exists in storage firmware design. The buffer space in a storage controller is limited, yet it must serve several consumers: read/write commands need buffers for transmitting and receiving data, a mapping table must be held, and garbage collection needs buffers for moving data. If each consumer is given a fixed buffer address and size, space is wasted in many situations, which in turn degrades performance.
Disclosure of Invention
The present invention has been made to solve the above problems. An object of the invention is to provide a buffer memory management method that divides the buffer space of a buffer memory into a plurality of parts whose sizes and positions can be adjusted as needed, and that stores the cache data to be stored into a part of suitable size according to the situation, thereby reducing wasted cache space.
The embodiment of the invention provides an electronic device. The electronic device includes: a buffer memory; and a processor. The processor is electrically connected to the buffer memory. Wherein the processor is configured to: dividing a storage space of the buffer memory into a plurality of buffer pools, wherein each buffer pool comprises a plurality of buffer blocks, the standard space sizes of the buffer blocks belonging to the same buffer pool are the same, and the standard space sizes of the buffer blocks belonging to different buffer pools are different; sequencing the plurality of cache pools from small to large according to the standard space sizes corresponding to the cache pools; identifying a target data size of the target cache data; selecting a target cache pool from the plurality of cache pools according to the target data size, a plurality of standard space sizes corresponding to the plurality of cache pools and an arrangement sequence of the plurality of cache pools, wherein the target data size is not greater than the target standard space size of the target cache pool, a first standard space size of a first cache pool ordered before the target cache pool is smaller than the target standard space size of the target cache pool, and the target data size is greater than the first standard space size; and storing the target cache data into a blank target cache block of the target cache pool.
In an embodiment of the present invention, a ratio of standard space sizes of each of a pair of adjacent buffer pools is P, wherein a second standard space size of a second buffer pool ordered after the target buffer pool is P times the target standard space size of the target buffer pool, and a buffer block of the second buffer pool is sufficient to store P target buffer data.
In an embodiment of the present invention, in an operation of selecting the target cache pool from the plurality of cache pools according to the target data size, the plurality of standard space sizes corresponding to the plurality of cache pools, and the arrangement order of the plurality of cache pools, the processor compares the plurality of standard space sizes corresponding to the plurality of cache pools according to the arrangement order from a cache pool corresponding to a smallest standard space size, wherein in response to determining that the target data size is greater than the standard space size of the currently compared cache pool, the processor selects a next cache pool for the comparison, wherein in response to determining that the target data size is not greater than the standard space size corresponding to the currently compared cache pool, the processor regards the currently compared cache pool as the target cache pool.
In an embodiment of the present invention, the processor takes the currently compared cache pool as the target cache pool when a next cache pool is to be selected for the comparison but there is no next cache pool; and the processor stores the target cache data using Q target cache blocks of the target cache pool, wherein the total size of Q target cache blocks is greater than the target data size and the total size of Q-1 target cache blocks is less than the target data size.
In an embodiment of the present invention, in response to determining that the target cache pool does not have enough empty target cache blocks to store the target cache data, the processor performs a cache block splitting operation on the second cache pool ordered after the target cache pool to obtain empty P target cache blocks in the second cache pool, and stores the target cache data to one of the empty P target cache blocks in the second cache pool.
In an embodiment of the present invention, in response to determining that the target cache pool does not have enough empty target cache blocks to store the target cache data, the processor performs a cache block merging operation on the first cache pool ordered before the target cache pool to obtain M target cache blocks that are empty in the first cache pool, and stores the target cache data to the M target cache blocks that are empty in the first cache pool.
In one embodiment of the present invention, in response to determining that the target cache pool does not have enough empty target cache blocks to store the target cache data, the processor performs a cache block sorting operation on a fourth cache pool of the plurality of cache pools to obtain one empty target cache block in the fourth cache pool, and stores the target cache data to that empty target cache block.
In an embodiment of the present invention, the cache block sorting operation includes: grouping the plurality of fourth cache blocks into a plurality of fourth cache block groups according to the respective addresses of the fourth cache blocks in the fourth cache pool, wherein each fourth cache block group comprises P fourth cache blocks, and the size of the P fourth cache blocks is equal to the target standard space size; identifying, among the plurality of fourth cache block groups, a fifth cache block group having the most blank fourth cache blocks; moving the data currently stored in the fifth cache block group to blank fourth cache blocks in one or more sixth cache block groups among the fourth cache block groups; and merging the P fourth cache blocks in the fifth cache block group to obtain the blank target cache block.
An embodiment of the present invention provides a buffer memory management method for a buffer memory. The method comprises the following steps: dividing a storage space of the buffer memory into a plurality of buffer pools, wherein each buffer pool comprises a plurality of buffer blocks, the standard space sizes of the buffer blocks belonging to the same buffer pool are the same, and the standard space sizes of the buffer blocks belonging to different buffer pools are different; sequencing the plurality of cache pools from small to large according to the standard space sizes corresponding to the cache pools; identifying a target data size of the target cache data; selecting a target cache pool from the plurality of cache pools according to the target data size, a plurality of standard space sizes corresponding to the plurality of cache pools and an arrangement sequence of the plurality of cache pools, wherein the target data size is not greater than the target standard space size of the target cache pool, a first standard space size of a first cache pool ordered before the target cache pool is smaller than the target standard space size of the target cache pool, and the target data size is greater than the first standard space size; and storing the target cache data into a blank target cache block of the target cache pool.
Based on the above, the electronic device and the buffer memory management method provided by the embodiments of the present invention can divide the storage space (buffer space) of the buffer memory of the electronic device into the buffer pools with storage blocks of different sizes, so that the buffer data to be stored can be stored into the storage block with the most suitable size, thereby reducing the waste of the storage space and improving the storage capacity of the buffer memory.
Drawings
The accompanying drawings are included to provide a further understanding of the invention, and are incorporated in and constitute a part of this specification. The drawings illustrate embodiments of the invention and, together with the description, serve to explain the principles of the invention.
FIG. 1 is a block diagram of an electronic device according to an embodiment of the invention;
FIG. 2 is a flow chart of a buffer memory management method according to an embodiment of the present invention;
FIGS. 3A and 3B are schematic diagrams illustrating the partitioning of multiple cache pools according to embodiments of the present invention;
FIG. 4 is a schematic diagram illustrating a comparison of target cache data with multiple cache pools according to an embodiment of the present invention;
FIG. 5A is a schematic diagram illustrating a comparison of target cache data with multiple cache pools according to another embodiment of the present invention;
FIG. 5B is a schematic diagram illustrating storing target cache data into a plurality of target cache blocks within a target cache pool according to another embodiment of the present invention;
FIG. 6 is a schematic diagram illustrating obtaining a target cache block via a cache block merging operation according to an embodiment of the invention;
FIG. 7A is a schematic diagram illustrating a comparison of target cache data with multiple cache pools according to another embodiment of the present invention;
FIG. 7B is a diagram illustrating storing target cache data to a target cache block obtained via a cache block splitting operation according to another embodiment of the present invention;
FIG. 8A is a diagram illustrating the movement of cached data according to an embodiment of the present invention;
FIG. 8B is a diagram illustrating storing target cache data to a target cache block obtained via a cache block merging operation according to another embodiment of the present invention;
FIG. 9 is a schematic diagram illustrating a cache block merging operation according to an embodiment of the present invention.
Description of the reference numerals
100: electronic device
110: processor
120: buffer memory
130: storage device
S210, S220, S230, S240, S250: steps of the buffer memory management method
1201-1203: cache pools
1201(1)-1201(4N), 1202(1)-1202(2N), 1203(1)-1203(N), 1203(N+1), 1202(2N+1), 1202(2N+2): cache blocks
A301, A302, A411, A412, A511, A512, A513, A521, A611, A612, A613, A711, A712, A721, A722, A811, A812, A821, A831, A911, A912: arrows
TBD1-TBD4: target cache data
TS1: cache block group
Detailed Description
Reference will now be made in detail to the exemplary embodiments of the present invention, examples of which are illustrated in the accompanying drawings. Wherever possible, the same reference numbers will be used throughout the drawings and the description to refer to the same or like parts.
FIG. 1 is a block diagram of an electronic device according to an embodiment of the invention. Referring to FIG. 1, the electronic device 100 is, for example, a personal computer, a notebook computer, or a server. The electronic device 100 includes a processor 110, a buffer memory 120, and a storage device 130.
In the present embodiment, the processor 110 is, for example, a central processing unit (CPU), a microprocessor or other programmable processing unit, a digital signal processor (DSP), a programmable controller, an application-specific integrated circuit (ASIC), a programmable logic device (PLD), or other similar circuit component, but the invention is not limited thereto.
In the present embodiment, the buffer memory 120 is used to temporarily store instructions or data executed by the processor 110. For example, the buffer memory 120 may be a dynamic random access memory (DRAM), a static random access memory (SRAM), or the like. However, it should be understood that the invention is not limited thereto, and the buffer memory 120 may be any other suitable memory. In other embodiments, the buffer memory 120 may also be referred to as a memory, a host memory, a cache memory, or a device memory.
The storage device 130 may be, for example, a USB flash drive, a memory card, a solid state drive (SSD), or a wireless memory storage device, and is used for long-term data storage. In one embodiment, the processor 110 may remove data that no longer needs to be cached, or move it into the storage device 130, to free the space that the data occupies in the buffer memory 120.
FIG. 2 is a flow chart of a buffer memory management method according to an embodiment of the invention. Referring to FIG. 2, in step S210, the processor 110 divides the storage space of the buffer memory 120 into a plurality of cache pools, wherein each cache pool includes a plurality of cache blocks, the standard space sizes of the cache blocks belonging to the same cache pool are the same, and the standard space sizes of cache blocks belonging to different cache pools are different. Next, in step S220, the processor 110 sorts the plurality of cache pools from small to large according to their corresponding standard space sizes.
For example, referring to FIGS. 3A and 3B, the processor 110 divides the storage space of the buffer memory 120 into three parts (also referred to as cache pools) 1201, 1202, 1203 (as indicated by arrow A301), for example by performing an initialization operation on the buffer memory 120.
As indicated by arrow A302, each of the cache pools 1201 to 1203 is divided into a plurality of cache blocks. For example, the cache pool 1201 includes cache blocks 1201(1) to 1201(4N) of the same initial size (standard space size); the cache pool 1202 includes cache blocks 1202(1) to 1202(2N) of the same initial size; the cache pool 1203 includes cache blocks 1203(1) to 1203(N) of the same initial size. Since the standard space size of the cache blocks of the cache pool 1201 is the smallest, the cache pool 1201 is ordered first.
In addition, in this embodiment, the ratio between the initial sizes of the cache blocks of any two adjacent cache pools is fixed at P. That is, the ratio of the standard space sizes of a pair of adjacently ordered cache pools is P: the second standard space size of a second cache pool ordered after the target cache pool is P times the target standard space size of the target cache pool, and one cache block of the second cache pool is sufficient to store P pieces of target cache data. In the example shown in FIG. 3B, the cache pool 1202 and the cache pool 1203 are adjacent, and the initial size of a cache block of the cache pool 1203 is 2 times the initial size of a cache block of the cache pool 1202 (e.g., the size of the cache block 1203(1) equals the size of the cache block 1202(1) plus the size of the cache block 1202(2)), i.e., P equals 2. By analogy, the cache pool 1202 is adjacent to the cache pool 1201, and the initial size of a cache block of the cache pool 1202 is 2 times the initial size of a cache block of the cache pool 1201. The initial size of a cache block of a cache pool may also be referred to as the standard space size of that cache pool.
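As a concrete illustration (not part of the original patent text), the pool layout described above can be sketched in C. This is a minimal sketch under assumed values — three pools, P = 2, and a hypothetical smallest standard space size of 512 bytes; all names are invented for illustration:

```c
#include <stddef.h>
#include <stdint.h>

#define NUM_POOLS 3
#define P         2      /* assumed size ratio between adjacent pools */
#define BASE_SIZE 512    /* hypothetical smallest standard space size */

typedef struct cache_block {
    struct cache_block *next;  /* free-list link */
    uint8_t            *addr;  /* start address of the block's space */
} cache_block_t;

typedef struct {
    size_t         block_size; /* standard space size of this pool */
    cache_block_t *free_head;  /* head of the free list */
    cache_block_t *free_tail;  /* tail of the free list */
} cache_pool_t;

/* Pools are kept sorted from small to large, so pool i has
 * block_size == BASE_SIZE * P^i. */
static cache_pool_t pools[NUM_POOLS];

void pools_init(void)
{
    size_t size = BASE_SIZE;
    for (int i = 0; i < NUM_POOLS; i++, size *= P) {
        pools[i].block_size = size;
        pools[i].free_head = pools[i].free_tail = NULL;
        /* carving the backing buffer into blocks and populating the
         * free lists is omitted in this sketch */
    }
}
```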
In one embodiment, the processor 110 may link the cache blocks of a pool together in the form of a linked list, taking a block directly from the head of the list when allocating and appending it back to the tail of the list when releasing, as sketched below.
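A hedged sketch of that free-list handling, reusing the types from the previous snippet; allocation pops from the head and release appends to the tail, both in O(1):

```c
/* Pop a free block from the head of a pool's free list. */
cache_block_t *pool_alloc(cache_pool_t *p)
{
    cache_block_t *b = p->free_head;
    if (b != NULL) {
        p->free_head = b->next;
        if (p->free_head == NULL)
            p->free_tail = NULL;
        b->next = NULL;
    }
    return b;  /* NULL means the pool has no empty block */
}

/* Append a released block to the tail of the free list. */
void pool_free(cache_pool_t *p, cache_block_t *b)
{
    b->next = NULL;
    if (p->free_tail != NULL)
        p->free_tail->next = b;
    else
        p->free_head = b;
    p->free_tail = b;
}
```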
With the differently sized cache blocks provided above, the processor 110 can store target cache data to be stored into the most suitable, least wasteful cache block of a cache pool.
Referring back to fig. 2, in step S230, the processor 110 identifies a target data size of the target cache data. Next, in step S240, the processor 110 selects a target cache pool from the plurality of cache pools according to the target data size, the plurality of standard space sizes corresponding to the plurality of cache pools, and the arrangement order of the plurality of cache pools, wherein the target data size is not greater than the target standard space size of the target cache pool, and wherein a first standard space size of a first cache pool ordered before the target cache pool is smaller than the target standard space size of the target cache pool, and the target data size is greater than the first standard space size.
In more detail, the processor 110 compares the target data size with the standard space sizes corresponding to the plurality of buffer pools according to the arrangement order, starting from the buffer pool corresponding to the minimum standard space size. Responsive to determining that the target data size is greater than the standard space size of the currently compared cache pool, the processor 110 selects a next cache pool for the comparison; in response to determining that the target data size is not greater than the standard space size corresponding to the currently compared cache pool, the processor 110 takes the currently compared cache pool as the target cache pool.
That is, after identifying the size (e.g., target data size) of the cache data (e.g., target cache data) to be stored, the processor 110 compares the initial size (standard space size) of the cache blocks in the cache pool with the target data size, starting from the cache pool ordered at the top, according to the ordering order/ordering direction.
When the target data size is greater than the standard space size of the compared cache pool, the processor 110 determines that the target cache data cannot be stored in one (initially sized) cache block of that cache pool, and then compares the target data size with the standard space size of the next cache pool. This continues until the target data size is not greater than the standard space size of the compared cache pool (i.e., the target cache data can be stored in one cache block of that pool); at that point, this cache pool is the target cache pool.
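Steps S230 and S240 thus amount to a linear scan of the pools in sorted order. A sketch, assuming the pools[] array from the earlier snippets:

```c
/* Return the index of the first pool (smallest first) whose standard
 * space size can hold the target data, or -1 if even the largest
 * pool's blocks are too small. */
int select_target_pool(const cache_pool_t *pool, int num_pools, size_t target_size)
{
    for (int i = 0; i < num_pools; i++) {
        if (target_size <= pool[i].block_size)
            return i;   /* target cache pool found */
    }
    return -1;          /* handled with Q blocks of the largest pool */
}
```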
Next, in step S250, the processor 110 stores the target cache data into a blank target cache block of the target cache pool.
For example, referring to FIG. 4, in FIG. 4 the areas of the target cache data TBD1 and of the cache blocks illustrate the size comparison. Assume that the processor 110 is to store target cache data TBD1. Initially, as indicated by arrow A411, the processor 110 selects the cache pool 1201 for comparison (because it is ordered first). The processor 110 determines that the target data size of the target cache data TBD1 is greater than the standard space size of the cache pool 1201 (e.g., the size of the target cache data TBD1 is greater than the size of each of the cache blocks 1201(1) to 1201(4N)), so the target cache data TBD1 cannot be stored in a cache block of the cache pool 1201. Next, as indicated by arrow A412, the processor 110 selects the next cache pool 1202 for comparison and determines that the target data size of the target cache data TBD1 is not greater than the standard space size of the cache pool 1202. The processor 110 then treats the cache pool 1202 as the target cache pool and stores the target cache data into a blank cache block within the cache pool 1202.
In one embodiment, when the next cache pool is to be selected for the comparison but there is no next cache pool, the processor 110 takes the cache pool currently being compared as the target cache pool, and stores the target cache data using Q target cache blocks of the target cache pool, wherein the total size of Q target cache blocks is greater than the target data size and the total size of Q-1 target cache blocks is less than the target data size.
In short, when one initially sized cache block of the target cache pool cannot hold the target cache data, the processor 110 uses a plurality of cache blocks of the target cache pool together to store it.
For example, referring to FIG. 5A, assume that the processor 110 is to store target cache data TBD2. Initially, as indicated by arrow A511, the processor 110 selects the cache pool 1201 for comparison and determines that the target data size of the target cache data TBD2 is greater than the standard space size of the cache pool 1201. Next, as indicated by arrow A512, the processor 110 selects the next cache pool 1202 for comparison and determines that the target data size of the target cache data TBD2 is greater than the standard space size of the cache pool 1202. Next, as indicated by arrow A513, the processor 110 selects the next cache pool 1203 for comparison and determines that the target data size of the target cache data TBD2 is greater than the standard space size of the cache pool 1203. In this example, since there is no next cache pool, the processor 110 takes the cache pool 1203 as the target cache pool and selects a plurality of blank cache blocks as target cache blocks to store the target cache data. The total size of the selected cache blocks is not smaller than the target data size.
As shown in FIG. 5B, continuing the example of FIG. 5A and as indicated by arrow A521, the processor 110 selects 2 (i.e., Q=2) empty cache blocks 1203(1), 1203(2) in the cache pool 1203 as target cache blocks to store the target cache data TBD2.
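In other words, Q is simply the ceiling of the target data size divided by the pool's standard space size; for TBD2 above this yields Q = 2. A one-line sketch:

```c
/* Smallest number of blocks whose total size covers the target data. */
size_t blocks_needed(size_t target_size, size_t block_size)
{
    return (target_size + block_size - 1) / block_size;  /* ceiling division */
}
```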
In practice, however, a cache pool may not have enough empty cache blocks to store the target cache data. The present invention therefore provides several solutions (e.g., a cache block merging operation, a cache block splitting operation, and a cache block sorting operation).
In one embodiment, in response to determining that the target cache pool does not have enough empty target cache blocks to store the target cache data, processor 110 performs a cache block merge operation on the first cache pool ordered before the target cache pool to obtain M target cache blocks that are empty in the first cache pool, and stores the target cache data to the M target cache blocks that are empty in the first cache pool.
In one embodiment, in response to determining that the target cache pool does not have enough empty target cache blocks to store the target cache data, the processor 110 performs a cache block partitioning operation on the second cache pool ordered after the target cache pool to obtain empty P target cache blocks in the second cache pool, and stores the target cache data to one of the empty P target cache blocks in the second cache pool.
In one embodiment, in response to determining that the target cache pool does not have enough empty target cache blocks to store the target cache data, the processor 110 performs a cache block sorting operation on a fourth cache pool of the plurality of cache pools to obtain one empty target cache block in the fourth cache pool, and stores the target cache data to that empty target cache block. The cache block sorting operation includes: grouping the plurality of fourth cache blocks into a plurality of fourth cache block groups according to the respective addresses of the fourth cache blocks in the fourth cache pool, wherein each fourth cache block group comprises P fourth cache blocks, and the size of the P fourth cache blocks is equal to the target standard space size; identifying, among the plurality of fourth cache block groups, a fifth cache block group having the most blank fourth cache blocks; moving the data currently stored in the fifth cache block group to blank fourth cache blocks in one or more sixth cache block groups among the fourth cache block groups; and merging the P fourth cache blocks in the fifth cache block group to obtain the blank target cache block.
For the cache block merging operation, referring to FIG. 6, assume that the target data size of the target cache data TBD3 to be stored is larger than the standard space size of the target cache pool 1203 but smaller than twice that standard space size. Further assume that the cache pool 1203 has only one blank cache block 1203(N), which is insufficient to store all of the target cache data TBD3. In this example, the processor 110 selects the cache pool 1202 (the first cache pool ordered before the cache pool 1203), finds and selects two blank cache blocks 1202(1), 1202(2) therein, and performs cache block merging on them (as indicated by arrow A612) to obtain a blank cache block 1203(N+1) (e.g., a target cache block). Next, the processor 110 stores the target cache data TBD3 into the cache block 1203(N) (as indicated by arrow A611) and into the cache block 1203(N+1) (as indicated by arrow A613).
If, in the example of FIG. 6, the cache pool 1203 has no empty cache block at all, the processor 110 may instead use four blank cache blocks of the cache pool 1202 for cache block merging to generate two target cache blocks, thereby storing the target cache data TBD3, which requires two cache blocks of the standard space size of the cache pool 1203.
In another, more extreme example, if there are not enough empty cache blocks within the cache pool 1202 to perform a cache block merging operation, the processor 110 may turn to the cache pool 1201 and perform a cache block merging operation using every 4 empty cache blocks to generate 1 target cache block.
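A cache block merging operation might be sketched as follows. This is an assumption-laden sketch, not the patent's literal implementation: the caller is assumed to have already found P blank, address-adjacent blocks in the smaller pool and unlinked them from its free list.

```c
/* Merge P blank, address-adjacent blocks (already unlinked from the
 * smaller pool's free list) into one block for the target pool. */
cache_block_t *merge_blocks(cache_pool_t *target, cache_block_t *adjacent[], int p)
{
    cache_block_t *merged = adjacent[0];  /* reuse the first descriptor;
                                           * its addr now spans p blocks */
    for (int i = 1; i < p; i++)
        adjacent[i]->next = NULL;         /* descriptors of the other
                                           * pieces would be recycled */
    pool_free(target, merged);            /* merged block joins the
                                           * target pool's free list */
    return merged;
}
```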
For the cache block splitting operation, referring to FIGS. 7A and 7B, assume that the processor 110 is to store target cache data TBD4. Initially, as indicated by arrow A711, the processor 110 selects the cache pool 1201 for comparison and determines that the target data size of the target cache data TBD4 is greater than the standard space size of the cache pool 1201. Next, as indicated by arrow A712, the processor 110 selects the next cache pool 1202 for comparison and determines that the target data size of the target cache data TBD4 is not greater than the standard space size of the cache pool 1202. The processor 110 therefore treats the cache pool 1202 as the target cache pool. However, as shown in FIG. 7B, in the present embodiment there is no empty cache block in the target cache pool 1202 to store the target cache data TBD4. The processor 110 may then select the cache pool 1203 (e.g., a second cache pool) ordered after the target cache pool 1202, find and select a blank cache block 1203(1), and perform a cache block splitting operation on it to obtain 2 target cache blocks 1202(2N+1), 1202(2N+2) (as indicated by arrow A721). Next, the processor 110 may select one of the target cache blocks 1202(2N+1), 1202(2N+2) (e.g., the cache block 1202(2N+1)) to store the target cache data TBD4 (as indicated by arrow A722).
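A cache block splitting operation might look like the sketch below, reusing the earlier helpers; alloc_descriptor() is a hypothetical routine that hands out an unused cache_block_t descriptor.

```c
extern cache_block_t *alloc_descriptor(void);  /* hypothetical helper */

/* Split one blank block of the next larger pool `src` into P blocks of
 * `dst`'s standard space size and add them to `dst`. */
int split_block(cache_pool_t *src, cache_pool_t *dst, int p)
{
    cache_block_t *big = pool_alloc(src);
    if (big == NULL)
        return -1;                               /* nothing to split */
    for (int i = 0; i < p; i++) {
        cache_block_t *piece = alloc_descriptor();
        piece->addr = big->addr + (size_t)i * dst->block_size;
        pool_free(dst, piece);                   /* piece joins the target pool */
    }
    /* big's own descriptor can now be recycled (or reused as one of the
     * pieces, mirroring the merge sketch) */
    return 0;
}
```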
It should be noted that, as the examples of FIGS. 6 and 7B show, a cache pool may contain cache blocks whose size is not its standard space size. For example, the size of the cache block 1203(N+1) located in the cache pool 1202 in FIG. 6 is not the standard space size of the cache pool 1202. Likewise, the sizes of the cache blocks 1202(2N+1), 1202(2N+2) located in the cache pool 1203 in FIG. 7B are not the standard space size of the cache pool 1203.
In some cases, when it is determined that there are not enough empty cache blocks to serve as target cache blocks for the target cache data, the processor 110 may first perform a cache block sorting operation on the target cache pool to attempt to free up enough empty cache blocks to serve as target cache blocks.
For the cache block sorting operation, referring to FIGS. 8A and 8B, assume that the target data size of the target cache data TBD3 to be stored is larger than the standard space size of the target cache pool 1203 but smaller than twice that standard space size. Further assume that the cache pool 1203 has only one blank cache block 1203(N) and one blank cache block 1202(2N+2) generated by a cache block splitting operation, and that these two cache blocks together are still insufficient to store all of the target cache data TBD3. In this example, since the cache blocks 1202(2N+1), 1202(2N+2) are located in the target cache pool 1203, the processor 110 performs the cache block sorting operation on the target cache pool 1203 (e.g., the fourth cache pool).
As indicated by arrow A812, the processor 110 moves the data stored in the cache block 1202(2N+1) into the empty cache block 1202(1) of the cache pool 1202, making the cache block 1202(2N+1) an empty cache block. Next, as indicated by arrow A821, the processor 110 performs cache block merging on the cache blocks 1202(2N+1), 1202(2N+2) to generate a cache block 1203(N-1) (e.g., a target cache block) having the standard space size of the cache pool 1203. Finally, as indicated by arrow A831, the processor 110 may store the target cache data TBD3 into the cache blocks 1203(N-1), 1203(N).
As another example, referring to FIG. 9, assume that the processor 110 performs a cache block sorting operation on the cache pool 1201 (e.g., the fourth cache pool), where four cache blocks of the cache pool 1201 together equal one target standard space size (that of the cache pool 1203). The processor 110 groups the four cache blocks 1201(1) to 1201(4) into a first cache block group, the four cache blocks 1201(5) to 1201(8) into a second cache block group, the four cache blocks 1201(9) to 1201(12) into a third cache block group, and the four cache blocks 1201(13) to 1201(16) into a fourth cache block group. Next, the processor 110 identifies the cache block group having the most blank cache blocks as the fifth cache block group TS1, and moves the data in the fifth cache block group TS1 (e.g., the data stored in the cache block 1201(13)) into blank cache blocks (e.g., the cache block 1201(9)) of the other cache block groups (e.g., a sixth cache block group), as indicated by arrow A911. Next, as indicated by arrow A912, the processor 110 merges the four cache blocks 1201(13) to 1201(16) in the fifth cache block group TS1 into one target cache block (e.g., cache block 1203(5)). Finally, the processor 110 may store the target cache data into the target cache block.
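The core of the sorting operation is choosing the group that needs the fewest data moves to become empty, i.e., the group with the most blank blocks. A sketch of that selection step, where blank[] is an assumed per-block occupancy array and num_blocks is assumed to be a multiple of p:

```c
/* Return the index of the group of P blocks with the most blank
 * blocks; emptying this group requires the fewest data moves. */
int pick_group_to_vacate(const int blank[], int num_blocks, int p)
{
    int best_group = -1, best_blanks = -1;
    for (int g = 0; g * p < num_blocks; g++) {
        int blanks = 0;
        for (int i = 0; i < p; i++)
            blanks += blank[g * p + i];  /* 1 = blank, 0 = occupied */
        if (blanks > best_blanks) {
            best_blanks = blanks;
            best_group = g;
        }
    }
    return best_group;
}
```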
In one embodiment, the steps of dividing and using the cache pools are as follows (a sketch tying the steps together in code follows this list):
1. Divide the cache space into a plurality of cache pools, where each cache pool comprises a plurality of cache blocks of the same size, and cache blocks in different cache pools differ in size.
2. In response to a cache allocation request, allocate an idle cache block from the target cache pool according to the required cache size.
3. During allocation, compare starting from the cache pool with the smallest size: if the cache block size of the current cache pool is smaller than the required cache size, select the next larger cache pool for comparison; if the cache block size of the current cache pool is not smaller than the required cache size, the current cache pool is the target cache pool.
4. If even the cache block size of the largest cache pool is smaller than the required cache size, allocate a plurality of cache blocks from that cache pool such that the sum of their sizes is not smaller than the required cache size.
5. If there is no free cache block in the target cache pool, attempt to resolve the shortage in the following order: a) allocate an idle cache block from the next larger cache pool, split it into a plurality of cache blocks of the target size, and add them to the target cache pool; b) if the next larger cache pool has no free cache block either, select smaller free cache blocks from the next smaller cache pool, merge them into a cache block of the target size, and add it to the target cache pool; c) if the smaller cache pool does not have enough free cache blocks to merge into the target size, wait for the current cache pool to release cache blocks. The processor 110 tries these options in the above order until a free cache block can be allocated.
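The sketch referred to in the list above: an allocation routine combining the earlier snippets under this fallback order. The Q-block case of step 4, the merge fallback of step 5b), and the waiting of step 5c) are elided.

```c
cache_block_t *cache_alloc(size_t size)
{
    int i = select_target_pool(pools, NUM_POOLS, size);
    if (i < 0) {
        /* step 4: even the largest block is too small; allocate
         * blocks_needed(size, pools[NUM_POOLS-1].block_size) blocks
         * of the largest pool (elided) */
        return NULL;
    }
    cache_block_t *b = pool_alloc(&pools[i]);   /* steps 2-3 */
    if (b != NULL)
        return b;
    if (i + 1 < NUM_POOLS &&
        split_block(&pools[i + 1], &pools[i], P) == 0)  /* step 5a) */
        return pool_alloc(&pools[i]);
    return NULL;   /* steps 5b) and 5c) would follow here */
}
```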
In an embodiment, the buffer space available to the processor 110 is not large, and a bitmap may be used to manage its usage. The smallest cache block granularity (e.g., the smallest standard space size) serves as the unit, and each unit is mapped to 1 bit in the bitmap; cache blocks of different sizes are multiples of that unit, so a cache block of 4 units, for example, maps to 4 bits. The processor 110 creates a bitmap covering the entire buffer space and, keyed by each bit, records a usage status and a combination status of the unit cache blocks: the usage status indicates whether the mapped unit is occupied (e.g., a first bit value) or idle (e.g., a second bit value), and the combination status indicates whether each unit cache block is a complete smallest-size cache block or part of a larger cache block. In addition, when cache blocks are split, merged, or sorted, the processor 110 updates the combination status of the corresponding unit cache blocks in the bitmap at the same time.
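A sketch of that bitmap bookkeeping under assumed sizes; used marks each unit occupied or idle, and joined marks whether a unit is the continuation of a larger merged block (both arrays and all sizes are invented for illustration):

```c
#include <stdint.h>

#define UNITS 1024  /* hypothetical number of smallest-granularity units */

static uint32_t used[UNITS / 32];   /* 1 = occupied, 0 = idle */
static uint32_t joined[UNITS / 32]; /* 1 = continuation of a larger block */

static void set_bit(uint32_t *map, int i) { map[i / 32] |= (1u << (i % 32)); }

/* Mark a block spanning n units starting at unit u as occupied; units
 * after the first are flagged as parts of a larger merged block. */
void mark_block_used(int u, int n)
{
    for (int i = 0; i < n; i++)
        set_bit(used, u + i);
    for (int i = 1; i < n; i++)
        set_bit(joined, u + i);
}

/* Splitting, merging, and sorting would clear or set `joined` likewise,
 * mirroring the combination-status updates described above. */
```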
Based on the above, the electronic device and the buffer memory management method provided by the embodiments of the present invention can divide the storage space of the buffer memory of the electronic device into the buffer pools with storage blocks of different sizes, so that the buffer data to be stored can be stored into the storage block with the most suitable size, thereby reducing the waste of the storage space and enhancing the storage capacity of the buffer memory.
Finally, it should be noted that: the above embodiments are only for illustrating the technical solution of the present invention, and not for limiting the same; although the invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some or all of the technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit of the invention.

Claims (16)

1. An electronic device, comprising:
a buffer memory; and
a processor electrically connected to the buffer memory,
wherein the processor is configured to:
dividing a storage space of the buffer memory into a plurality of buffer pools, wherein each buffer pool comprises a plurality of buffer blocks, the standard space sizes of the buffer blocks belonging to the same buffer pool are the same, and the standard space sizes of the buffer blocks belonging to different buffer pools are different;
sequencing the plurality of cache pools from small to large according to the standard space sizes corresponding to the cache pools;
identifying a target data size of the target cache data;
selecting a target cache pool from the plurality of cache pools according to the target data size, the plurality of standard space sizes corresponding to the plurality of cache pools and the arrangement sequence of the plurality of cache pools,
wherein the target data size is not greater than a target standard space size of the target cache pool, wherein a first standard space size of a first cache pool ordered before the target cache pool is less than the target standard space size of the target cache pool, and the target data size is greater than the first standard space size; and
and storing the target cache data into blank target cache blocks of the target cache pool.
2. The electronic device of claim 1, wherein the ratio of standard space sizes of each of the adjacently ordered pair of cache pools is P,
wherein a second standard space size of a second cache pool ordered after the target cache pool is P times the target standard space size of the target cache pool, and a cache block of the second cache pool is sufficient to store P target cache data.
3. The electronic device of claim 2, wherein in selecting the target cache pool from the plurality of cache pools based on the target data size, the plurality of standard space sizes corresponding to the plurality of cache pools, and the arrangement order of the plurality of cache pools,
the processor compares the target data size with the standard space sizes corresponding to the cache pools according to the arrangement order, starting from the cache pool corresponding to the minimum standard space size,
wherein in response to determining that the target data size is greater than the standard space size of the currently compared cache pool, the processor selects a next cache pool for the comparison,
and the processor takes the currently compared cache pool as the target cache pool in response to determining that the target data size is not greater than the standard space size corresponding to the currently compared cache pool.
4. The electronic device of claim 3, wherein when a next cache pool is to be selected for the comparison but there is no next cache pool,
the processor takes the currently compared cache pool as the target cache pool; and
the processor stores the target cache data using Q target cache blocks of the target cache pool, wherein the total size of Q target cache blocks is greater than the target data size and the total size of Q-1 target cache blocks is less than the target data size.
5. The electronic device of claim 3, wherein in response to determining that the target cache pool does not have enough empty target cache blocks to store the target cache data, the processor performs a cache block partitioning operation on the second cache pool ordered after the target cache pool to obtain empty P target cache blocks in the second cache pool, and stores the target cache data to one of the empty P target cache blocks in the second cache pool.
6. The electronic device of claim 3, wherein in response to determining that the target cache pool does not have enough empty target cache blocks to store the target cache data, the processor performs a cache block merge operation on the first cache pool ordered before the target cache pool to obtain M target cache blocks that are empty within the first cache pool, and stores the target cache data to the M target cache blocks that are empty within the first cache pool.
7. The electronic device of claim 3, wherein in response to determining that the target cache pool does not have enough empty target cache blocks to store the target cache data, the processor performs a cache block sorting operation on a fourth cache pool of the plurality of cache pools to obtain one empty target cache block in the fourth cache pool and stores the target cache data to that empty target cache block in the fourth cache pool.
8. The electronic device of claim 7, wherein the cache block sort operation comprises:
grouping the plurality of fourth cache blocks into a plurality of fourth cache block groups according to respective addresses of the plurality of fourth cache blocks in the fourth cache pool, wherein each fourth cache block group comprises P fourth cache blocks, and the size of the P fourth cache blocks is equal to the target standard space size;
identifying, among the plurality of fourth cache block groups, a fifth cache block group having the most blank fourth cache blocks;
moving the data currently stored in the fifth cache block group to blank fourth cache blocks in one or more sixth cache block groups in the fourth cache block groups; and
and merging the P fourth cache blocks in the fifth cache block group to obtain the blank target cache block.
9. A buffer memory management method for a buffer memory, comprising:
dividing a storage space of the buffer memory into a plurality of buffer pools, wherein each buffer pool comprises a plurality of buffer blocks, the standard space sizes of the buffer blocks belonging to the same buffer pool are the same, and the standard space sizes of the buffer blocks belonging to different buffer pools are different;
sequencing the plurality of cache pools from small to large according to the standard space sizes corresponding to the cache pools;
identifying a target data size of the target cache data;
selecting a target cache pool from the plurality of cache pools according to the target data size, the plurality of standard space sizes corresponding to the plurality of cache pools and the arrangement sequence of the plurality of cache pools,
wherein the target data size is not greater than a target standard space size of the target cache pool, wherein a first standard space size of a first cache pool ordered before the target cache pool is less than the target standard space size of the target cache pool, and the target data size is greater than the first standard space size; and
and storing the target cache data into blank target cache blocks of the target cache pool.
10. The method of claim 9, wherein the ratio of standard space sizes of each of the pair of adjacent buffer pools is P,
wherein a second standard space size of a second cache pool ordered after the target cache pool is P times the target standard space size of the target cache pool, and a cache block of the second cache pool is sufficient to store P target cache data.
11. The buffer memory management method of claim 10, wherein the step of selecting the target buffer pool from the plurality of buffer pools according to the target data size, the plurality of standard space sizes corresponding to the plurality of buffer pools, and the arrangement order of the plurality of buffer pools comprises:
starting from the cache pool corresponding to the minimum standard space size, comparing the target data size with the standard space sizes corresponding to the cache pools according to the arrangement sequence;
in response to determining that the target data size is greater than the standard space size of the currently compared cache pool, selecting a next cache pool for the comparison; and
and in response to determining that the target data size is not greater than the standard space size corresponding to the currently compared cache pool, taking the currently compared cache pool as the target cache pool.
12. The buffer memory management method of claim 11, wherein the method further comprises:
when the next cache pool is to be selected for the comparison but there is no next cache pool,
taking the currently compared cache pool as the target cache pool; and
the target cache data is stored using Q target cache blocks of the target cache pool, wherein the total size of Q target cache blocks is greater than the target data size and the total size of Q-1 target cache blocks is less than the target data size.
13. The buffer memory management method of claim 11, wherein the method further comprises: in response to determining that the target cache pool does not have enough empty target cache blocks to store the target cache data, performing a cache block partitioning operation on the second cache pool ordered after the target cache pool to obtain empty P target cache blocks in the second cache pool, and storing the target cache data to one of the empty P target cache blocks in the second cache pool.
14. The buffer memory management method of claim 11, wherein the method further comprises: in response to determining that the target cache pool does not have enough empty target cache blocks to store the target cache data, performing a cache block merging operation on the first cache pool ordered in front of the target cache pool to obtain M target cache blocks that are empty in the first cache pool, and storing the target cache data to the M target cache blocks that are empty in the first cache pool.
15. The buffer memory management method of claim 11, wherein the method further comprises: in response to determining that the target cache pool does not have enough empty target cache blocks to store the target cache data, performing a cache block sorting operation on a fourth cache pool of the plurality of cache pools to obtain one empty target cache block in the fourth cache pool, and storing the target cache data to that empty target cache block in the fourth cache pool.
16. The method of claim 15, wherein the cache block sort operation comprises:
grouping the plurality of fourth cache blocks into a plurality of fourth cache block groups according to respective addresses of the plurality of fourth cache blocks in the fourth cache pool, wherein each fourth cache block group comprises P fourth cache blocks, and the size of the P fourth cache blocks is equal to the target standard space size;
identifying, among the plurality of fourth cache block groups, a fifth cache block group having the most blank fourth cache blocks;
moving the data currently stored in the fifth cache block group to blank fourth cache blocks in one or more sixth cache block groups in the fourth cache block groups; and
and merging the P fourth cache blocks in the fifth cache block group to obtain the blank target cache block.
CN202410026373.XA 2024-01-09 2024-01-09 Electronic device and buffer memory management method Active CN117539796B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410026373.XA CN117539796B (en) 2024-01-09 2024-01-09 Electronic device and buffer memory management method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202410026373.XA CN117539796B (en) 2024-01-09 2024-01-09 Electronic device and buffer memory management method

Publications (2)

Publication Number Publication Date
CN117539796A true CN117539796A (en) 2024-02-09
CN117539796B CN117539796B (en) 2024-05-28

Family

ID=89796247

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410026373.XA Active CN117539796B (en) 2024-01-09 2024-01-09 Electronic device and buffer memory management method

Country Status (1)

Country Link
CN (1) CN117539796B (en)

Citations (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1677946A (en) * 2004-04-02 2005-10-05 华为技术有限公司 Buffer distribution method and apparatus
CN101034961A (en) * 2007-04-11 2007-09-12 重庆重邮信科(集团)股份有限公司 Management method and device of IR buffer in the multi-process HARQ technology
CN102436421A (en) * 2010-09-29 2012-05-02 腾讯科技(深圳)有限公司 Data caching method
CN102567522A (en) * 2011-12-28 2012-07-11 北京握奇数据***有限公司 Method and device for managing file system of intelligent card
CN103595653A (en) * 2013-11-18 2014-02-19 福建星网锐捷网络有限公司 Cache distribution method, device and apparatus
CN107864391A (en) * 2017-09-19 2018-03-30 北京小鸟科技股份有限公司 Video flowing caches distribution method and device
CN108959517A (en) * 2018-06-28 2018-12-07 河南思维轨道交通技术研究院有限公司 File management method, device and electronic equipment
CN112817526A (en) * 2021-01-19 2021-05-18 杭州和利时自动化有限公司 Data storage method, device and medium
CN113395415A (en) * 2021-08-17 2021-09-14 深圳大生活家科技有限公司 Camera data processing method and system based on noise reduction technology
CN114064588A (en) * 2021-11-24 2022-02-18 建信金融科技有限责任公司 Storage space scheduling method and system
CN115168304A (en) * 2022-09-06 2022-10-11 北京奥星贝斯科技有限公司 Data processing method, device, storage medium and equipment
CN116303118A (en) * 2023-05-18 2023-06-23 合肥康芯威存储技术有限公司 Storage device and control method thereof
CN117056246A (en) * 2023-07-04 2023-11-14 山东日照发电有限公司 Data caching method and system
CN117311621A (en) * 2023-09-26 2023-12-29 济南浪潮数据技术有限公司 Cache disk space allocation method and device, computer equipment and storage medium
CN117349246A (en) * 2022-06-27 2024-01-05 北京小米移动软件有限公司 Disk sorting method, device and storage medium


Also Published As

Publication number Publication date
CN117539796B (en) 2024-05-28


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant