WO2022135160A1 - Method for releasing cache space, release system, electronic device, and storage medium - Google Patents

Method for releasing cache space, release system, electronic device, and storage medium

Info

Publication number
WO2022135160A1
Authority
WO
WIPO (PCT)
Prior art keywords
storage unit
occupied
data
stored
task
Prior art date
Application number
PCT/CN2021/136650
Other languages
English (en)
French (fr)
Inventor
张梦
Original Assignee
展讯半导体(南京)有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 展讯半导体(南京)有限公司
Publication of WO2022135160A1

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601Interfaces specially adapted for storage systems
    • G06F3/0602Interfaces specially adapted for storage systems specifically adapted to achieve a particular effect
    • G06F3/0614Improving the reliability of storage systems
    • G06F3/0619Improving the reliability of storage systems in relation to data integrity, e.g. data losses, bit errors
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F12/00Accessing, addressing or allocating within memory systems or architectures
    • G06F12/02Addressing or allocation; Relocation
    • G06F12/08Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
    • G06F12/0802Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
    • G06F12/0866Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches for peripheral storage systems, e.g. disk cache
    • G06F12/0871Allocation or management of cache space
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601Interfaces specially adapted for storage systems
    • G06F3/0602Interfaces specially adapted for storage systems specifically adapted to achieve a particular effect
    • G06F3/061Improving I/O performance
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601Interfaces specially adapted for storage systems
    • G06F3/0628Interfaces specially adapted for storage systems making use of a particular technique
    • G06F3/0638Organizing or formatting or addressing of data
    • G06F3/0644Management of space entities, e.g. partitions, extents, pools
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601Interfaces specially adapted for storage systems
    • G06F3/0628Interfaces specially adapted for storage systems making use of a particular technique
    • G06F3/0646Horizontal data movement in storage systems, i.e. moving data in between storage devices or systems
    • G06F3/0647Migration mechanisms
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601Interfaces specially adapted for storage systems
    • G06F3/0668Interfaces specially adapted for storage systems adopting a particular infrastructure
    • G06F3/0671In-line storage system
    • G06F3/0683Plurality of storage devices
    • G06F3/0688Non-volatile semiconductor memory arrays

Definitions

  • the invention relates to the field of wireless communication, in particular to a method for releasing cache space, a releasing system, an electronic device and a storage medium.
  • During communication, the data manager in the chip continuously receives data and stores it in the cache. Since the same data may be associated with multiple processing tasks, received data that has been stored in the cache needs to be read repeatedly for several non-repetitive processing tasks, so cached data cannot be released immediately after being read.
  • The buffer space in the chip is small and fixed in size and cannot be adjusted dynamically; if old buffer contents are released inefficiently, the buffer space gradually fills up as cached content keeps increasing, and received data may fail to be stored or may be lost.
  • The common processing method mainly uses a ring data manager and a FIFO (first-in, first-out) mechanism to manage the cached cell data.
  • This processing method has at least the following problems: the ring data manager and the FIFO mechanism cannot control the priority of cache read and write accesses.
  • The data of each cell can only be processed one by one in the order in which the data arrives; because the data in the cache cannot be released, newly arrived data cannot obtain cache space and is lost. Especially in extreme scenarios, the cells are unevenly distributed.
  • In areas where cells are densely distributed, the hardware measurement processing capability does not match the data arrival rate, so a large storage space is required to buffer unprocessed data, which results in low storage-space utilization and a large occupied area. This further means that the chip cannot continue to receive data, or that data is lost even if it is received, so that processing tasks cannot be executed.
  • The technical problem to be solved by the present invention is to overcome the defects in the prior art that data in the cache is difficult to release, so that newly received data cannot be stored and data loss occurs, and to provide a method for releasing cache space, a release system, an electronic device, and a storage medium that improve the release efficiency of data stored in the cache.
  • The present invention provides a method for releasing cache space, wherein the cache space includes several storage units, and the releasing method includes: selecting a target occupied storage unit according to the number of to-be-processed tasks associated with each occupied storage unit; selecting a target processing task in the target occupied storage unit; and, when no to-be-processed task is associated with the target occupied storage unit, releasing the target occupied storage unit.
  • Preferably, after the step of selecting a target processing task in the target occupied storage unit, the method further includes: after the target processing task has been processed, determining whether the occupied storage unit associated with the target processing task is still associated with a to-be-processed task; if not, executing the step of releasing the target occupied storage unit; if so, returning to the step of selecting a target occupied storage unit according to the number of to-be-processed tasks associated with each occupied storage unit.
  • Preferably, the step of selecting a target occupied storage unit according to the number of to-be-processed tasks associated with each occupied storage unit includes: calculating the number of to-be-processed tasks associated with each occupied storage unit; and selecting, from the occupied storage units whose number of associated to-be-processed tasks is less than a task processing threshold, one unit as the target occupied storage unit; or sorting the occupied storage units by the number of associated to-be-processed tasks from fewest to most and selecting one of the top-ranked occupied storage units as the target occupied storage unit; or selecting the occupied storage unit with the fewest associated to-be-processed tasks as the target occupied storage unit.
  • Preferably, the step of selecting a target processing task in the target occupied storage unit includes: taking the first to-be-processed task associated with the target occupied storage unit as the target processing task.
  • Preferably, the to-be-processed tasks associated with each occupied storage unit are numbered sequentially, and the step of calculating the number of to-be-processed tasks associated with each occupied storage unit includes: for each occupied storage unit, obtaining the first number of the currently associated first to-be-processed task and the second number of the currently associated last to-be-processed task, and calculating the number of to-be-processed tasks associated with the occupied storage unit according to the second number and the first number.
  • Preferably, before the step of selecting a target occupied storage unit according to the number of to-be-processed tasks associated with each occupied storage unit, the method further includes: receiving data to be stored and storing it in at least one free storage unit according to its size; when the data to be stored is stored in a plurality of free storage units, the free storage units are connected in series through a linked list, and the data to be stored is associated with at least one to-be-processed task.
  • Preferably, when the data to be used by the target processing task is stored in a plurality of occupied storage units, the addresses of the occupied storage units are obtained through a corresponding linked list.
  • Preferably, before the step of selecting a target occupied storage unit according to the number of to-be-processed tasks associated with each occupied storage unit, the method further includes: dividing the cache space into several storage units of the same size.
  • Preferably, the step of storing the data to be stored in at least one free storage unit according to the size of the data to be stored includes: judging whether the storage space contained in the current free storage units meets the storage requirement of the data to be stored; if so, storing the data to be stored in the corresponding free storage units; if not, either waiting for occupied storage units to be released until the storage space contained in the current free storage units meets the storage requirement, or first storing part of the data to be stored in the current free storage units and storing the remaining part in new free storage units as they become available.
  • Preferably, each storage unit includes storage unit status information, quantity information of to-be-processed tasks, linked-list address information, and storage unit identification information; the storage unit status information is used to indicate the storage state of the storage unit, and the linked-list address information is used to indicate the addresses of the storage units connected in series with the current storage unit.
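To make the per-unit bookkeeping concrete, the following C sketch shows one possible layout for the metadata just described (status, pending-task count, linked-list address, identifier). The field names, types, and widths are illustrative assumptions, not taken from the patent.

```c
#include <stdint.h>

/* Illustrative per-unit metadata; names and sizes are assumptions. */
typedef enum { UNIT_FREE = 0, UNIT_OCCUPIED = 1 } unit_state_t;

typedef struct {
    unit_state_t state;      /* storage unit status information */
    uint16_t     task_count; /* number of to-be-processed tasks still associated */
    uint16_t     next_unit;  /* linked-list address: index of the next unit
                                holding the same data (or an end marker) */
    uint16_t     unit_id;    /* storage unit identification information */
} cache_unit_t;
```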
  • the present invention also provides a cache space release system, the cache space includes several storage units, and the release system includes: a storage unit selection module, a processing task selection module and a space release module;
  • the storage unit selection module is configured to select a target occupied storage unit according to the number of tasks to be processed associated with each occupied storage unit;
  • the processing task selection module is configured to select a target processing task in the target occupied storage unit
  • the space release module is configured to release the target occupied storage unit when no task to be processed is associated with the target occupied storage unit.
  • the release system further includes a task judgment module, and the processing task selection module is also used to call the task judgment module after selecting the target processing task; the task judgment module is used to judge, after the target processing task has been processed, whether the occupied storage unit associated with the target processing task is still associated with a to-be-processed task; if not, the space release module is called, and if so, the storage unit selection module is called.
  • the storage unit selection module includes: a task calculation unit and a storage selection unit;
  • the task calculation unit is used to calculate the number of tasks to be processed associated with each occupied storage unit
  • the storage selection unit is used to select, from the occupied storage units whose number of associated to-be-processed tasks is less than the task processing threshold, one unit as the target occupied storage unit; or, the storage selection unit is used to sort the occupied storage units by the number of associated to-be-processed tasks from fewest to most and to select one of the top-ranked occupied storage units as the target occupied storage unit; or, the storage selection unit is used to select the occupied storage unit with the fewest associated to-be-processed tasks as the target occupied storage unit.
  • the processing task selection module is configured to use the first to-be-processed task associated with the target occupied storage unit as the target processing task.
  • Preferably, the to-be-processed tasks associated with each occupied storage unit are numbered sequentially, and the task calculation unit is used to obtain, for each occupied storage unit, the first number of the currently associated first to-be-processed task and the second number of the currently associated last to-be-processed task, and to calculate the number of to-be-processed tasks associated with the occupied storage unit according to the second number and the first number.
  • Preferably, the release system further includes a data receiving module for receiving data to be stored and storing it in at least one free storage unit according to the size of the data to be stored; when the data to be stored is stored in a plurality of free storage units, the free storage units are connected in series through a linked list, and the data to be stored is associated with at least one to-be-processed task.
  • the space release module is configured to acquire the address of the occupied storage unit through a corresponding linked list when the data to be used by the target processing task is stored in a plurality of occupied storage units.
  • the release system further includes: a storage unit division module, configured to divide the cache space into several storage units of the same size.
  • Preferably, the data receiving module is used to judge whether the storage space contained in the current free storage units meets the storage requirement of the data to be stored: if so, the data to be stored is stored in the corresponding free storage units; if not, the module waits for occupied storage units to be released until the storage space contained in the current free storage units meets the storage requirement, or it first stores part of the data to be stored in the current free storage units and stores the remaining part in a new free storage unit when one becomes available.
  • Preferably, each storage unit includes storage unit status information, quantity information of to-be-processed tasks, linked-list address information, and storage unit identification information; the storage unit status information is used to indicate the storage state of the storage unit, and the linked-list address information is used to indicate the addresses of the storage units connected in series with the current storage unit.
  • the present invention also provides an electronic device, including a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor implements the above method for releasing cache space when executing the computer program.
  • the present invention also provides a computer-readable storage medium on which a computer program is stored, where the computer program, when executed by a processor, implements the steps of the above method for releasing cache space.
  • the positive improvement effect of the present invention is that: in the present invention, the target occupied storage unit is selected according to the number of pending tasks associated with the current occupied storage unit, so as to further select the target processing task.
  • the occupied storage unit can be released, that is, the occupied storage unit is converted into a free storage unit for subsequent data storage.
  • Through the number of to-be-processed tasks associated with each occupied storage unit, the present invention can dynamically adjust the space available for storing data in the cache and release occupied storage units in time, which improves the utilization efficiency of the cache space and overcomes the defects in existing mechanisms that the cache space is unevenly distributed and that occupied cache space cannot be released, so that new data is easily lost.
  • FIG. 1 is a flowchart of a method for releasing a cache space in Embodiment 1 of the present invention.
  • FIG. 2 is a partial flowchart of a method for releasing cache space in Embodiment 1 of the present invention.
  • FIG. 3 is a schematic diagram of a numbering manner of tasks to be processed in a specific scenario in Embodiment 1 of the present invention.
  • FIG. 4 is a schematic diagram of a concatenation of linked lists in a specific scenario in Embodiment 1 of the present invention.
  • FIG. 5 is a schematic block diagram of a system for releasing cache space in Embodiment 2 of the present invention.
  • FIG. 6 is a schematic diagram of a module of an electronic device in Embodiment 3 of the present invention.
  • the cache space includes several storage units, each of which can be independently applied for and released, and the several storage units include two storage states of occupied storage units and free storage units, and the occupied storage units Indicates that data has been stored in the storage unit, and an idle storage unit indicates that data has not yet been stored in the storage unit.
  • When a free storage unit stores data, the free storage unit switches to an occupied storage unit.
  • When the data in an occupied storage unit is released, the occupied storage unit switches to a free storage unit.
  • the release method in this embodiment includes:
  • Step 11 Select a target occupied storage unit according to the number of tasks to be processed associated with each occupied storage unit.
  • In step 11, the number of to-be-processed tasks associated with each occupied storage unit may be calculated first, and the target occupied storage unit may then be selected according to the calculated numbers.
  • In a first specific implementation, one of the occupied storage units whose number of associated to-be-processed tasks is less than a task processing threshold may be selected as the target occupied storage unit. For example, the task processing threshold can be set to 3; all occupied storage units associated with fewer than 3 to-be-processed tasks are candidate storage spaces, and one of the candidate storage spaces is randomly selected as the target occupied storage unit.
  • In a second specific implementation, the occupied storage units can be sorted by the number of associated to-be-processed tasks from fewest to most, and one of the top-ranked occupied storage units is selected as the target occupied storage unit.
  • For example, the preset number can be set to 4: the occupied storage units are sorted by the number of associated to-be-processed tasks from fewest to most, the top 4 occupied storage units are taken as candidate storage units, and one of them is randomly selected as the target occupied storage unit.
  • In a third specific implementation, the occupied storage unit with the fewest associated to-be-processed tasks is selected as the target occupied storage unit.
  • In this embodiment, the third specific implementation is preferred in order to improve the efficiency of selecting the target occupied storage unit.
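As a minimal sketch of the preferred (third) selection strategy, the function below scans a unit table and returns the occupied unit with the fewest associated pending tasks. The plain arrays and function name are assumptions for illustration; a real implementation would read the per-unit metadata maintained by the cache manager.

```c
/* Return the index of the occupied unit with the fewest pending tasks,
 * or -1 if no unit is currently occupied. */
int select_target_unit(const int *task_count, const int *occupied, int n)
{
    int best = -1;
    for (int i = 0; i < n; i++) {
        if (!occupied[i])
            continue;                      /* skip free units */
        if (best < 0 || task_count[i] < task_count[best])
            best = i;                      /* fewest associated tasks so far */
    }
    return best;
}
```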
  • Step 12 Select the target processing task in the target occupied storage unit.
  • Specifically, the target processing task may be selected randomly in the target occupied storage unit, or it may be selected in a certain order.
  • In this embodiment, the latter approach is used to describe step 12 in detail: the first to-be-processed task associated with the target occupied storage unit is taken as the target processing task. It should be understood that in other embodiments the last to-be-processed task associated with the target occupied storage unit, or an intermediate associated to-be-processed task, may also be taken as the target processing task.
  • Step 13: after the processing of the target processing task is completed, determine whether the occupied storage unit associated with the target processing task is still associated with a to-be-processed task; if not, go to step 14; if so, return to step 11.
  • Step 14 release the storage unit occupied by the target, and return to step 11 .
  • It should be understood that before step 11 the cache space can be divided into several storage units according to actual needs, for subsequent dynamic allocation and release of the cache space. In this embodiment, the cache space is preferably divided into several storage units of the same size to further improve the efficiency and stability of subsequent dynamic allocation and release of the cache space.
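The division of the cache into equally sized units can be pictured as below. The unit size, the state table, and the function name are assumptions made only for illustration, not values given by the patent.

```c
#include <stddef.h>

#define UNIT_SIZE 512u   /* assumed fixed unit size in bytes */

/* Given the total cache size, compute how many equal units it holds and
 * mark them all free. unit_state[] is an assumed caller-provided table. */
size_t init_cache_units(size_t cache_bytes, unsigned char *unit_state,
                        size_t max_units)
{
    size_t n = cache_bytes / UNIT_SIZE;   /* whole units only */
    if (n > max_units)
        n = max_units;
    for (size_t i = 0; i < n; i++)
        unit_state[i] = 0;                /* 0 = free, 1 = occupied */
    return n;
}
```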
  • the target occupied storage unit is selected according to the number of pending tasks associated with the current occupied storage unit, and the target processing task associated with the target occupied storage unit is further selected.
  • the occupied storage unit can be released, that is, the occupied storage unit is switched to an idle storage unit for subsequent data storage.
  • Through the number of to-be-processed tasks associated with each occupied storage unit, this embodiment can dynamically adjust the space available for storing data in the cache and release occupied storage units in time, which improves the utilization efficiency of the cache space and overcomes the defect of the existing mechanism that the cache space is unevenly distributed and the occupied cache space cannot be released, so that new data is easily lost.
  • the method for releasing cache space in this embodiment may further include:
  • Step 21 Receive data to be stored.
  • In step 21, when reception of the data to be stored begins, a free storage unit is first applied for. After step 21, the method may further include a step of judging whether the storage space contained in the current free storage units meets the storage requirement of the data to be stored: if so, step 22 is executed; if not, in one specific implementation, the release of occupied storage units is awaited until the storage space contained in the current free storage units meets the storage requirement, in which case the judging step is executed again at every first time threshold until the requirement is met.
  • In another specific implementation, part of the data to be stored can first be stored in the current free storage units, and the remaining part is stored in new free storage units as they appear; in this implementation, the remaining part may be stored only when the sum of the storage space contained in all newly appearing free storage units meets its entire storage requirement, or a portion of it may be stored in each newly appearing free storage unit as soon as an occupied storage unit switches to a free storage unit, until all of the data to be stored has been stored.
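A minimal sketch of the space check performed when data arrives: it computes how many units the payload needs and reports whether it can be stored now, stored partially, or must wait for releases. The enum names and the simple rounding policy are assumptions for illustration, not the patent's own logic.

```c
typedef enum { STORE_ALL, STORE_PART, WAIT_FOR_RELEASE } store_decision_t;

/* Decide how to handle incoming data of 'data_bytes' given 'free_units'
 * free units of 'unit_bytes' each. Purely illustrative policy. */
store_decision_t plan_store(unsigned data_bytes, unsigned free_units,
                            unsigned unit_bytes)
{
    unsigned needed = (data_bytes + unit_bytes - 1) / unit_bytes; /* round up */
    if (free_units >= needed)
        return STORE_ALL;          /* enough free units right now */
    if (free_units > 0)
        return STORE_PART;         /* store what fits, rest waits */
    return WAIT_FOR_RELEASE;       /* re-check after the time threshold */
}
```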
  • Step 22 Store the data to be stored in at least one free storage unit according to the size of the data to be stored.
  • steps 21 and 22 can be performed synchronously with steps 11 to 14. Therefore, it is possible to release the buffer space while receiving the data to be processed. In other words, while realizing the release of the buffer space, The data to be processed can be received, so that the data processing process can be continuously and uninterruptedly executed, and the efficiency of data processing can be further improved.
  • free storage units may be randomly allocated to the received data to be stored, as long as the total storage space included in the allocated free storage units meets the storage requirements of the data to be stored.
  • each to-be-stored data is associated with at least one to-be-processed task.
  • While step 22 is being executed, or after a free storage unit has been successfully applied for in step 21, the following step can further be performed: the to-be-processed tasks associated with each occupied storage unit are numbered sequentially.
  • For example, in a specific scenario: there are currently 3 occupied storage units; the first occupied storage unit is associated with two to-be-processed tasks, the second occupied storage unit is associated with one to-be-processed task, and the third occupied storage unit is associated with three to-be-processed tasks. The to-be-processed tasks in the first occupied storage unit can then be numbered 001 and 002, the to-be-processed task in the second occupied storage unit can be numbered 003, and the to-be-processed tasks in the third occupied storage unit can be numbered 004, 005 and 006. It should be understood that the above specific scenario is only an illustration and should not limit this embodiment; the numbering scheme, the number of to-be-processed tasks, the number of occupied storage units, and the number of to-be-processed tasks associated with each occupied storage unit can all be chosen according to the actual situation.
  • In this embodiment, in step 11 the number of associated to-be-processed tasks may be calculated from the numbers of the to-be-processed tasks associated with each occupied storage unit. Specifically, for each occupied storage unit, the first number of the currently associated first to-be-processed task and the second number of the currently associated last to-be-processed task are obtained, and the number of to-be-processed tasks associated with the storage unit is calculated according to the second number and the first number.
  • Fig. 3 shows a schematic diagram of the numbering method of tasks to be processed in a specific scenario.
  • In this scenario, the storage units include storage unit HEAD (the first storage unit, i.e., the head of the storage units) ... front storage unit 2, front storage unit 1, front storage unit 0, the current storage unit, rear storage unit 0, rear storage unit 1, rear storage unit 2, and so on.
  • 101, 102, 103 and 104 represent different to-be-processed tasks. As can be seen from the figure, the data required by each to-be-processed task may be stored in multiple storage units.
  • The data required by to-be-processed task 101 is stored in front storage unit 2 and the current storage unit, so for task 101, front storage unit 2 is the data-reading start storage unit; the data required by to-be-processed task 102 is stored in front storage unit 1 and rear storage unit 0, so for task 102, front storage unit 1 is the data-reading start storage unit; the data required by to-be-processed task 103 is stored in front storage unit 0 and rear storage unit 1, so for task 103, front storage unit 0 is the data-reading start storage unit; the data required by to-be-processed task 104 is stored in the current storage unit and rear storage unit 2, so for task 104, the current storage unit is the data-reading start storage unit.
  • After the current storage unit has been applied for, the number (104) of the last to-be-processed task associated with the current storage unit minus the number (101) of the first to-be-processed task associated with the current storage unit gives the number of to-be-processed tasks associated with the current storage unit.
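Following the document's stated rule (the second, i.e. last, number minus the first number), the associated-task count can be derived as below. Whether any adjustment is needed depends on how the numbers are actually assigned, so this is only a sketch of the rule as written, with assumed parameter names.

```c
/* Pending-task count per the document's rule: second (last) number minus
 * first number, e.g. 104 - 101 for the current storage unit in FIG. 3. */
unsigned pending_task_count(unsigned first_no, unsigned second_no)
{
    return second_no - first_no;
}
```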
  • FIG. 4 shows a schematic diagram of the concatenation of linked lists in a specific scenario. Each storage unit in the linked list can be randomly selected from free storage units, and there is no sequence requirement.
  • Storage unit 0 is linked to storage unit 1 through the address stored in the linked list, storage unit 1 is connected to storage unit 2 through the address stored in the linked list, and so on. For example, in FIG. 3, the data required by to-be-processed task 101 is stored in front storage unit 2 and the current storage unit, so the linked list stores the addresses linking front storage unit 2 and the current storage unit; when task 101 is processed, after the data in front storage unit 2 has been read, the current storage unit is found through the address in the linked list and its corresponding data continues to be read.
  • Depending on the parameters of the received data, multiple sets of linked-list information may be maintained, and the multiple linked lists are not related to each other.
  • the addresses of the successfully applied free storage units can be sequentially recorded in the information of the linked list, because the data to be used by the task to be processed may be stored in multiple occupied storage units. Therefore, the corresponding data can be found through the linked list.
  • the first address stored in the linked list is the starting address information of the stored data (storage unit 0 in Figure 4), and the last address stored in the linked list is the end address of the data. Therefore, when processing the task to be processed, the starting address of the required data can be found in the linked list to start processing the data until the end address information is found, that is, all the required data has been processed.
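The traversal described here can be sketched as a walk over unit indices chained by a "next" table, starting from the start address recorded in the linked list and stopping at an end marker. The LIST_END sentinel, the next[] table layout, and the function name are assumptions for illustration.

```c
#include <stdio.h>

#define LIST_END 0xFFFFu   /* assumed sentinel marking the end of a chain */

/* Visit every unit holding one task's data, following the chained
 * next[] indices from the recorded start unit to the end marker. */
void read_task_data(unsigned start_unit, const unsigned *next)
{
    for (unsigned u = start_unit; u != LIST_END; u = next[u]) {
        /* read/process the data held in unit 'u' here */
        printf("reading unit %u\n", u);
    }
}
```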
  • Specifically, depending on the requirements of the current to-be-processed task, the data of one or more occupied storage units may need to be read consecutively. The first storage unit read is the occupied storage unit where the data associated with the current to-be-processed task starts, which is obtained from the start address information in the linked list.
  • After the first occupied storage unit has been read, the subsequent occupied storage units to be read are found based on the address information connected in series in the linked list.
  • After the data of each storage unit has been read and the current to-be-processed task has been processed, the number of to-be-processed tasks in the occupied storage unit needs to be updated.
  • When the number of to-be-processed tasks in an occupied storage unit is 0, the corresponding occupied storage unit is released and its storage unit status information is updated to the idle state, that is, it is switched to a free storage unit so that subsequent data to be stored can apply for and use its storage space.
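Releasing a unit once its pending-task count reaches zero might look like the following. The decrement-then-free pattern and the array-based state table are an illustrative reading of the text, not the patent's own code.

```c
/* Called after a task finishes with unit 'u': drop its pending-task count
 * and, when no task still needs the unit, mark it free for reuse. */
void finish_task_on_unit(unsigned u, unsigned *task_count,
                         unsigned char *unit_state)
{
    if (task_count[u] > 0)
        task_count[u]--;
    if (task_count[u] == 0)
        unit_state[u] = 0;   /* 0 = free: unit can be applied for again */
}
```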
  • each storage unit maintains corresponding storage unit state information, storage unit identification information, number information of tasks to be processed, and linked list information
  • the storage unit state information is used to indicate the storage state of the storage unit, such as idle state or occupied state
  • the initial storage state of each storage unit is idle, that is, an idle storage unit, if it is occupied after application, it is an occupied storage unit, and it is an idle state after being occupied and released, that is, an idle storage unit.
  • the linked list information includes the linked list address information, which is used to represent the address of the storage unit connected in series with the current storage unit, including the starting address information, the ending address information and several address information associated in the middle
  • the storage unit identification information is used to indicate the identification of each storage unit. to facilitate the management of storage units.
  • the method for releasing cache space in this embodiment can be applied in various scenarios, such as a scenario of cell measurement, a scenario of data demodulation, a scenario of parameter estimation, etc., which is not limited in this embodiment.
  • the cache space includes several storage units.
  • the release system includes: a storage unit selection module 31 , a processing task selection module 32 and a space release module 33 .
  • the storage unit selection module 31 is configured to select a target occupied storage unit according to the number of to-be-processed tasks associated with each occupied storage unit. Specifically, the storage unit selection module 31 is configured to first calculate the number of tasks to be processed associated with each occupied storage unit, and then select the target occupied storage unit according to the calculated quantity.
  • In a first specific implementation, the storage unit selection module 31 is configured to select, from the occupied storage units whose number of associated to-be-processed tasks is less than a task processing threshold, one unit as the target occupied storage unit. For example, the task processing threshold can be set to 3; all occupied storage units associated with fewer than 3 to-be-processed tasks are candidate storage spaces, and one of the candidate storage spaces is randomly selected as the target occupied storage unit.
  • In a second specific implementation, the storage unit selection module 31 is configured to sort the occupied storage units by the number of associated to-be-processed tasks from fewest to most and to select one of the top-ranked occupied storage units as the target occupied storage unit. For example, the preset number can be set to 4: the occupied storage units are sorted by the number of associated to-be-processed tasks from fewest to most, the top 4 occupied storage units are taken as candidate storage units, and one of them is randomly selected as the target occupied storage unit.
  • the storage unit selection module 31 is configured to select the occupied storage unit with the smallest number of associated tasks to be processed as the target occupied storage unit.
  • the third specific implementation is preferred.
  • the processing task selection module 32 is configured to select a target processing task in the target occupied storage unit. Specifically, the processing task selection module 32 is used to randomly select target processing tasks in the target occupied storage unit, and may also select target processing tasks in a certain order. In this embodiment, the latter method is selected for the processing task selection module. 32 for specific description: in this embodiment, the processing task selection module 32 is used to take the first pending task associated with the target occupied storage unit as the target processing task. It should be understood that in other embodiments, the processing task selection module 32 also It can be used to take the last to-be-processed task associated with the target occupied storage unit as the target processing task, or the associated intermediate to-be-processed task as the target processing task.
  • the space release module 33 is configured to release the target-occupied storage unit when no task to be processed is associated with the target-occupied storage unit, and to call the storage unit selection module 31 .
  • the release system may further include a task judgment module 34, and the processing task selection module 32 is also used to call the task judgment module 34 after selecting the target processing task, and the task judgment module 34 is used to judge the target processing after the target processing task is processed. Whether the occupied storage unit associated with the task is also associated with a task to be processed, if not, the space release module 33 is called; if so, the storage unit selection module 31 is called.
  • the release system may further include: a storage unit division module 35 for dividing the cache space into several storage units for subsequent dynamic allocation and release of the cache space.
  • the cache space is divided into several storage units of the same size to further improve the efficiency and stability of subsequent dynamic allocation and release of the cache space.
  • the target occupied storage unit is selected according to the number of tasks to be processed associated with the current occupied storage unit, the processing task selection module 32 is used to further select the target processing task associated with the target occupied storage unit, and the space release module 33 After all the pending tasks associated with the target occupied storage unit are processed, the occupied storage unit can be released, that is, the occupied storage unit is switched to an idle storage unit for subsequent data storage.
  • the number of associated pending tasks can dynamically adjust the space that can store data in the cache, and the occupied storage unit can be released in time, which improves the efficiency of the use of cache space, and overcomes the uneven distribution of cache space in the existing mechanism and the occupied cache space. The inability to be freed makes new data vulnerable to loss.
  • the system for releasing cache space in this embodiment may further include: a data receiving module 36 for receiving the data to be stored, and storing the data to be stored in at least one free storage unit according to the size of the data to be stored, and when the data to be stored is stored When data is stored in multiple idle storage units, the idle storage units are connected in series through a linked list, and the data to be stored is associated with at least one to-be-processed task.
  • The data receiving module 36 can be used to first apply for a free storage unit when reception of the data to be stored begins, and can further be used to judge whether the storage space contained in the current free storage units meets the storage requirement of the data to be stored: if so, the data to be stored is stored in at least one free storage unit according to its size; if not, in one specific implementation, the release of occupied storage units is awaited until the storage space contained in the current free storage units meets the storage requirement, in which case it can be judged again at every first time threshold whether the storage space contained in the current free storage units meets the storage requirement until it does; in another specific implementation, part of the data to be stored can first be stored in the current free storage units, and the remaining part is stored in new free storage units as they appear. In this implementation, the remaining part can be stored only when the sum of the storage space contained in all newly appearing free storage units meets its entire storage requirement.
  • Alternatively, as soon as an occupied storage unit switches to a free storage unit, a portion of the data is first stored in the newly appearing free storage unit, until all of the data to be stored has been stored.
  • The data to be stored may come from one or more data sources; when there are multiple data sources, the data receiving module 36 may be used to apply for and store data in multiple free storage spaces in parallel.
  • In this embodiment, multiple data sources do not occupy the same storage unit, to facilitate the management of data from different data sources. It should be understood that in other embodiments multiple data sources may also occupy the same storage unit, which may be chosen according to the actual situation.
  • the data receiving module 36 and the storage unit selection module 31, the processing task selection module 32, the task judgment module 34 and the space release module 33 can be called at the same time. Therefore, the buffer space can be stored while receiving the data to be processed. Release, in other words, can realize the reception of data to be processed while realizing the release of the cache space, so that the data processing process can be continuously and uninterruptedly executed, and the efficiency of data processing can be further improved.
  • the storage unit selection module 31 may randomly allocate free storage units for the received data to be stored, as long as the total storage space contained in the allocated free storage units meets the storage requirements of the data to be stored.
  • each data to be stored is associated with at least one task to be processed.
  • While the data receiving module 36 or the storage unit selection module 31 is called, the storage unit selection module can be called; it is used, for each occupied storage unit, to obtain the first number of the currently associated first to-be-processed task and the second number of the currently associated last to-be-processed task, and to calculate the number of to-be-processed tasks associated with the storage unit according to the second number and the first number.
  • For example, in a specific scenario: there are currently 3 occupied storage units; the first occupied storage unit is associated with two to-be-processed tasks, the second occupied storage unit is associated with one to-be-processed task, and the third occupied storage unit is associated with three to-be-processed tasks. The to-be-processed tasks in the first occupied storage unit can then be numbered 001 and 002, the to-be-processed task in the second occupied storage unit can be numbered 003, and the to-be-processed tasks in the third occupied storage unit can be numbered 004, 005 and 006.
  • It should be understood that the above specific scenario is only an example and should not limit this embodiment; the numbering scheme, the number of to-be-processed tasks, the number of occupied storage units, and the number of to-be-processed tasks associated with each occupied storage unit can all be chosen according to the actual situation.
  • FIG. 3 shows a schematic diagram of the numbering scheme for to-be-processed tasks in a specific scenario, in which, after the current storage unit has been applied for, the number (104) of the last to-be-processed task associated with the current storage unit minus the number (101) of the first to-be-processed task associated with the current storage unit gives the number of to-be-processed tasks associated with the current storage unit.
  • FIG. 4 shows a schematic diagram of the concatenation of linked lists in a specific scenario. Each storage unit in the linked list can be randomly selected from free storage units, and there is no sequence requirement. Among them, according to the parameters of the received data, there may be multiple linked list information maintained, and the multiple linked lists are not related to each other.
  • the addresses of the successfully applied free storage units can be sequentially recorded in the information of the linked list, because the data to be used by the task to be processed may be stored in multiple occupied storage units. Therefore, the corresponding data can be found through the linked list.
  • the first address stored in the linked list is the starting address information of the stored data (storage unit 0 in Figure 4), and the last address stored in the linked list is the end address of the data. Therefore, when processing the task to be processed, the starting address of the required data can be found in the linked list to start processing the data until the end address information is found, that is, all the required data has been processed.
  • Specifically, depending on the requirements of the current to-be-processed task, the data of one or more occupied storage units may need to be read consecutively. The first storage unit read is the occupied storage unit where the data associated with the current to-be-processed task starts, which is obtained from the start address information in the linked list.
  • After the first occupied storage unit has been read, the subsequent occupied storage units to be read are found based on the address information connected in series in the linked list.
  • After the data of each storage unit has been read and the current to-be-processed task has been processed, the number of to-be-processed tasks in the occupied storage unit needs to be updated.
  • The space release module 33 is used to release the corresponding occupied storage unit when the number of to-be-processed tasks in an occupied storage unit is 0; its storage unit status information is updated to the idle state, that is, it is switched to a free storage unit so that subsequent data to be stored can apply for and use its storage space.
  • each storage unit maintains corresponding storage unit state information, storage unit identification information, number information of tasks to be processed, and linked list information
  • the storage unit state information is used to indicate the storage state of the storage unit, such as idle state or occupied state
  • the initial storage state of each storage unit is idle, that is, an idle storage unit, if it is occupied after application, it is an occupied storage unit, and it is an idle state after being occupied and released, that is, an idle storage unit.
  • the linked list information includes the linked list address information, which is used to represent the address of the storage unit connected in series with the current storage unit, including the starting address information, the ending address information and some address information associated in the middle
  • the storage unit identification information is used to indicate the identification of each storage unit. to facilitate the management of storage units.
  • the system for releasing cache space in this embodiment can be applied in various scenarios, such as a scenario of cell measurement, a scenario of data demodulation, a scenario of parameter estimation, etc., which is not limited in this embodiment.
  • Embodiments of the present invention also provide an electronic device, which can take the form of a computing device (for example, a server device) and includes a memory, a processor, and a computer program stored in the memory and executable on the processor; when the processor executes the computer program, the method for releasing cache space in Embodiment 1 of the present invention can be implemented.
  • FIG. 6 shows a schematic diagram of the hardware structure of this embodiment.
  • the electronic device 9 specifically includes:
  • At least one processor 91 at least one memory 92, and a bus 93 for connecting different system components (including processor 91 and memory 92), wherein:
  • the bus 93 includes a data bus, an address bus, and a control bus.
  • Memory 92 includes volatile memory, such as random access memory (RAM) 921 and/or cache memory 922, and may further include read-only memory (ROM) 923.
  • The memory 92 also includes a program/utility 925 having a set (at least one) of program modules 924, including but not limited to an operating system, one or more application programs, other program modules, and program data; each of these examples, or some combination of them, may include an implementation of a network environment.
  • the processor 91 executes various functional applications and data processing by running the computer program stored in the memory 92, such as the method for releasing the cache space in Embodiment 1 of the present invention.
  • The electronic device 9 may further communicate with one or more external devices 94 (e.g., keyboards, pointing devices, etc.). Such communication may take place through input/output (I/O) interfaces 95. The electronic device 9 may also communicate with one or more networks (e.g., a local area network (LAN), a wide area network (WAN), and/or a public network such as the Internet) through a network adapter 96. The network adapter 96 communicates with the other modules of the electronic device 9 via the bus 93.
  • An embodiment of the present invention further provides a computer-readable storage medium, on which a computer program is stored, and when the program is executed by a processor, implements the steps of the method for releasing cache space in Embodiment 1 of the present invention.
  • the readable storage medium may include, but is not limited to, a portable disk, a hard disk, a random access memory, a read-only memory, an erasable programmable read-only memory, an optical storage device, a magnetic storage device, or any of the above suitable combination.
  • the present invention can also be implemented in the form of a program product, which includes program codes, when the program product runs on a terminal device, the program code is used to cause the terminal device to execute the implementation Steps of the method for releasing cache space in Embodiment 1 of the present invention.
  • The program code for carrying out the present invention can be written in any combination of one or more programming languages, and the program code can be executed entirely on the user's device, partly on the user's device, as an independent software package, partly on the user's device and partly on a remote device, or entirely on the remote device.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Computer Security & Cryptography (AREA)
  • Memory System Of A Hierarchy Structure (AREA)

Abstract

The present invention discloses a method for releasing cache space, a release system, an electronic device, and a storage medium. The cache space includes several storage units, and the method includes: selecting a current occupied storage unit according to the number of to-be-processed tasks associated with each occupied storage unit; selecting a current processing task; and releasing the current occupied storage unit when no to-be-processed task is associated with it. In the present invention, the current occupied storage unit is selected according to the number of to-be-processed tasks associated with it, and the current processing task is then selected; once all the to-be-processed tasks associated with an occupied storage unit have been processed, the occupied storage unit can be released. The present invention can dynamically adjust the space used for storing data in the cache and release occupied storage units in time, which improves the utilization efficiency of the cache space and overcomes the defects that the cache space is unevenly distributed and that occupied cache space cannot be released, so that new data is easily lost.

Description

Method for releasing cache space, release system, electronic device, and storage medium
This application claims priority to Chinese patent application CN202011540102.4 filed on December 23, 2020. The full text of the above Chinese patent application is incorporated herein by reference.
Technical Field
The present invention relates to the field of wireless communication, and in particular to a method for releasing cache space, a release system, an electronic device, and a storage medium.
Background
During communication, the data manager in a chip continuously receives data and stores it in a cache. Since the same data may be associated with multiple processing tasks, after received data is stored in the cache it needs to be read repeatedly for several non-repetitive processing tasks, so cached data cannot be released immediately after being read.
Usually, the buffer space in a chip is small and fixed in size and cannot be adjusted dynamically. If old buffer contents are released inefficiently, the buffer space gradually fills up as cached content keeps increasing, and received data may fail to be stored or may be lost.
For example, in a neighboring-cell measurement scenario, because the timing of each cell is different, the time-domain data of the cells is randomly distributed and may partially or completely overlap. Therefore, during measurement, after the sampled data has been stored in the cache, the sampled data needs to be read multiple times so that the same data can be processed multiple times. The common approach mainly uses a ring data manager and a FIFO (first-in, first-out) mechanism to manage the cached cell data. This approach has at least the following problems: the ring data manager and the FIFO mechanism cannot control the priority of cache read and write accesses, and the data of each cell can only be processed one by one in the order in which the data arrives; because the data in the cache cannot be released, newly arrived data cannot obtain cache space and is lost. Especially in extreme scenarios the cells are unevenly distributed; in areas where cells are densely distributed, the hardware measurement processing capability does not match the data arrival rate, and a large storage space is then required to buffer unprocessed data, which results in low storage-space utilization and a large occupied area. This further means that the chip cannot continue to receive data, or that even if data is received it is lost, so that processing tasks cannot be executed.
Summary of the Invention
The technical problem to be solved by the present invention is to overcome the defects in the prior art that data in the cache is difficult to release, so that newly received data cannot be stored and data loss occurs, and to provide a method for releasing cache space, a release system, an electronic device, and a storage medium that improve the release efficiency of data stored in the cache.
The present invention solves the above technical problem through the following technical solutions:
The present invention provides a method for releasing cache space, where the cache space includes several storage units, and the release method includes:
selecting a target occupied storage unit according to the number of to-be-processed tasks associated with each occupied storage unit;
selecting a target processing task in the target occupied storage unit;
when no to-be-processed task is associated with the target occupied storage unit, releasing the target occupied storage unit.
Preferably, after the step of selecting a target processing task in the target occupied storage unit, the method further includes:
after the target processing task has been processed, judging whether the occupied storage unit associated with the target processing task is still associated with a to-be-processed task; if not, executing the step of releasing the target occupied storage unit; if so, returning to the step of selecting a target occupied storage unit according to the number of to-be-processed tasks associated with each occupied storage unit.
Preferably, the step of selecting a target occupied storage unit according to the number of to-be-processed tasks associated with each occupied storage unit includes:
calculating the number of to-be-processed tasks associated with each occupied storage unit;
selecting, from the occupied storage units whose number of associated to-be-processed tasks is less than a task processing threshold, one unit as the target occupied storage unit; or sorting the occupied storage units by the number of associated to-be-processed tasks from fewest to most and selecting one of the top-ranked occupied storage units as the target occupied storage unit; or selecting the occupied storage unit with the fewest associated to-be-processed tasks as the target occupied storage unit.
Preferably, the step of selecting a target processing task in the target occupied storage unit includes:
taking the first to-be-processed task associated with the target occupied storage unit as the target processing task.
Preferably, the to-be-processed tasks associated with each occupied storage unit are numbered sequentially, and the step of calculating the number of to-be-processed tasks associated with each occupied storage unit includes:
for each occupied storage unit, obtaining the first number of the currently associated first to-be-processed task and the second number of the currently associated last to-be-processed task;
calculating the number of to-be-processed tasks associated with the occupied storage unit according to the second number and the first number.
Preferably, before the step of selecting a target occupied storage unit according to the number of to-be-processed tasks associated with each occupied storage unit, the method further includes:
receiving data to be stored and storing the data to be stored in at least one free storage unit according to the size of the data to be stored, where, when the data to be stored is stored in a plurality of free storage units, the free storage units are connected in series through a linked list, and the data to be stored is associated with at least one to-be-processed task.
Preferably, when the data to be used by the target processing task is stored in a plurality of occupied storage units, the addresses of the occupied storage units are obtained through a corresponding linked list.
Preferably, before the step of selecting a target occupied storage unit according to the number of to-be-processed tasks associated with each occupied storage unit, the method further includes:
dividing the cache space into several storage units of the same size.
Preferably, the step of storing the data to be stored in at least one free storage unit according to the size of the data to be stored includes:
judging whether the storage space contained in the current free storage units meets the storage requirement of the data to be stored: if so, storing the data to be stored in the corresponding free storage units; if not, waiting for occupied storage units to be released until the storage space contained in the current free storage units meets the storage requirement of the data to be stored, or first storing part of the data to be stored in the current free storage units and, when a new free storage unit appears, storing the remaining part of the data in the new free storage unit.
Preferably, each storage unit includes storage unit status information, quantity information of to-be-processed tasks, linked-list address information, and storage unit identification information; the storage unit status information is used to indicate the storage state of the storage unit, and the linked-list address information is used to indicate the address of a storage unit connected in series with the current storage unit.
The present invention also provides a system for releasing cache space, where the cache space includes several storage units, and the release system includes: a storage unit selection module, a processing task selection module, and a space release module;
the storage unit selection module is configured to select a target occupied storage unit according to the number of to-be-processed tasks associated with each occupied storage unit;
the processing task selection module is configured to select a target processing task in the target occupied storage unit;
the space release module is configured to release the target occupied storage unit when no to-be-processed task is associated with the target occupied storage unit.
Preferably, the release system further includes a task judgment module; the processing task selection module is further configured to call the task judgment module after selecting the target processing task, and the task judgment module is configured to, after the target processing task has been processed, judge whether the occupied storage unit associated with the target processing task is still associated with a to-be-processed task; if not, the space release module is called; if so, the storage unit selection module is called.
Preferably, the storage unit selection module includes: a task calculation unit and a storage selection unit;
the task calculation unit is configured to calculate the number of to-be-processed tasks associated with each occupied storage unit;
the storage selection unit is configured to select, from the occupied storage units whose number of associated to-be-processed tasks is less than a task processing threshold, one unit as the target occupied storage unit; or the storage selection unit is configured to sort the occupied storage units by the number of associated to-be-processed tasks from fewest to most and to select one of the top-ranked occupied storage units as the target occupied storage unit; or the storage selection unit is configured to select the occupied storage unit with the fewest associated to-be-processed tasks as the target occupied storage unit.
Preferably, the processing task selection module is configured to take the first to-be-processed task associated with the target occupied storage unit as the target processing task.
Preferably, the to-be-processed tasks associated with each occupied storage unit are numbered sequentially, and the task calculation unit is configured to, for each occupied storage unit, obtain the first number of the currently associated first to-be-processed task and the second number of the currently associated last to-be-processed task, and calculate the number of to-be-processed tasks associated with the occupied storage unit according to the second number and the first number.
Preferably, the release system further includes a data receiving module, configured to receive data to be stored and store the data to be stored in at least one free storage unit according to the size of the data to be stored, where, when the data to be stored is stored in a plurality of free storage units, the free storage units are connected in series through a linked list, and the data to be stored is associated with at least one to-be-processed task.
Preferably, the space release module is configured to obtain the addresses of the occupied storage units through a corresponding linked list when the data to be used by the target processing task is stored in a plurality of occupied storage units.
Preferably, the release system further includes: a storage unit division module configured to divide the cache space into several storage units of the same size.
Preferably, the data receiving module is configured to judge whether the storage space contained in the current free storage units meets the storage requirement of the data to be stored: if so, store the data to be stored in the corresponding free storage units; if not, wait for occupied storage units to be released until the storage space contained in the current free storage units meets the storage requirement of the data to be stored, or first store part of the data to be stored in the current free storage units and, when a new free storage unit appears, store the remaining part of the data in the new free storage unit.
Preferably, each storage unit includes storage unit status information, quantity information of to-be-processed tasks, linked-list address information, and storage unit identification information; the storage unit status information is used to indicate the storage state of the storage unit, and the linked-list address information is used to indicate the address of a storage unit connected in series with the current storage unit.
The present invention also provides an electronic device, including a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor implements the above method for releasing cache space when executing the computer program.
The present invention also provides a computer-readable storage medium on which a computer program is stored, where the computer program, when executed by a processor, implements the steps of the above method for releasing cache space.
The positive improvement effect of the present invention is that: in the present invention, the target occupied storage unit is selected according to the number of to-be-processed tasks associated with the currently occupied storage units, and the target processing task is then further selected; when all the to-be-processed tasks associated with an occupied storage unit have been processed, the occupied storage unit can be released, that is, converted into a free storage unit for storing subsequent data. Through the number of to-be-processed tasks associated with each occupied storage unit, the present invention can dynamically adjust the space available for storing data in the cache and release occupied storage units in time, which improves the utilization efficiency of the cache space and overcomes the defects in existing mechanisms that the cache space is unevenly distributed and that occupied cache space cannot be released, so that new data is easily lost.
Brief Description of the Drawings
FIG. 1 is a flowchart of a method for releasing cache space in Embodiment 1 of the present invention.
FIG. 2 is a partial flowchart of the method for releasing cache space in Embodiment 1 of the present invention.
FIG. 3 is a schematic diagram of a numbering scheme for to-be-processed tasks in a specific scenario in Embodiment 1 of the present invention.
FIG. 4 is a schematic diagram of linked-list concatenation in a specific scenario in Embodiment 1 of the present invention.
FIG. 5 is a schematic block diagram of a system for releasing cache space in Embodiment 2 of the present invention.
FIG. 6 is a schematic block diagram of an electronic device in Embodiment 3 of the present invention.
Detailed Description of the Embodiments
The present invention is further described below by way of embodiments, but the present invention is not thereby limited to the scope of the described embodiments.
Embodiment 1
This embodiment provides a method for releasing cache space. The cache space includes several storage units, each of which can be applied for and released independently. The several storage units have two storage states: occupied storage units and free storage units. An occupied storage unit means that data has been stored in the storage unit, and a free storage unit means that no data has been stored in the storage unit yet. When a free storage unit stores data, the free storage unit switches to an occupied storage unit; when the data in an occupied storage unit is released, the occupied storage unit switches to a free storage unit.
As shown in FIG. 1, the release method in this embodiment includes:
Step 11: select a target occupied storage unit according to the number of to-be-processed tasks associated with each occupied storage unit.
Specifically, in step 11 the number of to-be-processed tasks associated with each occupied storage unit may be calculated first, and the target occupied storage unit may then be selected according to the calculated numbers.
In a first specific implementation, one unit may be selected as the target occupied storage unit from the occupied storage units whose number of associated to-be-processed tasks is less than a task processing threshold. For example, the task processing threshold may be set to 3; all occupied storage units associated with fewer than 3 to-be-processed tasks are candidate storage spaces, and one of the candidate storage spaces is randomly selected as the target occupied storage unit.
In a second specific implementation, the occupied storage units may be sorted by the number of associated to-be-processed tasks from fewest to most, and one of the top-ranked occupied storage units is selected as the target occupied storage unit. For example, the preset number may be set to 4: the occupied storage units are sorted by the number of associated to-be-processed tasks from fewest to most, the top 4 occupied storage units are taken as candidate storage units, and one of them is randomly selected as the target occupied storage unit.
In a third specific implementation, the occupied storage unit with the fewest associated to-be-processed tasks is selected as the target occupied storage unit.
In this embodiment, the third specific implementation is preferred in order to improve the efficiency of selecting the target occupied storage unit.
Step 12: select a target processing task in the target occupied storage unit.
Specifically, the target processing task may be selected randomly in the target occupied storage unit, or it may be selected in a certain order. In this embodiment, the latter approach is used to describe step 12 in detail: in this embodiment, the first to-be-processed task associated with the target occupied storage unit is taken as the target processing task. It should be understood that in other embodiments the last to-be-processed task associated with the target occupied storage unit, or an intermediate associated to-be-processed task, may also be taken as the target processing task.
Step 13: after the target processing task has been processed, judge whether the occupied storage unit associated with the target processing task is still associated with a to-be-processed task; if not, execute step 14; if so, return to step 11.
Step 14: release the target occupied storage unit, and return to step 11.
It should be understood that before step 11 the cache space may first be divided into several storage units according to actual needs, for subsequent dynamic allocation and release of the cache space. In this embodiment, the cache space is preferably divided into several storage units of the same size to further improve the efficiency and stability of subsequent dynamic allocation and release of the cache space.
In this embodiment, the target occupied storage unit is selected according to the number of to-be-processed tasks associated with the currently occupied storage units, and the target processing task associated with the target occupied storage unit is further selected. When all the to-be-processed tasks associated with the target occupied storage unit have been processed, the occupied storage unit can be released, that is, switched to a free storage unit for storing subsequent data. Through the number of to-be-processed tasks associated with each occupied storage unit, this embodiment can dynamically adjust the space available for storing data in the cache and release occupied storage units in time, which improves the utilization efficiency of the cache space and overcomes the defects in existing mechanisms that the cache space is unevenly distributed and that occupied cache space cannot be released, so that new data is easily lost.
如图2所示,本实施例中的缓存空间的释放方法还可以进一步包括:
步骤21、接收待存储数据。
其中,步骤21中在开始接收待存储数据时,先申请空闲存储单元,步骤21后可以进一步包括判断当前空闲存储单元所包含的存储空间是否符合待存储数据的存储需求的步骤:若是,则执行步骤22;若否,在一种具体的实施方式中,可以等待占用存储单元的释放直至当前空闲存储单元所包含的存储空间符合待存储数据的存储需求,这种情况下可以每隔第一时间阈值再次执行判断当前空闲存储单元所包含的存储空间是否符合待 存储数据的存储需求的步骤直至当前空闲存储单元所包含的存储空间符合待存储数据的存储需求;在另一种具体的实施方式中,可以先将一部分待存储数据存储至当前空闲存储单元中,当存在新的空闲存储单元时,再将另一部分数据待存储至当新的空闲存储单元中,在这种实施方式中,可以仅当新的出现的所有空闲存储单元所含有的存储空间之和满足另一部分数据的全部存储需求时,再存储另一部分数据,也可以一旦一个占用存储单元切换至空闲存储单元时,则先将一部分数据中的部分存储至新出现的空闲存储单元中,直至所有的待存储数据均存储完毕。
其中,待存储数据的数据源可以为一个或者多个,当存在多个数据源时并行进行多个空闲存储空间的申请、存储,本实施例中,多个数据源不会占用同一个存储单元,以方便对不同数据源的数据进行管理,应当理解,在其他实施例中,多个数据源也可以占用同一个存储单元,具体可以根据实际进行选择。
步骤22、根据待存储数据的大小将待存储数据存储至至少一个空闲存储单元中。
应当理解,步骤21及步骤22可以与步骤11-步骤14同步进行,因此,可以实现接收待处理数据的同时,进行缓冲空间的释放,换句话而言,在实现缓存空间的释放的同时,可以实现待处理数据的接收,从而可以使数据的处理过程持续、不间断的执行,进一步提高了数据处理的效率。
其中,可以随机为接收的待存储数据分配空闲存储单元,只要分配的空闲存储单元包含的存储空间的总和满足待存储数据的存储需求即可。
应当理解,每一待存储数据至少关联一个待处理任务,执行步骤22的同时,或者在步骤21中空闲存储单元申请成功后,可以进一步执行步骤:对每一占用存储单元关联的待处理任务依次编号,如在一种具体场景下:当前存在3个占用存储单元,第一个占用存储单元关联有两个待处理任务,第二占用存储单元关联有一个待处理任务,第三个占用存储单元关联又三个待处理任务,则可以将第一个占用存储单元中的待处理任务分别编号为001与002,将第二个占用存储单元中的待处理任务编号为003,将第三个占用存储单元中的待处理任务分别编号为004、005及006,应当理解上述具体场景仅作为举例说明,不应当成为本实施例的限制,实际情况中的编号方式、待处理任务储量、占用存储单元数量以及每一占用存储单元所关联的待处理任务的数量都可以根据实际情况进行选择。
本实施例中,在步骤11中可以根据每一占用存储单元关联的待处理任务的编号来计算所关联的待处理任务的数量,具体的,对于每一占用存储单元,获取当前关联的第一个待处理任务的第一编号及当前关联的最后一个待处理任务的第二编号,并根据第二编 号及第一编号计算占用存储单元关联的待处理任务的数量。图3示出了一具体场景下待处理任务的编号方式的示意图,该场景下,包括存储单元HEAD(第一个存储单元,即存储单元的首部)…..前存储单元2、前存储单元1、前存储单元0、当前存储单元、后存储单元0、后存储单元1、后存储单元2……等等存储单元。其中101、102、103及104代表不同的待处理任务,从图中可以看到,每一待处理任务所需要使用的数据可能存储在多个存储单元中,如待处理任务101所需要使用的数据分别存储在前存储单元2及当前存储单元中,对于待处理任务101,前存储单元2即为数据读取起始存储单元,待处理任务102所需要使用的数据分别存储在前存储单元1及后存储单元0中,对于待处理任务102,前存储单元1即为数据读取起始存储单元,待处理任务103所需要使用的数据分别存储在前存储单元0及后存储单元1中,对于待处理任务103,前存储单元0即为数据读取起始存储单元,待处理任务104所需要使用的数据分别存储在当前存储单元及后存储单元2中,对于待处理任务104,当前存储单元即为数据读取起始存储单元。其中,在当前存储单元申请后,当前存储单元所关联的最后一个待处理任务的编号(104)减去当前处存储单元所关联的第一个处理单元的编号(101),结果为当前存储单元所关联的待处理任务数量。
应当理解,当待存储数据所占用的存储空间比较小时,可能仅需申请一个空闲存储单元,而当待存储数据所占用的存储空间比较大时,需要申请多个空闲存储单元,此时,各个存储单元之间可以通过链表串联。图4示出了一具体场景下链表串联的示意图,链表中的各存储单元可以在空闲存储单元中随机选择,无顺序要求。存储单元0通过在链表中存储的地址与存储单元1链接,存储单元1通过在链表中存储的地址,与存储单元2连接……如,在图3中,待处理任务101所需要使用的数据分别存储在前存储单元2及当前存储单元中,则链表中会存储有链接前存储单元2及当前存储单元的地址,在处理待处理任务101时,读取前存储单元2中的数据后,则会通过链表中的地址找到当前存储单元,并继续读当前存储单元中对应的数据。
其中,根据接收数据的参数,可能存在维护多个链表信息,多个链表之间无相互关联。
如当对于当前的待存储数据的空闲存储单元申请成功后,可以将申请成功的空闲存储单元的地址依次记录在链表的信息中,由于待处理任务需要使用的数据可能存储在多个占用存储单元中,因此可以通过链表来找到对应的数据,链表中存储的第一个地址为存数据的起始地址信息(图4中的存储单元0),链表存储的最后一个地址则为数据的终止地址信息,因此在处理待处理任务时,可以在链表中找到所需数据的起始地址以开始 对数据进行处理直到找到终止地址信息,即对所需数据全部处理完毕。
具体的,根据当前待处理任务需求,可能需要连续读取一个或多个占用存储单元的数据,读取的第一个存储单元为当前待处理任务关联数据起始的占用存储单元,具体可以通过链表中的起始地址信息得到,第一个占用存储单元读取完毕后,基于链表中串联的地址信息查找后续需要读取的占用存储单元,每个存储单元的数据读取完毕,当前待处理任务处理完毕后,需要更新占用存储单元中的待处理任务数量,当某一占用存储单元中的待处理任务数量为0时,将相应的占用存储单元释放,该存储单元状态信息更新为空闲状态,即切换为空闲存储单元以供后续待存储数据进行存储空间的申请及使用。
本实施例中,每个存储单元维护对应的存储单元状态信息、存储单元标识信息、待处理任务的数量信息以及链表信息,存储单元状态信息用于指示存储单元存储状态,如空闲状态或占用状态,每个存储单元初始存储状态为空闲,即为空闲存储单元,申请后为占用,即为占用存储单元,占用并释放后为空闲状态,即为空闲存储单元。链表信息包括链表地址信息,用于表示与当前存储单元串联的存储单元的地址,包括起始地址信息、终止地址信息及中间关联的若干地址信息,存储单元标识信息用于表示各个存储单元的标识符以方便对存储单元进行管理。
It should be understood that the method for releasing cache space in this embodiment can be applied in a variety of scenarios, such as cell measurement, data demodulation, parameter estimation and so on; this embodiment imposes no limitation in this respect.
Embodiment 2
This embodiment provides a system for releasing cache space, where the cache space includes a number of storage units. As shown in FIG. 5, the release system includes: a storage unit selection module 31, a processing task selection module 32 and a space release module 33.
The storage unit selection module 31 is configured to select a target occupied storage unit according to the number of pending tasks associated with each occupied storage unit. Specifically, the storage unit selection module 31 is configured to first calculate the number of pending tasks associated with each occupied storage unit, and then select the target occupied storage unit according to the calculated numbers.
In a first specific implementation, the storage unit selection module 31 is configured to select, as the target occupied storage unit, one of the occupied storage units whose number of associated pending tasks is less than a task processing threshold. For example, the task processing threshold may be set to 3; all occupied storage units associated with fewer than 3 pending tasks are candidate storage units, and one of the candidates is selected at random as the target occupied storage unit.
In a second specific implementation, the storage unit selection module 31 is configured to sort the occupied storage units by the number of associated pending tasks in ascending order and to select one of the top-ranked occupied storage units as the target occupied storage unit. For example, a preset number may be set to 4: the occupied storage units are sorted by the number of associated pending tasks in ascending order, the top 4 occupied storage units are taken as candidate storage units, and one of the candidates is selected at random as the target occupied storage unit.
In a third specific implementation, the storage unit selection module 31 is configured to select the occupied storage unit with the fewest associated pending tasks as the target occupied storage unit.
In this embodiment, the third specific implementation is preferred in order to improve the efficiency of selecting the target occupied storage unit.
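A compact, purely illustrative sketch of the three selection strategies follows. The pool contents, the threshold of 3 and the preset count of 4 mirror the examples above; the helper names and the random pick among candidates are assumptions of the sketch.

```c
#include <stdio.h>
#include <stdlib.h>

#define UNIT_COUNT 6

/* pending-task count of each occupied storage unit (example data) */
static int pending[UNIT_COUNT] = { 5, 2, 4, 1, 3, 2 };

/* Strategy 1: any unit with fewer pending tasks than the threshold. */
static int pick_below_threshold(int threshold) {
    for (int i = 0; i < UNIT_COUNT; i++)
        if (pending[i] < threshold) return i;   /* a random candidate also works */
    return -1;
}

/* Strategy 2: sort ascending and pick among the first `preset` units. */
static int cmp_pending(const void *a, const void *b) {
    return pending[*(const int *)a] - pending[*(const int *)b];
}
static int pick_top_ranked(int preset) {
    int order[UNIT_COUNT];
    for (int i = 0; i < UNIT_COUNT; i++) order[i] = i;
    qsort(order, UNIT_COUNT, sizeof order[0], cmp_pending);
    return order[rand() % preset];   /* random pick among the top `preset` */
}

/* Strategy 3 (preferred here): the unit with the fewest pending tasks. */
static int pick_minimum(void) {
    int best = 0;
    for (int i = 1; i < UNIT_COUNT; i++)
        if (pending[i] < pending[best]) best = i;
    return best;
}

int main(void) {
    printf("threshold (3):  unit %d\n", pick_below_threshold(3));
    printf("top-ranked (4): unit %d\n", pick_top_ranked(4));
    printf("minimum:        unit %d\n", pick_minimum());
    return 0;
}
```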
The processing task selection module 32 is configured to select a target processing task in the target occupied storage unit. Specifically, the processing task selection module 32 may select the target processing task at random from the target occupied storage unit, or select it in a certain order. In this embodiment, the latter approach is used to describe the processing task selection module 32 in detail: the processing task selection module 32 is configured to take the first pending task associated with the target occupied storage unit as the target processing task. It should be understood that in other embodiments the processing task selection module 32 may also be configured to take the last pending task associated with the target occupied storage unit, or an intermediate associated pending task, as the target processing task.
The space release module 33 is configured to release the target occupied storage unit when the target occupied storage unit is no longer associated with any pending task, and to invoke the storage unit selection module 31.
The release system may further include a task determination module 34. The processing task selection module 32 is further configured to invoke the task determination module 34 after selecting the target processing task. The task determination module 34 is configured to, after the target processing task has been processed, determine whether the occupied storage unit associated with the target processing task is still associated with any pending task; if not, the space release module 33 is invoked; if so, the storage unit selection module 31 is invoked.
It should be understood that the release system may further include a storage unit dividing module 35, configured to divide the cache space into a number of storage units for subsequent dynamic allocation and release of the cache space. In this embodiment, the storage unit dividing module 35 preferably divides the cache space into storage units of equal size, so as to further improve the efficiency and stability of the subsequent dynamic allocation and release of the cache space.
In this embodiment, the target occupied storage unit is selected according to the number of pending tasks associated with each currently occupied storage unit, the processing task selection module 32 is configured to further select a target processing task associated with the target occupied storage unit, and the space release module 33 is configured to release the occupied storage unit once all pending tasks associated with the target occupied storage unit have been processed, i.e., the occupied storage unit is switched to a free storage unit so that subsequent data can be stored. By tracking the number of pending tasks associated with each occupied storage unit, this embodiment dynamically adjusts the space available for storing data in the cache and releases occupied storage units in a timely manner, which improves the utilization of the cache space and overcomes the defects of existing mechanisms in which the cache space is unevenly distributed and occupied cache space cannot be released, so that new data is easily lost.
The system for releasing cache space in this embodiment may further include a data receiving module 36, configured to receive data to be stored and to store the data to be stored into at least one free storage unit according to the size of the data to be stored. When the data to be stored is stored in multiple free storage units, the free storage units are chained together by a linked list, and the data to be stored is associated with at least one pending task.
The data receiving module 36 may be configured to first request a free storage unit when reception of the data to be stored begins, and may further be configured to determine whether the storage space contained in the current free storage units meets the storage requirement of the data to be stored: if so, the data to be stored is stored into at least one free storage unit according to its size; if not, in one specific implementation, the module may wait for occupied storage units to be released until the storage space contained in the current free storage units meets the storage requirement of the data to be stored, in which case the determination may be repeated at every first time threshold until the requirement is met; in another specific implementation, one part of the data to be stored may first be stored into the current free storage units, and when a new free storage unit appears, the other part of the data is stored into the new free storage unit. In this implementation, the other part of the data may be stored only when the total storage space of all newly appearing free storage units meets the entire storage requirement of that other part, or, as soon as one occupied storage unit is switched to a free storage unit, a portion of that remaining data may first be stored into the newly appearing free storage unit, until all of the data to be stored has been stored.
The data to be stored may come from one or more data sources. When there are multiple data sources, the data receiving module 36 may be configured to request and use multiple free storage units in parallel. In this embodiment, multiple data sources do not occupy the same storage unit, which facilitates managing the data of different data sources. It should be understood that in other embodiments multiple data sources may also occupy the same storage unit, which may be chosen according to actual conditions.
In this embodiment, the data receiving module 36 may be invoked at the same time as the storage unit selection module 31, the processing task selection module 32, the task determination module 34 and the space release module 33. Therefore, the cache space can be released while data to be processed is being received; in other words, data to be processed can be received while the cache space is being released, so that data processing can proceed continuously and without interruption, further improving the efficiency of data processing.
The storage unit selection module 31 may allocate free storage units to the received data to be stored at random, as long as the total storage space of the allocated free storage units meets the storage requirement of the data to be stored.
It should be understood that each piece of data to be stored is associated with at least one pending task. While the data receiving module 36 or the storage unit selection module 31 is invoked, the storage unit selection module may also be invoked to, for each occupied storage unit, obtain a first number of the first currently associated pending task and a second number of the last currently associated pending task, and to calculate the number of pending tasks associated with the occupied storage unit from the second number and the first number. For example, in one specific scenario there are currently three occupied storage units: the first occupied storage unit is associated with two pending tasks, the second occupied storage unit is associated with one pending task, and the third occupied storage unit is associated with three pending tasks. The pending tasks in the first occupied storage unit may then be numbered 001 and 002, the pending task in the second occupied storage unit numbered 003, and the pending tasks in the third occupied storage unit numbered 004, 005 and 006. It should be understood that this specific scenario is only an example and should not limit this embodiment; in practice, the numbering scheme, the number of pending tasks, the number of occupied storage units and the number of pending tasks associated with each occupied storage unit may all be chosen according to the actual situation.
FIG. 3 is a schematic diagram of the task numbering scheme in one specific scenario, in which, after the current storage unit is requested, the number (104) of the last pending task associated with the current storage unit minus the number (101) of the first pending task associated with the current storage unit gives the number of pending tasks associated with the current storage unit.
It should be understood that, when the data to be stored occupies a relatively small storage space, the data receiving module 36 may only need to request one free storage unit, whereas when the data to be stored occupies a relatively large storage space, the data receiving module 36 needs to request multiple free storage units; in this case, the storage units may be chained together by a linked list. FIG. 4 is a schematic diagram of such linked-list chaining in one specific scenario. The storage units in the linked list may be selected at random from the free storage units, without any ordering requirement. Depending on the parameters of the received data, multiple sets of linked-list information may be maintained, and the linked lists are independent of one another.
For example, after the free storage units for the current data to be stored have been successfully requested, the addresses of the successfully requested free storage units may be recorded in sequence in the linked-list information. Since the data required by a pending task may be stored in multiple occupied storage units, the corresponding data can be found through the linked list: the first address stored in the linked list is the start address information of the data (storage unit 0 in FIG. 4), and the last address stored in the linked list is the end address information of the data. Therefore, when processing a pending task, the start address of the required data can be found in the linked list so as to start processing the data, until the end address information is reached, i.e., until all the required data has been processed.
Specifically, depending on the requirements of the current pending task, the data of one or more occupied storage units may need to be read consecutively. The first storage unit to be read is the occupied storage unit in which the data associated with the current pending task starts, which can be obtained from the start address information in the linked list. After the first occupied storage unit has been read, the occupied storage units to be read next are looked up based on the chained address information in the linked list. After the data of each storage unit has been read and the current pending task has been processed, the number of pending tasks in the occupied storage units needs to be updated. The space release module 33 is configured to release an occupied storage unit when its number of pending tasks reaches 0, and the status information of that storage unit is updated to the free state, i.e., it is switched to a free storage unit so that subsequent data to be stored can request and use its storage space.
In this embodiment, each storage unit maintains corresponding storage unit status information, storage unit identification information, pending task count information and linked-list information. The storage unit status information indicates the storage status of the storage unit, such as the free state or the occupied state: the initial storage status of each storage unit is free, i.e., it is a free storage unit; after being requested it is occupied, i.e., an occupied storage unit; and after being occupied and then released it is free again, i.e., a free storage unit. The linked-list information includes linked-list address information indicating the addresses of the storage units chained with the current storage unit, including start address information, end address information and a number of intermediate associated addresses. The storage unit identification information indicates the identifier of each storage unit, so as to facilitate managing the storage units.
It should be understood that the system for releasing cache space in this embodiment can be applied in a variety of scenarios, such as cell measurement, data demodulation, parameter estimation and so on; this embodiment imposes no limitation in this respect.
Embodiment 3
An embodiment of the present invention further provides an electronic device, which may take the form of a computing device (for example, a server device), including a memory, a processor and a computer program stored in the memory and executable on the processor, where the processor, when executing the computer program, can implement the method for releasing cache space of Embodiment 1 of the present invention.
FIG. 6 is a schematic diagram of the hardware structure of this embodiment. As shown in FIG. 6, the electronic device 9 specifically includes:
at least one processor 91, at least one memory 92, and a bus 93 for connecting different system components (including the processor 91 and the memory 92), wherein:
the bus 93 includes a data bus, an address bus and a control bus;
the memory 92 includes volatile memory, such as a random access memory (RAM) 921 and/or a cache memory 922, and may further include a read-only memory (ROM) 923;
the memory 92 also includes a program/utility 925 having a set of (at least one) program modules 924, such program modules 924 including but not limited to: an operating system, one or more application programs, other program modules and program data, each of these examples or some combination thereof possibly including an implementation of a network environment;
the processor 91 executes various functional applications and data processing, such as the method for releasing cache space of Embodiment 1 of the present invention, by running the computer program stored in the memory 92.
The electronic device 9 may further communicate with one or more external devices 94 (such as a keyboard, a pointing device, etc.). Such communication may take place through an input/output (I/O) interface 95. In addition, the electronic device 9 may also communicate with one or more networks (such as a local area network (LAN), a wide area network (WAN) and/or a public network, such as the Internet) through a network adapter 96. The network adapter 96 communicates with the other modules of the electronic device 9 through the bus 93. It should be understood that, although not shown in the figure, other hardware and/or software modules may be used in conjunction with the electronic device 9, including but not limited to: microcode, device drivers, redundant processors, external disk drive arrays, RAID (disk array) systems, tape drives, data backup storage systems and the like.
It should be noted that, although several units/modules or sub-units/modules of the electronic device are mentioned in the above detailed description, such division is merely exemplary and not mandatory. In fact, according to the embodiments of the present application, the features and functions of two or more units/modules described above may be embodied in one unit/module; conversely, the features and functions of one unit/module described above may be further divided and embodied by multiple units/modules.
Embodiment 4
An embodiment of the present invention further provides a computer-readable storage medium on which a computer program is stored, where the program, when executed by a processor, implements the steps of the method for releasing cache space of Embodiment 1 of the present invention.
The readable storage medium may more specifically include, but is not limited to: a portable disk, a hard disk, a random access memory, a read-only memory, an erasable programmable read-only memory, an optical storage device, a magnetic storage device, or any suitable combination thereof.
In a possible implementation, the present invention may also be implemented in the form of a program product including program code; when the program product runs on a terminal device, the program code is used to cause the terminal device to perform the steps of the method for releasing cache space of Embodiment 1 of the present invention.
The program code for carrying out the present invention may be written in any combination of one or more programming languages, and the program code may be executed entirely on a user device, partly on a user device, as a stand-alone software package, partly on a user device and partly on a remote device, or entirely on a remote device.
Although specific embodiments of the present invention have been described above, those skilled in the art should understand that these are only examples, and the scope of protection of the present invention is defined by the appended claims. Those skilled in the art may make various changes or modifications to these embodiments without departing from the principle and essence of the present invention, and all such changes and modifications fall within the scope of protection of the present invention.

Claims (22)

  1. A method for releasing cache space, characterized in that the cache space comprises a number of storage units, and the release method comprises:
    selecting a target occupied storage unit according to the number of pending tasks associated with each occupied storage unit;
    selecting a target processing task in the target occupied storage unit;
    when the target occupied storage unit is no longer associated with any pending task, releasing the target occupied storage unit.
  2. The method for releasing cache space according to claim 1, characterized in that, after the step of selecting a target processing task in the target occupied storage unit, the release method further comprises:
    after the target processing task has been processed, determining whether the occupied storage unit associated with the target processing task is still associated with any pending task; if not, performing the step of releasing the target occupied storage unit; if so, returning to the step of selecting a target occupied storage unit according to the number of pending tasks associated with each occupied storage unit.
  3. The method for releasing cache space according to claim 1, characterized in that the step of selecting a target occupied storage unit according to the number of pending tasks associated with each occupied storage unit comprises:
    calculating the number of pending tasks associated with each occupied storage unit;
    selecting, as the target occupied storage unit, one of the occupied storage units whose number of associated pending tasks is less than a task processing threshold; or sorting the occupied storage units by the number of associated pending tasks in ascending order, and selecting one of the top-ranked occupied storage units as the target occupied storage unit; or selecting the occupied storage unit with the fewest associated pending tasks as the target occupied storage unit.
  4. The method for releasing cache space according to claim 1, characterized in that
    the step of selecting a target processing task in the target occupied storage unit comprises:
    taking the first pending task associated with the target occupied storage unit as the target processing task.
  5. The method for releasing cache space according to claim 3, characterized in that the pending tasks associated with each occupied storage unit are numbered in sequence, and the step of calculating the number of pending tasks associated with each occupied storage unit comprises:
    for each occupied storage unit, obtaining a first number of the first currently associated pending task and a second number of the last currently associated pending task;
    calculating the number of pending tasks associated with the occupied storage unit according to the second number and the first number.
  6. The method for releasing cache space according to claim 1, characterized in that the release method further comprises:
    receiving data to be stored, and storing the data to be stored into at least one free storage unit according to the size of the data to be stored, wherein when the data to be stored is stored in a plurality of the free storage units, the free storage units are chained together by a linked list, and the data to be stored is associated with at least one pending task.
  7. The method for releasing cache space according to claim 6, characterized in that, when the data required by the target processing task is stored in a plurality of occupied storage units, the addresses of the occupied storage units are obtained through the corresponding linked list.
  8. The method for releasing cache space according to claim 1, characterized in that, before the step of selecting a target occupied storage unit according to the number of pending tasks associated with each occupied storage unit, the release method further comprises:
    dividing the cache space into a number of storage units of equal size.
  9. The method for releasing cache space according to claim 6, characterized in that the step of storing the data to be stored into at least one of the free storage units according to the size of the data to be stored comprises:
    determining whether the storage space contained in the current free storage units meets the storage requirement of the data to be stored: if so, storing the data to be stored into the corresponding free storage units; if not, waiting for occupied storage units to be released until the storage space contained in the current free storage units meets the storage requirement of the data to be stored, or first storing one part of the data to be stored into the current free storage units and, when a new free storage unit appears, storing the other part of the data into the new free storage unit.
  10. The method for releasing cache space according to any one of claims 1 to 9, characterized in that the storage unit comprises storage unit status information, pending task count information, linked-list address information and storage unit identification information, wherein the storage unit status information is used to indicate the storage status of the storage unit, and the linked-list address information is used to indicate the addresses of the storage units chained with the current storage unit.
  11. A system for releasing cache space, characterized in that the cache space comprises a number of storage units, and the release system comprises: a storage unit selection module, a processing task selection module and a space release module;
    the storage unit selection module is configured to select a target occupied storage unit according to the number of pending tasks associated with each occupied storage unit;
    the processing task selection module is configured to select a target processing task in the target occupied storage unit;
    the space release module is configured to release the target occupied storage unit when the target occupied storage unit is no longer associated with any pending task.
  12. The system for releasing cache space according to claim 11, characterized in that the release system further comprises a task determination module, the processing task selection module is further configured to invoke the task determination module after selecting the target processing task, and the task determination module is configured to, after the target processing task has been processed, determine whether the occupied storage unit associated with the target processing task is still associated with any pending task; if not, the space release module is invoked; if so, the storage unit selection module is invoked.
  13. The system for releasing cache space according to claim 11, characterized in that the storage unit selection module comprises: a task calculation unit and a storage selection unit;
    the task calculation unit is configured to calculate the number of pending tasks associated with each occupied storage unit;
    the storage selection unit is configured to select, as the target occupied storage unit, one of the occupied storage units whose number of associated pending tasks is less than a task processing threshold; or the storage selection unit is configured to sort the occupied storage units by the number of associated pending tasks in ascending order and to select one of the top-ranked occupied storage units as the target occupied storage unit; or the storage selection unit is configured to select the occupied storage unit with the fewest associated pending tasks as the target occupied storage unit.
  14. The system for releasing cache space according to claim 11, characterized in that
    the processing task selection module is configured to take the first pending task associated with the target occupied storage unit as the target processing task.
  15. The system for releasing cache space according to claim 13, characterized in that the pending tasks associated with each occupied storage unit are numbered in sequence, and the task calculation unit is configured to, for each occupied storage unit, obtain a first number of the first currently associated pending task and a second number of the last currently associated pending task, and to calculate the number of pending tasks associated with the occupied storage unit according to the second number and the first number.
  16. The system for releasing cache space according to claim 11, characterized in that the release system further comprises a data receiving module, configured to receive data to be stored and to store the data to be stored into at least one free storage unit according to the size of the data to be stored, wherein when the data to be stored is stored in a plurality of the free storage units, the free storage units are chained together by a linked list, and the data to be stored is associated with at least one pending task.
  17. The system for releasing cache space according to claim 16, characterized in that the space release module is configured to, when the data required by the target processing task is stored in a plurality of occupied storage units, obtain the addresses of the occupied storage units through the corresponding linked list.
  18. The system for releasing cache space according to claim 11, characterized in that the release system further comprises: a storage unit dividing module, configured to divide the cache space into a number of storage units of equal size.
  19. The system for releasing cache space according to claim 16, characterized in that the data receiving module is configured to determine whether the storage space contained in the current free storage units meets the storage requirement of the data to be stored: if so, the data to be stored is stored into the corresponding free storage units; if not, the data receiving module waits for occupied storage units to be released until the storage space contained in the current free storage units meets the storage requirement of the data to be stored, or first stores one part of the data to be stored into the current free storage units and, when a new free storage unit appears, stores the other part of the data into the new free storage unit.
  20. The system for releasing cache space according to any one of claims 11 to 19, characterized in that the storage unit comprises storage unit status information, pending task count information, linked-list address information and storage unit identification information, wherein the storage unit status information is used to indicate the storage status of the storage unit, and the linked-list address information is used to indicate the addresses of the storage units chained with the current storage unit.
  21. An electronic device, comprising a memory, a processor and a computer program stored in the memory and executable on the processor, characterized in that the processor, when executing the computer program, implements the method for releasing cache space according to any one of claims 1 to 10.
  22. A computer-readable storage medium on which a computer program is stored, characterized in that the computer program, when executed by a processor, implements the steps of the method for releasing cache space according to any one of claims 1 to 10.
PCT/CN2021/136650 2020-12-23 2021-12-09 Method for releasing cache space, release system, electronic device and storage medium WO2022135160A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202011540102.4 2020-12-23
CN202011540102.4A CN112650449B (zh) Method for releasing cache space, release system, electronic device and storage medium

Publications (1)

Publication Number Publication Date
WO2022135160A1 true WO2022135160A1 (zh) 2022-06-30

Family

ID=75359543

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/136650 WO2022135160A1 (zh) Method for releasing cache space, release system, electronic device and storage medium

Country Status (2)

Country Link
CN (1) CN112650449B (zh)
WO (1) WO2022135160A1 (zh)

Families Citing this family (2)

Publication number Priority date Publication date Assignee Title
CN112650449B (zh) * 2020-12-23 2022-12-27 展讯半导体(南京)有限公司 缓存空间的释放方法、释放***、电子设备及存储介质
CN112995704B (zh) * 2021-04-25 2021-08-06 武汉中科通达高新技术股份有限公司 一种缓存管理方法、装置、电子设备及存储介质

Family Cites Families (5)

Publication number Priority date Publication date Assignee Title
JPH05143497A (ja) 1991-11-18 1993-06-11 Nec Corp Buffer management device and buffer management method
CN1184777C (zh) 2002-04-17 2005-01-12 华为技术有限公司 Method for managing and allocating buffers during data transmission by an Ethernet switching chip
CN107665146B (zh) 2016-07-29 2020-07-07 华为技术有限公司 Memory management apparatus and method
CN106681829B (zh) 2016-12-09 2020-07-24 北京康吉森技术有限公司 Memory management method and system
CN111538694B (zh) 2020-07-09 2020-11-10 常州楠菲微电子有限公司 Data caching method supporting multiple links and retransmission for a network interface

Patent Citations (7)

Publication number Priority date Publication date Assignee Title
US6105108A (en) 1997-10-24 2000-08-15 Compaq Computer Corporation Method and apparatus for releasing victim data buffers of computer systems by comparing a probe counter with a service counter
CN101073051A (zh) 2004-12-10 2007-11-14 皇家飞利浦电子股份有限公司 Data processing system and method for cache replacement
CN101753580A (zh) 2010-01-08 2010-06-23 烽火通信科技股份有限公司 Packet processing chip and data storage and forwarding method thereof
US20150052312A1 (en) 2013-08-15 2015-02-19 International Business Machines Corporation Protecting the footprint of memory transactions from victimization
CN105159777A (zh) 2015-08-03 2015-12-16 中科创达软件股份有限公司 Process memory reclamation method and apparatus
CN110032438A (zh) 2019-04-24 2019-07-19 北京高途云集教育科技有限公司 Delayed task execution method and apparatus, and electronic device
CN112650449A (zh) 2020-12-23 2021-04-13 展讯半导体(南京)有限公司 Method for releasing cache space, release system, electronic device and storage medium

Also Published As

Publication number Publication date
CN112650449B (zh) 2022-12-27
CN112650449A (zh) 2021-04-13

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21909157

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21909157

Country of ref document: EP

Kind code of ref document: A1