CN113377724A - Cache space management method, device and storage medium - Google Patents

Cache space management method, device and storage medium

Info

Publication number
CN113377724A
Authority
CN
China
Prior art keywords
cache
file
list
space
files
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110751348.4A
Other languages
Chinese (zh)
Inventor
王浩
林顺
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xiamen Yaji Software Co Ltd
Original Assignee
Xiamen Yaji Software Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xiamen Yaji Software Co Ltd filed Critical Xiamen Yaji Software Co Ltd
Priority to CN202110751348.4A priority Critical patent/CN113377724A/en
Priority to PCT/CN2021/115424 priority patent/WO2023272918A1/en
Publication of CN113377724A publication Critical patent/CN113377724A/en
Pending legal-status Critical Current

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 - Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/10 - File systems; File servers
    • G06F16/17 - Details of further file system functions
    • G06F16/172 - Caching, prefetching or hoarding of files
    • G06F16/174 - Redundancy elimination performed by the file system
    • G06F16/18 - File system types
    • G06F16/182 - Distributed file systems
    • G06F16/184 - Distributed file systems implemented as replicated file system
    • G06F16/1844 - Management specifically adapted to replicated file systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The disclosure relates to the technical field of data processing, and discloses a cache space management method, apparatus, and storage medium. The cache space management method includes the following steps: creating, in advance, a cache list file in a cache space for storing a cache list; obtaining a cache task and copying the file corresponding to the cache task to the cache space; if the copy succeeds, obtaining the file size of the file; and calculating the sum of the sizes of all the cache files in the cache list and the file size of the file, that is, the size of the occupied space in the cache space, and cleaning the cache space if this sum is greater than a preset threshold. Because the information of each cache file in the cache space is stored in a file created in advance, the size of the occupied space in the cache space can be determined from the cache list, so redundant data in the cache space can be cleared in time, sufficient usable space in the cache space is ensured, and files can be cached at any time.

Description

Cache space management method, device and storage medium
Technical Field
The present disclosure relates to the field of data processing technologies, and in particular, to a method and an apparatus for managing a cache space, and a storage medium.
Background
As a game is continuously updated and maintained, the number of cache files in the cache space grows day by day. A considerable portion of the resources in the cache space are resources of the old game version from before the update; these resources are no longer used by the updated game, yet they occupy a large amount of cache space, so new cache resources cannot be cached successfully. In addition, automatic cleaning is triggered only when the cache space is full, and the cleaning process is slow, which also causes some cache resources to fail to be cached.
Disclosure of Invention
To solve the above technical problem or at least partially solve the above technical problem, the present disclosure provides a cache space management method, apparatus, and storage medium.
In a first aspect, an embodiment of the present disclosure provides a cache space management method, where a cache list file is created in a cache space in advance, a cache list is stored in the cache list file, the cache list includes a plurality of fields, and the plurality of fields include a size field of the cache file, and the method includes:
obtaining a cache task to be processed, and copying a file corresponding to the cache task to a cache space;
after the file corresponding to the caching task is copied successfully, inquiring the size of the file;
inquiring the size field of the cache files in the cache list to obtain the sum of the sizes of all the cache files in the cache list;
and if the sum of the sizes of all the cached files in the cache list and the file size of the file is greater than a preset threshold value, cleaning the cache space.
In a second aspect, an embodiment of the present disclosure provides a cache space management apparatus, where a cache list file is created in a cache space in advance, a cache list is stored in the cache list file, the cache list includes a plurality of fields, and the plurality of fields include a size field of the cache file, the apparatus includes:
the acquisition module is used for acquiring the cache task to be processed and copying a file corresponding to the cache task to a cache space;
the first query module is used for querying the file size of the file after the file corresponding to the caching task is copied successfully;
the second query module is used for querying the size field of the cache file in the cache list to obtain the sum of the sizes of all the cache files in the cache list;
and the clearing module is used for clearing the cache space if the sum of the sizes of all the cache files in the cache list and the file size of the file is greater than a preset threshold value.
In a third aspect, an embodiment of the present disclosure provides a cache space management device, including:
a memory;
a processor; and
a computer program;
wherein the computer program is stored in the memory and configured to be executed by the processor to implement the cache space management method as described above.
In a fourth aspect, the present disclosure provides a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements the steps of the cache space management method as described above.
The embodiments of the disclosure provide a cache space management method, apparatus, and storage medium. The method of the embodiments of the application includes: creating, in advance, a cache list file in a cache space for storing a cache list; obtaining a cache task and copying the file corresponding to the cache task to the cache space; if the copy succeeds, obtaining the file size of the file; calculating the sum of the sizes of all the cache files in the cache list and the file size of the file, that is, the size of the occupied space in the cache space; and cleaning the cache space if this sum is greater than a preset threshold. In this way, the information of the cache space can be determined in time through the cache list file without calling a file interface, the size of the occupied space in the cache space can be determined accurately through the cache list, and the automatic cleaning function is triggered according to the size of the occupied space, so that redundant data in the cache space can be cleared in time, sufficient usable space in the cache space is ensured, and files can be cached at any time.
Additional aspects and advantages of the present application will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the present application.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and together with the description, serve to explain the principles of the disclosure.
In order to more clearly illustrate the embodiments of the present disclosure or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art are briefly described below; it is obvious that those skilled in the art can obtain other drawings from these drawings without inventive effort.
FIG. 1 is a schematic diagram of a resource storage form according to an embodiment of the present disclosure;
FIG. 2 is a schematic diagram illustrating a process of loading game resources according to an embodiment of the present disclosure;
fig. 3 is a schematic diagram of an application scenario provided by the embodiment of the present disclosure;
fig. 4 is a schematic flowchart of a method for managing a cache space according to an embodiment of the present disclosure;
fig. 5 is a schematic flowchart of a method for managing a cache space according to an embodiment of the present disclosure;
fig. 6 is a schematic flowchart of a method for managing a cache space according to an embodiment of the present disclosure;
fig. 7 is a schematic flowchart of a method for managing a cache space according to an embodiment of the present disclosure;
fig. 8 is a schematic flowchart of a method for managing a cache space according to an embodiment of the present disclosure;
fig. 9 is a schematic structural diagram of a cache space management apparatus according to an embodiment of the present disclosure;
fig. 10 is a schematic structural diagram of a cache space management device according to an embodiment of the present disclosure.
Detailed Description
In order that the above objects, features and advantages of the present disclosure may be more clearly understood, aspects of the present disclosure will be further described below. It should be noted that the embodiments and features of the embodiments of the present disclosure may be combined with each other without conflict.
In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present disclosure, but the present disclosure may be practiced in other ways than those described herein; it is to be understood that the embodiments disclosed in the specification are only a few embodiments of the present disclosure, and not all embodiments.
As used herein, the singular forms "a", "an" and "the" include plural referents unless the context clearly dictates otherwise. It will be further understood that the terms "comprises" and/or "comprising," when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. It will be understood that when an element is referred to as being "connected" or "coupled" to another element, it can be directly connected or coupled to the other element, or intervening elements may also be present. Further, "connected" or "coupled" as used herein may include wirelessly connected or wirelessly coupled. As used herein, the term "and/or" includes all or any element and all combinations of one or more of the associated listed items.
To make the objects, technical solutions and advantages of the present application more clear, embodiments of the present application will be described in further detail below with reference to the accompanying drawings.
The following describes the technical solutions of the present application and how to solve the above technical problems with specific embodiments. The following several specific embodiments may be combined with each other, and details of the same or similar concepts or processes may not be repeated in some embodiments. Embodiments of the present application will be described below with reference to the accompanying drawings.
The method of the embodiments of the application includes: creating, in advance, a cache list file in a cache space for storing a cache list; obtaining a cache task and copying the file corresponding to the cache task to the cache space; if the copy succeeds, obtaining the file size of the file; calculating the sum of the sizes of all the cache files in the cache list and the file size of the file, that is, the size of the occupied space in the cache space; and cleaning the cache space if this sum is greater than a preset threshold. In this way, the information of the cache space can be determined in time through the cache list file without calling a file interface, the size of the occupied space in the cache space can be determined accurately through the cache list, and the automatic cleaning function is triggered according to the size of the occupied space, so that redundant data in the cache space can be cleared in time, sufficient usable space in the cache space is ensured, and files can be cached at any time.
Specifically, a game is composed of various resources. A resource can be understood as a combination of data; it is a basic component element of the game and controls how the game is realized. For example, resources may be game scenes, models, audio, video, text, textures, and the like. During use of the game, all of the resources used may be exported to the file system in the form of files, and each resource may be named with a globally unique identifier. For example, in the naming form of the various resources shown in fig. 1, each resource exists as a file with a uniquely corresponding file name; the files corresponding to the resources in fig. 1 may be divided into road, tree, and car, with sub-files contained in each, such as road01 and car101.
Specifically, game resources may be loaded on a mini-game platform. The characteristics of a mini-game platform in terms of resource management include the following. Limited package size: to reduce game loading time and speed up game startup, the mini-game platform strictly limits the size of the game package, which contains the main resources of the game; for example, the mini-game platform may limit the main package to no more than 4M. File system interfaces: every mini-game platform provides its own set of exclusive file system interfaces, which developers can use to manage files, including adding, deleting, reading, and writing files; however, these exclusive file system interfaces differ from platform to platform, and compatibility is poor. Temporary file directory: if no download cache path is specified when a resource is downloaded on the mini-game platform, the platform automatically downloads the file corresponding to the game resource into the temporary file directory; files stored in the temporary file directory are read-only, and although the size of the temporary directory is not limited, the platform automatically cleans the files in the temporary file directory when the mini-game exits, which makes secondary loading of resources inconvenient. Cache space: the mini-game platform provides 200M of cache space for each mini-game, and this cache space is reclaimed by the platform only when the mini-game is removed by the player.
Specifically, the process of loading game resources on a mini-game platform can be as shown in fig. 2. It can be understood that, while the game is running, loading a game resource includes the following steps: S210, resource locating, which resolves the resource's universally unique identifier (uuid) into the real path of the resource on the file system; S220, reading the resource file, which reads the file into the cache space according to the path resolved in S210; S230, deserializing and finding the dependency list, which deserializes the resource information read into the cache space in S220 into a resource object and finds the resource's list of dependent resource uuids; S240, waiting for dependent resources to load, which loads all dependent resources according to the dependent resource uuid list and waits for their loading to complete; and S250, initializing the resource, after which the initialized resource is a complete, usable resource.
Specifically, the operation flow of reading the resource file in S220 includes: obtaining the real path of the resource on the file system resolved in S210, and judging from the real path whether the resource is on a remote server; if not, directly reading the local resource; otherwise, downloading the resource from the remote server into a local space and reading the resource from the local space, where the local space may be the temporary file directory or the cache space.
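As a rough illustration of this read flow only, the following TypeScript sketch separates the local and remote branches; the helper names isRemote, downloadToLocal and readLocal are assumptions added for illustration and are not interfaces defined by the disclosure or by any particular platform.

```typescript
// Illustrative sketch only: the S220 branch of reading a resource file.
// isRemote, downloadToLocal and readLocal are assumed helpers.
async function readResource(
  realPath: string,
  isRemote: (p: string) => boolean,
  downloadToLocal: (p: string) => Promise<string>,
  readLocal: (p: string) => Promise<ArrayBuffer>
): Promise<ArrayBuffer> {
  if (!isRemote(realPath)) {
    // Resource is already on the local file system: read it directly.
    return readLocal(realPath);
  }
  // Otherwise download it into a local space (temporary file directory
  // or cache space) first, then read it from there.
  const localPath = await downloadToLocal(realPath);
  return readLocal(localPath);
}
```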
The embodiment of the present application is described in the application scenario where a mini-game platform loads game resources. It can be understood that the mini-game platform may be configured in a client terminal 31 or a server terminal 32 as shown in fig. 3. Specifically, the present embodiment is applicable to the case where the client terminal 31 performs cache space management; the method may be executed by a cache space management apparatus, which may be implemented in software and/or hardware and may be configured in an electronic device such as a terminal, including but not limited to a smart phone, a palmtop computer, a tablet computer, a wearable device with a display screen, a desktop computer, a notebook computer, an all-in-one machine, a smart home device, and the like. Alternatively, the embodiment may be applicable to the case where the server 32 performs cache space management; the method may be performed by a cache space management apparatus, which may be implemented in software and/or hardware and may be configured in an electronic device such as a server.
Fig. 4 is a flowchart illustrating a method for managing a cache space according to an embodiment of the present disclosure, where a cache list file is created in a cache space in advance, and a cache list is stored in the cache list file, where the cache list includes a plurality of fields, and the plurality of fields include a size field of the cache file.
It can be understood that a cache list file is created in the cache space in advance; that is, the cache list file is created as a database of the cache space for recording the information of each cache file in the cache space, and the information may include an identification field, a last-use time field, and a size field of each cache file. By reading the cache list file in the cache space, a cache list is obtained in memory; by querying the cache list in memory, the information of each cache file in the cache space can be obtained directly, without calling a file system interface (readdir) to obtain the information of the cache files in the cache space every time a cache file is queried. This increases the speed of querying cache file information and further reduces the loading time of the game.
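As a purely illustrative sketch of such a cache list, the TypeScript structure below records an identification field, a last-use time field and a size field for each cache file and loads the list into memory at startup; the field names, the JSON encoding and the readFile helper are assumptions, not details given by the disclosure.

```typescript
// Illustrative sketch only: one possible in-memory form of the cache list.
interface CacheEntry {
  id: string;         // identification field, e.g. the cache file's name
  lastUsedAt: number; // last-use time field, as a millisecond timestamp
  size: number;       // size field, in bytes
}

type CacheList = Map<string, CacheEntry>;

// Read the cache list file once and keep the list in memory, so later
// queries do not need to call the platform's file system interface.
function loadCacheList(
  readFile: (path: string) => string, // assumed platform read helper
  cacheListPath: string
): CacheList {
  const list: CacheList = new Map();
  let raw: string;
  try {
    raw = readFile(cacheListPath);
  } catch {
    return list; // no cache list file yet: start with an empty list
  }
  for (const entry of JSON.parse(raw) as CacheEntry[]) {
    list.set(entry.id, entry);
  }
  return list;
}
```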
The cache space management method includes steps S410 to S440 shown in fig. 4:
s410, obtaining the cache task to be processed, and copying the file corresponding to the cache task to a cache space.
It can be understood that, in the game loading process, after a resource file is downloaded to the temporary file directory, a cache task corresponding to the resource file is generated in memory, where the cache task indicates the information of the resource file downloaded into the local space. The cache task may be added to a cache queue in memory; the cache task to be processed then corresponds to the information of any downloaded resource file in the cache queue, and the downloaded resource file may be referred to as a cache file after it is stored in the cache space. Preferably, the cache queue may include the file names of the downloaded resource files, such as the file names corresponding to the various game resources shown in fig. 1 when the resources exist in file form.
It can be understood that, during game loading, game resource files can be downloaded directly to the temporary file directory, which reduces various restrictions in the downloading process. The position of the file corresponding to the game resource in the temporary file directory is found according to the cache task, and the file is copied from the temporary file directory to the cache space. On a mini-game platform, a certain amount of cache space is allocated to each mini-game, and this cache space is used to cache all files downloaded from the Content Delivery Network (CDN), so that when a file is loaded a second time, the cached resource is read directly from the cache space, reducing game loading time.
And S420, after the file corresponding to the caching task is copied successfully, inquiring the file size of the file.
Understandably, on the basis of S410 above, after the file corresponding to the cache task has been successfully copied from the temporary file directory into the corresponding cache space, that is, after the file has been completely copied into the cache space, the file size of the file is queried through the file system interface.
Illustratively, mini-game A is allocated a cache space A on the mini-game platform, and a file downloaded by mini-game A is copied from the temporary file directory into cache space A, so that the file can be read directly from cache space A the next time mini-game A is loaded.
Optionally, after the file corresponding to the caching task fails to be copied, the caching space is cleared.
It can be understood that, on the basis of S410 above, if the file corresponding to the cache task fails to be copied, the failure may correspond to two cases: either the cache space has no unoccupied space, or the unoccupied space is not enough to store the file; or the file was not completely copied into the cache space during the copying process. A game failure, such as a crash, may then occur, and at this time automatic cleaning of the cache space is triggered directly.
S430, inquiring the size field of the cache file in the cache list to obtain the sum of the sizes of all the cache files in the cache list.
Understandably, on the basis of S420 above, the size fields of all the cache files recorded in the cache list are queried, that is, the file sizes of all the cache files in the cache list are queried directly, and the sum of these file sizes is calculated. Since these cache files are stored in the cache space, this sum is the size of the occupied space in the cache space, i.e., the total size of all files in the cache space.
S440, if the sum of the sizes of all the cached files in the cache list and the file size of the file is greater than a preset threshold, cleaning the cache space.
Understandably, on the basis of S430 above, if the sum of the sizes of all the cache files recorded in the cache list and the size of the file corresponding to the cache task determined in S420 is greater than the preset threshold, that is, if the size of the occupied space in the cache space plus the size of the file currently copied into the cache space exceeds the preset threshold, automatic cleaning of the cache space is triggered, which ensures that the cache space always has free space for continuing to cache files. The preset threshold can be set according to the user's requirements, so that the cache space is cleaned in time.
Understandably, the cache file information recorded in the cache list does not yet include the information of the cache file corresponding to the currently acquired cache task to be processed, because that cache file has not yet been recorded in the cache list. Therefore, the file size of the cache file corresponding to the current cache task is added to the sum of the sizes of the cache files recorded in the cache list, in order to determine whether the cache space needs to trigger the automatic cleaning function after the cache file corresponding to the cache task is stored in the cache space.
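A minimal sketch of this check, under the same assumed CacheList structure as above, might look as follows; the function and parameter names are illustrative only, and the threshold value would be chosen by the developer (for example, some fraction of the platform's cache quota).

```typescript
// Illustrative sketch only: sum the size fields recorded in the in-memory
// cache list, add the newly copied file's size, and decide whether cleaning
// should be triggered.
function shouldCleanCache(
  list: CacheList,
  newFileSize: number, // file size of the file just copied to the cache space
  threshold: number    // preset threshold
): boolean {
  let occupied = 0;
  for (const entry of list.values()) {
    occupied += entry.size; // size of the space already occupied in the cache
  }
  return occupied + newFileSize > threshold;
}
```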
The cache space management method provided by the embodiment of the disclosure includes: creating, in advance, a cache list file in a cache space for storing a cache list; obtaining a cache task and copying the file corresponding to the cache task to the cache space; if the copy succeeds, obtaining the file size of the file; calculating the sum of the sizes of all the cache files in the cache list and the file size of the file, that is, the size of the occupied space in the cache space; and cleaning the cache space if this sum is greater than a preset threshold. The information of the cache space can thus be determined in time through the cache list file without calling a file interface, the size of the occupied space in the cache space can be determined accurately through the cache list, and the automatic cleaning function is triggered according to the size of the occupied space, so that redundant data in the cache space is cleared in time, sufficient usable space exists in the cache space, and files can be cached at any time.
On the basis of the foregoing embodiment, optionally, before querying the size field of the cache file in the cache list, the method further includes:
reading the content of a cache list file in a cache space to obtain a cache list; and initializing the cache list into the memory.
It can be understood that the content of the cache list file previously created in the cache space is read to obtain the cache list in memory; that is, the cache list obtained through the read operation is stored in memory. The cache list file stores the information of each cache file currently in the cache space, which does not yet include the information of the file corresponding to the cache task.
Correspondingly, the size field of the cache file in the cache list is inquired from the memory.
Understandably, after the cache list is obtained in memory, the size field of each cache file in the cache list is queried directly from memory.
According to the cache space management method provided by the embodiment of the disclosure, the cache list in memory is obtained by reading the cache list file created in advance in the cache space. The information of the cache files can then be obtained directly by querying the cache list, without calling a file system interface, which effectively reduces the time needed to obtain the information and makes it convenient to know the state of the cache space.
On the basis of the foregoing embodiment, fig. 5 is a schematic flowchart of a method for managing a cache space according to an embodiment of the present disclosure, and optionally, the multiple fields further include an identification field of the cache file; after querying the size field of the cached file in the cache list, the cache space management method further includes steps S510 to S520 shown in fig. 5:
s510, recording the identification of the file to an identification field of a cache file in the cache list, and recording the file size of the file to a size field of the cache file in the cache list to obtain an updated cache list.
It can be understood that the cache list is updated after the size field of the cache files in the cache list has been queried. This speeds up the process of determining whether the cache space triggers automatic cleaning: the decision does not have to wait for the cache list to be updated, since whether the cache space needs automatic cleaning can be determined directly from the sum of the previously queried file size of the file and the sizes of the cache files in the cache list.
Understandably, the identification (file name) of the file corresponding to the cache task to be processed is recorded into the identification field of a cache file in the cache list, that is, an entry for the file is newly added to the cache list in memory, and this entry includes the identification field. The previously queried file size of the file is recorded into the size field of that cache file in the cache list, that is, the size field corresponding to the file is added to the cache list in memory, so that the entry for the file (the file corresponding to the cache task to be processed) in the cache list includes an identification field and a size field, giving the updated cache list. For example, if the cache list contains 10 entries, each corresponding to a cache file stored in the cache space, then after the file corresponding to the cache task to be processed is stored in the cache space, the information of that file is correspondingly added to the cache list, and the cache list to which the file information has been added contains 11 entries.
And S520, writing the updated cache list into a cache list file.
It can be understood that, on the basis of S510 above, the cache list file is updated after a certain time according to the updated cache list. The cache list file can be understood as a subset of the real cache list in memory; that is, the number of cache file entries contained in the cache list file may be less than or equal to the number of entries in the cache list, and there may be moments when the cache list has been updated but the cache list file has not yet been updated.
In the cache space management method disclosed in this embodiment, the identification field and the size field of the file corresponding to the cache task to be processed are written into the cache list in memory, the cache list is updated, and the cache list file in the cache space is updated regularly according to the cache list. The information of cache files newly stored in the cache space can thus be reflected in time, which ensures the accuracy of the information in the cache list file stored in the cache space and makes it convenient to quickly query cache file information during subsequent game loading.
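Steps S510 and S520 could be sketched roughly as below, reusing the assumed CacheList structure from earlier; the lazy write-back with a short delay follows the description (200 ms is the example delay mentioned later in this text), and the helper names are assumptions.

```typescript
// Illustrative sketch only: record the new file's identification and size
// in the in-memory cache list (S510), then persist the list to the cache
// list file after a short delay (S520).
function recordCachedFile(
  list: CacheList,
  id: string,
  size: number,
  writeFile: (path: string, data: string) => void, // assumed platform write helper
  cacheListPath: string
): void {
  list.set(id, { id, size, lastUsedAt: Date.now() });
  setTimeout(() => {
    // The on-disk cache list file may briefly lag the in-memory list.
    writeFile(cacheListPath, JSON.stringify([...list.values()]));
  }, 200);
}
```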
On the basis of the foregoing embodiment, fig. 6 is a schematic flowchart of a method for managing a cache space according to an embodiment of the present disclosure, and optionally, the multiple fields further include a last usage time field of the cache file; the cleaning of the buffer space includes steps S610 to S630 shown in fig. 6:
s610, inquiring the last use time field of the cache files in the cache list to obtain the last use time of all the cache files in the cache list.
It can be understood that the queried cache list may be the updated cache list or the not-yet-updated cache list, where the not-yet-updated cache list does not include the information of the file corresponding to the cache task to be processed. What is queried is the last use time of the cache files in the cache list. The last use time of the file corresponding to the cache task to be processed is the most recent, since the file has only just been stored in the cache space, so it does not belong to the cache files to be cleaned.
S620, one or more cache files to be cleaned are determined based on the last use time of all the cache files.
Optionally, the cache files in the cache list are sorted according to their last use time, and one or more cache files to be cleaned are determined.
It can be understood that after the game has been updated through multiple versions, the cache space still contains files corresponding to old versions from before the updates. As the game is started and run, the last use time of these old-version files becomes increasingly distant, so deleting these useless old-version files can reduce, to a certain extent, the storage occupied by the cache space. The cache list is sorted by last use time to obtain one or more cache files to be cleaned; for example, the number of cache files to be cleaned may be set to one third of the number of cache files in the cache space.
Illustratively, the cache list contains the information of 30 cache files, each entry corresponding to one cache file. The 30 entries are sorted by the last use time of each cache file, from earliest to latest. The number to be cleaned is set to 1/3 of the total of 30, i.e., 10, so the entries finally determined as cache files to be cleaned are the first 10 entries after sorting, and the corresponding cache files in the cache space are determined as the cache files to be cleaned according to the identification field in each entry.
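The selection step can be sketched as follows, using the one-third proportion from the example above; this is an illustrative policy under the assumed CacheList structure, not the only possible choice.

```typescript
// Illustrative sketch only: sort cache entries by last-use time, oldest
// first, and mark roughly one third of them for cleaning.
function selectFilesToClean(list: CacheList): CacheEntry[] {
  const entries = [...list.values()].sort((a, b) => a.lastUsedAt - b.lastUsedAt);
  const count = Math.floor(entries.length / 3); // e.g. 10 of 30 entries
  return entries.slice(0, count);
}
```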
S630, deleting one or more cache files to be cleaned.
Optionally, deleting one or more cache files to be cleaned includes:
generating a corresponding deletion task for each cache file to be cleaned; adding the generated deleting task into a deleting queue, wherein the deleting queue is a queue established in the memory; and acquiring a deleting task from the deleting queue at regular time, and deleting the cache file corresponding to the deleting task.
Understandably, on the basis of S620 above, a deletion task is generated for each determined cache file to be cleaned, and the generated deletion tasks are added to a deletion queue in memory, where the deletion queue contains a plurality of deletion tasks; that is, the deletion tasks are newly added to the deletion queue. The cache files in the cache space are then cleaned according to the deletion tasks, i.e., the cache file corresponding to the identification field (name) contained in each deletion task is removed from the cache space.
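One possible shape of the deletion queue is sketched below; the polling interval and the deleteFile helper are assumptions added for illustration and are not prescribed by the disclosure.

```typescript
// Illustrative sketch only: a deletion queue held in memory. One deletion
// task is queued per cache file to be cleaned, and the queue is drained on
// a timer.
const deleteQueue: string[] = []; // identifiers of cache files to delete

function enqueueDeletions(toClean: CacheEntry[]): void {
  for (const entry of toClean) {
    deleteQueue.push(entry.id); // one deletion task per file to clean
  }
}

function drainDeleteQueue(list: CacheList, deleteFile: (id: string) => void): void {
  setInterval(() => {
    const id = deleteQueue.shift();
    if (id === undefined) return;
    deleteFile(id);  // remove the cached file from the cache space
    list.delete(id); // keep the in-memory cache list consistent
  }, 100);           // assumed polling interval
}
```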
According to the cache space management method provided by the embodiment of the disclosure, the cache list in memory is queried and sorted by last use time to obtain the cache files to be cleaned, so the specific information of the cache files to be deleted is clear. Useless files that have not been used for a long time are deleted automatically, which reduces the occupied space in the cache space, allows files to be cached in time, avoids requiring the user to clean the cache space manually, and improves the user experience.
On the basis of the foregoing embodiment, optionally, the cache space management method further includes:
Each time the identifier of a cache file in the cache list is queried from memory, the information in the last use time field of that cache file in the cache list is updated to the query time, to obtain an updated cache list.
It can be understood that, during game loading, if the scene corresponding to a cache file is used or the identification field of the cache file is queried in the in-memory cache list, the last use time of the file is updated according to the use or query time; that is, the information of the last use time field of the cache file in the cache list is updated to the query/application/use/load time, to obtain an updated cache list.
And writing the updated cache list into the cache list file.
It can be understood that the cache list file is updated after a preset time according to the updated cache list, where the preset time may be 200 ms. The cache list file may be understood as a subset of the real cache list in memory; that is, the number of entries contained in the cache list file may be less than or equal to the number of entries in the cache list, and there may be moments when the cache list has been updated but the cache list file has not yet been updated. At this point all the information of the file corresponding to the cache task to be processed has been updated, and the updated cache list includes the identification field, the last use time field, and the size field of the file.
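The last-use-time update could be sketched as below, again over the assumed CacheList structure; refreshing the field on every query and writing the list back after roughly 200 ms mirrors the description above, while the helper names remain assumptions.

```typescript
// Illustrative sketch only: refresh a cache file's last-use time whenever
// its identifier is queried, then lazily write the cache list back to the
// cache list file after about 200 ms.
function touchCacheEntry(
  list: CacheList,
  id: string,
  writeFile: (path: string, data: string) => void,
  cacheListPath: string
): void {
  const entry = list.get(id);
  if (!entry) return;
  entry.lastUsedAt = Date.now(); // update the last-use time field to the query time
  setTimeout(() => {
    writeFile(cacheListPath, JSON.stringify([...list.values()]));
  }, 200);
}
```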
According to the cache space management method provided by the embodiment of the disclosure, the last use time field of each cache file can be accurately determined, the cache list can be updated in time, and the cache list can accurately reflect all information of each file cached in the cache space.
Fig. 7 is a schematic flowchart of a cache space management method according to an embodiment of the present disclosure. On the basis of the foregoing embodiments, automatic cleaning is triggered according to whether the file corresponding to the cache task is copied successfully and whether the sum of the sizes of the cache files in the cache list and the file size of the file is greater than a preset threshold, as shown by the following contents of fig. 7:
and acquiring a cache task to be processed.
Understandably, in the game loading process, a cache file is downloaded to the temporary file directory, a cache task is generated, and the cache task is added to the cache queue in memory; the cache task to be processed is a cache task in the in-memory cache queue.
And copying the file corresponding to the caching task to a caching space.
Understandably, the cache files stored in the temporary file directory are copied to the cache space, so that the cache files can be conveniently and directly read when the game is opened again subsequently, and the game loading time is reduced.
And judging whether the file is copied successfully.
Understandably, whether automatic cleaning of the cache space is triggered can be determined directly by judging whether the copy succeeds: if the copy fails, the usable space of the cache space is considered insufficient, automatic cleaning of the cache space is triggered directly, and the cache space is cleaned.
If the cache file has been completely copied into the cache space, the file size of the cache file is queried.
Understandably, the file system interface of the mini-game platform can be invoked to query the size of the cached file.
And inquiring the size field of the cache file in the cache list, and obtaining the sum of the sizes of all the cache files in the cache list.
Understandably, the size sum of all the cache files stored in the cache space, that is, the size of the occupied space in the cache space is determined by querying the size field of each cache file recorded in the cache list.
It is judged whether the sum of the sizes of all the cache files and the file size of the file is greater than a preset threshold.
Understandably, the automatic cleaning function of the cache space is triggered by judging the relationship between the sum of the sizes of all the cache files plus the file size of the file and the preset threshold, thereby further increasing the usable space.
If the sum of the sizes of all the cached files and the file size of the file is less than or equal to the preset threshold, the cache list is further updated.
It can be understood that if the sum of the size of the occupied space in the cache space and the size of the file is less than or equal to the preset threshold, the usable space in the cache space is sufficient and automatic cleaning is not triggered; the cache list in memory can be updated according to the file, and the cache list file in the cache space is updated regularly, so that the information of each cache file can be queried accurately.
If the sum of the sizes of all the cache files and the file size of the file is greater than the preset threshold, the cache space is automatically cleaned.
Understandably, when the size of the occupied space of the cache space is greater than the preset threshold, the cache space is automatically cleaned. After automatic cleaning is triggered by either of the two paths above, the method for determining the cache files to be cleaned is the same and is based on the last use time of the cache files.
After updating the cache list and automatically cleaning the cache space, judging whether a new cache task which is not processed exists in the cache queue in the memory.
It can be understood that after the operations of updating the cache list and automatically clearing the cache space are completed, whether a new unprocessed cache task exists in the cache queue is detected, and if the new unprocessed cache task does not exist, the operation is ended.
If a new cache task exists, the process waits for a preset period, continues to acquire cache tasks, and loops in this way; the flow ends when no cache task remains.
Fig. 8 is a schematic flow chart of a cache space management method according to an embodiment of the present disclosure, and based on the above embodiment, taking automatic cleaning of a cache space as an example, the method includes the following steps as shown in fig. 8:
and reading the content of the cache list file to obtain a cache list in the memory, and updating the cache list file according to the cache list after a certain time.
And querying a cache list in the memory, sequencing according to the last used time field of each cache file recorded in the cache list, and determining the cache files to be cleaned.
A deletion task is generated for each cache file to be cleaned and added to the deletion queue in memory as well as to the deletion queue in the cache space; the deletion tasks in the in-memory deletion queue wait to be executed, and the corresponding cache files in the cache space are deleted according to the deletion tasks, where a deletion task may include the identification field of the cache file.
Fig. 9 is a schematic structural diagram of a cache space management apparatus according to an embodiment of the present disclosure. The cache space management apparatus provided in the embodiment of the present disclosure may execute the processing procedure provided in the embodiment of the cache space management method, and create a cache list file in the cache space in advance, where the cache list file stores a cache list, the cache list includes a plurality of fields, and the plurality of fields include a size field of the cache file, as shown in fig. 9, the cache space management apparatus 900 includes:
an obtaining module 910, configured to obtain a cache task to be processed, and copy a file corresponding to the cache task to a cache space;
the first query module 920 is configured to query the file size of the file after the file corresponding to the caching task is copied successfully;
a second query module 930, configured to query the size field of the cached files in the cache list, to obtain the sum of the sizes of all cached files in the cache list;
a cleaning module 940, configured to clean the cache space if the sum of the sizes of all the cache files in the cache list and the file size of the file is greater than a preset threshold.
Optionally, the cache space management apparatus 900 further includes a second cleaning module, specifically configured to:
and after the file corresponding to the caching task fails to be copied, cleaning the caching space.
Optionally, the cache space management apparatus 900 further includes a reading module, specifically configured to:
reading the content of a cache list file in a cache space to obtain a cache list;
initializing a cache list into a memory;
correspondingly, the size field of the cache file in the cache list is inquired from the memory.
Optionally, the plurality of fields further include an identification field of the cache file; after querying the size field of the cache file in the cache list, the cache space management apparatus 900 further includes an updating module, specifically configured to:
recording the identification of the file into an identification field of a cache file in a cache list, and recording the file size of the file into a size field of the cache file in the cache list to obtain an updated cache list;
and writing the updated cache list into the cache list file.
Optionally, the plurality of fields further include a last usage time field of the cache file; the cleaning module 940 cleans the cache space, and is specifically configured to:
inquiring the last use time field of the cache files in the cache list to obtain the last use time of all the cache files in the cache list;
determining one or more cache files to be cleaned based on the last use time of all the cache files;
and deleting one or more cache files to be cleaned.
Optionally, the cache space management apparatus 900 further includes a second updating module, specifically configured to:
when the identification of one cache file in the cache list is inquired from the memory every time, updating the information of the last use time field of the cache file in the cache list as inquiry time to obtain an updated cache list;
and writing the updated cache list into the cache list file.
Optionally, deleting one or more cache files to be cleaned in the cleaning module 940 is specifically configured to:
generating a corresponding deletion task for each cache file to be cleaned;
adding the generated deleting task into a deleting queue, wherein the deleting queue is a queue established in the memory;
and acquiring a deleting task from the deleting queue at regular time, and deleting the cache file corresponding to the deleting task.
The cache space management apparatus in the embodiment shown in fig. 9 may be used to implement the technical solution of the above method embodiment, and the implementation principle and the technical effect are similar, which are not described herein again.
Fig. 10 is a schematic structural diagram of a cache space management device according to an embodiment of the present disclosure. The electronic device may be a server or a client as above. The cache space management device provided in the embodiment of the present disclosure may execute the processing procedure provided in the foregoing embodiment, as shown in fig. 10, the cache space management device 1000 includes: a processor 1100, a communication interface 1200, and a memory 1300; wherein the computer program is stored in the memory 1300 and configured to be executed by the processor 1100 for performing the cache space management method as described above.
In addition, the embodiment of the present disclosure also provides a computer readable storage medium, on which a computer program is stored, where the computer program is executed by a processor to implement the cache space management method of the foregoing embodiment.
Furthermore, the embodiment of the present disclosure also provides a computer program product, which includes a computer program or instructions, and the computer program or instructions, when executed by a processor, implement the above cache space management method.
It is noted that, in this document, relational terms such as "first" and "second," and the like, may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
The foregoing are merely exemplary embodiments of the present disclosure, which enable those skilled in the art to understand or practice the present disclosure. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the disclosure. Thus, the present disclosure is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (10)

1. A cache space management method is characterized in that a cache list file is created in a cache space in advance, a cache list is stored in the cache list file, the cache list comprises a plurality of fields, and the fields comprise size fields of the cache file, and the method comprises the following steps:
obtaining a cache task to be processed, and copying a file corresponding to the cache task to a cache space;
after the file corresponding to the caching task is copied successfully, inquiring the size of the file;
inquiring the size field of the cache files in the cache list to obtain the sum of the sizes of all the cache files in the cache list;
and if the sum of the sizes of all the cached files in the cache list and the file size of the file is greater than a preset threshold value, cleaning the cache space.
2. The method of claim 1, further comprising:
and after the file corresponding to the cache task fails to be copied, cleaning the cache space.
3. The method of claim 1, wherein before querying the size field of the cached file in the cache list, the method further comprises:
reading the content of the cache list file in the cache space to obtain a cache list;
initializing the cache list into a memory;
correspondingly, the size field of the cache file in the cache list is inquired from the memory.
4. The method of claim 3, wherein the plurality of fields further comprises an identification field of a cache file; after the size field of the cached file in the cache list is queried, the method further includes:
recording the identification of the file to the identification field of the cache file in the cache list, and recording the file size of the file to the size field of the cache file in the cache list to obtain an updated cache list;
and writing the updated cache list into the cache list file.
5. The method of claim 3, wherein the plurality of fields further comprises a last usage time field of the cache file; the cleaning the cache space comprises:
inquiring the last use time field of the cache files in the cache list to obtain the last use time of all the cache files in the cache list;
determining one or more cache files to be cleaned based on the last use time of all the cache files;
and deleting the one or more cache files to be cleaned.
6. The method of claim 5, further comprising:
when the identifier of one cache file in the cache list is inquired from the memory every time, updating the information of the last use time field of the cache file in the cache list as inquiry time to obtain an updated cache list;
and writing the updated cache list into the cache list file.
7. The method of claim 5, wherein the deleting the one or more cached files to be cleaned comprises:
generating a corresponding deletion task for each cache file to be cleaned;
adding the generated deleting task to a deleting queue, wherein the deleting queue is a queue established in a memory;
and acquiring a deleting task from the deleting queue at regular time, and deleting the cache file corresponding to the deleting task.
8. A cache space management apparatus, wherein a cache list file is created in a cache space in advance, the cache list file having a cache list stored therein, the cache list including a plurality of fields, the plurality of fields including a size field of the cache file, the apparatus comprising:
the system comprises an acquisition module, a cache space and a processing module, wherein the acquisition module is used for acquiring a cache task to be processed and copying a file corresponding to the cache task to the cache space;
the first query module is used for querying the file size of the file after the file corresponding to the cache task is copied successfully;
the second query module is used for querying the size field of the cache file in the cache list to obtain the sum of the sizes of all the cache files in the cache list;
and the cleaning module is used for cleaning the cache space if the sum of the sizes of all the cache files in the cache list and the file size of the file is greater than a preset threshold value.
9. A cache space management apparatus, comprising:
a memory;
a processor; and
a computer program;
wherein the computer program is stored in the memory and configured to be executed by the processor to implement the cache space management method of any of claims 1 to 7.
10. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method for cache space management according to any one of claims 1 to 7.
CN202110751348.4A 2021-07-02 2021-07-02 Cache space management method, device and storage medium Pending CN113377724A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202110751348.4A CN113377724A (en) 2021-07-02 2021-07-02 Cache space management method, device and storage medium
PCT/CN2021/115424 WO2023272918A1 (en) 2021-07-02 2021-08-30 Cache space management method and apparatus, and electronic device and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110751348.4A CN113377724A (en) 2021-07-02 2021-07-02 Cache space management method, device and storage medium

Publications (1)

Publication Number Publication Date
CN113377724A true CN113377724A (en) 2021-09-10

Family

ID=77580710

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110751348.4A Pending CN113377724A (en) 2021-07-02 2021-07-02 Cache space management method, device and storage medium

Country Status (2)

Country Link
CN (1) CN113377724A (en)
WO (1) WO2023272918A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117235024B (en) * 2023-11-16 2024-01-26 江西国泰利民信息科技有限公司 Cache updating method, system, storage medium and equipment based on code analysis

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105447037A (en) * 2014-08-29 2016-03-30 优视科技有限公司 Caching clearing method and device
US20180121465A1 (en) * 2016-11-01 2018-05-03 Microsoft Technology Licensing, Llc Network-based communication and file sharing system
CN109815425A (en) * 2018-12-14 2019-05-28 平安科技(深圳)有限公司 Caching data processing method, device, computer equipment and storage medium
CN113051055A (en) * 2021-03-24 2021-06-29 北京沃东天骏信息技术有限公司 Task processing method and device

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9021226B2 (en) * 2011-06-10 2015-04-28 International Business Machines Corporation Moving blocks of data between main memory and storage class memory
CN104219283B (en) * 2014-08-06 2017-12-29 上海爱数信息技术股份有限公司 File on-demand based on cloud storage is downloaded and automatic synchronous method and its device
CN104462523B (en) * 2014-12-23 2018-05-01 合一网络技术(北京)有限公司 The method for sorting and system of equipment cache file
CN105589926A (en) * 2015-11-27 2016-05-18 深圳市美贝壳科技有限公司 Method for clearing cache files of mobile terminal in real time
CN109324983A (en) * 2017-07-31 2019-02-12 武汉斗鱼网络科技有限公司 A kind of method, storage medium, equipment and the system of automatic cleaning cache file
CN108920271A (en) * 2018-05-17 2018-11-30 广州优视网络科技有限公司 Application cache method for cleaning, device, storage medium and terminal
CN109413159A (en) * 2018-09-27 2019-03-01 平安普惠企业管理有限公司 Cache file update method, device, computer equipment and storage medium

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105447037A (en) * 2014-08-29 2016-03-30 优视科技有限公司 Caching clearing method and device
US20180121465A1 (en) * 2016-11-01 2018-05-03 Microsoft Technology Licensing, Llc Network-based communication and file sharing system
CN109815425A (en) * 2018-12-14 2019-05-28 平安科技(深圳)有限公司 Caching data processing method, device, computer equipment and storage medium
CN113051055A (en) * 2021-03-24 2021-06-29 北京沃东天骏信息技术有限公司 Task processing method and device

Also Published As

Publication number Publication date
WO2023272918A1 (en) 2023-01-05

Similar Documents

Publication Publication Date Title
CN109254733B (en) Method, device and system for storing data
US7117294B1 (en) Method and system for archiving and compacting data in a data storage array
CN102110121B (en) A kind of data processing method and system thereof
CN109739815B (en) File processing method, system, device, equipment and storage medium
US20130013561A1 (en) Efficient metadata storage
WO2015117426A1 (en) File management method and device
CN106599111B (en) Data management method and storage system
CN106446044B (en) Storage space recovery method and device
CN109101599B (en) Incremental index updating method and system
CN109766318B (en) File reading method and device
CN110716924B (en) Method and device for deleting expired data
CN110968478A (en) Log collection method, server and computer storage medium
CN106161193B (en) Mail processing method, device and system
CN111198856A (en) File management method and device, computer equipment and storage medium
CN111736915B (en) Management method, device, equipment and medium for cloud host instance hardware acceleration equipment
CN109472540B (en) Service processing method and device
CN111291006B (en) Data recovery processing method, device and equipment and readable storage medium
CN106874343B (en) Data deletion method and system for time sequence database
CN113377724A (en) Cache space management method, device and storage medium
CN113377722A (en) Resource data reading method and device and storage medium
CN111382180B (en) Data clearing method and device for local cache
CN112965939A (en) File merging method, device and equipment
US8832176B1 (en) Method and system for processing a large collection of documents
CN113377723B (en) Cache file management method, device and storage medium
CN113805864A (en) Project engineering generation method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20210910)