CN116089321A - Memory management method, device, electronic device and storage medium - Google Patents


Info

Publication number
CN116089321A
CN116089321A (application CN202211705490.6A)
Authority
CN
China
Prior art keywords
memory
linked list
block
address
capacity
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211705490.6A
Other languages
Chinese (zh)
Inventor
李世豪
孙舒婷
黄鹏
吴昌金
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang Dahua Technology Co Ltd
Original Assignee
Zhejiang Dahua Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang Dahua Technology Co Ltd filed Critical Zhejiang Dahua Technology Co Ltd
Priority to CN202211705490.6A priority Critical patent/CN116089321A/en
Publication of CN116089321A publication Critical patent/CN116089321A/en
Pending legal-status Critical Current

Classifications

    • G06F12/023 — Free address space management
    • G06F12/0253 — Garbage collection, i.e. reclamation of unreferenced memory
    • G06F9/5016 — Allocation of resources to service a request, the resource being the memory
    • G06F9/5022 — Mechanisms to release resources
    • G06F2209/5011 — Pool
    • Y02D10/00 — Energy efficient computing, e.g. low power processors, power management or thermal management


Abstract

The application relates to a memory management method and apparatus, an electronic device, and a storage medium. The method includes: detecting a current first memory application request of a producer; in response to the detected current first memory application request, obtaining the corresponding application memory capacity and the current address corresponding to a first memory block to be used in a first idle memory linked list, and using the first memory block to be used according to the current address; determining the memory address offset of the memory block to be used according to the application memory capacity, and obtaining a new address corresponding to the first memory block to be used in the first idle memory linked list according to the current address and the memory address offset; and, in response to a detected next first memory application request, using the first memory block to be used according to the new address. The method solves the problem that memory space is easily wasted during memory management and realizes efficient and accurate memory management.

Description

Memory management method, device, electronic device and storage medium
Technical Field
The present disclosure relates to the field of memory management technologies, and in particular, to a memory management method, a device, an electronic device, and a storage medium.
Background
In computer technology, caching is widely used: for example, intermediate results of a computation that cannot be processed immediately are temporarily stored in memory. In the related art, memory management typically divides a memory region into memory blocks of several fixed sizes; when memory is applied for, the manager looks up the divided block whose size is closest, returns its memory address, and, if blocks of that size are exhausted, either applies for another block of that size or splits a larger block into several smaller ones. However, when memories of different sizes are applied for frequently, the size returned by the memory manager is usually larger than the size actually used, which produces many unused memory holes, i.e. memory fragments, so memory space is easily wasted.
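The internal fragmentation described above can be made concrete with a minimal sketch. The size classes and request sizes below are purely illustrative, not taken from the patent: a fixed-size-class allocator must round every request up to the next class, and the difference is wasted.

```python
# Illustrative only: why fixed size classes waste memory (internal fragmentation).
SIZE_CLASSES = [64, 128, 256, 512, 1024]  # hypothetical block sizes

def smallest_fitting_class(request: int) -> int:
    """Return the smallest fixed class >= request, as a fixed-size allocator would."""
    for c in SIZE_CLASSES:
        if c >= request:
            return c
    raise ValueError("request too large for any class")

requests = [65, 130, 300]
# 65 -> 128 (63 wasted), 130 -> 256 (126 wasted), 300 -> 512 (212 wasted)
waste = sum(smallest_fitting_class(r) - r for r in requests)
print(waste)  # 401
```

Three modest requests already leave 401 bytes of unusable holes; the patent's scheme avoids this by offsetting addresses by the exact requested capacity instead of rounding up to a class.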
At present, no effective solution is proposed for the problem that memory space is easily wasted in the memory management process in the related art.
Disclosure of Invention
The embodiment of the application provides a memory management method, a memory management device, an electronic device and a storage medium, which are used for at least solving the problem that memory space is easily wasted in the memory management process in the related technology.
In a first aspect, an embodiment of the present application provides a memory management method, where the method includes:
detecting a current first memory application request of a producer;
responding to the detected current first memory application request, obtaining a corresponding application memory capacity and a current address corresponding to a first memory block to be used in a first idle memory linked list, and using the first memory block to be used according to the current address;
determining the memory address offset of the first memory block to be used according to the application memory capacity, and acquiring a new address corresponding to the first memory block to be used in the first idle memory linked list according to the current address and the memory address offset;
detecting a next first memory application request of the producer; and responding to the detected next first memory application request, and using the first memory block to be used according to the new address.
In some of these embodiments, the method further comprises:
and deleting the first memory block to be used in the first idle memory linked list according to the current address and the memory address offset, and newly adding a first used memory block in the first used memory linked list.
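The list bookkeeping in this embodiment can be sketched as follows. This is an illustrative simulation, not the patent's implementation; the dictionary fields and example addresses are assumptions.

```python
# Illustrative sketch: once the block at a given current address has been
# consumed, delete it from the first idle memory linked list and add a
# corresponding entry to the first used memory linked list, so a later
# release can locate it quickly.
first_idle_list = [{"addr": 0x1000, "remaining": 0}, {"addr": 0x2000, "remaining": 4096}]
first_used_list = []

def move_to_used(addr):
    """Move the block with the given address from the idle list to the used list."""
    for i, block in enumerate(first_idle_list):
        if block["addr"] == addr:
            first_used_list.append(first_idle_list.pop(i))
            return True
    return False

move_to_used(0x1000)
```

After the move, releasing the block is a lookup in `first_used_list` only, which is what speeds up the consumer-side release path described above.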
In some embodiments, after the obtaining the corresponding application memory capacity, the method further includes:
under the condition that the applied memory capacity is detected to be smaller than or equal to a preset capacity value, acquiring a current address in the first idle memory linked list, and using the current memory block to be used according to the current address;
under the condition that the applied memory capacity is detected to be larger than the preset capacity value, acquiring the memory capacity to be used and the current address corresponding to a second memory block to be used in a second idle memory linked list;
using the second memory block to be used according to the applied memory capacity and the memory capacity to be used, deleting the second memory block to be used in the second idle memory linked list according to the current address of the second memory block to be used, and newly adding a second used memory block in the second used memory linked list;
the memory capacity of the second idle memory linked list is larger than the memory capacity of the first idle memory linked list, and the memory capacity of the second used memory linked list is larger than the memory capacity of the first used memory linked list.
In some of these embodiments, the method further comprises:
detecting a first memory release request of a consumer, and, in response to the detected first memory release request, sending the memory pool corresponding to the first memory release request to a preset memory pool cache linked list for caching;
detecting a memory pool release request corresponding to the memory pool cache linked list;
responding to the detected memory pool release request, and under the condition that the number of memory blocks in the first idle memory linked list is detected to be larger than the number of preset memory blocks, resetting the first idle memory linked list and the first used memory linked list, and generating a new first idle memory linked list and a new first used memory linked list;
and resetting the second idle memory linked list and the second used memory linked list according to the memory pool release request, and generating a new second idle memory linked list and a new second used memory linked list.
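The reset condition in this embodiment can be sketched as below. The threshold value is an assumption on our part (the patent only calls it "the number of preset memory blocks"); the function names are illustrative.

```python
# Hedged sketch of the reset described above: on a memory pool release
# request, if the idle list has grown beyond a preset block count, both
# linked lists are discarded and fresh (empty) ones are generated.
PRESET_BLOCK_COUNT = 8  # assumed threshold, not specified by the patent

def reset_lists(idle_list, used_list):
    """Return new (idle, used) lists when the idle list is oversized, else the originals."""
    if len(idle_list) > PRESET_BLOCK_COUNT:
        return [], []  # new idle and used linked lists
    return idle_list, used_list
```

Keeping the lists when they are small avoids rebuilding cheap state; discarding them once they grow bounds the memory a cached pool can pin.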
In some of these embodiments, the method further comprises:
acquiring a preset central memory block bitmap;
generating a second memory application request corresponding to the memory pool according to the application memory capacity, and searching the central memory block bitmap according to the second memory application request; wherein the memory pool comprises the first idle memory linked list;
Under the condition that the continuous memory area to be used exists in the central memory block bitmap, distributing the corresponding memory blocks to the memory pool according to the continuous memory area to be used;
and under the condition that the searching of the continuous memory area to be used fails, acquiring a new central memory block bitmap, and searching the new central memory block bitmap until the corresponding memory block is distributed to the memory pool.
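The bitmap search in this embodiment amounts to finding a run of contiguous free units. Below is a minimal sketch under the assumption that each bit represents one fixed-size memory unit (0 = free, 1 = allocated); the function name and return convention are ours, not the patent's.

```python
# Sketch of the central memory block bitmap search: find n consecutive free
# (0) bits, mark them allocated, and return the start index; None means the
# search failed and the caller would fetch a new central memory block bitmap.
def alloc_from_bitmap(bitmap, n):
    """Return start index of n consecutive free slots, marking them 1; None if absent."""
    run = 0
    for i, bit in enumerate(bitmap):
        run = run + 1 if bit == 0 else 0
        if run == n:
            start = i - n + 1
            for j in range(start, i + 1):
                bitmap[j] = 1  # mark the continuous region as allocated
            return start
    return None

bm = [1, 0, 0, 1, 0, 0, 0]
print(alloc_from_bitmap(bm, 3))  # 4
```

A run of two zeros at indices 1-2 is skipped because it is too short; the first run of three, starting at index 4, is claimed and handed to the memory pool.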
In some of these embodiments, the method further comprises:
detecting a second memory release request of the memory pool; the second memory release request comprises a memory address to be released and a memory capacity to be released;
responding to the detected second memory release request, determining a to-be-released central memory block according to the to-be-released memory address, and determining a corresponding to-be-released address offset in the bitmap of the to-be-released central memory block according to the to-be-released memory capacity;
and performing a zero-setting operation on the central memory block bitmap according to the address offset to be released to obtain a new central memory block bitmap, and performing a release operation on the central memory block to be released according to the new central memory block bitmap.
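The release path is the inverse of the bitmap search: translate the capacity to be released into a bit count and zero those bits. The one-bit-per-4Kb-unit mapping is an assumption on our part; the patent only states that an address offset is derived from the capacity to be released.

```python
# Sketch of the zero-setting operation on the central memory block bitmap.
UNIT = 4096  # assumed: one bitmap bit per 4Kb unit

def release_in_bitmap(bitmap, start, capacity):
    """Zero the bits covering `capacity` bytes starting at bit index `start`."""
    n_units = (capacity + UNIT - 1) // UNIT  # round up to whole units
    for j in range(start, start + n_units):
        bitmap[j] = 0  # the unit becomes free for the next contiguous search

bm = [1, 1, 1, 0]
release_in_bitmap(bm, 0, 8192)
print(bm)  # [0, 0, 1, 0]
```

Releasing 8192 bytes clears exactly two units, and the resulting bitmap is the "new central memory block bitmap" against which the central block itself can later be released.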
In some of these embodiments, the method further comprises:
acquiring a private memory pool application request sent by the producer, and detecting, according to the private memory pool application request, whether an unused memory pool exists in a preset memory pool cache linked list;
if an unused memory pool is detected, distributing that memory pool to the producer; if no unused memory pool is detected, obtaining a new memory pool created through the memory pool cache linked list and distributing the new memory pool to the producer.
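The private-pool handout above can be sketched in a few lines. The pool representation and `in_use` flag are illustrative stand-ins for whatever state a real pool object would carry.

```python
# Sketch of private memory pool acquisition: reuse an unused pool from the
# cache linked list if one exists, otherwise create a new pool through it.
def acquire_pool(pool_cache):
    """Return an unused cached pool, or append and return a new one."""
    for pool in pool_cache:
        if not pool["in_use"]:
            pool["in_use"] = True
            return pool
    new_pool = {"in_use": True}  # stand-in for a real pool object
    pool_cache.append(new_pool)
    return new_pool
```

Reusing a cached pool skips re-initializing its linked lists, which is the point of routing releases through the memory pool cache linked list rather than tearing pools down immediately.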
In a second aspect, an embodiment of the present application provides a memory management device, where the device includes: the device comprises a detection module, a first use module, an address offset module and a second use module;
the detecting module is used for detecting a current first memory application request of a producer;
the first use module is used for responding to the detected current first memory application request, obtaining corresponding application memory capacity, a current address corresponding to a first memory block to be used in a first idle memory linked list, and using the first memory block to be used according to the current address;
The address offset module is configured to determine, according to the application memory capacity, a memory address offset of the first memory block to be used, and obtain, according to the current address and the memory address offset, a new address corresponding to the first memory block to be used in the first idle memory linked list;
the second use module is used for detecting a next first memory application request of the producer; and responding to the detected next first memory application request, and using the first memory block to be used according to the new address.
In a third aspect, an embodiment of the present application provides an electronic device, including a memory, a processor, and a computer program stored on the memory and capable of running on the processor, where the processor implements the memory management method according to the first aspect when executing the computer program.
In a fourth aspect, embodiments of the present application provide a storage medium having stored thereon a computer program which, when executed by a processor, implements a memory management method as described in the first aspect above.
Compared with the related art, the memory management method, apparatus, electronic device, and storage medium provided by the embodiments of the application detect the current first memory application request of a producer; in response to the detected request, obtain the corresponding application memory capacity and the current address corresponding to the first memory block to be used in the first idle memory linked list, and use the first memory block to be used according to the current address; determine the memory address offset of the memory block to be used according to the application memory capacity, and acquire a new address corresponding to the first memory block to be used in the first idle memory linked list according to the current address and the memory address offset; detect the next first memory application request of the producer; and, in response to the detected next request, use the first memory block to be used according to the new address. In this way, the size of the memory block to be used can be determined flexibly according to the application memory capacity. The memory fragments generated in the related art by dividing memory into blocks of fixed sizes are avoided, the utilization rate of memory is effectively improved, the problem that memory space is easily wasted during memory management is solved, and an efficient and accurate memory management method is realized.
The details of one or more embodiments of the application are set forth in the accompanying drawings and the description below. Other features, objects, and advantages of the application will become apparent from the description and the drawings.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this application, illustrate embodiments of the application and together with the description serve to explain the application and do not constitute an undue limitation to the application. In the drawings:
FIG. 1 is an application environment diagram of a memory management method according to an embodiment of the present application;
FIG. 2 is a flow chart of a memory management method according to an embodiment of the present application;
FIG. 3 is a flow chart of a method of memory pool memory application according to an embodiment of the present application;
FIG. 4 is a schematic diagram of a memory management method according to a preferred embodiment of the present application;
FIG. 5 is a block diagram of a memory management device according to an embodiment of the present application;
fig. 6 is a block diagram of the interior of a computer device according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions, and advantages of the present application more apparent, the present application is described and illustrated below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the present application. All other embodiments obtained by one of ordinary skill in the art, without undue burden, based on the embodiments provided herein are within the scope of the present application. Moreover, it should be appreciated that while such a development effort might be complex and lengthy, it would nevertheless be a routine undertaking of design, fabrication, or manufacture for those of ordinary skill having the benefit of this disclosure.
Reference in the specification to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment may be included in at least one embodiment of the application. The appearances of such phrases in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. It is to be expressly and implicitly understood by those of ordinary skill in the art that the embodiments described herein can be combined with other embodiments without conflict.
Unless defined otherwise, technical or scientific terms used herein have the ordinary meaning understood by one of ordinary skill in the art to which this application belongs. The terms "a," "an," "the," and the like are not limited in quantity and may denote the singular or the plural. The terms "comprising," "including," "having," and any variations thereof are intended to cover a non-exclusive inclusion; for example, a process, method, system, article, or apparatus that comprises a list of steps or modules (elements) is not limited to those steps or elements, but may include other steps or elements not expressly listed or inherent to such a process, method, article, or apparatus. The terms "connected," "coupled," and the like are not limited to physical or mechanical connections, but may include electrical connections, whether direct or indirect. The term "plurality" as used herein means greater than or equal to two. "And/or" describes an association relationship of associated objects, meaning that there may be three relationships; for example, "A and/or B" may mean: A exists alone, A and B exist together, or B exists alone. The terms "first," "second," "third," and the like merely distinguish similar objects and do not represent a particular ordering of objects.
The memory management method provided by the embodiment of the application can be applied to an application environment shown in fig. 1.
The terminal device 102 communicates with the server device 104 via a network. A data storage system may store the data that the server device 104 needs to process; it may be integrated on the server device 104 or located on a cloud or other network server. The producer may send the first memory application request to the terminal device 102, which transmits it to the server device 104. The server device 104 obtains, according to the current first memory application request, the corresponding application memory capacity and the current address corresponding to the current first memory block to be used in the first idle memory linked list, and uses the current first memory block to be used according to the current address. The server device 104 then determines the memory address offset of the memory block to be used according to the application memory capacity, and obtains, according to the current address and the memory address offset, the new address corresponding to the new first memory block to be used in the first idle memory linked list. When the server device 104 obtains the next first memory application request sent by the producer via the terminal device 102, it uses the new first memory block to be used according to the new address. The terminal device 102 may be, but is not limited to, a personal computer, a notebook computer, a smart phone, a tablet computer, or a portable wearable device such as a smart watch, smart bracelet, or headset. The server device 104 may be implemented as a stand-alone server or as a server cluster composed of multiple servers.
It should be noted that the execution body in the embodiments of the present application may be a server device, an operating system, a memory management platform, or a memory management system; that is, the execution body may take various forms and may be set, used, or changed as needed. In addition, a third-party application may assist the execution body in executing the embodiments.
The present embodiment provides a memory management method, and fig. 2 is a flowchart of a memory management method according to an embodiment of the present application, as shown in fig. 2, where the flowchart includes the following steps:
Step S210, a current first memory application request of a producer is obtained.
The first memory application request refers to request information sent by a producer for applying for a memory space. For example, the producer may send the first memory application request to a memory pool module disposed on the server device via the terminal device, or directly to a central memory management module disposed on the server device. It will be appreciated that the producer may be one or more.
Step S220, responding to the detected first memory application request, obtaining the corresponding application memory capacity, the current address corresponding to the first memory block to be used in the first idle memory linked list, and using the first memory block to be used according to the current address.
Step S230, determining the memory address offset of the memory block to be used according to the applied memory capacity, and obtaining a new address corresponding to the first memory block to be used in the first idle memory linked list according to the current address and the memory address offset.
In the steps S220 to S230, after the current first memory application request is detected, the current first memory application request may be analyzed, and the size of the memory space required by the producer, that is, the application memory capacity, is calculated, and then the memory blocks in the first idle memory linked list are used according to the application memory capacity.
Specifically, it may first be determined whether the remaining capacity of the first memory block to be used at the head of the first idle memory linked list meets the size of the memory space required by the producer, that is, whether the remaining capacity of the current first memory block to be used is greater than or equal to the application memory capacity. If it does, the memory address of the first memory block to be used is offset backward by a length equal to the application memory capacity: the memory address offset is determined according to the application memory capacity, and the current address corresponding to the current first memory block to be used is offset backward accordingly to obtain the new memory address. The pre-offset memory address of the first memory block to be used is then returned to the producer, who uses the first memory block to be used based on that address and puts the data to be stored into it; the application memory capacity is subtracted from the remaining capacity of the first memory block to be used to obtain its new remaining capacity. If the current remaining capacity of the first memory block to be used does not meet the size of the memory space required by the producer, the memory pool module applies for a 4Kb memory space from the central memory management module and hangs the 4Kb memory block into the first idle memory linked list of the memory pool; it initializes the remaining capacity of the first memory block to be used to 4Kb, calculates the remaining capacity after satisfying the current application, offsets the memory address of the first memory block to be used backward by a length equal to the application memory capacity, and finally returns the pre-offset memory address of the first memory block to be used to the producer.
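The bump-style bookkeeping just described can be sketched as follows. This is a hedged simulation: the class and parameter names are ours, addresses are example integers, and the sketch assumes each request fits within one 4Kb block.

```python
# Sketch of the allocation walk-through above: serve a request from the head
# block by advancing its address ("backward offset" in the patent's wording)
# and shrinking its remaining capacity; hang a fresh 4Kb block into the list
# when the current one cannot fit the request.
BLOCK_SIZE = 4096  # 4Kb unit applied for from the central memory manager

class HeadBlock:
    def __init__(self, addr):
        self.addr = addr          # current address of the block to be used
        self.remaining = BLOCK_SIZE

def allocate(head, size, fresh_addr):
    """Return (head block, pre-offset address handed to the producer)."""
    if head is None or head.remaining < size:
        head = HeadBlock(fresh_addr)  # new 4Kb block from the central manager
    given = head.addr                 # pre-offset address returned to producer
    head.addr += size                 # advance by the application memory capacity
    head.remaining -= size
    return head, given

head, a1 = allocate(None, 100, 0x10000)
head, a2 = allocate(head, 200, 0x20000)
print(hex(a1), hex(a2))  # 0x10000 0x10064
```

Two consecutive requests land back to back (0x10000, then 0x10000 + 100 = 0x10064) with no rounding to a size class, which is exactly how the scheme avoids internal fragmentation.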
Step S240, detecting the next first memory application request of the producer; and responding to the detected next first memory application request, and using the first memory block to be used according to the new address.
After the memory address of the current memory block in the first idle memory linked list has been offset backward by a length equal to the application memory capacity in step S230, first memory application requests of the producer continue to be detected. When the next first memory application request is detected, the above steps are repeated for it to determine the new address and remaining capacity corresponding to the first memory block to be used in the first idle memory linked list, and the pre-offset memory address is returned to the producer for use.
Through the steps S210 to S240, the application memory capacity is determined according to the first memory application request, and the memory address offset of the memory block to be used in the idle memory linked list is determined according to the application memory capacity, so that the memory fragments generated by dividing the memory into a plurality of memory blocks with fixed sizes in the related art are avoided, the utilization rate of the memory is effectively improved, the problem that the memory space is easily wasted in the memory management process is solved, and the efficient and accurate memory management method is realized.
The embodiments of the present application are described and illustrated below by means of preferred embodiments. Fig. 3 is a flowchart of a method for applying for memory in a memory pool according to an embodiment of the present application, as shown in fig. 3, the flowchart includes the following steps:
step S301, a first memory application request initiated by a producer is obtained.
Step S302, according to the current first memory application request, the corresponding application memory capacity is obtained, and whether the application memory capacity is larger than 4Kb is judged.
Step S303, if the judgment result in the step S302 is no, judging whether the memory block of the first idle memory linked list header meets the requirement of the application memory capacity. If the determination result of the step S303 is no, the following step S304 is executed, and if the determination result of the step S303 is yes, the following step S305 is directly executed.
Step S304, apply for 4Kb memory block to the central memory management module, and add the 4Kb memory block into the first idle memory linked list.
In step S305, the requested memory capacity is subtracted from the remaining memory block capacity, and the starting address of the memory block is shifted backward by the same length as the requested memory capacity.
Step S306, if the judgment result of the step S302 is yes, apply for the memory block directly from the central memory management module, and add the memory block into the first used memory linked list.
Step S307, return the initial address before the memory block offset to the producer.
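Steps S301 to S307 above can be condensed into a single hedged sketch. The function signature and the dictionary representation of the head block are illustrative; `central_alloc` stands in for the central memory management module.

```python
# Sketch of the fig. 3 flow: requests at or below 4Kb bump-allocate from the
# head of the first idle memory linked list; larger ones go straight to the
# central manager (S306). The pre-offset address is what the producer gets (S307).
THRESHOLD = 4096  # 4Kb

def serve_request(size, head_block, central_alloc):
    """Return (address for the producer, possibly replaced head block)."""
    if size > THRESHOLD:                       # S302 yes -> S306
        return central_alloc(size), head_block
    if head_block["remaining"] < size:         # S303 no -> S304: refill with 4Kb
        head_block = {"addr": central_alloc(THRESHOLD), "remaining": THRESHOLD}
    addr = head_block["addr"]                  # S307: pre-offset start address
    head_block["addr"] += size                 # S305: offset by requested capacity
    head_block["remaining"] -= size
    return addr, head_block
```

Note the ordering: the capacity check (S302) routes large requests away before the head block is ever touched, so the bump pointer only ever handles sub-4Kb traffic.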
In some embodiments, the memory management method further includes the following steps: and deleting the first memory block to be used in the first idle memory linked list according to the current address and the memory address offset, and newly adding a first used memory block in the first used memory linked list. Through the embodiment, the memory blocks applied for use by the producer are stored in the first use memory linked list according to the current address and the memory address offset, so that the memory can be quickly released from the first use memory linked list when the memory blocks are returned after the consumer uses the data, and the memory management efficiency is improved.
In some embodiments, the memory pool further includes a second free memory linked list and a second used memory linked list; after obtaining the corresponding application memory capacity, the memory management method further comprises the following steps:
step S251, under the condition that the applied memory capacity is detected to be smaller than or equal to a preset capacity value, the current address in the first idle memory linked list is obtained, and the current memory block to be used is used according to the current address.
The preset capacity value may be set in advance by an operator; for example, it may be preset to 4Kb, that is, the minimum memory operation unit under the Linux operating system. When the applied memory capacity is detected to be smaller than or equal to the preset capacity value, the memory space required by the producer is small, so the current address of the free memory block to be used at the head of the first idle memory linked list can be returned for use through the step S251 or the steps S220 to S230, and the address of the memory block to be used is offset backward by the same length as the applied memory capacity, so as to obtain a new memory address and a new remaining capacity of the memory block to be used.
Step S252, under the condition that the applied memory capacity is detected to be larger than the preset capacity value, the memory capacity to be used and the current address corresponding to the second memory block to be used in the second idle memory linked list are obtained.
Step S253, using the second to-be-used memory block according to the applied memory capacity and the to-be-used memory capacity, deleting the second to-be-used memory block from the second idle memory linked list according to the current address of the second to-be-used memory block, and newly adding a second used memory block in the second used memory linked list.
The memory capacity of the second idle memory linked list is larger than that of the first idle memory linked list, and the memory capacity of the second used memory linked list is larger than that of the first used memory linked list. For example, the first free memory linked list and the first used memory linked list may store memory blocks having a memory space value less than or equal to 4Kb, and the second free memory linked list and the second used memory linked list may store memory blocks having a memory space value greater than 4 Kb.
In the above steps S252 to S253, each time the producer applies for memory from the memory pool, the applied memory capacity is compared with the preset capacity value. If the applied memory space value is greater than the preset capacity value, the memory space required by the producer is large, so the second idle memory linked list is searched for a memory block matching the applied memory capacity. If no memory block satisfying the applied memory capacity is found, a memory block of the applied memory capacity is requested from the central memory management module for the producer's use, and that memory block is added to the second used memory linked list. If a second to-be-used memory block satisfying the applied memory capacity is found in the second idle memory linked list, that is, its to-be-used memory capacity is greater than or equal to the applied memory capacity, the current address corresponding to the second to-be-used memory block is returned to the producer, the second to-be-used memory block is deleted from the second idle memory linked list, and it is added to the second used memory linked list.
In the related art, a large memory block is usually segmented continuously into small memory blocks for use; when a large memory block is subsequently requested, the request cannot be satisfied and another large memory block has to be opened up, so memory fragments are easily produced and memory utilization is low. In the present application, through the steps S251 to S253, the memory pool is divided into the first idle memory linked list and first used memory linked list for smaller memory space, and the second idle memory linked list and second used memory linked list for larger memory space, so that when the applied memory capacity is small, a memory space matching the applied memory capacity is taken from the first idle memory linked list, and when the applied memory capacity is large, a whole large memory space is taken from the second idle memory linked list, thereby effectively avoiding the memory fragments caused by continuously segmenting an original large memory block and further improving memory utilization.
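The size-class routing of steps S251 to S253 can be sketched like this; names such as `dispatch` and `PRESET_CAPACITY` are illustrative, and the small-request path is abstracted behind any object with an `allocate` method:

```python
PRESET_CAPACITY = 4 * 1024  # preset capacity value from the embodiment (4Kb)

def dispatch(applied_capacity, small_pool, large_free):
    """Route a request to the first (small) or second (large) idle linked list.

    small_pool: object with allocate(size), serving capacities <= preset value
    large_free: list of (addr, capacity) blocks in the second idle linked list
    Returns the address handed back to the producer, or None if the caller
    must apply to the central memory management module for a new large block.
    """
    if applied_capacity <= PRESET_CAPACITY:
        # Step S251: serve small requests from the first idle memory linked list
        return small_pool.allocate(applied_capacity)
    # Steps S252-S253: find a second to-be-used block that is large enough
    for i, (addr, cap) in enumerate(large_free):
        if cap >= applied_capacity:
            del large_free[i]   # delete it from the second idle linked list
            return addr         # the whole block goes to the producer
    return None
```

Large requests thus take a whole block rather than splitting one, which is what avoids fragmenting the large blocks.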
In some embodiments, the memory management method further includes the following steps:
in step S261, a first memory release request of the consumer is detected, and in response to the detected first memory release request, the memory pool corresponding to the first memory release request is sent to a preset memory pool cache linked list for caching.
The first memory release request refers to request information sent by a consumer for releasing a memory space. After the consumer finishes processing the producer's memory data, the memory pool associated with that memory data is returned to a memory pool linked list management module deployed on the server device, which is used for caching memory pools; the memory pool linked list management module manages the memory pool cache linked list. It will be appreciated that there may be one or more consumers.
In step S262, a memory pool release request corresponding to the memory pool cache linked list is detected.
For example, when the memory pool linked list management module detects that the number of memory pools cached in the memory pool cache linked list is greater than a certain threshold, the module may correspondingly generate a memory pool release request, which is then detected by the server device. For example, when the memory pool linked list management module takes back a consumer's memory pool, it queries the number of cached memory pools it currently manages; if that number is greater than a set threshold, a cached memory pool maintained by the module can be destroyed, and the destroyed memory pool is reset through the subsequent steps so as to return the memory blocks it manages to the central memory management module.
Step S263, in response to the detected memory pool release request, in the case that the number of memory blocks in the first idle memory linked list is detected to be greater than the preset number of memory blocks, resetting the first idle memory linked list and the first used memory linked list, and generating a new first idle memory linked list and a new first used memory linked list.
In response to the memory pool release request, the first idle memory linked list and the first used memory linked list, which hold the smaller memory space, are reset. Specifically, the number of memory blocks in the first idle memory linked list is detected before the reset; if it is greater than the preset number of memory blocks, all memory blocks managed by the first idle memory linked list are returned to the central memory management module, and then the memory blocks in the first used memory linked list are deleted and correspondingly added to the first idle memory linked list, so as to generate a new first idle memory linked list and a new first used memory linked list.
Step S264, according to the memory pool release request, performing a reset operation on the second idle memory linked list and the second used memory linked list, and generating a new second idle memory linked list and a new second used memory linked list.
In response to the memory pool release request, the second idle memory linked list and the second used memory linked list, which hold the larger memory space, are also reset: all memory blocks in the second used memory linked list are deleted and correspondingly added to the second idle memory linked list, generating a new second idle memory linked list and a new second used memory linked list. It can be understood that a memory block with a larger memory space can be regarded as a dedicated memory block, that is, it is used only when a request whose applied memory capacity matches its memory space value is issued. Therefore, when the whole memory pool is reset, the large memory blocks managed by the second idle memory linked list may well no longer be needed by the current or the next task, so their memory space is released in time to avoid occupying excessive memory. In addition, large memory blocks are generally used less frequently than small memory blocks during application runtime, so large blocks occupying excessive space need to be cleaned up in time, while the reset scope of the small memory blocks managed by the first idle memory linked list can be gradually narrowed.
Through the above steps S261 to S264, when the consumer releases the memory pool, the memory pool is cached by the memory pool linked list management module, and the memory already applied for by that pool can be used by the next consumer, thereby avoiding repeated memory applications to the system, improving memory application efficiency, and reducing the risk of memory fragmentation.
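The reset operation described above can be sketched as follows; `reset_pool`, the dict-of-lists pool model, and the `PRESET_BLOCK_COUNT` threshold are illustrative assumptions, not structures defined by the patent:

```python
PRESET_BLOCK_COUNT = 8  # hypothetical "preset number of memory blocks"

def reset_pool(pool, return_to_central):
    """Reset a pool's four linked lists when its release request is handled.

    pool: dict with keys 'free1', 'used1', 'free2', 'used2' (lists of blocks)
    return_to_central: callable receiving blocks handed back to the
    central memory management module
    """
    # First lists: if too many free small blocks accumulated, return them all
    if len(pool['free1']) > PRESET_BLOCK_COUNT:
        return_to_central(pool['free1'])
        pool['free1'] = []
    # Move every used small block back to the (new) first idle linked list
    pool['free1'].extend(pool['used1'])
    pool['used1'] = []
    # Second lists: used large blocks all go back to the second idle list
    pool['free2'].extend(pool['used2'])
    pool['used2'] = []
```

After the reset both used lists are empty, so the cached pool is ready for the next consumer without re-applying to the system.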
In some embodiments, the memory management method further includes the following steps:
in step S271, a preset central memory block bitmap is obtained.
The central memory block bitmap refers to a bitmap generated in advance that corresponds to a central memory block in the central memory management module. For example, the central memory block is divided into a plurality of memory areas of 1Kb each; each memory area corresponds one-to-one to 1 bit in the bitmap of the central memory block, usually in order from the smallest memory address to the largest; a bit set to 1 indicates that the corresponding memory area is used, and 0 indicates that it is unused.
Step S272, generating a second memory application request corresponding to the memory pool according to the applied memory capacity, and searching the central memory block bitmap according to the second memory application request; the memory pool comprises the first idle memory linked list.
When it is detected that the free memory cached in the memory pool does not satisfy the memory space value required by the current producer, that is, the remaining capacity of the free memory in the memory pool is smaller than the applied memory capacity, a second memory application request corresponding to the memory pool is generated, which can be detected by the server device. In response to the detected second memory application request, the applied memory capacity is first aligned to a multiple of 1Kb. The central memory management module then traverses the central memory block linked list and judges whether there is a central memory block whose bitmap contains a run of consecutive 0 bits, corresponding to free memory, that satisfies the aligned application size.
Step S273, when the continuous memory area to be used exists in the bitmap of the central memory block, the corresponding memory block is allocated to the memory pool according to the continuous memory area to be used.
Specifically, after the search in step S272 finds that the central memory block bitmap has a free memory area whose consecutive bits are 0 and whose space size satisfies the applied memory capacity, that is, after a corresponding continuous memory area to be used is detected in the central memory block bitmap, the consecutive bits corresponding to the continuous memory area to be used are set to 1, and the first memory address corresponding to those consecutive bits is returned to the memory pool module that manages the memory pool, so that the memory pool obtains a memory block of the corresponding space value from the central memory management module.
In step S274, in the case of failure in retrieving the continuous memory area to be used, a new central memory block bitmap is obtained, and the new central memory block bitmap is retrieved until the corresponding memory block is allocated to the memory pool.
Specifically, in the case that no continuous memory area to be used is found in step S272, the central memory management module may apply to the operating system for a memory space with a larger space value than the current application, for example a memory of 2Mb; the central memory block applied for this time is aligned and added to the central memory block linked list, and at the same time a new central memory block bitmap corresponding to it is established and initialized to all 0. Then the consecutive bits in the new central memory block bitmap that satisfy the applied memory capacity are set to 1, and the starting memory address corresponding to those consecutive bits is returned to the memory pool module.
Through the above steps S271 to S274, the central memory management module divides the central memory block into memory areas of a fixed size and manages them with a bitmap, so that adjacent unused memory areas are merged naturally in the bitmap, instead of, as in the buddy system of the related art, actively checking whether the upstream and downstream memories are in use and then actively merging them. Meanwhile, a bitmap formed from contiguous small memory areas improves the CPU's cache hit rate and calculation efficiency, so memory management efficiency can be effectively improved.
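The bitmap search of steps S271 to S273 is essentially a first-fit scan for a run of zero bits. A minimal sketch, assuming the bitmap is modeled as a Python list of 0/1 ints with one bit per 1Kb area (`bitmap_alloc` and `AREA` are illustrative names):

```python
AREA = 1024  # each bit covers a 1Kb memory area

def bitmap_alloc(bitmap, applied_capacity):
    """First-fit search over a central memory block bitmap.

    bitmap: list of 0/1 ints, one bit per 1Kb area, address-ordered
    Returns the area index where the run starts, or None when the search
    fails (step S274: the caller then creates a new central block + bitmap).
    """
    # Align the applied capacity up to a whole number of 1Kb areas
    need = -(-applied_capacity // AREA)
    run = 0
    for i, bit in enumerate(bitmap):
        run = run + 1 if bit == 0 else 0
        if run == need:
            start = i - need + 1
            for j in range(start, start + need):
                bitmap[j] = 1   # set the consecutive bits to 1 (used)
            return start        # offset of the first area in the run
    return None
```

The returned offset, multiplied by `AREA` and added to the central block's base address, would give the address handed to the memory pool.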
In some embodiments, the memory management method further includes the following steps:
step S281, detecting a second memory release request of the memory pool; the second memory release request includes a memory address to be released and a memory capacity to be released.
When the memory pool releases the memory to the central memory management module, not only the memory address to be released is transferred to the central memory management module, but also the size of the memory, namely the memory capacity to be released is required to be transferred.
Step S282, in response to the detected second memory release request, determines a to-be-released central memory block according to the to-be-released memory address, and determines a to-be-released address offset corresponding to the to-be-released central memory block in the central memory block bitmap according to the to-be-released memory capacity.
Step S283, performing zero setting operation on the bitmap of the central memory block according to the address offset to be released, obtaining a new bitmap of the central memory, and performing releasing operation on the central memory block to be released according to the new bitmap of the central memory.
In the above steps S282 to S283, in response to the second memory release request, the central memory management module determines, according to the released memory address, which of the central memory blocks it manages the memory block to be released belongs to, calculates the offset of that memory in the central memory block bitmap from the starting memory address of the central memory block, and sets to 0 the consecutive bits at that offset corresponding to the size of the released memory. If all the bits in the bitmap corresponding to the central memory block are 0, the central memory block can further be released to the operating system.
Through the steps S281 to S283, the memory is released to the central memory management module in time through the memory pool, so that the utilization rate of the memory can be effectively improved, and meanwhile, as the small memory is indirectly managed by the memory pool, the large memory is managed by the central memory management module, and the memory fragmentation problem caused by repeated application and release of the large memory and the small memory is avoided or reduced.
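The release path of steps S281 to S283 mirrors the allocation sketch: zero the corresponding bits, then check whether the whole central block has become free. Names (`bitmap_release`, `AREA`) are again illustrative:

```python
AREA = 1024  # each bit covers a 1Kb memory area

def bitmap_release(bitmap, offset, capacity):
    """Zero the bits for a released range; report if the block is all free.

    offset: index of the first 1Kb area to release
    capacity: memory capacity to be released, in bytes
    Returns True when every bit is 0, i.e. the central memory block itself
    can be further released to the operating system.
    """
    count = -(-capacity // AREA)        # number of 1Kb areas covered
    for i in range(offset, offset + count):
        bitmap[i] = 0                   # zero-setting operation of step S283
    return all(bit == 0 for bit in bitmap)
```

The boolean result corresponds to the condition under which the central block goes back to the operating system.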
In some embodiments, the memory management method further includes the following steps:
step S201, obtain a private memory pool application request sent by the producer, and detect whether an unused memory pool exists in a preset memory Chi Huancun linked list according to the private memory pool application request.
Step S202, if an unused memory pool is detected, allocate that memory pool to the producer; if no unused memory pool is detected, obtain a new memory pool created through the memory pool cache linked list and allocate the new memory pool to the producer.
In the above steps S201 to S202, in the initial stage, the producer may apply for a private memory pool from the memory pool linked list management module, and the memory pool linked list management module queries whether there is an unused memory pool in the memory pool cache linked list it manages. If an unused memory pool is detected, that memory pool is returned to the producer corresponding to the private memory pool application request; if no unused memory pool is detected, a new memory pool may be created and allocated to the producer.
Through the above steps S201 to S202, in response to a detected private memory pool application request initiated by a producer, a memory pool in the cache linked list is allocated to the corresponding producer, so that each producer has its own private memory pool, thereby avoiding or reducing contention when multiple producers apply for memory from the memory manager at the same time, and further improving the efficiency and accuracy of memory management.
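Steps S201 to S202 reduce to a simple reuse-or-create decision over the cache linked list; a minimal sketch, with `get_private_pool` and the `create_pool` factory as assumed names:

```python
def get_private_pool(cache, create_pool):
    """Hand a producer an unused cached memory pool, or a newly created one.

    cache: list of unused memory pools (the memory pool cache linked list)
    create_pool: callable producing a new memory pool when the cache is empty
    """
    if cache:
        return cache.pop()   # step S202: reuse an unused memory pool
    return create_pool()     # otherwise create a new pool for the producer
```

Because each producer gets its own pool this way, later allocations inside the pool need no cross-producer locking.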
In the following, an embodiment of the present application is described in detail with reference to an actual application scenario. Fig. 4 is a schematic architecture diagram of a memory management method according to a preferred embodiment of the present application. As shown in fig. 4, the system architecture of the memory management method mainly includes a memory pool module, a memory pool linked list management module, and a unique central memory management module. The memory pool module consists of a first idle memory linked list, a first used memory linked list, a second idle memory linked list, and a second used memory linked list, and provides memory application and reset functions. The memory pool linked list management module is the manager of the memory pools and mainly includes a memory pool cache linked list. The central memory management module maintains a central memory block linked list holding a plurality of central memory blocks. Each central memory block may correspond to a bitmap; for example, the central memory block may be divided into a plurality of memory areas of 1Kb each, with each memory area corresponding one-to-one to 1 bit in the bitmap of the central memory block, usually in order from the smallest memory address to the largest; a bit set to 1 indicates that the corresponding area is used, and 0 indicates that it is unused.
It should be noted that the steps illustrated in the above-described flow or flow diagrams of the figures may be performed in a computer system, such as a set of computer-executable instructions, and that, although a logical order is illustrated in the flow diagrams, in some cases, the steps illustrated or described may be performed in an order other than that illustrated herein.
The embodiment also provides a memory management device, which is used for implementing the foregoing embodiments and preferred embodiments, and is not described in detail. As used below, the terms "module," "unit," "sub-unit," and the like may be a combination of software and/or hardware that implements a predetermined function. While the means described in the following embodiments are preferably implemented in software, implementation in hardware, or a combination of software and hardware, is also possible and contemplated.
Fig. 5 is a block diagram of a memory management device according to an embodiment of the present application, as shown in fig. 5, the device includes: the detection module 52, the first usage module 54, the address offset module 56, and the second usage module 58.
The detecting module 52 is configured to detect a current first memory application request of a producer; the first usage module 54 is configured to obtain a corresponding application memory capacity in response to the detected current first memory application request, and a current address corresponding to a current first memory block to be used in the first idle memory linked list, and use the current first memory block to be used according to the current address; the address offset module 56 is configured to determine an address offset of the memory block to be used according to the applied memory capacity, and obtain a new address corresponding to a new first memory block to be used in the first idle memory linked list according to the current address and the address offset; the second usage module 58 is configured to detect a next first memory application request from the producer, and use the new first memory block to be used according to the new address in response to the detected next first memory application request.
Through the above embodiment, the first usage module 54 determines the application memory capacity according to the first memory application request, and the address offset module 56 determines the memory address offset of the memory block to be used in the idle memory linked list according to the application memory capacity, thereby avoiding the memory fragmentation generated by dividing the memory into a plurality of memory blocks with fixed sizes in the related art, effectively improving the utilization rate of the memory, solving the problem of easily wasting the memory space in the memory management process, and realizing the efficient and accurate memory management method.
The above-described respective modules may be functional modules or program modules, and may be implemented by software or hardware. For modules implemented in hardware, the various modules described above may be located in the same processor; or the above modules may be located in different processors in any combination.
The embodiment also provides a computer device, which may be a server. Fig. 6 is an internal structure diagram of the computer device according to an embodiment of the application. As shown in fig. 6, the computer device includes a processor, a memory, a network interface, and a database connected by a system bus. The processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, computer programs, and a database. The internal memory provides an environment for the operation of the operating system and computer programs in the non-volatile storage medium. The database of the computer device is used to store a memory pool. The network interface of the computer device is used for communicating with an external terminal through a network connection. The computer program is executed by a processor to implement the memory management method described above.
It will be appreciated by those skilled in the art that the structure shown in fig. 6 is merely a block diagram of some of the structures associated with the present application and is not limiting of the computer device to which the present application may be applied, and that a particular computer device may include more or fewer components than shown, or may combine certain components, or have a different arrangement of components.
The present embodiment also provides an electronic device comprising a memory having stored therein a computer program and a processor arranged to run the computer program to perform the steps of any of the method embodiments described above.
Optionally, the electronic apparatus may further include a transmission device and an input/output device, where the transmission device is connected to the processor, and the input/output device is connected to the processor.
Alternatively, in the present embodiment, the above-described processor may be configured to execute the following steps by a computer program:
s1, detecting a current first memory application request of a producer.
S2, responding to the detected first memory application request, obtaining the corresponding application memory capacity, a current address corresponding to a current first memory block to be used in a first idle memory linked list, and using the current first memory block to be used according to the current address.
S3, determining the memory address offset of the memory block to be used according to the applied memory capacity, and acquiring a new address corresponding to a new first memory block to be used in the first idle memory linked list according to the current address and the memory address offset.
And S4, responding to the detected next first memory application request of the producer, and using the new first memory block to be used according to the new address.
It should be noted that, specific examples in this embodiment may refer to examples described in the foregoing embodiments and alternative implementations, and this embodiment is not repeated herein.
In addition, in combination with the memory management method in the above embodiment, the embodiment of the application may be implemented by providing a storage medium. The storage medium has a computer program stored thereon; the computer program, when executed by a processor, implements any of the memory management methods of the above embodiments.
Those skilled in the art will appreciate that implementing all or part of the above described methods may be accomplished by way of a computer program stored on a non-transitory computer readable storage medium, which when executed, may comprise the steps of the embodiments of the methods described above. Any reference to memory, storage, database, or other medium used in the various embodiments provided herein may include non-volatile and/or volatile memory. The nonvolatile memory can include Read Only Memory (ROM), programmable ROM (PROM), electrically Programmable ROM (EPROM), electrically Erasable Programmable ROM (EEPROM), or flash memory. Volatile memory can include Random Access Memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms such as Static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double Data Rate SDRAM (DDRSDRAM), enhanced SDRAM (ESDRAM), synchronous Link DRAM (SLDRAM), memory bus direct RAM (RDRAM), direct memory bus dynamic RAM (DRDRAM), and memory bus dynamic RAM (RDRAM), among others.
It should be understood by those skilled in the art that the technical features of the above-described embodiments may be combined in any manner, and for brevity, all of the possible combinations of the technical features of the above-described embodiments are not described, however, they should be considered as being within the scope of the description provided herein, as long as there is no contradiction between the combinations of the technical features.
The above examples merely represent a few embodiments of the present application, which are described in more detail and are not to be construed as limiting the scope of the invention. It should be noted that it would be apparent to those skilled in the art that various modifications and improvements could be made without departing from the spirit of the present application, which would be within the scope of the present application. Accordingly, the scope of protection of the present application is to be determined by the claims appended hereto.

Claims (10)

1. A memory management method, the method comprising:
detecting a current first memory application request of a producer;
responding to the detected current first memory application request, obtaining a corresponding application memory capacity, a current address corresponding to a first memory block to be used in a first idle memory linked list, and using the first memory block to be used according to the current address;
Determining the memory address offset of the first memory block to be used according to the application memory capacity, and acquiring a new address corresponding to the first memory block to be used in the first idle memory linked list according to the current address and the memory address offset;
detecting a next first memory application request of the producer; and responding to the detected next first memory application request, and using the first memory block to be used according to the new address.
2. The memory management method according to claim 1, wherein the method further comprises:
and deleting the first memory block to be used in the first idle memory linked list according to the current address and the memory address offset, and newly adding a first used memory block in the first used memory linked list.
3. The memory management method according to claim 2, wherein after the obtaining the corresponding application memory capacity, the method further comprises:
under the condition that the applied memory capacity is detected to be smaller than or equal to a preset capacity value, acquiring a current address in the first idle memory linked list, and using the current memory block to be used according to the current address;
Under the condition that the applied memory capacity is detected to be larger than the preset capacity value, acquiring the memory capacity to be used and the current address corresponding to a second memory block to be used in a second idle memory linked list;
using the second memory block to be used according to the applied memory capacity and the memory capacity to be used, deleting the second memory block to be used in the second idle memory linked list according to the current address of the second memory block to be used, and newly adding a second used memory block in the second used memory linked list;
the memory capacity of the second idle memory linked list is larger than the memory capacity of the first idle memory linked list, and the memory capacity of the second used memory linked list is larger than the memory capacity of the first used memory linked list.
4. The memory management method according to claim 3, wherein the method further comprises:
detecting a first memory release request of a consumer, and in response to the detected first memory release request, sending a memory pool corresponding to the first memory release request to a preset memory pool cache linked list for caching;
detecting a memory pool release request corresponding to the memory pool cache linked list;
Responding to the detected memory pool release request, and under the condition that the number of memory blocks in the first idle memory linked list is detected to be larger than the number of preset memory blocks, resetting the first idle memory linked list and the first used memory linked list, and generating a new first idle memory linked list and a new first used memory linked list;
and resetting the second idle memory linked list and the second used memory linked list according to the memory pool release request, and generating a new second idle memory linked list and a new second used memory linked list.
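A minimal sketch of claim 4's cache-then-reset flow, under assumed structures: a pool released by a consumer is first parked on the memory pool cache linked list, and its lists are regenerated only when the idle list has grown past a preset block count. The threshold and all names are illustrative.

```c
#define MAX_IDLE_BLOCKS 8  /* preset number of memory blocks (assumed) */

typedef struct MemPool {
    struct MemPool *next;  /* link on the memory pool cache linked list */
    int idle_blocks;       /* blocks on the first idle memory linked list */
    int used_blocks;       /* blocks on the first used memory linked list */
} MemPool;

/* First memory release request: cache the pool instead of tearing it down. */
void cache_pool(MemPool **cache_head, MemPool *p) {
    p->next = *cache_head;
    *cache_head = p;
}

/* Memory pool release request: reset both lists only when the idle list
   holds more blocks than the preset number; returns 1 if a reset happened. */
int release_pool(MemPool *p) {
    if (p->idle_blocks > MAX_IDLE_BLOCKS) {
        p->idle_blocks = 0;  /* generate a new (empty) idle list */
        p->used_blocks = 0;  /* and a new used list */
        return 1;
    }
    return 0;
}
```

Deferring the reset behind a threshold keeps a warm pool's blocks available for reuse instead of paying teardown and rebuild costs on every release.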
5. The memory management method according to claim 1, wherein the method further comprises:
acquiring a preset central memory block bitmap;
generating a second memory application request corresponding to the memory pool according to the applied memory capacity, and searching the central memory block bitmap according to the second memory application request; wherein the memory pool comprises the first idle memory linked list;
under the condition that a continuous memory area to be used exists in the central memory block bitmap, distributing the corresponding memory block to the memory pool according to the continuous memory area to be used;
and under the condition that the search for the continuous memory area to be used fails, acquiring a new central memory block bitmap, and searching the new central memory block bitmap until the corresponding memory block is distributed to the memory pool.
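Claim 5's search can be pictured as a first-fit scan over the central memory block bitmap, where each bit marks one allocation unit. The sketch below finds the first contiguous run of clear bits, sets them, and returns the starting unit index, or -1 when the search fails (the case where the method acquires a new bitmap). The bit-per-unit granularity and the names are assumptions.

```c
#include <stdint.h>

static int bit_get(const uint64_t *bm, int i) { return (int)((bm[i / 64] >> (i % 64)) & 1u); }
static void bit_set(uint64_t *bm, int i)      { bm[i / 64] |= (uint64_t)1 << (i % 64); }

/* First-fit search for `need` contiguous clear bits in a bitmap of
   `total_bits` bits; marks them used and returns the start index, or -1. */
int bitmap_alloc(uint64_t *bm, int total_bits, int need) {
    int run = 0;
    for (int i = 0; i < total_bits; i++) {
        run = bit_get(bm, i) ? 0 : run + 1;   /* extend or restart the free run */
        if (run == need) {                    /* continuous area to be used found */
            int start = i - need + 1;
            for (int j = start; j <= i; j++) bit_set(bm, j);
            return start;
        }
    }
    return -1;  /* search failed: caller moves on to a new central bitmap */
}
```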
6. The memory management method according to claim 5, further comprising:
detecting a second memory release request of the memory pool; wherein the second memory release request comprises a memory address to be released and a memory capacity to be released;
in response to the detected second memory release request, determining a central memory block to be released according to the memory address to be released, and determining a corresponding address offset to be released in the bitmap of the central memory block to be released according to the memory capacity to be released;
and performing a zero-setting operation on the central memory block bitmap according to the address offset to be released to obtain a new central memory block bitmap, and performing a release operation on the central memory block to be released according to the new central memory block bitmap.
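The release path of claim 6 reverses the search: the memory address to be released is converted into a bit offset within the central block's bitmap, and the bits covering the released capacity are zeroed. A sketch under an assumed unit size:

```c
#include <stdint.h>
#include <stddef.h>

#define UNIT 4096  /* bytes per bitmap bit (assumed granularity) */

/* Zero-setting operation: clear the bits that cover `len` bytes starting
   at `addr`, where `block_base` is the start of the central memory block. */
void bitmap_free(uint64_t *bm, uintptr_t block_base, uintptr_t addr, size_t len) {
    size_t first = (addr - block_base) / UNIT;     /* address offset to be released */
    size_t units = (len + UNIT - 1) / UNIT;        /* round capacity up to whole units */
    for (size_t i = first; i < first + units; i++)
        bm[i / 64] &= ~((uint64_t)1 << (i % 64));  /* zero-setting each covered bit */
}
```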
7. The memory management method according to any one of claims 1 to 6, further comprising:
acquiring a private memory pool application request sent by the producer, and detecting whether an unused memory pool exists in a preset memory pool cache linked list according to the private memory pool application request;
if an unused memory pool is detected, distributing the memory pool to the producer; if no unused memory pool is detected, acquiring a new memory pool created through the memory pool cache linked list, and distributing the new memory pool to the producer.
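Claim 7's acquisition path can be sketched as a scan of the pool cache linked list, falling back to creating a fresh pool that is itself threaded onto the cache list. Structures and names below are assumptions for illustration.

```c
#include <stdlib.h>

typedef struct CachedPool {
    struct CachedPool *next;  /* link on the memory pool cache linked list */
    int in_use;               /* 0 = unused, available for a producer */
} CachedPool;

/* Serve a private memory pool application request: reuse an unused cached
   pool if one exists, otherwise create a new pool on the cache list. */
CachedPool *acquire_pool(CachedPool **cache_head) {
    for (CachedPool *p = *cache_head; p; p = p->next)
        if (!p->in_use) { p->in_use = 1; return p; }  /* unused pool detected */
    CachedPool *p = calloc(1, sizeof *p);             /* none found: create new */
    if (p == NULL) return NULL;
    p->in_use = 1;
    p->next = *cache_head;                            /* keep it on the cache list */
    *cache_head = p;
    return p;
}
```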
8. A memory management device, the device comprising: a detection module, a first use module, an address offset module, and a second use module;
the detection module is configured to detect a current first memory application request of a producer;
the first use module is configured to, in response to the detected current first memory application request, obtain a corresponding applied memory capacity and a current address corresponding to a first memory block to be used in a first idle memory linked list, and use the first memory block to be used according to the current address;
the address offset module is configured to determine, according to the applied memory capacity, a memory address offset of the first memory block to be used, and obtain, according to the current address and the memory address offset, a new address corresponding to the first memory block to be used in the first idle memory linked list;
the second use module is configured to detect a next first memory application request of the producer, and in response to the detected next first memory application request, use the first memory block to be used according to the new address.
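The four modules of claim 8 map naturally onto a bump-pointer scheme: the first use module serves a request at the block's current address, the address offset module advances the pointer, and the second use module serves the next request at the resulting new address. An illustrative sketch (types and names assumed):

```c
#include <stddef.h>

typedef struct {
    char  *cur;        /* current address within the first idle list's block */
    size_t remaining;  /* capacity left in the block */
} BumpBlock;

/* Serve one memory application request of `size` bytes, or NULL if the
   block cannot satisfy it. The returned pointer is the address used; the
   internal pointer advances so the next request lands at the new address. */
char *serve_request(BumpBlock *b, size_t size) {
    if (size > b->remaining) return NULL;
    char *addr = b->cur;   /* first use module: hand out the current address */
    b->cur += size;        /* address offset module: compute the new address */
    b->remaining -= size;
    return addr;           /* second use module serves the next request here */
}
```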
9. An electronic device comprising a memory and a processor, wherein the memory has stored therein a computer program, the processor being arranged to run the computer program to perform the memory management method of any of claims 1 to 7.
10. A storage medium having a computer program stored therein, wherein the computer program is arranged to perform the memory management method of any of claims 1 to 7 when run.
CN202211705490.6A 2022-12-29 2022-12-29 Memory management method, device, electronic device and storage medium Pending CN116089321A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211705490.6A CN116089321A (en) 2022-12-29 2022-12-29 Memory management method, device, electronic device and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211705490.6A CN116089321A (en) 2022-12-29 2022-12-29 Memory management method, device, electronic device and storage medium

Publications (1)

Publication Number Publication Date
CN116089321A true CN116089321A (en) 2023-05-09

Family

ID=86186270

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211705490.6A Pending CN116089321A (en) 2022-12-29 2022-12-29 Memory management method, device, electronic device and storage medium

Country Status (1)

Country Link
CN (1) CN116089321A (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116361234A (en) * 2023-06-02 2023-06-30 深圳中安辰鸿技术有限公司 Memory management method, device and chip
CN116361234B (en) * 2023-06-02 2023-08-08 深圳中安辰鸿技术有限公司 Memory management method, device and chip
CN116627855A (en) * 2023-07-24 2023-08-22 荣耀终端有限公司 Memory processing method and related device
CN116627855B (en) * 2023-07-24 2023-10-31 荣耀终端有限公司 Memory processing method and related device
CN117033002A (en) * 2023-10-09 2023-11-10 苏州元脑智能科技有限公司 Memory management method, device, equipment and storage medium
CN117033002B (en) * 2023-10-09 2024-02-09 苏州元脑智能科技有限公司 Memory management method, device, equipment and storage medium

Similar Documents

Publication Publication Date Title
CN116089321A (en) Memory management method, device, electronic device and storage medium
CN110995776B (en) Block distribution method and device of block chain, computer equipment and storage medium
US9940020B2 (en) Memory management method, apparatus, and system
CN109933543B (en) Data locking method and device of Cache and computer equipment
CN108965450B (en) Service request response method, device, computer equipment and storage medium
CN110928803B (en) Memory management method and device
CN111078410A (en) Memory allocation method and device, storage medium and electronic equipment
CN112579595A (en) Data processing method and device, electronic equipment and readable storage medium
CN110162395B (en) Memory allocation method and device
CN113065887B (en) Resource processing method, resource processing device, computer equipment and storage medium
CN116991855B (en) Hash table processing method, device, equipment, medium, controller and solid state disk
US10997077B2 (en) Increasing the lookahead amount for prefetching
CN115934354A (en) Online storage method and device
CN113076266B (en) Memory management method and device, electronic equipment and storage medium
CN113849311B (en) Memory space management method, device, computer equipment and storage medium
US10152258B1 (en) Big block allocation of persistent main memory
CN113032156B (en) Memory allocation method and device, electronic equipment and storage medium
CN111708715B (en) Memory allocation method, memory allocation device and terminal equipment
CN112395245B (en) Access device and method of processor and computer equipment
CN113986833A (en) File merging method, system, computer system and storage medium
US20210110201A1 (en) Computing system performing image backup and image backup method
CN115729438A (en) Data access method, device and storage medium
CN108959517B (en) File management method and device and electronic equipment
CN112764897A (en) Method, device and system for processing task request and computer readable storage medium
CN110442447B (en) Message queue-based load balancing method and device and computer equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination