WO2024099448A1 - Memory release and memory recovery methods, apparatus, computer device, and storage medium - Google Patents

Memory release and memory recovery methods, apparatus, computer device, and storage medium Download PDF

Info

Publication number
WO2024099448A1
WO2024099448A1 PCT/CN2023/131110 CN2023131110W
Authority
WO
WIPO (PCT)
Prior art keywords
memory block
memory
released
metadata
page table
Prior art date
Application number
PCT/CN2023/131110
Other languages
English (en)
French (fr)
Inventor
郑豪
Original Assignee
杭州阿里云飞天信息技术有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 杭州阿里云飞天信息技术有限公司 filed Critical 杭州阿里云飞天信息技术有限公司
Publication of WO2024099448A1 publication Critical patent/WO2024099448A1/zh

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 12/00 Accessing, addressing or allocating within memory systems or architectures
    • G06F 12/02 Addressing or allocation; Relocation
    • G06F 12/06 Addressing a physical block of locations, e.g. base addressing, module addressing, memory dedication
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 12/00 Accessing, addressing or allocating within memory systems or architectures
    • G06F 12/14 Protection against unauthorised use of memory or access to memory
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 Multiprogramming arrangements
    • G06F 9/50 Allocation of resources, e.g. of the central processing unit [CPU]

Definitions

  • the operating system allocates a certain number of memory blocks to each process and creates corresponding page table data for each process to record information about each memory block allocated to the process.
  • the memory management module of the operating system also records the hot and cold status information of each memory block allocated to each process. Cold memory blocks are released through certain processing mechanisms to free up space and improve resource utilization.
  • a data compression algorithm can be called to compress the data in a cold memory block, and the compressed data is stored in a specific storage space.
  • the memory block can be released and reused for allocation. Before releasing a memory block, it is necessary to clear the page table entry information corresponding to the memory block in the page table data of the process to which the memory block belongs, and record the compression information of the memory block in the metadata used to manage each compressed memory block, such as the storage location information of the compressed data, etc.
  • the operating system can detect the anomaly through the page table data, retrieve the compressed data and decompress it through the metadata record, and reallocate a free memory block to store the decompressed data for the process to access.
  • the above solution needs to call the data compression algorithm to compress the data in the memory block.
  • the execution of the data compression algorithm incurs a certain overhead; based on this, the scheme needs improvement to increase the speed of memory release. Similarly, when restoring memory it is necessary to call the data decompression algorithm, so the memory recovery scheme likewise needs improvement to increase the speed of memory recovery.
  • this specification provides a memory release, memory recovery method, device, computer equipment and storage medium.
  • a memory release method wherein the memory includes multiple memory blocks, and the method includes: determining a memory block to be released in the memory; determining whether each bit of the memory block to be released is zero; if so, clearing the page table entry information corresponding to the memory block to be released, and creating metadata corresponding to the memory block to be released, and then releasing the memory block to be released; wherein the metadata includes preset mark information, and the preset mark information is used to indicate that each bit of the memory block to be released is zero.
  • a memory recovery method comprising: in response to a memory recovery request, obtaining metadata of a memory block to be recovered, the metadata including preset tag information or storage location information of compressed data; allocating a target memory block from the memory; if the metadata includes preset tag information, adding page table entry information of the target memory block to page table data to complete the recovery of the target memory block; wherein the preset tag information is used to indicate that each bit of the memory block to be recovered is zero.
  • a memory release device, wherein the memory includes a plurality of memory blocks, and the device includes: a memory block determination module, used to determine a memory block to be released in the memory; a bit determination module, used to determine whether each bit of the memory block to be released is zero; and a release module, used to: when each bit of the memory block to be released is zero, clear the page table entry information corresponding to the memory block to be released and, after creating metadata corresponding to the memory block to be released, release the memory block to be released; wherein the metadata includes preset mark information, and the preset mark information is used to indicate that all bits of the memory block to be released are zero.
  • a memory recovery device comprising: a memory block determination module, used to: in response to a memory recovery request, obtain metadata of a memory block to be recovered, the metadata including preset tag information or storage location information of compressed data; an allocation module, used to: allocate a target memory block from a memory; a recovery module, used to: when the metadata includes the preset tag information, add page table entry information of the target memory block to page table data, to complete the recovery of the target memory block; wherein the preset tag information is used to indicate that each bit of the memory block to be recovered is zero.
  • a computer-readable storage medium on which a computer program is stored, and when the computer program is executed by a processor, the steps of the method embodiment described in the first aspect are implemented.
  • a computer device comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein when the processor executes the computer program, the steps of the method embodiment described in the first aspect are implemented.
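The recovery flow of the above aspects distinguishes a fast path (the metadata carries the preset tag, so a zero-filled block suffices) from a slow path (the compressed data must be located and decompressed). As a rough, illustrative simulation only, this can be sketched as follows; the names (`Metadata`, `recover`), the dictionary-based page table, and the use of zlib are hypothetical stand-ins, not part of the patent disclosure.

```python
# Illustrative simulation only; names and data layout are hypothetical.
import zlib
from dataclasses import dataclass
from typing import Optional

BLOCK_SIZE = 16  # toy size; real memory blocks would be e.g. 2MB

@dataclass
class Metadata:
    zero_flag: bool                     # the preset tag information
    compressed: Optional[bytes] = None  # stands in for the storage location info

def recover(meta: Metadata, page_table: dict, virtual_addr: int) -> bytes:
    """Allocate a target block and restore its contents from the metadata."""
    if meta.zero_flag:
        # fast path: all bits were zero, so no decompression is needed
        block = bytes(BLOCK_SIZE)
    else:
        # slow path: fetch and decompress the stored data
        block = zlib.decompress(meta.compressed)
    # add the page table entry information of the target block
    page_table[virtual_addr] = block
    return block
```

On the fast path the only work is allocating a (typically pre-zeroed) block and updating the page table, which is the speed advantage the specification claims for tagged blocks.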
  • FIG. 1 shows a schematic diagram of page table data according to an exemplary embodiment of this specification.
  • FIG. 2A shows a flow chart of a memory release method according to an exemplary embodiment of this specification.
  • FIG. 2B shows a schematic diagram of a memory according to an exemplary embodiment of this specification.
  • FIG. 2C shows a schematic diagram of a NUMA architecture according to an exemplary embodiment of this specification.
  • FIG. 2D shows a schematic diagram of a target search tree according to an exemplary embodiment of this specification.
  • FIG. 2E shows a flow chart of a memory recovery method according to an exemplary embodiment of this specification.
  • FIG. 3 shows a block diagram of a computer device where a memory release device/memory recovery device is located according to an exemplary embodiment of this specification.
  • FIG. 4 shows a block diagram of a memory release device according to an exemplary embodiment of this specification.
  • FIG. 5 shows a block diagram of a memory recovery device according to an exemplary embodiment of this specification.
  • terms such as first, second, and third may be used in this specification to describe various information, but the information should not be limited by these terms. These terms are only used to distinguish information of the same type from one another.
  • for example, the first information may also be referred to as the second information, and similarly, the second information may also be referred to as the first information.
  • the word "if" as used herein may be interpreted as "at the time of", "when", or "in response to determining".
  • the operating system allocates virtual address space and physical address space to the process and creates a page table corresponding to the process.
  • the page table is used to record the mapping relationship between the virtual address space and the physical address space.
  • the operating system also maintains metadata for the managed memory to manage the memory.
  • the page table of the process needs to be updated because the storage location of the process data has changed; the change in the storage location of the data also causes the state of the memory block to change, so the metadata also needs to be updated.
  • the page table is a concept of virtual memory technology.
  • in order to allow programs to obtain more available memory and to expand physical memory into larger logical memory, the operating system uses virtual memory technology: it abstracts physical memory into address spaces. The operating system allocates an independent set of virtual addresses to each process, and the virtual addresses of different processes are mapped to physical addresses of different memory regions. When a program accesses a virtual address, the operating system converts it into the corresponding physical address.
  • there are two address concepts involved here: the memory address used by the program is called the virtual address (Virtual Memory Address, VA); the address actually present in the hardware is called the physical address (Physical Memory Address, PA).
  • Page tables are stored in memory, and the conversion from virtual memory to physical memory is achieved through the MMU (Memory Management Unit) of the CPU (Central Processing Unit).
  • the memory management unit of the operating system divides the memory according to the set management granularity, and each management granularity can be called a page (page) or a block.
  • take as an example that the pages allocated to the process are numbered 0 to N.
  • the page table includes N page table entries, and each page table entry is used to represent the correspondence between the virtual address and the physical address of each page.
  • the entire page table records the relationship between the virtual address space, page table and physical address space of the process, and a virtual address of the process can be mapped to the corresponding physical address through the page table.
  • memory compression can be used to process infrequently used memory pages (usually called cold pages) in memory, thereby reducing memory usage.
  • a memory page originally allocated to a process is compressed, and the compressed data is stored in a specified location in memory.
  • the page table entry information corresponding to the compressed memory block needs to be updated.
  • the operating system also maintains metadata of memory blocks.
  • the metadata may include, but is not limited to: metadata indicating whether a memory block is allocated, overall metadata of the memory, metadata indicating the hot and cold status of a memory block, metadata indicating the process to which a memory block belongs, metadata indicating the memory allocation status of that process, and so on.
  • in some scenarios, the computer device is dedicated to running virtual machines. In the memory allocation scheme for virtual machines, a fixed virtual address space is configured for each virtual machine.
  • additional metadata mmap (memory map, the mapping relationship between virtual addresses and physical addresses of memory) is created to indicate the memory allocation status.
  • This metadata can realize bidirectional query of virtual addresses and physical addresses, which can improve query efficiency.
  • various metadata of the compressed memory pages also need to be updated.
  • because the page table entry information of the memory page in the page table data was cleared during compression, a subsequent access by the process triggers a page fault exception.
  • the operating system then decompresses the memory block, specifically finding the compressed data, allocating a new physical memory page, storing the decompressed data in the new memory page, updating the page table data, and mapping the physical address of the new memory page to the virtual address, so that the process can resume access to the required memory page.
  • the memory compression scheme of the related art directly calls the data compression algorithm to compress the data stored in a memory block.
  • the data compression algorithm incurs a certain overhead; based on this, the memory compression scheme needs improvement to increase the speed of memory compression.
  • when decompressing the memory, it is likewise necessary to call the data decompression algorithm; based on this, the memory decompression scheme also needs improvement to increase the speed of memory decompression.
  • as shown in FIG. 2A, which is a flow chart of a memory release method according to an exemplary embodiment of the present specification, the method includes the following steps 202 to 206.
  • step 202 in response to a memory release request, a memory block to be released is determined.
  • step 204 it is determined whether all bits of the memory block to be released are zero.
  • step 206 if yes, clear the page table entry information corresponding to the memory block to be released, and after creating metadata corresponding to the memory block to be released, release the memory block to be released; wherein, the metadata includes preset mark information, and the preset mark information is used to indicate that each bit of the memory block to be released is zero.
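Steps 202 to 206, together with the compression fallback described later in this specification, can be sketched as a toy simulation. The function name `release_block`, the dictionary-based page table and metadata store, and zlib as the compressor are all illustrative assumptions, not the patented implementation.

```python
# Illustrative simulation of the release flow (steps 202 to 206).
import zlib

ZERO_FLAG = "all_zero"  # stands in for the preset mark information

def release_block(block: bytes, page_table: dict, virtual_addr: int,
                  metadata_store: dict) -> str:
    """Release one memory block: fast path if every bit is zero, else compress."""
    if all(b == 0 for b in block):            # step 204: check every bit
        page_table.pop(virtual_addr, None)    # step 206: clear the page table entry
        metadata_store[virtual_addr] = {ZERO_FLAG: True}
        return "released_zero"
    # fallback: compress the data, store it, then clear and release
    compressed = zlib.compress(block)
    page_table.pop(virtual_addr, None)
    metadata_store[virtual_addr] = {ZERO_FLAG: False, "compressed": compressed}
    return "released_compressed"
```

The fast path skips the compressor entirely, which is the source of the claimed speedup for all-zero blocks.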
  • the method of this embodiment can be applied to the operating system of any computer device and can be used to release the allocated memory block in the memory.
  • the computer device may adopt a traditional memory management architecture, that is, the entire memory is managed by the operating system.
  • the computer device may adopt a reserved-memory allocation architecture, as shown in FIG. 2B, which is a schematic diagram of a reserved memory scenario according to an exemplary embodiment of this specification.
  • the host machine's memory includes two storage spaces. FIG. 2B distinguishes them with different fill patterns: a non-reserved storage space a for use by the kernel (diagonal-line fill in the figure) and a reserved storage space b for use by virtual machines (vertical-line and grayscale fill in the figure).
  • the non-reserved storage space a is used by the kernel, and applications running on the operating system (such as application 1 to application 3 in the figure) can use it.
  • the reserved storage space b can be used by virtual machines (VM, Virtual Machine), such as VM1 to VMn shown in the figure, a total of n virtual machines.
  • the two storage spaces can use different management granularities, that is, the way of dividing the memory can be different.
  • in FIG. 2B, for convenience of illustration, the two storage spaces are drawn as contiguous. It can be understood that in actual applications the two storage spaces may be non-contiguous.
  • the reserved storage space occupies most of the memory and is not available to the host kernel.
  • a module can be inserted into the kernel of the operating system to manage the reserved storage space.
  • the reserved storage space is divided at a larger granularity, for example into memory blocks (Memory Section, ms) of 2MB or other sizes for management; in some scenarios an even larger granularity, such as 1GB (GigaByte), is also common. This embodiment does not limit the granularity.
  • the operating system can use different modules to manage the reserved storage space and the non-reserved storage space respectively.
  • the method of this embodiment can be applied to the module that manages the reserved storage space in the operating system, to handle the compression of the memory blocks in the reserved storage space.
  • the computer device may be a device including multiple physical CPUs, and a non-uniform memory access architecture (NUMA) may be used as needed.
  • the NUMA architecture includes at least two NUMA nodes. As shown in FIG. 2C, taking two NUMA nodes as an example, the host machine may include NUMA node 1 and NUMA node 2. Under the NUMA architecture, the multiple physical CPUs and multiple memories of the host machine belong to different NUMA nodes, and each NUMA node includes at least one physical CPU and at least one physical memory.
  • FIG. 2C takes as an example a NUMA node that includes one physical CPU and one physical memory. Within a NUMA node, the physical CPU and the physical memory communicate over an integrated memory controller bus (IMC bus); between NUMA nodes, communication takes place over a Quick Path Interconnect (QPI) link.
  • the memory of this embodiment may include any of the above-mentioned physical memories.
  • any physical memory in the NUMA architecture may also adopt a reserved memory architecture.
  • the storage space managed by this embodiment may also refer to the reserved storage space in any physical memory in the NUMA architecture.
  • the memory release request in step 202 can be obtained in a variety of ways.
  • the memory management module of the operating system can have a memory aging management function, which can manage the hot and cold changes of each memory block in the memory, and maintain metadata indicating the hot and cold status of the memory as needed.
  • the hot and cold status of each memory block can be determined by scanning the usage of each memory block in the memory.
  • the cold page set can record the memory block in the cold state
  • the hot page set can record the memory block in the hot state.
  • the scheme of this embodiment can specifically be applied by the memory compression module of the operating system. The memory compression module can be started under set conditions: for example, a memory release request may be deemed received after a start instruction from the user, on a periodic schedule, or after a start instruction from another module of the operating system.
  • a memory block can be determined from the cold page set; it is referred to as the memory block to be released in this embodiment.
  • batch processing is possible: for example, multiple memory blocks can be taken out of the cold page set at one time and processed serially, that is, the method of this embodiment is executed for one memory block at a time to release it.
  • parallel processing is also optional, and this embodiment does not limit this.
  • unlike the related art, this embodiment does not directly compress the data stored in the memory block to be released, but first determines whether all of its bits are zero. If all bits are zero, this embodiment does not need to perform any data compression operation.
  • one or more virtual machines can be run on a host device.
  • the virtual machine is a process from the host's perspective, but it is itself a virtual computer device, and the memory usage rate of a computer device usually does not reach 100%. That is, a virtual machine usually leaves some of the memory allocated to it unused, and all bits of this unused storage space are zero. This is because when the host creates a virtual machine and allocates a certain amount of storage space to it, it initializes the allocated storage space by setting all of its bits to zero.
  • if all bits of the memory block to be released are zero, this embodiment directly clears the page table entry information corresponding to the memory block to be released and records its compression information in the metadata before releasing it. If not all bits are zero, the data in the memory block to be released is compressed, the compressed data is stored, the page table entry information corresponding to the memory block to be released is cleared, and corresponding metadata is created to record that the memory block has been compressed; the memory block to be released is then released.
  • the determination of whether all bits of the memory block to be released are zero includes: without changing the read and write permissions of the memory block to be released, pre-judging whether all bits of the memory block to be released are zero; and if the pre-judgment is that all bits are zero, changing the read and write permissions of the memory block to be released to read-only and then verifying whether all bits of the memory block to be released are zero.
  • that is, this embodiment first pre-judges whether all bits of the memory block to be released are zero without changing its read and write permissions. If the pre-judgment is that not all bits are zero, it is determined that not all bits of the memory block to be released are zero. If the pre-judgment is that all bits are zero, the permission is changed to read-only, so that no new writes occur during the check, and it is then further determined whether all bits of the memory block to be released are zero.
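The two-phase check described above (a cheap pre-judgment without touching permissions, then a full scan under read-only permission) might be simulated as follows. The sample size, the byte-level granularity, and the callback used to mock the permission change are all illustrative assumptions.

```python
# Simplified two-phase all-zero check; permission handling is mocked with a callback.
import random

def prejudge_all_zero(block: bytes, n: int = 8) -> bool:
    """Cheap pre-judgment: sample n positions; any nonzero byte rules the block out."""
    positions = random.sample(range(len(block)), min(n, len(block)))
    return all(block[i] == 0 for i in positions)

def check_all_zero(block: bytes, set_read_only) -> bool:
    """Pre-judge first; only a promising block pays for the permission change."""
    if not prejudge_all_zero(block):
        return False          # ruled out cheaply, no permission change needed
    set_read_only()           # block concurrent writes before the authoritative scan
    return all(b == 0 for b in block)
```

The point of the ordering is that most non-zero blocks are rejected by the sample alone, so the page-table permission update is paid only for likely-zero blocks.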
  • the pre-judgment of whether all bits of the memory block to be released are zero includes: selecting n bits from the memory block to be released and determining whether the n bits are all zero, wherein n is a positive integer; if not, pre-judging that not all bits of the memory block to be released are zero; and if so, continuing to determine whether all bits of the memory block to be released other than the selected n bits are zero.
  • some bits can be selected for judgment first, and as long as one of them is not zero, it can be pre-judged that all bits of the memory block to be released are not all zero.
  • the number n selected can be flexibly configured as needed, for example, it can be determined based on the size of the memory block and the actual overhead requirements, and this embodiment does not limit this.
  • there are multiple ways to select n bits from the memory block to be released, for example any of the following: selecting n bits starting from the highest bit of the memory block to be released; selecting n bits starting from the lowest bit of the memory block to be released; or randomly selecting n bits from the memory block to be released.
  • selecting n bits from the highest bit of the memory block to be released or selecting n bits from the lowest bit of the memory block to be released are both faster; and the method of randomly selecting n bits from the memory block to be released can increase the probability of making accurate pre-judgment.
  • the way to determine whether it is zero can be to compare the information stored in each bit with zero.
  • it can also be determined by encoding.
  • the information of the n bits taken out can be encoded to obtain first encoding information, and the first encoding information can be compared with the preset second encoding information.
  • the second encoding information is information obtained by encoding n zeros. Based on this, the first encoding information and the second encoding information are compared to determine whether they are the same, so as to determine whether the n bits taken out from the memory block to be released are all zero.
  • various encoding methods can be used, such as existing hash algorithms, including but not limited to MD5 (Message-Digest Algorithm), SHA (Secure Hash Algorithm), etc.
  • a custom encoding algorithm is also optional, and the encoding algorithm can generate unique encoding information for any input.
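The encoding-based comparison above (first encoding information of the sampled data versus precomputed second encoding information for all-zero input) can be sketched with MD5, one of the algorithms named in this specification. The sample size N is arbitrary, and the sketch works at byte rather than bit granularity for simplicity; both are illustrative assumptions.

```python
# Encoding comparison sketch using MD5; N and byte granularity are illustrative.
import hashlib

N = 64  # number of sampled bytes (the specification samples bits)
SECOND_ENCODING = hashlib.md5(bytes(N)).digest()  # precomputed encoding of N zeros

def sampled_data_is_zero(sample: bytes) -> bool:
    """Compare the first encoding of the sample against the all-zero encoding."""
    first_encoding = hashlib.md5(sample).digest()
    return first_encoding == SECOND_ENCODING
```

Precomputing the all-zero encoding once means each pre-judgment costs a single hash plus a fixed-size comparison, regardless of where the sample came from.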
  • the operating system allocates an independent set of virtual addresses to each process, and uses page tables to map the virtual addresses of different processes to the physical addresses of different memories.
  • each process corresponds to its own page table data.
  • the page table data of each process includes multiple page table entry information, and each page table entry corresponds to a memory block.
  • in order to reduce the storage space occupied by page table data and to find the mapping between virtual and physical addresses quickly, some operating systems adopt a multi-level page table solution; that is, the page table data of each process can include multiple levels of directory entries.
  • for example, the page table data includes the following four levels of directory and page table entries: the global page directory entry PGD (Page Global Directory); the upper page directory entry PUD (Page Upper Directory); the intermediate page directory entry PMD (Page Middle Directory); and the page table entry PTE (Page Table Entry).
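A four-level lookup over PGD, PUD, PMD, and PTE can be sketched with nested dictionaries. The 9-bits-per-level address split and 4 KB pages mirror a common x86-64 layout; this is an illustrative assumption, not something the specification prescribes.

```python
# Four-level page table walk (PGD -> PUD -> PMD -> PTE), sketched with nested dicts.
# The 9-bit-per-level split and 12-bit page offset mirror x86-64 and are illustrative.

def split_virtual_address(va: int):
    pgd_i = (va >> 39) & 0x1FF
    pud_i = (va >> 30) & 0x1FF
    pmd_i = (va >> 21) & 0x1FF
    pte_i = (va >> 12) & 0x1FF
    offset = va & 0xFFF
    return pgd_i, pud_i, pmd_i, pte_i, offset

def walk(pgd: dict, va: int):
    """Return the physical address for va, or None if any level is unmapped."""
    pgd_i, pud_i, pmd_i, pte_i, offset = split_virtual_address(va)
    pud = pgd.get(pgd_i)
    pmd = pud.get(pud_i) if pud is not None else None
    pte = pmd.get(pmd_i) if pmd is not None else None
    frame = pte.get(pte_i) if pte is not None else None
    return None if frame is None else (frame << 12) | offset
```

Clearing a released block's PTE corresponds here to removing its `pte_i` entry, after which `walk` returns None and a real MMU would raise a page fault.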
  • the page table entry information corresponding to the memory block to be released needs to be cleared.
  • one of the page table entries records the physical address PA1 of the memory block corresponding to the virtual address VA1.
  • the memory block corresponding to PA1 is the memory block to be released in this embodiment.
  • the clearing in this embodiment is to clear the physical address PA1 of the memory block in the page table entry.
  • some page table data may include four-level data, and the corresponding level of data can be updated according to the actual memory management granularity. In actual applications, it can be flexibly configured as needed, and this embodiment does not limit this.
  • the compression information of the memory block to be released is also recorded in the metadata.
  • the metadata is used to manage the information of each compressed memory block, and the information recorded in the metadata may include but is not limited to: the virtual address corresponding to the memory block to be released (which may include the virtual address of the user state and the virtual address of the kernel state), the physical address of the storage location of the compressed data or the size information of the compressed data, etc.
  • preset mark information may also be included, and the preset mark information is used to indicate that all bits of the memory block to be released are zero.
  • if not all bits are zero, the data in the memory block to be released is compressed, the compressed data is stored, the page table entry information corresponding to the memory block to be released is cleared, and metadata corresponding to the memory block to be released is created before the memory block to be released is released; wherein the metadata includes the storage location information of the compressed data.
  • that is, if the memory block to be released is not all zero, it is necessary to compress the data in the memory block to be released and store the compressed data, in addition to the aforementioned operations of clearing the page table entry information corresponding to the memory block to be released and creating the corresponding metadata, which includes the storage location information of the compressed data.
  • the storage method of compressed data has different implementation methods according to different scenarios.
  • the operating system also has a small block memory management module specifically used to manage small blocks of memory, which can store compressed data in a storage space with a smaller management granularity.
  • in some embodiments, the memory includes multiple memory blocks, and each memory block is divided into multiple memory segments. The memory blocks are designed with a larger granularity, which reduces the footprint of the blocks' metadata; in addition, the memory segments within a memory block can be managed separately. Since the data of a memory block becomes smaller after compression, the compressed data can be stored in memory segments.
  • before the step of storing the compressed data, the method also includes: determining whether the size of the compressed data meets a preset release condition. It may be the case that after the data in the memory block is compressed, the compressed data is still large. Based on this, this embodiment can first determine whether the size of the compressed data meets the preset release condition; if it does, the operation of storing the compressed data is performed; if it does not, the memory block to be released is not released, that is, the release of the memory block to be released fails.
  • the preset release condition can be flexibly configured according to actual needs. For example, it can be that the size of the compressed data is less than or equal to a preset size threshold, or that the ratio of the size of the compressed data to the size of the memory block is less than or equal to a preset ratio threshold, etc.
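One plausible reading of the release condition, that the block is released only when compression actually saves enough space, can be sketched as follows. The 0.5 ratio threshold and zlib as the compressor are illustrative assumptions.

```python
# Sketch of a preset release condition: release only if the compressed data is
# small enough relative to the block. The threshold and compressor are illustrative.
import zlib

def meets_release_condition(block: bytes, max_ratio: float = 0.5):
    """Compress the block and decide; returns (ok, compressed_or_None)."""
    compressed = zlib.compress(block)
    if len(compressed) / len(block) <= max_ratio:
        return True, compressed
    return False, None  # release fails; the block is kept as-is
```

Blocks that barely compress are not worth the bookkeeping: storing near-full-size compressed data plus metadata would reclaim little memory, which is why the failed case leaves the block untouched.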
  • after completing the above operations of clearing the page table entry information corresponding to the memory block to be released and recording its compression information in the metadata, the memory block to be released can be released and recycled by the memory management module for reallocation. It can be understood that since the memory block is released, its allocation status needs to be updated; in actual applications this can also include updates to other metadata of the memory block to be released, such as metadata indicating whether the memory block is allocated, the overall metadata of the memory, the hot and cold status metadata of the memory block, the memory allocation data mmap of the process to which each memory block belongs, etc.
  • the metadata may record many compressed memory blocks.
  • the metadata also includes the address of the page table entry corresponding to the memory block to be released; the method also includes: maintaining a target search tree based on the metadata, each node in the target search tree corresponds to the metadata of a memory block, and the address of the page table entry in the metadata of the memory block corresponding to the node is used as the unique identifier of the node.
  • the target search tree is used for fast search, and the structure of the tree can have multiple options, such as red-black tree, etc., which can be flexibly configured as needed in actual applications.
  • Each node in the target search tree corresponds to the metadata of one memory block, and records the address of the page table entry in that metadata as its unique identifier, so that during recovery, the memory block to be restored and its information can be quickly found through the page table entry address.
  • FIG. 2D is a schematic diagram of a target search tree shown in this specification according to an exemplary embodiment.
  • the tree includes 7 nodes (N1 to N7).
  • node N2 is linked to the compression information K2 of a memory block, and records the address of the page table entry in the compression information K2;
  • node N3 is linked to the compression information K3 of a memory block, and records the address of the page table entry in the compression information K3.
  • the implementation of linking the node to the compressed information may be a pointer to the compressed information stored in the node.
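The node-keyed-by-page-table-entry-address structure above can be sketched as follows. The specification suggests a red-black tree; for brevity this hedged sketch uses a plain (unbalanced) binary search tree, and the names `Node`, `insert`, and `find` are illustrative.

```python
# Minimal sketch of the target search tree: nodes keyed by the page-table-entry
# address stored in each memory block's metadata. A self-balancing tree (e.g.
# red-black) would be used in practice; a plain BST keeps the sketch short.

class Node:
    def __init__(self, pte_addr, meta):
        self.pte_addr = pte_addr   # unique identifier of the node
        self.meta = meta           # compression information of the memory block
        self.left = self.right = None

def insert(root, pte_addr, meta):
    if root is None:
        return Node(pte_addr, meta)
    if pte_addr < root.pte_addr:
        root.left = insert(root.left, pte_addr, meta)
    else:
        root.right = insert(root.right, pte_addr, meta)
    return root

def find(root, pte_addr):
    while root is not None and root.pte_addr != pte_addr:
        root = root.left if pte_addr < root.pte_addr else root.right
    return root.meta if root else None

root = None
for addr, meta in [(0x2000, "K2"), (0x1000, "K1"), (0x3000, "K3")]:
    root = insert(root, addr, meta)
print(find(root, 0x3000))  # the compression information linked to that entry
```

During recovery, the fault handler knows the page table entry address from the page table data, so a single tree lookup yields the compression information of the block to restore.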
  • the computer device can be a single-core CPU device or a multi-core CPU device; in the multi-core case, the efficiency of the compression processing can also be improved.
  • the data stored in the memory block to be released is compressed, which may include: the current CPU compresses the data stored in the memory block to be released; or, a target CPU is selected from other CPUs to create a process for compressing the data stored in the memory block to be released; wherein the process is bound to the target CPU so that the operating system schedules the target CPU to execute the process; wherein the running information of the target CPU meets the preset idle condition, and/or the communication efficiency between the target CPU and the current CPU meets the preset communication condition.
  • the above-mentioned preset idle condition can be flexibly configured as needed, and the preset communication condition can also be flexibly configured as needed, for example, determined from multiple dimensions such as whether it is the same CPU core, whether it is the same socket (an abstract endpoint for bidirectional communication between application processes on different hosts in a network), or whether it is located in the same NUMA node. As an example, if the current CPU is in an idle state, the current CPU is used; if it is not in an idle state, the choice is made based on communication efficiency: a CPU on the same socket is preferred, because such CPUs share the same cache line and communication efficiency is relatively higher; next, a CPU on the same NUMA node is selected; and finally, an idle CPU across NUMA nodes is selected.
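The affinity ordering just described (same core, then same socket, then same NUMA node, then cross-NUMA) can be sketched as a ranking function. This is a hedged illustration: the `CpuInfo` fields and the function names are assumptions modeling the description, not an actual kernel API.

```python
# Hedged sketch of target-CPU selection: prefer an idle CPU "closest" to the
# current one (same core > same socket > same NUMA node > across NUMA).

from dataclasses import dataclass

@dataclass
class CpuInfo:
    cpu_id: int
    core: int
    socket: int
    numa_node: int
    idle: bool

def affinity_rank(current: CpuInfo, other: CpuInfo) -> int:
    """Lower rank = better communication efficiency with the current CPU."""
    if other.core == current.core:
        return 0
    if other.socket == current.socket:
        return 1   # shares the socket interconnect
    if other.numa_node == current.numa_node:
        return 2
    return 3       # cross-NUMA, slowest path

def select_target_cpu(current: CpuInfo, others: list) -> CpuInfo:
    if current.idle:
        return current          # idle current CPU does the work itself
    idle = [c for c in others if c.idle]
    if not idle:
        return current          # no idle CPU: fall back to the current one
    return min(idle, key=lambda c: affinity_rank(current, c))

cur = CpuInfo(0, core=0, socket=0, numa_node=0, idle=False)
others = [
    CpuInfo(1, core=1, socket=0, numa_node=0, idle=True),   # same socket
    CpuInfo(2, core=2, socket=1, numa_node=1, idle=True),   # cross-NUMA
]
print(select_target_cpu(cur, others).cpu_id)
```

In the embodiment, the compression process would then be bound to the selected CPU so that the operating system schedules it there.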
  • FIG. 2E is a flowchart of a memory recovery method shown in this specification according to an exemplary embodiment, including the following steps: in step 212, in response to a memory recovery request, the metadata of the memory block to be recovered is obtained, where the metadata includes preset tag information or storage location information of compressed data; in step 214, a target memory block is allocated from the memory; in step 216, if the metadata includes the preset tag information, the page table entry information of the target memory block is added to the page table data to complete the recovery of the target memory block; wherein the preset tag information is used to indicate that each bit of the memory block to be recovered is zero.
  • Memory recovery means that the memory blocks originally allocated to the process are released during memory compression, and when needed, the memory blocks are reallocated to the process. It is understood that the physical addresses of the recovered memory blocks and the memory blocks originally allocated to the process are not necessarily the same.
  • the memory recovery request can be initiated when the operating system finds a page fault exception. For example, it is mentioned in the above embodiment that in the page table data, the physical address corresponding to the virtual address VA1 assigned to the process is cleared. When the operating system finds that the process accesses the virtual address VA1, since there is no record in the page table data, a page fault exception is triggered, that is, a memory recovery request is triggered, thereby executing the memory recovery solution of this embodiment.
  • it can also be a memory recovery actively performed by the operating system, for example, when the operating system detects that there are too many compressed memory blocks, or that the remaining storage space of the memory is ample, etc.
  • the memory recovery request carries a virtual address.
  • the address of the page table entry corresponding to the virtual address carried by the memory recovery request can be obtained from the page table data; the node recording the address of the page table entry can be searched from the preset target search tree, and the metadata of the memory block to be recovered can be obtained using the found node.
  • the page table entry information of the target memory block can be added to the page table data, that is, taking the virtual address VA1 as an example, the physical address of the target memory block corresponding to the virtual address VA1 is written into the page table data.
  • the compressed data is obtained according to the storage location information, and after the compressed data is decompressed and stored in the target memory block, the page table entry information of the target memory block is added to the page table data to complete the recovery of the target memory block.
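The two recovery branches above (zero-tag fast path versus decompression) can be sketched as a single dispatch function. The names `ZERO_TAG` and `recover_block` are illustrative, and `zlib` stands in for the unspecified compression algorithm.

```python
# Sketch of the recovery branch: if the metadata carries the zero-page tag,
# a freshly allocated (already zeroed) block is used directly, skipping
# decompression entirely; otherwise the stored data is decompressed.

import zlib

ZERO_TAG = "all_bits_zero"
BLOCK_SIZE = 4096

def recover_block(metadata: dict, storage: dict) -> bytes:
    target = bytes(BLOCK_SIZE)              # newly allocated block, all zero
    if metadata.get("tag") == ZERO_TAG:
        return target                       # fast path: no decompression
    compressed = storage[metadata["location"]]
    return zlib.decompress(compressed)      # slow path: restore contents

storage = {0x9000: zlib.compress(b"\x07" * BLOCK_SIZE)}
fast = recover_block({"tag": ZERO_TAG}, storage)
slow = recover_block({"location": 0x9000}, storage)
print(fast == bytes(BLOCK_SIZE), slow == b"\x07" * BLOCK_SIZE)
```

After either branch, the embodiment adds the target block's page table entry information to the page table data so the faulting access can resume.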
  • the method of this embodiment is applied to the current CPU of multiple CPUs of a computer device.
  • the decompression of the compressed data includes: decompressing the compressed data by the current CPU; or selecting a target CPU from other CPUs to create a process for decompressing the compressed data, wherein the process is bound to the target CPU so that the operating system schedules the target CPU to execute the process; and wherein the running information of the target CPU satisfies a preset idle condition, and/or the communication efficiency between the target CPU and the current CPU satisfies a preset communication condition.
  • the specific method of selecting the target CPU can be referred to the aforementioned embodiment, which will not be described in detail here.
  • a memory block ms to be compressed can be selected from the cold page set; if there is no such memory block, the process fails; otherwise it continues.
  • it can also be a batch processing scenario, where multiple memory blocks to be compressed can be obtained at one time, and then each memory block is processed serially.
  • the mmap of all processes is kept in one piece of data (such as a linked list), so a lock needs to be held when querying the mmap through ms; that is, the linked list needs to be temporarily locked, and the mmap corresponding to ms is then obtained by a traversal query.
  • a copy of the entire mmap data can be created as needed.
  • the entire mmap is locked when querying, and the lock is released after the copy is created.
  • the corresponding mmap of other memory blocks to be compressed in the batch can be queried using the copy.
  • the virtual address vaddr corresponding to the memory block ms to be swapped out and the corresponding page table entry pmd can be obtained, and a metadata item for managing the compression information of the memory block can be established.
  • the information recorded in the item may include: the user-state virtual address corresponding to ms, the kernel-state virtual address corresponding to ms, the page table entry address corresponding to ms, the physical address of the storage location of the data stored in ms after compression, the length of the data stored in ms after compression, and other information, so as to facilitate the use of subsequent pages after compression, and also for finding during data decompression and memory block recovery.
  • the judgment in this step is in the pre-judgment stage: it does not judge the entire memory block, but only part of the content of the memory block.
  • the judgment in this step is a full judgment of the memory block, that is, it is necessary to read each bit of the memory block.
  • the judgment in this step is still in the pre-judgment stage, because the read and write permissions of the memory block are not modified when judging the memory block, and the memory block may be updated during the judgment process.
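The two-stage check described above can be sketched as follows: a cheap pre-judgment samples a few bytes, and only when those are zero is the expensive full scan performed. The sampling position and count here are illustrative choices (the embodiment also allows sampling from the lowest bits or at random), and the permission change to read-only before the full judgment is omitted from this sketch.

```python
# Two-stage zero check: sample first, then scan every byte only if the
# sample passed. Most non-zero blocks are rejected by the cheap stage.

def prejudge_zero(block: bytes, n: int = 16) -> bool:
    """Pre-judgment: sample n bytes (here: from the start) for all-zero."""
    return not any(block[:n])

def full_zero_check(block: bytes) -> bool:
    """Full judgment: every byte (hence every bit) must be zero."""
    return not any(block)

def is_zero_block(block: bytes) -> bool:
    return prejudge_zero(block) and full_zero_check(block)

print(is_zero_block(bytes(4096)))                               # all zero
print(is_zero_block(b"\x00" * 100 + b"\x01" + b"\x00" * 3995))  # fails full scan
```

A block that fails the pre-judgment skips the full scan entirely, which is the point of the staged design: the common case (a block containing data) is rejected after reading only a few bytes.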
  • a temporary storage space tm is required to temporarily store the compression result of the data in the memory block ms.
  • the size of the temporary storage space tm is larger than the size of the memory block ms.
  • the specific size can be flexibly set as needed, for example, it can be twice the size of the memory block ms.
  • the temporary storage space tm can be allocated for the first memory block to be compressed, and the subsequent temporary storage space tm can be reused for use in the compression operations of other memory blocks to be compressed.
  • it can first be determined whether the temporary storage space tm already exists; if it does not exist, it is allocated; if it exists, it can be reused. Since the temporary storage space tm may still store data, the temporary storage space tm is cleared (that is, the temporary storage space tm is initialized, with every bit set to zero).
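The allocate-once, clear-before-reuse behavior of tm can be sketched as below. This is a hedged illustration: `zlib` stands in for the unspecified compression algorithm, and the factor of two matches the example sizing given above.

```python
# Sketch of the reusable temporary buffer tm: allocated once at twice the
# block size (compression output can exceed the input), cleared before each
# reuse, and used to hold one block's compression result.

import zlib

BLOCK_SIZE = 4096
_tm = None  # lazily allocated on first use, then reused across blocks

def get_tm() -> bytearray:
    global _tm
    if _tm is None:
        _tm = bytearray(2 * BLOCK_SIZE)   # larger than ms, per the text above
    else:
        for i in range(len(_tm)):         # clear stale data from previous use
            _tm[i] = 0
    return _tm

def compress_into_tm(block: bytes) -> int:
    tm = get_tm()
    out = zlib.compress(block)
    tm[:len(out)] = out
    return len(out)                       # tlen, the compression result size

tlen = compress_into_tm(b"\xab" * BLOCK_SIZE)
print(tlen < BLOCK_SIZE)   # highly repetitive data compresses well
```

The returned tlen is what the later step compares against the preset release condition before a permanent storage location is allocated.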
  • Data compression has a certain overhead; in some examples, the computer device has only one CPU core, so step 10 and step 11 may not be performed.
  • the running status of the CPUs can be scanned as needed to determine whether there is an idle CPU; if not, the current CPU is selected and the process jumps to step 12, otherwise it jumps to step 11.
  • the selection rule here can be determined in many ways; for example, it can be determined from multiple dimensions such as whether it is the same CPU core, whether it is the same socket (an abstract endpoint for bidirectional communication between application processes on different hosts in the network), or whether it is located on the same NUMA node. As an example, if the current CPU is in an idle state, the current CPU is used; if it is not in an idle state, a CPU on the same socket is selected next, because such CPUs share the same cache line and communication efficiency is relatively higher; then a CPU on the same NUMA node is selected, and finally an idle CPU across NUMA nodes is selected.
  • the selected CPU executes the above compression thread.
  • the data compression algorithm and whether to use hardware acceleration can be specified in the compression thread; the CPU compresses the data in the memory block ms by executing the thread, and stores the compression result in the temporary storage space tm.
  • step 13: determine whether the size of the compression result tlen meets the preset release condition, for example, whether it is larger than the size of the original memory block ms; if so, the compression fails and the process jumps to step 18; otherwise, it jumps to step 14.
  • the preset release condition here can also be implemented in other ways, for example, a threshold can be set to determine the difference between the size of the compression result tlen and the size of the memory block ms, and whether the compression fails is determined based on the size relationship between the difference and the set threshold.
  • the set threshold represents the threshold of whether the compression fails, which can be configured as needed in actual applications, and this embodiment does not limit this.
  • the allocation method here can be in various ways according to the actual memory management scheme. For example, there may be a designated storage space in the memory specifically used to store compressed data, or the memory may be managed in a specific manner, such as in the aforementioned small block memory management embodiment, the memory is divided into multiple memory blocks, each memory block is further divided into multiple memory segments, and this embodiment can allocate a memory segment that satisfies the size tlen of the compression result according to the size tlen.
  • the compression information here refers to the user-mode virtual address corresponding to the ms mentioned in step 3, the kernel-mode virtual address corresponding to the ms, the physical address of the storage location of the compressed data stored in the ms, the compressed length of the data stored in the ms, and other information.
  • the memory block ms is compressed successfully, the memory block is released, and the entire memory compression process is completed.
  • the physical memory block corresponding to the virtual memory is a compressed memory block.
  • Steps 1 and 2 of the above embodiment are described by taking active recovery of the operating system as an example.
  • recovery may also be triggered when the operating system encounters a page fault exception.
  • step 5: determine, according to item, whether the compressed memory block ms is a zero page; if it is a zero page, jump to step 6; otherwise jump to step 7.
  • step 7: scan the running status of the CPUs to determine whether there is an idle CPU; if not, select the current CPU and jump to step 9; otherwise jump to step 8.
  • step 8: select the idle CPU with the highest affinity to the currently running CPU, for example, first the same core, then the same socket, and then the same NUMA node.
  • this specification also provides embodiments of a memory release device/memory recovery device and a terminal to which they are applied.
  • the embodiments of the memory release device/memory recovery device in this specification can be applied to computer equipment, such as servers or terminal devices.
  • the device embodiments can be implemented by software, or by hardware or a combination of software and hardware. Taking software implementation as an example, as a device in a logical sense, it is formed by the processor in which it is located reading the corresponding computer program instructions in the non-volatile memory into the memory for execution. From the hardware level, as shown in Figure 3, it is a hardware structure diagram of the computer device where the memory release device/memory recovery device of this specification is located.
  • the computer device where the memory release device/memory recovery device 331 is located in the embodiment can also include other hardware according to the actual function of the computer device, which will not be described in detail.
  • Figure 4 is a block diagram of a memory release device shown in this specification according to an exemplary embodiment, the device includes: a memory block determination module 41, used to: determine the memory block to be released in response to a memory release request; a bit determination module 42, used to: determine whether each bit of the memory block to be released is zero; a release module 43, used to: when it is determined that each bit of the memory block to be released is zero, clear the page table entry information corresponding to the memory block to be released, and create metadata corresponding to the memory block to be released, and then release the memory block to be released; wherein the metadata includes preset mark information, and the preset mark information is used to indicate that each bit of the memory block to be released is zero.
  • the release module 43 is also used to: when it is determined that all bits of the memory block to be released are not all zero, compress the data in the memory block to be released, store the compressed data, clear the page table entry information corresponding to the memory block to be released, and create metadata corresponding to the memory block to be released, and then release the memory block to be released; wherein the metadata includes storage location information of the compressed data.
  • the bit determination module is also used to: pre-judge whether all bytes of the memory block to be released are zero without changing the read and write permissions of the memory block to be released; if it is pre-judged that all bits of the memory block to be released are zero, the read and write permissions of the memory block to be released are changed to read-only, and then it is determined whether all bits of the memory block to be released are zero.
  • the bit determination module is also used to: select n bits from the memory block to be released, and determine whether the n bits are all zero; wherein n is a positive integer; if not, pre-judge that the bits of the memory block to be released are not all zero; and if so, pre-judge whether the bits of the memory block to be released except the set number of bits are all zero.
  • the bit determination module is further used to: select n bits starting from the highest bit of the memory block to be released; select n bits starting from the lowest bit of the memory block to be released; or randomly select n bits in the memory block to be released.
  • the metadata also includes the address of the page table entry corresponding to the memory block to be released;
  • the device also includes a search module, which is used to: maintain a target search tree based on the metadata, each node in the target search tree corresponds to the metadata of a memory block, and the address of the page table entry in the metadata of the memory block corresponding to the node is used as the unique identifier of the node.
  • the release module is further used to determine whether the size of the compressed data satisfies a preset release condition before storing the compressed data.
  • the device is applied to a current CPU of multiple CPUs of a computer device; the release module is also used to: compress the data stored in the memory block to be released by the current CPU; or, select a target CPU from other CPUs to create a process for compressing the data stored in the memory block to be released; wherein the process is bound to the target CPU so that the operating system schedules the target CPU to execute the process; wherein the operating information of the target CPU meets a preset idle condition, and/or the communication efficiency between the target CPU and the current CPU meets a preset communication condition.
  • Figure 5 is a block diagram of a memory recovery device shown in this specification according to an exemplary embodiment, the device includes: an acquisition module 51, used to: in response to a memory recovery request, obtain the metadata of the memory block to be recovered, where the metadata records the compression information corresponding to each compressed memory block, and the compression information includes preset mark information or storage location information of compressed data; an allocation module 52, used to: allocate a target memory block from the memory; a recovery module 53, used to: when the metadata of the memory block to be recovered includes the preset mark information, add the page table entry information of the target memory block to the page table data to complete the recovery of the target memory block; wherein the preset mark information is used to indicate that each bit of the memory block to be recovered is zero.
  • the recovery module is also used to: when the metadata includes storage location information of compressed data, obtain the compressed data according to the storage location information, decompress the compressed data and store it in the target memory block, and then add page table entry information of the target memory block to the page table data.
  • the acquisition module is also used to: obtain the address of the page table item corresponding to the virtual address carried by the memory recovery request according to the page table data; search for the node that records the address of the page table item from a preset target search tree, and use the found node to obtain the metadata of the memory block to be recovered; wherein each node in the target search tree corresponds to the metadata of a memory block, and the address of the page table item in the metadata of the memory block corresponding to the node serves as the unique identifier of the node.
  • the apparatus is applied to a current CPU of a plurality of CPUs of a computer device; the recovery module is further used to: decompress the compressed data by the current CPU; or select a target CPU from other CPUs to create a process for decompressing the compressed data; wherein the process is bound to the target CPU so that the operating system schedules the target CPU to execute the process; and wherein the running information of the target CPU meets a preset idle condition, and/or the communication efficiency between the target CPU and the current CPU meets a preset communication condition.
  • an embodiment of this specification also provides a computer-readable storage medium on which a computer program is stored.
  • when the computer program is executed by a processor, the steps of the method embodiment described in the first aspect above are implemented.
  • An embodiment of the present specification also provides a computer device, including a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor implements the steps of the method embodiment described in the first aspect when executing the computer program.
  • the technical solution provided by the embodiments of this specification may include the following beneficial effects:
  • for the memory release solution after determining the memory block to be released, first determine whether all bits of the memory block to be released are zero; when it is determined that all bits of the memory block to be released are zero, directly clear the page table entry information corresponding to the memory block to be released, and create corresponding metadata to record that the memory block to be released is compressed, and then release the memory block to be released; it can be seen that there is no need to perform data compression operations when releasing, which improves the speed of memory release.
  • the metadata includes preset tag information, so that when the memory is restored, the target memory block can be allocated from the memory through the preset tag information, and the page table entry information of the target memory block can be directly added to the page table data, and there is no need to perform data decompression operations, which improves the speed of memory recovery.
  • an embodiment of the present specification also provides a computer program product, including a computer program, which implements the steps of the aforementioned memory release/memory recovery method embodiment when executed by a processor.
  • an embodiment of the present specification also provides a computer device, including a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor implements the steps of the memory release/memory recovery method embodiment when executing the program.
  • an embodiment of the present specification also provides a computer-readable storage medium on which a computer program is stored.
  • when the computer program is executed by a processor, the steps of the memory release/memory recovery method embodiment are implemented.
  • for the relevant parts, reference may be made to the partial description of the method embodiments.
  • the device embodiments described above are only schematic, wherein the modules described as separate components may or may not be physically separated, and the components displayed as modules may or may not be physical modules; that is, they may be located in one place, or they may be distributed over multiple network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this specification. Those of ordinary skill in the art can understand and implement it without creative effort.
  • the above embodiments can be applied to one or more computer devices, where the computer device is a device that can automatically perform numerical calculations and/or information processing according to pre-set or stored instructions, and the hardware of the computer device includes but is not limited to a microprocessor, an application specific integrated circuit (ASIC), a field-programmable gate array (FPGA), a digital signal processor (DSP), an embedded device, etc.
  • the computer device can be any electronic product that can interact with a user, such as a personal computer, a tablet computer, a smart phone, a personal digital assistant (PDA), a game console, an interactive network television (Internet Protocol Television, IPTV), a smart wearable device, etc.
  • the computer device may also include a network device and/or a user device.
  • the network device includes, but is not limited to, a single network server, a server group consisting of multiple network servers, or a cloud consisting of a large number of hosts or network servers based on cloud computing.
  • the network where the computer device is located includes but is not limited to the Internet, a wide area network, a metropolitan area network, a local area network, a virtual private network (VPN), etc.
  • the step division of the above methods is only for clarity of description. When implemented, steps may be combined into one step, or a step may be split into multiple steps; as long as they include the same logical relationship, they are all within the protection scope of this patent. Adding insignificant modifications to the algorithm or process, or introducing insignificant designs, without changing the core design of the algorithm and process, is within the protection scope of this application.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • Computer Security & Cryptography (AREA)
  • Memory System Of A Hierarchy Structure (AREA)

Abstract

This specification provides a memory release method, a memory recovery method, an apparatus, a computer device, and a storage medium. The memory includes multiple memory blocks, and the method includes: determining a memory block to be released in the memory; determining whether each bit of the memory block to be released is zero; if so, clearing the page table entry information corresponding to the memory block to be released, and creating metadata corresponding to the memory block to be released, and then releasing the memory block to be released; wherein the metadata includes preset mark information, and the preset mark information is used to indicate that each bit of the memory block to be released is zero.

Description

Memory release method, memory recovery method, apparatus, computer device and storage medium
This application claims priority to the Chinese patent application No. 202211409950.0, filed with the China Patent Office on November 10, 2022, entitled "Memory Release and Memory Recovery Methods, Apparatus, Computer Device and Storage Medium", the entire contents of which are incorporated herein by reference.
Technical Field
This specification relates to the field of computer technology, and in particular to memory release and memory recovery methods, apparatus, computer devices, and storage media.
Background
For each process, the operating system allocates a certain number of memory blocks and creates corresponding page table data for the process, which records information about each memory block allocated to that process.
To use memory flexibly, for the memory blocks already allocated to processes, the memory management module of the operating system also records the hot/cold state information of these memory blocks. Memory blocks in a cold state are released through certain processing mechanisms to free up space and improve resource utilization.
For example, a data compression algorithm may be invoked to compress the data in a cold memory block; the compressed data is stored in a specific storage space, and the memory block can then be released and reused for allocation. Before a memory block is released, the page table entry information corresponding to the memory block must be cleared in the page table data of the process to which the memory block belongs, and the compression information of the memory block, such as the storage location information of the compressed data, must be recorded in the metadata used to manage the compressed memory blocks.
Subsequently, if the process needs to access the data of that memory block, the operating system can detect the exception through the page table data, retrieve and decompress the compressed data according to the metadata record, and allocate a free memory block to store the decompressed data for the process to access.
It can be seen that the above solution needs to invoke a data compression algorithm to compress the data in the memory block, and executing the data compression algorithm incurs a certain overhead. Based on this, improvement is needed to increase the speed of memory release. Similarly, during memory recovery, a data decompression algorithm also needs to be invoked, so the memory recovery solution also needs to be improved to increase the speed of memory recovery.
Summary
To overcome the problems in the related art, this specification provides memory release and memory recovery methods, apparatus, computer devices, and storage media.
According to a first aspect of the embodiments of this specification, a memory release method is provided, the memory including multiple memory blocks, the method including: determining a memory block to be released in the memory; determining whether each bit of the memory block to be released is zero; if so, clearing the page table entry information corresponding to the memory block to be released, and creating metadata corresponding to the memory block to be released, and then releasing the memory block to be released; wherein the metadata includes preset mark information, and the preset mark information is used to indicate that each bit of the memory block to be released is zero.
According to a second aspect of the embodiments of this specification, a memory recovery method is provided, the method including: in response to a memory recovery request, obtaining metadata of a memory block to be recovered, the metadata including preset mark information or storage location information of compressed data; allocating a target memory block from the memory; if the metadata includes the preset mark information, adding the page table entry information of the target memory block to the page table data to complete the recovery of the target memory block; wherein the preset mark information is used to indicate that each bit of the memory block to be recovered is zero.
According to a third aspect of the embodiments of this specification, a memory release apparatus is provided, the memory including multiple memory blocks, the apparatus including: a memory block determination module, used to determine a memory block to be released in the memory; a bit determination module, used to determine whether each bit of the memory block to be released is zero; a release module, used to, when it is determined that each bit of the memory block to be released is zero, clear the page table entry information corresponding to the memory block to be released, create metadata corresponding to the memory block to be released, and then release the memory block to be released; wherein the metadata includes preset mark information, and the preset mark information is used to indicate that each bit of the memory block to be released is zero.
According to a fourth aspect of the embodiments of this specification, a memory recovery apparatus is provided, the apparatus including: a memory block determination module, used to, in response to a memory recovery request, obtain metadata of a memory block to be recovered, the metadata including preset mark information or storage location information of compressed data; an allocation module, used to allocate a target memory block from the memory; a recovery module, used to, when the metadata includes the preset mark information, add the page table entry information of the target memory block to the page table data to complete the recovery of the target memory block; wherein the preset mark information is used to indicate that each bit of the memory block to be recovered is zero.
According to a fifth aspect of the embodiments of this specification, a computer-readable storage medium is provided, on which a computer program is stored; when the computer program is executed by a processor, the steps of the method embodiment of the first aspect are implemented.
According to a sixth aspect of the embodiments of this specification, a computer device is provided, including a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor implements the steps of the method embodiment of the first aspect when executing the computer program.
It should be understood that the above general description and the following detailed description are merely exemplary and explanatory, and do not limit this specification.
Brief Description of the Drawings
In the drawings, unless otherwise specified, the same reference numerals throughout the drawings denote the same or similar components or elements. The drawings are not necessarily drawn to scale. It should be understood that the drawings depict only some embodiments disclosed in this application and should not be regarded as limiting the scope of this application.
The drawings herein are incorporated into and constitute a part of this specification, illustrate embodiments consistent with this specification, and together with the specification serve to explain the principles of this specification.
FIG. 1 is a schematic diagram of page table data shown in this specification according to an exemplary embodiment.
FIG. 2A is a flowchart of a memory release method shown in this specification according to an exemplary embodiment.
FIG. 2B is a schematic diagram of a memory shown in this specification according to an exemplary embodiment.
FIG. 2C is a schematic diagram of a NUMA architecture shown in this specification according to an exemplary embodiment.
FIG. 2D is a schematic diagram of a target search tree shown in this specification according to an exemplary embodiment.
FIG. 2E is a flowchart of a memory recovery method shown in this specification according to an exemplary embodiment.
FIG. 3 is a block diagram of the computer device where a memory release apparatus/memory recovery apparatus of this specification is located, shown according to an exemplary embodiment.
FIG. 4 is a block diagram of a memory release apparatus shown in this specification according to an exemplary embodiment.
FIG. 5 is a block diagram of a memory recovery apparatus shown in this specification according to an exemplary embodiment.
Detailed Description
To make the purposes, technical solutions, and advantages of this application clearer, the technical solutions of this application will be described clearly and completely below with reference to specific embodiments of this application and the corresponding drawings. Obviously, the described embodiments are only some of the embodiments of this application, not all of them. Based on the embodiments in this application, all other embodiments obtained by those of ordinary skill in the art without creative effort fall within the scope of protection of this application.
Exemplary embodiments will be described in detail here, examples of which are shown in the drawings. When the following description refers to the drawings, unless otherwise indicated, the same numbers in different drawings denote the same or similar elements. The implementations described in the following exemplary embodiments do not represent all implementations consistent with this specification. Rather, they are merely examples of apparatus and methods consistent with some aspects of this specification as detailed in the appended claims.
The terms used in this specification are for the purpose of describing particular embodiments only and are not intended to limit this specification. The singular forms "a", "the", and "said" used in this specification and the appended claims are also intended to include the plural forms, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and includes any or all possible combinations of one or more of the associated listed items.
It should be understood that although the terms first, second, third, etc. may be used in this specification to describe various information, such information should not be limited to these terms. These terms are only used to distinguish information of the same type from one another. For example, without departing from the scope of this specification, first information may also be referred to as second information, and similarly, second information may also be referred to as first information. Depending on the context, the word "if" as used herein may be interpreted as "when", "upon", or "in response to determining".
When a process runs, the operating system allocates a virtual address space and a physical address space to the process, and creates a page table corresponding to the process; the page table records the mapping relationship between the virtual address space and the physical address space. The operating system also maintains metadata for the memory it manages, which is used for memory management. During memory swapping, since the storage location of the process's data changes, the page table of the process needs to be updated; the change in storage location also changes the state of the memory block, so the metadata needs to be updated as well.
The page table is a concept of virtual memory technology. To give programs more available memory and expand physical memory into larger logical memory, the operating system uses virtual memory technology. It abstracts physical memory into address spaces: the operating system allocates an independent set of virtual addresses to each process, and the virtual addresses of different processes are mapped to the physical addresses of different memory. When a program accesses a virtual address, the operating system translates it into the corresponding physical address. Two address concepts are involved here: the memory address used by a program is called a virtual address (Virtual Memory Address, VA); the space address that actually exists in the hardware is called a physical address (Physical Memory Address, PA).
Virtual addresses are mapped to physical addresses through the page table. The page table is stored in memory, and the translation from virtual memory to physical memory is implemented by the MMU (Memory Management Unit) of the CPU (Central Processing Unit). When the virtual address that a process wants to access cannot be found in the page table, the system generates a page fault exception, enters the system kernel space to allocate physical memory, updates the process's page table, and finally returns to user space to resume the execution of the process.
As shown in FIG. 1, which is a schematic diagram of page table data shown in this specification according to an exemplary embodiment, in the related art, the memory management unit of the operating system divides the memory at a set management granularity, and each unit of management granularity may be called a page, or a block. In this embodiment, taking the case where the pages allocated to the process are 0 to N as an example, the page table includes N page table entries, each of which represents the correspondence between the virtual address and the physical address of one page. Thus, the entire page table records the relationship among the process's virtual address space, the page table, and the physical address space, and a given virtual address of the process can be mapped to the corresponding physical address through the page table.
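The page-table lookup described above can be sketched as follows. This is a hedged, simplified model: the page size, the address values, and the `translate` function are illustrative, and a real MMU walks a multi-level table in hardware.

```python
# Illustrative model of a page table: it maps virtual page numbers to physical
# page base addresses; a missing entry models the page fault case.

PAGE_SIZE = 4096

# virtual page number -> physical page base address (None = no mapping)
page_table = {0: 0x10000, 1: 0x2F000, 2: None}

def translate(va: int) -> int:
    vpn, offset = divmod(va, PAGE_SIZE)
    pa_base = page_table.get(vpn)
    if pa_base is None:
        # in a real system this is where the page fault exception is raised
        raise LookupError("page fault: no mapping for VA 0x%x" % va)
    return pa_base + offset

print(hex(translate(0x1008)))   # VA in page 1, offset 8
```

When the translation fails, the kernel's fault handler allocates physical memory, updates the page table, and resumes the process, mirroring the flow described in the preceding paragraph.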
为了解决内存不够用的问题,可以通过压缩内存的方法来处理内存中不频繁使用的内存页(通常称为冷页),从而减少内存的使用量。原本分配给进程的某个内存页经过压缩,压缩的数据存储在内存中的指定位置。在发生内存压缩时,被压缩内存块对应的页表项信息需要更新。
另外,操作***还维护有内存块的元数据,元数据的类型有多种,根据不同场景,元数据可以包括但不限于表示内存块是否分配的元数据、内存的总元数据或表示内存块的冷热状态元数据、表示内存块所属进程的元数据、表示内存块所属的进程的内存分配情况的元数据等等。以虚拟机场景为例,一些方案中计算机设备专用于虚拟机,对虚拟机的内存分配方案中,对各个虚拟机配置了固定的虚拟地址空间,为了便于通过物理地址查询虚拟地址,且提升查询效率,在页表数据的基础上,还额外创建了用于表示内存分配情况的元数据mmap(memory map,内存的虚拟地址与物理地址的映射关系),通过该元数据可以实现虚拟地址与物理地址的双向查询,可提升查询效率。当内存压缩时,被压缩内存页的各种元数据也需要更新。
当进程通过虚拟地址访问到已被压缩的内存页时,由于压缩时页表数据中该内存页的页表信息被清空,因此触发缺页异常,操作***则进行内存块解压处理,具体是查找出被压缩的数据,分配新的物理内存页,将解压的数据存储至新的内存页中,更新页表数据,将该新的内存页的物理地址与虚拟地址进行映射,使进程恢复访问所需的内存页。
相关技术中,针对待压缩内存块,内存压缩方案直接调用数据压缩算法对其存储的数据进行压缩,数据压缩算法会有一定的开销。基于此,需要对内存压缩方案进行改进,以提升内存压缩的速度。同理,在内存解压时,也需要调用数据解压算法执行解压操作,基于此,也需要对内存解压方案进行改进,以提升内存解压的速度。
如图2A所示,是本说明书根据一示例性实施例示出的一种内存释放方法的示意图,包括如下步骤202至步骤206。
在步骤202中,响应于内存释放请求,确定待释放内存块。
在步骤204中,确定所述待释放内存块的各个比特位是否都为零。
在步骤206中,若是,将所述待释放内存块对应的页表项信息清空,以及创建与所述待释放内存块对应的元数据后,释放所述待释放内存块;其中,所述元数据包括预设标记信息,所述预设标记信息用于表示所述待释放内存块的各个比特位都为零。
本实施例方法可以应用于任意计算机设备的操作***中,可以用于对内存中已分配的内存块进行释放。
在一些例子中,计算机设备可以采用传统的内存管理架构,即由操作***管理整个内存。在另一些场景中,例如虚拟机场景下,计算机设备可以采用预留内存的内存分配架构,如图2B所示,是本说明书根据一示例性实施例示出的预留内存场景的示意图,在该架构中,宿主机的内存包括两个存储空间,如图2B中采用不同填充方式示出了内存的两个存储空间,包括供内核使用的非预留存储空间a(图中采用斜线填充),以及供虚拟机使用的预留存储空间b(图中采用竖线及灰度填充)。也即是,非预留存储空间a用于供图中的内核使用,运行于操作***上的应用(如图中示例的应用1至应用3)可使用该非预留存储空间a。而预留存储空间b则可供虚拟机(VM,Virtual Machine)使用,如图中示出的VM1至VMn共n个虚拟机。两个存储空间可以采用不同的管理粒度,即对内存的划分方式可以是不同的。图2B中为了示例方便,两个存储空间在图中是以连续的方式进行示意的。可以理解,实际应用中,两个存储空间可以是非连续的。
预留存储空间占据内存的大部分,且对于宿主机内核不可用,可以在操作***的内核中***一模块专门用于对预留存储空间进行管理。为了方便管理这一系列的内存同时避免大量元数据对内存的占用,以及考虑到为虚拟机分配内存时往往最少也是数百MB(MByte,兆字节)起,因此预留存储空间采用较大的粒度划分,例如将预留存储空间划分为2MB等大小的内存块(Memory Section,ms)进行管理;在一些场景中,更大的粒度也普遍被使用,如1GB(GigaByte,吉字节)等都是可选的,本实施例对此不进行限定。
应用于预留内存场景时,操作***可以采用不同的模块分别管理预留存储空间与非预留存储空间,例如,本实施例方法可以应用于操作***中管理预留存储空间的模块,用于处理预留存储空间中内存块的压缩。
在另一些例子中,计算机设备可以是包括多个物理CPU的设备,根据需要可以采用非一致内存访问(Non Uniform Memory Access Architecture,NUMA)架构,NUMA架构包括至少两个NUMA节点(NUMA node),如图2C所示,以两个NUMA节点作为示例,宿主机可以包括NUMA节点1和NUMA节点2。在NUMA架构下,宿主机的多个物理CPU以及多个内存从属于不同的NUMA节点。每个NUMA节点均包括至少一个物理CPU与至少一个物理内存,图2C以NUMA节点包括一个物理CPU和一个物理内存为例。在NUMA节点内部,物理CPU与物理内存之间使用集成内存控制器总线(Integrated Memory Controller Bus,IMC Bus)进行通信,而NUMA节点之间则使用快速通道互联(Quick Path Interconnect,QPI)进行通信。由于QPI的延迟高于IMC Bus的延迟,因此宿主机上物理CPU对内存的访问就有了远近之别(remote/local)。物理CPU访问本节点的物理内存速度较快,物理CPU访问其他NUMA节点的物理内存速度较慢。
在NUMA架构场景中,本实施例的内存可以包括上述任一物理内存。例如,NUMA架构中任一物理内存还可以采用预留内存架构。基于此,本实施例所管理的存储空间还可以是指NUMA架构中任一物理内存中的预留存储空间。
可以理解,实际应用中,计算机设备还可以采用其他架构,根据实际需要,本实施例所指的内存根据实际应用场景可以有多种实现方式,在此不再一一列举。
其中,步骤202中的内存释放请求可以通过多种方式获取到。在一些例子中,操作***的内存管理模块可以具有内存老化管理功能,该功能管理内存中各个内存块的冷热变化情况,并根据需要维护表示内存冷热状态的元数据:可以通过扫描内存中各个内存块的使用情况,确定各个内存块的冷热状态,例如,冷页集合记录处于冷状态的内存块,热页集合记录处于热状态的内存块。作为一个示例,本实施例方案具体可以应用于操作***中的内存压缩模块,该内存压缩模块按照设定条件启动,例如可以在接收到用户的启动指令时,确定接收到内存释放请求;也可以周期性地启动,或者在接收到操作***其他模块的启动指令时,确定接收到内存释放请求等等。接着,可以从冷页集合中确定需要释放的内存块,本实施例称为待释放内存块。示例性的,在批处理场景中,可以从冷页集合中一次取出多个内存块,各个内存块可以采用串行的方式处理,即每次针对一个内存块,执行本实施例的方法进行释放。当然,并行处理也是可选的,本实施例对此不进行限定。
对于待释放内存块,本实施例未将其存储的数据直接进行压缩,而是先确定所述待释放内存块的各个比特位是否都为零,在确定都为零的情况下,本实施例无需进行数据压缩操作。如虚拟机等应用场景,宿主机设备上可以运行一个或多个虚拟机,虚拟机对于宿主机来说是一个进程,但虚拟机本身是一个虚拟的计算机设备,而一台计算机设备的内存使用率通常情况下不会达到100%,即通常情况下虚拟机对分配给其的内存,会剩余一些存储空间,而这些未使用的存储空间,其各个比特位都为零,这是由于宿主机在创建虚拟机、向其分配一定的存储空间时,会对所分配的存储空间进行初始化,初始化的过程即对存储空间的各个比特位置零。
因此,本实施例在确定所述待释放内存块的各个比特位都为零的情况下,直接将所述待释放内存块对应的页表项信息清空,以及在元数据中记录所述待释放内存块的压缩信息后,释放所述待释放内存块。若不是都为零,则对所述待释放内存块中的数据进行压缩后,存储所述压缩数据,将所述待释放内存块对应的页表项信息清空,以及创建对应的元数据以记录所述待释放内存块被压缩,之后释放所述待释放内存块。
为了提升确定内存块的各个比特位是否都为零的速度,在一些例子中,所述确定所述待释放内存块的各个比特位是否都为零,包括:在未更改所述待释放内存块的读写权限的情况下,对所述待释放内存块各个比特位是否都为零进行预判断;若预判断出所述待释放内存块的各个比特位都为零,将所述待释放内存块的读写权限更改为只读后,再判断所述待释放内存块的各个比特位是否都为零。
本实施例中,考虑到在确定内存块的各个比特位都为零的过程需要耗费一定时间,且在确定的过程中内存块还可能发生更新;基于此,本实施例是在未更改所述待释放内存块的读写权限的情况下,先对所述待释放内存块各个字节是否都为零进行预判断,若预判断出所述待释放内存块的各个比特位不是都为零,确定所述待释放内存块的各个比特位不是都为零。在预判断出所述待释放内存块的各个比特位都为零,将所述待释放内存块的读写权限更改为只读后,再进一步判断所述待释放内存块的各个比特位是否都为零。一方面,先判断出待释放内存块的各个比特位不是都为零的情况,且预判断无需更改内存块的权限,可以减少开销;若预判断出待释放内存块的各个比特位都为零,再更改为只读权限,进一步做准确的判断,因此提升了确定的效率,也可以得到准确的判断结果。
预判断的目标是先确定待释放内存块的各个比特位不是都为零,实际应用中,预判断可以采用多种实现方式,在一些例子中,所述对所述待释放内存块的各个比特位是否都为零进行预判断,包括:从所述待释放内存块中选取n个比特位,确定所述n个比特位是否都为零;其中,所述n为正整数;若否,预判断出所述待释放内存块的各个比特位不是都为零;若是,对所述待释放内存块除所述n个比特位之外的其他比特位是否都为零进行预判断。
本实施例中,为了提升预判断的速度,可以先选取部分比特位先进行判断,只要其中有一个不为零,即可预判断出所述待释放内存块的各个比特位不是都为零。其中,选取的个数n可以根据需要灵活配置,例如可以基于内存块的大小和实际的开销需求而确定,本实施例对此不进行限定。
在一些例子中,所述从所述待释放内存块中选取n个比特位可以有多种方式,例如可以包括如下任一:从所述待释放内存块的最高位开始选取n个比特位;从所述待释放内存块的最低位开始选取n个比特位;或,在所述待释放内存块中随机选取n个比特位。
其中,从所述待释放内存块的最高位开始选取n个比特位或从所述待释放内存块的最低位开始选取n个比特位,这两种选取方式较为快速;而在所述待释放内存块中随机选取n个比特位的方式,则可以提升做出准确的预判断的概率。
在其他例子中,判断是否为零的方式,可以是将每个比特位存储的信息与零进行比较。或者,还可以采用编码的方式进行判断,例如,可以将取出的n个比特位的信息进行编码,得到第一编码信息,将该第一编码信息与预设的第二编码信息进行比较,该第二编码信息是对n个零进行编码得到的信息,基于此,比较第一编码信息与第二编码信息是否相同,即可确定从待释放内存块中取出的n个比特位是否都为零。实际应用中,可以采用多种编码方式,例如已有的各种哈希(hash)算法,包括但不限于MD5(Message-Digest Algorithm,消息摘要算法)、SHA(Secure Hash Algorithm,安全散列算法)等等。或者,自定义的编码算法也是可选的,编码算法对任意输入能够产生唯一的编码信息即可。
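上述"先采样预判断、命中后再全量判断"的两阶段零页检测,可用如下C代码草图示意(采样数n、随机采样策略均为示例性假设,且为简洁起见以字节为单位检查——各字节为零等价于各比特位为零;草图未体现将内存块改为只读权限的步骤):

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <stdlib.h>

/* 预判断:随机抽取n个采样字节,任一非零即可断定内存块不是零页 */
static bool prescan_maybe_zero(const uint8_t *blk, size_t len, size_t n)
{
    for (size_t i = 0; i < n && i < len; i++) {
        size_t idx = (size_t)rand() % len;   /* 随机选取采样位置 */
        if (blk[idx] != 0)
            return false;                    /* 直接断定非零页 */
    }
    return true;                             /* 可疑的零页,需全量判断 */
}

/* 全量判断:逐字节检查整个内存块 */
static bool full_is_zero(const uint8_t *blk, size_t len)
{
    for (size_t i = 0; i < len; i++)
        if (blk[i] != 0)
            return false;
    return true;
}

/* 两阶段零页判断;实际实现中第二阶段应在内存块改为只读后进行 */
bool is_zero_page(const uint8_t *blk, size_t len)
{
    if (!prescan_maybe_zero(blk, len, 16))   /* n=16为假设的采样数 */
        return false;
    return full_is_zero(blk, len);
}
```

预判断失败即可提前返回,避免对明显的非零页执行全量扫描。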
实际应用中,操作***为每个进程分配独立的一套虚拟地址,并利用页表将不同进程的虚拟地址和不同内存的物理地址映射起来。每个进程对应一份页表数据。本实施例中,每个进程的页表数据包括多个页表项信息,每个页表项对应一个内存块。实际应用中,为了减少页表数据占用的存储空间以及快速查找虚拟地址与物理地址的映射关系,一些操作***还采用了多级页表的解决方案,即每个进程的页表数据可以按级别包括多份目录项。以常见的四级页表为例,页表数据包括如下四份记录有页表目录项的数据:全局页目录项PGD(Page Global Directory);上层页目录项PUD(Page Upper Directory);中间页目录项PMD(Page Middle Directory);页表项PTE(Page Table Entry)。
上述四级页表中所具体记录的信息以及通过页表查询虚拟地址与物理地址的映射关系的过程可以参考相关技术,本实施例在此不进行赘述。
因此,本实施例中待释放内存块被释放后,需要将所述待释放内存块对应的页表项信息清空,例如,进程的页表数据中,其中一个页表项记录的是虚拟地址VA1对应内存块的物理地址PA1,该PA1对应的内存块即本实施例的待释放内存块,本实施例的清空,即在页表项中将内存块的物理地址PA1清除。如上所述,一些例子中一些页表数据可以包括四级数据,可以根据实际的内存管理粒度对相应级别的数据进行更新,实际应用中可以根据需要灵活配置,本实施例对此不进行限定。
本实施例中还在元数据中记录所述待释放内存块的压缩信息。其中,该元数据用于管理各个被压缩的内存块的信息,该元数据中记录的信息可以包括但不限于:待释放内存块对应的虚拟地址(可以包括用户态的虚拟地址及内核态的虚拟地址)、压缩数据的存储位置的物理地址或压缩数据的大小信息等等。其中,本实施例中,在待释放内存块的各个比特位都为零的情况下,还可包括预设标记信息,该预设标记信息用于指示待释放内存块的各个比特位都为零。
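正文所述用于管理被压缩内存块的元数据,可用如下示意性的C结构体表示(结构体名与字段名均为假设,仅对应上文列举的信息项,并非本申请的具体实现):

```c
#include <stdint.h>
#include <stddef.h>
#include <stdbool.h>

/* 示意性的压缩元数据,对应正文所列信息项 */
struct compress_item {
    uint64_t user_vaddr;    /* 待释放内存块对应的用户态虚拟地址 */
    uint64_t kern_vaddr;    /* 待释放内存块对应的内核态虚拟地址 */
    uint64_t pte_addr;      /* 对应页表项的地址(可作为查找树的关键字) */
    uint64_t zdata_paddr;   /* 压缩数据存储位置的物理地址 */
    size_t   zdata_len;     /* 压缩数据的大小 */
    bool     zero_page;     /* 预设标记信息:各个比特位都为零 */
};
```

零页场景下仅需置位 `zero_page`,无需记录压缩数据的存储位置与长度。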
在一些例子中,对于待释放内存块中的数据不是都为零的情况,对所述待释放内存块中的数据进行压缩后,存储所述压缩数据,将所述待释放内存块对应的页表项信息清空,以及创建与所述待释放内存块对应的元数据后,释放所述待释放内存块;其中,所述元数据包括所述压缩数据的存储位置信息。本实施例中,对于待释放内存块不是都为零的情况,则需要执行对所述待释放内存块中的数据进行压缩后,存储所述压缩数据的操作;以及前述的将所述待释放内存块对应的页表项信息清空,以及创建对应的元数据,其中包括所述压缩数据的存储位置信息。
其中,压缩数据的存储方式,根据不同场景有不同的实现方式。例如,传统的内存管理方案中,操作***还有专门用于管理小块内存的小块内存管理模块,可以将压缩数据存储至更小管理粒度的存储空间中。或者,在预留内存等场景下,预留存储空间的内存管理方案中,内存包括多个内存块,对每个所述内存块还划分了多个内存段;将内存块设计为较大的粒度,从而减少内存块的块元数据的占用;另外,还可以对内存块中的内存段进行具体管理。由于一个内存块压缩后数据变小,因此可以通过内存段存储压缩数据。
在一些例子中,所述存储所述压缩数据的步骤之前,还包括:确定所述压缩数据的大小满足预设释放条件。本实施例中,还可能出现内存块中数据压缩后,压缩数据仍然较大的情况,基于此,本实施例还可以先判断压缩数据的大小是否满足预设释放条件,若满足则再执行存储压缩数据的操作,若不满足,则不释放所述待释放内存块,即待释放内存块释放失败。其中,预设释放条件可以根据实际需要灵活配置,例如,可以是压缩数据的大小小于或等于预设大小阈值,也可以是压缩数据的大小占内存块大小的比例小于或等于预设比例阈值等。
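预设释放条件的判断可用如下C代码草图示意(以"压缩后大小不超过原内存块大小的设定比例"为例,比例阈值为假设值,实际条件可按正文所述灵活配置):

```c
#include <stdbool.h>
#include <stddef.h>

/* 假设的比例阈值:压缩后须不超过原大小的1/2才认为压缩有收益 */
#define RELEASE_RATIO_NUM 1
#define RELEASE_RATIO_DEN 2

/* tlen为压缩结果大小,ms_size为原内存块大小 */
bool meets_release_condition(size_t tlen, size_t ms_size)
{
    /* 极端情况:压缩结果不小于原始数据,视为压缩失败 */
    if (tlen >= ms_size)
        return false;
    return tlen * RELEASE_RATIO_DEN <= ms_size * RELEASE_RATIO_NUM;
}
```

不满足条件时按正文流程不释放该内存块,并将其恢复为可读写状态。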
在完成上述将所述待释放内存块对应的页表项信息清空,以及在所述元数据中记录所述待释放内存块的压缩信息操作后,可以释放所述待释放内存块,该待释放内存块被内存管理模块回收,用于重新分配。可以理解,由于该待释放内存块被释放,其分配状态需要更新,实际应用中还可以包括该待释放内存块的其他元数据的更新,例如表示内存块是否分配的元数据、内存的总元数据、内存块的冷热状态元数据、内存中各个内存块所属进程的内存分配数据mmap等等。
元数据中可能记录很多个已压缩内存块,为了便于在内存恢复时快速查找出需要恢复的内存块,在一些例子中,元数据中还包括所述待释放内存块对应的页表项的地址;所述方法还包括:根据所述元数据维护目标查找树,所述目标查找树中每个节点对应一个内存块的元数据,节点对应的内存块的元数据中页表项的地址作为所述节点的唯一标识。
其中,目标查找树用于快速查找,树的结构可以有多种选择,例如红黑树等,实际应用中可以根据需要灵活配置。本实施例中,需要为各个内存块确定唯一的关键字,基于此,从表示内存块被压缩的信息中,确定了页表项的地址作为关键字,目标查找树中每个节点对应一个内存块的元数据,各个节点记录所述节点对应的内存块的元数据中页表项的地址作为唯一标识,从而在恢复时,通过页表项的标识快速查找出需要恢复的内存块及其信息。如图2D所示,是本说明书根据一示例性实施例示出的一种目标查找树的示意图,该树包括7个节点(N1至N7),以节点N2和节点N3为例,节点N2链接至某个内存块的压缩信息K2,且记录该压缩信息K2中的页表项的地址;节点N3链接至某个内存块的压缩信息K3,且记录该压缩信息K3中的页表项的地址。其中,节点链接至压缩信息的实现,可以是节点中存储压缩信息的指针。在另一些例子中,还可以是将各个元数据直接组织成树,即将树的节点结构体直接包含在元数据中也是可选的。
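以页表项地址为唯一关键字的目标查找树,可用如下C代码草图示意(为简洁起见以普通二叉查找树代替正文提到的红黑树,结构体与函数名均为假设;节点通过指针链接至元数据,对应图2D的组织方式):

```c
#include <stdint.h>
#include <stdlib.h>

/* 树节点:以页表项地址pte_addr为关键字,item指向压缩元数据 */
struct tree_node {
    uint64_t pte_addr;
    void *item;
    struct tree_node *left, *right;
};

/* 按关键字***节点(忽略重复关键字) */
struct tree_node *tree_insert(struct tree_node *root, uint64_t key, void *item)
{
    if (!root) {
        struct tree_node *n = calloc(1, sizeof(*n));
        n->pte_addr = key;
        n->item = item;
        return n;
    }
    if (key < root->pte_addr)
        root->left = tree_insert(root->left, key, item);
    else if (key > root->pte_addr)
        root->right = tree_insert(root->right, key, item);
    return root;
}

/* 内存恢复时按页表项地址快速查找对应的元数据 */
void *tree_lookup(const struct tree_node *root, uint64_t key)
{
    while (root) {
        if (key == root->pte_addr)
            return root->item;
        root = key < root->pte_addr ? root->left : root->right;
    }
    return NULL;
}
```

实际实现中选用红黑树等自平衡结构可保证最坏情况下的查找效率。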
实际应用中,计算机设备可以是单核CPU设备,也可以是多核CPU设备。在多核CPU场景下,还可以提升压缩处理的效率,例如,本实施例方法应用于计算机设备的多个CPU的当前CPU时,所述对所述待释放内存块中存储的数据进行压缩,可以包括:由所述当前CPU对所述待释放内存块中存储的数据进行压缩;或,从其他CPU中选取目标CPU,创建用于对所述待释放内存块中存储的数据进行压缩的进程;其中,所述进程与所述目标CPU绑定,以使操作***调度所述目标CPU执行所述进程;其中,所述目标CPU的运行信息满足预设空闲条件,和/或,所述目标CPU与所述当前CPU之间的通信效率满足预设通信条件。实际应用中,上述预设空闲条件可以根据需要灵活配置,预设通信条件也可以根据需要灵活配置。例如,从是否是相同CPU核、是否是相同socket(套接字,网络中不同主机上的应用进程之间进行双向通信的端点的抽象)或是否位于相同的NUMA节点等多个维度确定。作为一个例子,如果当前CPU处于空闲状态则采用当前CPU,若未处于空闲状态,则基于通信效率进行确定,例如选取同一个socket上的CPU,因为它们会共享同一个缓存行(cache line),通信效率相对更高;再次选取同一个NUMA节点上的CPU,最后选取跨NUMA的空闲CPU等。
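上述按亲和度从高到低选取目标CPU的策略(同一socket优先于同一NUMA节点,再优先于跨NUMA),可用如下C代码草图示意(cpu_info结构、空闲判断与优先级次序均为示例性假设,未体现进程与CPU绑定的细节):

```c
#include <stdbool.h>
#include <stddef.h>

struct cpu_info {
    int id;
    int socket_id;
    int numa_node;
    bool idle;       /* 运行信息是否满足预设空闲条件 */
};

/* 返回选中的目标CPU编号;无空闲CPU时退回当前CPU */
int pick_target_cpu(const struct cpu_info *cpus, size_t n,
                    const struct cpu_info *cur)
{
    if (cur->idle)
        return cur->id;                        /* 当前CPU空闲则直接使用 */
    int same_socket = -1, same_numa = -1, any_idle = -1;
    for (size_t i = 0; i < n; i++) {
        if (!cpus[i].idle || cpus[i].id == cur->id)
            continue;
        if (cpus[i].socket_id == cur->socket_id && same_socket < 0)
            same_socket = cpus[i].id;          /* 同socket:共享缓存,通信最快 */
        else if (cpus[i].numa_node == cur->numa_node && same_numa < 0)
            same_numa = cpus[i].id;            /* 同NUMA节点次之 */
        else if (any_idle < 0)
            any_idle = cpus[i].id;             /* 跨NUMA的空闲CPU兜底 */
    }
    if (same_socket >= 0) return same_socket;
    if (same_numa >= 0) return same_numa;
    if (any_idle >= 0) return any_idle;
    return cur->id;
}
```

选出目标CPU后,再按正文所述创建压缩进程并与该CPU绑定。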
如图2E所示,是本说明书根据一示例性实施例示出的一种内存恢复方法的流程图,包括如下步骤:在步骤212中,响应于内存恢复请求,获取需恢复内存块的元数据,所述元数据包括预设标记信息或压缩数据的存储位置信息;在步骤214中,从内存中分配目标内存块;在步骤216中,若所述元数据包括所述预设标记信息,在页表数据中添加所述目标内存块的页表项信息,完成所述目标内存块的恢复;其中,所述预设标记信息用于表示所述需恢复内存块的各个比特位都为零。
内存恢复,也即是,原本分配给进程的内存块由于在内存压缩时被释放,在需要时,重新分配内存块给该进程。可以理解,恢复的内存块与原本分配给进程的内存块的物理地址不一定是相同的。
其中,内存恢复请求,可以是操作***发现缺页异常时发起的。例如,前述实施例中提及,页表数据中,分配给进程的虚拟地址VA1对应的物理地址被清空,当操作***发现进程访问到该虚拟地址VA1,由于页表数据未有记录,因此触发缺页异常,即触发一内存恢复请求,从而执行本实施例的内存恢复方案。或者,也可以是操作***主动进行的内存恢复,例如操作***检测到被压缩的内存块太多,或者内存的剩余存储空间较大等等。
内存恢复请求中携带虚拟地址,在一些例子中,如前述的目标查找树的设计,可以根据所述内存恢复请求携带的虚拟地址,从页表数据中获取与所述虚拟地址对应的页表项的地址;从预设的目标查找树中查找记录所述页表项的地址的节点,利用查找出的节点获取所述需恢复内存块的元数据。
若元数据中包括所述预设标记信息,可以在页表数据中添加所述目标内存块的页表项信息,也即是,以虚拟地址VA1为例,在页表数据中写入虚拟地址VA1所对应的目标内存块的物理地址。
在另一些例子中,若所述元数据中包括压缩数据的存储位置信息,根据所述存储位置信息获取所述压缩数据,将所述压缩数据解压并存储至所述目标内存块后,在页表数据中添加所述目标内存块的页表项信息后,完成所述目标内存块的恢复。
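内存恢复时"零页直接映射、非零页先解压再映射"的分支逻辑,可用如下C代码草图示意(为便于演示,以memcpy代替真实的数据解压算法,struct item、restore_block等名称均为假设;页表项的更新未在草图中体现):

```c
#include <stdbool.h>
#include <stddef.h>
#include <string.h>

struct item {
    bool zero_page;              /* 预设标记信息 */
    const unsigned char *zdata;  /* 压缩数据的存储位置 */
    size_t zlen;                 /* 压缩数据的长度 */
};

/* 假设:以内存拷贝模拟真实的解压算法 */
static size_t decompress(const unsigned char *z, size_t zlen,
                         unsigned char *out, size_t outlen)
{
    size_t n = zlen < outlen ? zlen : outlen;
    memcpy(out, z, n);
    return n;
}

/* 恢复内存块到新分配的new_blk;返回是否执行了解压操作 */
bool restore_block(const struct item *it, unsigned char *new_blk, size_t blk_len)
{
    memset(new_blk, 0, blk_len);  /* 新分配的内存块先初始化清零 */
    if (it->zero_page)
        return false;             /* 零页:直接映射清零后的内存块即可 */
    decompress(it->zdata, it->zlen, new_blk, blk_len);
    return true;
}
```

恢复完成后,再按正文所述在页表数据中写入目标内存块的物理地址并恢复读写权限。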
在多核CPU场景下,还可以提升解压处理的效率,本实施例方法应用于计算机设备的多个CPU的当前CPU,所述将所述压缩数据解压,包括:由所述当前CPU对所述压缩数据进行解压;或,从其他CPU中选取目标CPU,创建用于对所述压缩数据进行解压的进程;其中,所述进程与所述目标CPU绑定,以使操作***调度所述目标CPU执行所述进程;其中,所述目标CPU的运行信息满足预设空闲条件,和/或,所述目标CPU与所述当前CPU之间的通信效率满足预设通信条件。目标CPU的选取方式具体可参考前述实施例,在此不再赘述。
接下来再通过如下实施例进行说明。
(1)以内存压缩为例
1、获取待压缩内存块ms(memory section)。
例如,可以从冷页集合中选取一个待压缩的内存块ms;如果没有则失败,否则继续。例如,还可以是批处理场景,可以一次获取到多个待压缩内存块,之后对各个内存块串行处理。
2、查询待换出内存块ms所属的内存分配数据mmap,方便后续将内存块的物理地址paddr转换为虚拟地址vaddr(virtual address),然后通过vaddr获得页表项pmd等。
在一实施例中,所有进程的mmap都在一份数据(例如链表等)中,因此通过ms查询mmap时需要持锁,即需要暂时锁住链表,再通过遍历查询得到该ms对应的mmap。例如,如前述实施例的批处理场景下,根据需要还可以对整份mmap数据建立副本。在本申请实施例中,对于批处理的首个待压缩内存块ms,在查询时先对整份mmap持锁,建立副本后释放锁。批处理的其他待压缩内存块则可以利用副本查询对应的mmap。
3、根据mmap,可以获取到待换出内存块ms对应的虚拟地址vaddr,以及对应的页表项pmd(以2MB粒度为例),并建立用于管理该内存块的压缩信息的元数据item。
示例性的,该item中记录的信息可以包括:ms对应的用户态的虚拟地址、ms对应的内核态的虚拟地址、ms对应的页表项地址,该ms中存储的数据被压缩后存储位置的物理地址、该ms中存储的数据被压缩后的长度等等信息,以便于后续页面压缩后使用,也用于在数据解压及内存块恢复时找到。
4、判断该待换出内存块ms是否是可疑的零页,如果是则跳转至步骤5;否则跳转至步骤9。
此步骤的判断是处于预判断阶段,并非是对整个内存块进行全量的判断,而是取内存块的部分内容进行判断。
5、读取整个内存块,判断内存块是不是零页;如果是则跳转至步骤6;否则跳转至步骤9。
此步骤的判断是对内存块的全量判断,即需要读取内存块的各个比特位。此步骤的判断仍处于预判断阶段,因为在对内存块判断时未修改该内存块的读写权限,在判断的过程中内存块可能有更新。
6、将内存块ms的读写权限改成只读,再次判断是否是零页。如果是则跳转至步骤7,否则跳转至步骤8。
7、将该内存块虚拟地址vaddr对应的页表项pmd中的页表信息清零,则进程下次访问到该虚拟地址时,操作***会通过缺页异常对该内存块中的数据进行恢复(缺页异常会执行后续实施例的数据解压及恢复流程),并且更新元数据item,在元数据中标记该内存块为零页,跳转至步骤19。
8、对该内存块ms(执行至步骤8,说明该内存块并非零页)进行压缩。
其中,需要一临时存储空间tm,用于临时保存内存块ms中数据的压缩结果。其中,考虑到极端情况下有可能压缩结果的大小大于原始数据,因此该临时存储空间tm的大小大于内存块ms大小,具体大小可以根据需要灵活设置,例如可以是内存块ms的两倍大等。在批处理场景中,对于首个待压缩内存块,可以分配该临时存储空间tm,后续该临时存储空间tm可以复用,供其他待压缩内存块的压缩操作时使用。因此,可以先判断是否存在临时存储空间tm,若未有,则分配;若有,则可复用,由于临时存储空间tm可能存储数据,对该临时存储空间tm清零(即临时存储空间tm的初始化,对各个比特位置零)。
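步骤8中临时存储空间tm的"首次分配、后续复用、复用前清零"逻辑,可用如下C代码草图示意(两倍大小、全局静态缓冲区均为示例性假设):

```c
#include <stdlib.h>
#include <string.h>

static unsigned char *tm_buf = NULL;  /* 复用的临时压缩缓冲区 */
static size_t tm_len = 0;

/* 获取临时缓冲区:首次调用时分配,之后复用;每次返回前清零 */
unsigned char *get_tmp_buffer(size_t ms_size)
{
    size_t need = ms_size * 2;        /* 假设:预留压缩结果变大的空间 */
    if (!tm_buf) {
        tm_buf = malloc(need);
        tm_len = need;
    }
    if (tm_buf)
        memset(tm_buf, 0, tm_len);    /* 复用前初始化:各个比特位置零 */
    return tm_buf;
}
```

批处理结束后可统一释放该缓冲区,避免长期占用内存。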
9、将待压缩的内存块ms的读写权限设置为只读,避免压缩过程中内存块中的数据出现改变。
10、在多CPU场景下,可以选取一CPU进行数据压缩。
数据压缩有一定的开销;在一些例子中,计算机设备只有一个CPU核(core),则可以不执行步骤10和步骤11。在计算机设备有多个CPU的场景下,可以根据需要扫描各个CPU的运行情况,判断是否存在空闲的CPU,如果不存在则选用当前CPU,并跳转至步骤12,否则跳转至步骤11。
11、如果存在,选取一空闲CPU,并创建一用于执行数据压缩的压缩线程,该线程绑定选取到的空闲CPU,操作***内核的线程调度模块在调度到该线程时,会将该线程分配给选取到的空闲CPU去执行。
其中,此处的选取规则可以通过多种方式确定,例如,可以从是否是相同CPU核、是否是相同socket(套接字,网络中不同主机上的应用进程之间进行双向通信的端点的抽象)或是否位于相同的NUMA节点等多个维度确定。作为一个例子,如果当前CPU处于空闲状态则采用当前CPU,若未处于空闲状态,其次选取同一个socket上的CPU,因为它们会共享同一个缓存行(cache line),通信效率相对更高;再次选取同一个NUMA节点上的CPU,最后选取跨NUMA的空闲CPU。
12、由选取出的CPU执行上述压缩线程。
作为例子,压缩线程中可以指定数据压缩算法,以及是否利用硬件加速等;该CPU通过执行该线程,实现对内存块ms中数据的压缩,并将压缩结果保存在临时存储空间tm中。
13、判断压缩结果的大小tlen是否满足预设释放条件;例如,是否大于原始内存块ms的大小,如果大于,则压缩失败,跳转至步骤18;否则跳转至步骤14。
实际应用中,此处的预设释放条件还可以有其他实现方式,例如,可以设定阈值,确定压缩结果的大小tlen与内存块ms的大小的差异值,根据差异值与设定阈值的大小关系,确定是否压缩失败。其中,该设定阈值表示压缩是否失败的阈值,实际应用中可以根据需要进行配置,本实施例对此不进行限定。
14、分配一满足压缩结果的大小tlen的存储空间zm;若分配失败,则跳转至步骤18;否则跳转至步骤15。
此处的分配方式,根据实际的内存管理方案可以有多种方式。例如,可以是内存中有一指定的存储空间专门用于存储压缩数据,或者,可以是内存采用特定的管理方式,例如前述小块内存管理实施例中,内存划分为多个内存块,每个内存块中还划分多个内存段,本实施例可以根据压缩结果的大小tlen分配满足该tlen的内存段。
15、将tm中的压缩结果复制到存储空间zm中。
16、更新内存块ms在页表数据中对应的页表项信息pmd,将页表项信息都清空,以便后续进程访问到的时候触发缺页异常,操作***则执行数据解压及内存块恢复的流程。
17、在管理压缩内存块的元数据item中,添加内存块ms的压缩信息后,跳转至步骤19。
此处的压缩信息,即步骤3中提及的ms对应的用户态的虚拟地址、ms对应的内核态的虚拟地址、该ms中存储的数据被压缩后存储位置的物理地址、该ms中存储的数据被压缩后的长度等等信息。
18、内存块ms压缩失败,将内存块恢复成读写状态,并返回失败原因。
19、内存块ms压缩成功,该内存块被释放,整个内存压缩过程完成。
(2)数据解压及内存块恢复
1、确定需恢复的进程a,获取该进程a的记录内存分配信息的元数据mmap,以便后续查询和更新使用。
2、从进程a的内存分配信息mmap中,选取一虚拟地址vaddr,该虚拟地址对应的物理内存块是一已被压缩的内存块。
上述实施例的步骤1和2是以操作***主动恢复为例进行说明,实际应用中,还可以是操作***在出现缺页异常时触发的。
3、根据页表数据,获取虚拟地址vaddr对应的页表项的地址,查找这个内存块对应的压缩管理元数据item。
4、分配一个内存块ms,对该ms初始化,即清零;若分配失败则退出(表示当前内存未有足够存储空间),否则继续。
5、根据item判断该压缩内存块ms是否属于零页;如果是零页,则跳转至步骤6;否则跳转至步骤7。
6、由于内存块ms是零页,无需进行数据解压,将该虚拟地址vaddr对应页表项pmd中记录的物理地址信息更改为内存块ms对应的物理地址,并将页表的权限改为可读写,跳转至步骤10。
7、扫描当前cpu的运行情况,判断是否存在空闲cpu,如果不存在则选用当前cpu,并跳转至步骤9,否则跳转至步骤8。
8、如果存在,选取和当前运行cpu亲和度最高的空闲cpu(比如先依次选取相同core、然后相同socket的、相同node的)。
9、根据先前配置的压缩算法,以及是否利用硬件加速等,利用选定的cpu,按照item中记录的压缩内存zm的位置和长度,进行解压,并将解压后的结果保存在预分配的内存块ms中。
10、将vaddr对应页表项pmd的物理地址改为ms对应的地址,并将页表属性改为读写。
11、释放保存压缩内存的小内存块zm。
12、至此,被压缩页面已经从压缩内存中恢复出来,释放管理元数据item,页面恢复过程结束。
与前述内存释放方法和内存恢复方法的实施例相对应,本说明书还提供了内存释放装置/内存恢复装置及其所应用的终端的实施例。
本说明书内存释放装置/内存恢复装置的实施例可以应用在计算机设备上,例如服务器或终端设备。装置实施例可以通过软件实现,也可以通过硬件或者软硬件结合的方式实现。以软件实现为例,作为一个逻辑意义上的装置,是通过其所在处理器将非易失性存储器中对应的计算机程序指令读取到内存中运行形成的。从硬件层面而言,如图3所示,为本说明书内存释放装置/内存恢复装置所在计算机设备的一种硬件结构图,除了图3所示的处理器310、内存330、网络接口320、以及非易失性存储器340之外,实施例中内存释放装置/内存恢复装置331所在的计算机设备,通常根据该计算机设备的实际功能,还可以包括其他硬件,对此不再赘述。
如图4所示,图4是本说明书根据一示例性实施例示出的一种内存释放装置的框图,所述装置包括:内存块确定模块41,用于:响应于内存释放请求,确定待释放内存块;比特位确定模块42,用于:确定所述待释放内存块的各个比特位是否都为零;释放模块43,用于:在确定所述待释放内存块的各个比特位都为零的情况下,将所述待释放内存块对应的页表项信息清空,以及创建与所述待释放内存块对应的元数据后,释放所述待释放内存块;其中,所述元数据包括预设标记信息,所述预设标记信息用于表示所述待释放内存块的各个比特位都为零。
在一些例子中,所述释放模块43,还用于:在确定所述待释放内存块的各个比特位不是都为零的情况下,对所述待释放内存块中的数据进行压缩后,存储所述压缩数据,将所述待释放内存块对应的页表项信息清空,以及创建与所述待释放内存块对应的元数据后,释放所述待释放内存块;其中,所述元数据包括所述压缩数据的存储位置信息。
在一些例子中,所述比特位确定模块,还用于:在未更改所述待释放内存块的读写权限的情况下,对所述待释放内存块各个比特位是否都为零进行预判断;若预判断出所述待释放内存块的各个比特位都为零,将所述待释放内存块的读写权限更改为只读后,再判断所述待释放内存块的各个比特位是否都为零。
在一些例子中,所述比特位确定模块,还用于:从所述待释放内存块中选取n个比特位,确定所述n个比特位是否都为零;其中,所述n为正整数;若否,预判断出所述待释放内存块的各个比特位不是都为零;若是,对所述待释放内存块除所述n个比特位之外的其他比特位是否都为零进行预判断。
在一些例子中,所述比特位确定模块,还用于:从所述待释放内存块的最高位开始选取n个比特位;从所述待释放内存块的最低位开始选取n个比特位;或,在所述待释放内存块中随机选取n个比特位。
在一些例子中,所述元数据还包括所述待释放内存块对应的页表项的地址;所述装置还包括查找模块,用于:根据所述元数据维护目标查找树,所述目标查找树中每个节点对应一个内存块的元数据,节点对应的内存块的元数据中页表项的地址作为所述节点的唯一标识。
在一些例子中,所述释放模块,还用于在存储所述压缩数据之前,确定所述压缩数据的大小满足预设释放条件。
在一些例子中,所述装置应用于计算机设备的多个CPU的当前CPU;所述释放模块,还用于:由所述当前CPU对所述待释放内存块中存储的数据进行压缩;或,从其他CPU中选取目标CPU,创建用于对所述待释放内存块中存储的数据进行压缩的进程;其中,所述进程与所述目标CPU绑定,以使操作***调度所述目标CPU执行所述进程;其中,所述目标CPU的运行信息满足预设空闲条件,和/或,所述目标CPU与所述当前CPU之间的通信效率满足预设通信条件。
如图5所示,图5是本说明书根据一示例性实施例示出的一种内存恢复装置的框图,所述装置包括:获取模块51,用于:响应于内存恢复请求,获取需恢复内存块的元数据,所述元数据包括预设标记信息或压缩数据的存储位置信息;分配模块52,用于:从内存中分配目标内存块;恢复模块53,用于:在所述元数据包括所述预设标记信息的情况下,在页表数据中添加所述目标内存块的页表项信息,完成所述目标内存块的恢复;其中,所述预设标记信息用于表示所述需恢复内存块的各个比特位都为零。
在一些例子中,所述恢复模块还用于:在所述元数据中包括压缩数据的存储位置信息的情况下,根据所述存储位置信息获取所述压缩数据,将所述压缩数据解压并存储至所述目标内存块后,在页表数据中添加所述目标内存块的页表项信息。
在一些例子中,所述获取模块,还用于:根据所述内存恢复请求携带的虚拟地址,从页表数据中获取与所述虚拟地址对应的页表项的地址;从预设的目标查找树中查找记录所述页表项的地址的节点,利用查找出的节点获取所述需恢复内存块的元数据;其中,所述目标查找树中每个节点对应一个内存块的元数据,节点对应的内存块的元数据中页表项的地址作为所述节点的唯一标识。
在一些例子中,所述装置应用于计算机设备的多个CPU的当前CPU;所述恢复模块,还用于:由所述当前CPU对所述压缩数据进行解压;或,从其他CPU中选取目标CPU, 创建用于对所述压缩数据进行解压的进程;其中,所述进程与所述目标CPU绑定,以使操作***调度所述目标CPU执行所述进程;其中,所述目标CPU的运行信息满足预设空闲条件,和/或,所述目标CPU与所述当前CPU之间的通信效率满足预设通信条件。
相应的,本说明书实施例还提供了一种计算机可读存储介质,其上存储有计算机程序,所述计算机程序被处理器执行时实现前述第一方面所述方法实施例的步骤。
本说明书实施例还提供了一种计算机设备,包括存储器、处理器及存储在存储器上并可在处理器上运行的计算机程序,其中,所述处理器执行所述计算机程序时实现前述第一方面所述方法实施例的步骤。
上述装置中各个模块的功能和作用的实现过程具体详见上述内存释放/内存恢复方法中对应步骤的实现过程,在此不再赘述。
本说明书的实施例提供的技术方案可以包括以下有益效果:本说明书实施例中,对于内存释放方案,根据内存释放请求,确定待释放内存块后,先确定待释放内存块的各个比特位是否都为零;在确定该待释放内存块的各个比特位都为零的情况下,直接将所述待释放内存块对应的页表项信息清空,以及创建对应的元数据以记录所述待释放内存块被压缩,之后释放所述待释放内存块;由此可见,在释放时无需执行数据压缩操作,提升了内存释放的速度。并且,元数据中包括预设标记信息,从而在内存恢复时,通过预设标记信息,可以从内存中分配目标内存块,直接在页表数据中添加所述目标内存块的页表项信息,也无需执行数据解压操作,提升了内存恢复的速度。
相应的,本说明书实施例还提供了一种计算机程序产品,包括计算机程序,所述计算机程序被处理器执行时实现前述内存释放/内存恢复方法实施例的步骤。
相应的,本说明书实施例还提供了一种计算机设备,包括存储器、处理器及存储在存储器上并可在处理器上运行的计算机程序,其中,所述处理器执行所述程序时实现内存释放/内存恢复方法实施例的步骤。
相应的,本说明书实施例还提供了一种计算机可读存储介质,其上存储有计算机程序,所述计算机程序被处理器执行时实现内存释放/内存恢复方法实施例的步骤。
对于装置实施例而言,由于其基本对应于方法实施例,所以相关之处参见方法实施例的部分说明即可。以上所描述的装置实施例仅仅是示意性的,其中所述作为分离部件说明的模块可以是或者也可以不是物理上分开的,作为模块显示的部件可以是或者也可以不是物理模块,即可以位于一个地方,或者也可以分布到多个网络模块上。可以根据实际的需要选择其中的部分或者全部模块来实现本说明书方案的目的。本领域普通技术人员在不付出创造性劳动的情况下,即可以理解并实施。
上述实施例可以应用于一个或者多个计算机设备中,所述计算机设备是一种能够按照事先设定或存储的指令,自动进行数值计算和/或信息处理的设备,所述计算机设备的硬件包括但不限于微处理器、专用集成电路(Application Specific Integrated Circuit,ASIC)、现场可编程门阵列(Field-Programmable Gate Array,FPGA)、数字信号处理器(Digital Signal Processor,DSP)、嵌入式设备等。
所述计算机设备可以是任何一种可与用户进行人机交互的电子产品,例如,个人计算机、平板电脑、智能手机、个人数字助理(Personal Digital Assistant,PDA)、游戏机、交互式网络电视(Internet Protocol Television,IPTV)、智能式穿戴式设备等。
所述计算机设备还可以包括网络设备和/或用户设备。其中,所述网络设备包括,但不限于单个网络服务器、多个网络服务器组成的服务器组或基于云计算(Cloud Computing)的由大量主机或网络服务器构成的云。
所述计算机设备所处的网络包括但不限于互联网、广域网、城域网、局域网、虚拟专用网络(Virtual Private Network,VPN)等。
上述对本说明书特定实施例进行了描述。其它实施例在所附权利要求书的范围内。在一些情况下,在权利要求书中记载的动作或步骤可以按照不同于实施例中的顺序来执行并且仍然可以实现期望的结果。另外,在附图中描绘的过程不一定要求示出的特定顺序或者连续顺序才能实现期望的结果。在某些实施方式中,多任务处理和并行处理也是可以的或者可能是有利的。
上面各种方法的步骤划分,只是为了描述清楚,实现时可以合并为一个步骤或者对某些步骤进行拆分,分解为多个步骤,只要包括相同的逻辑关系,都在本专利的保护范围内;对算法中或者流程中添加无关紧要的修改或者引入无关紧要的设计,但不改变其算法和流程的核心设计都在该申请的保护范围内。
其中,“具体示例”、或“一些示例”等的描述意指结合所述实施例或示例描述的具体特征、结构、材料或者特点包含于本说明书的至少一个实施例或示例中。在本说明书中,对上述术语的示意性表述不一定指的是相同的实施例或示例。而且,描述的具体特征、结构、材料或者特点可以在任何的一个或多个实施例或示例中以合适的方式结合。
本领域技术人员在考虑说明书及实践这里申请的发明后,将容易想到本说明书的其它实施方案。本说明书旨在涵盖本说明书的任何变型、用途或者适应性变化,这些变型、用途或者适应性变化遵循本说明书的一般性原理并包括本说明书未公开的本技术领域中的公知常识或惯用技术手段。说明书和实施例仅被视为示例性的,本说明书的真正范围和精神由下面的权利要求指出。
应当理解的是,本说明书并不局限于上面已经描述并在附图中示出的精确结构,并且可以在不脱离其范围进行各种修改和改变。本说明书的范围仅由所附的权利要求来限制。
以上所述仅为本说明书的较佳实施例而已,并不用以限制本说明书,凡在本说明书的精神和原则之内,所做的任何修改、等同替换、改进等,均应包含在本说明书保护的范围之内。

Claims (14)

  1. 一种内存释放方法,所述内存包括多个内存块,所述方法包括:
    确定所述内存中的待释放内存块;
    确定所述待释放内存块的各个比特位是否都为零;
    若是,将所述待释放内存块对应的页表项信息清空,以及创建与所述待释放内存块对应的元数据后,释放所述待释放内存块;其中,所述元数据包括预设标记信息,所述预设标记信息用于表示所述待释放内存块的各个比特位都为零。
  2. 根据权利要求1所述的方法,所述方法还包括:
    若否,对所述待释放内存块中的数据进行压缩后,存储所述压缩数据,将所述待释放内存块对应的页表项信息清空,以及创建与所述待释放内存块对应的元数据后,释放所述待释放内存块;其中,所述元数据包括所述压缩数据的存储位置信息。
  3. 根据权利要求1或2所述的方法,所述确定所述待释放内存块的各个比特位是否都为零,包括:
    在未更改所述待释放内存块的读写权限的情况下,对所述待释放内存块各个字节是否都为零进行预判断;
    若预判断出所述待释放内存块的各个比特位都为零,将所述待释放内存块的读写权限更改为只读后,再判断所述待释放内存块的各个比特位是否都为零。
  4. 根据权利要求3所述的方法,所述对所述待释放内存块的各个比特位是否都为零进行预判断,包括:
    从所述待释放内存块中选取n个比特位,确定所述n个比特位是否都为零;其中,所述n为正整数;
    若否,确定所述待释放内存块的各个比特位不是都为零;
    若是,对所述待释放内存块除所述n个比特位之外的其他比特位是否都为零进行预判断。
  5. 根据权利要求1或2所述的方法,所述元数据还包括所述待释放内存块对应的页表项的地址;所述方法还包括:
    根据所述元数据维护目标查找树,所述目标查找树中每个节点对应一个内存块的元数据,节点对应的内存块的元数据中页表项的地址作为所述节点的唯一标识。
  6. 根据权利要求2所述的方法,所述方法应用于计算机设备的多个CPU的当前CPU;
    所述对所述待释放内存块中存储的数据进行压缩,包括:
    由所述当前CPU对所述待释放内存块中存储的数据进行压缩;或,
    从其他CPU中选取目标CPU,创建用于对所述待释放内存块中存储的数据进行压缩的进程;其中,所述进程与所述目标CPU绑定,以使操作***调度所述目标CPU执行所述进程;所述目标CPU的运行信息满足预设空闲条件,和/或,所述目标CPU与所述当前CPU之间的通信效率满足预设通信条件。
  7. 一种内存恢复方法,所述方法包括:
    响应于内存恢复请求,获取需恢复内存块的元数据;
    从内存中分配目标内存块;
    若所述元数据包括预设标记信息,在页表数据中添加所述目标内存块的页表项信息,完成所述目标内存块的恢复;其中,所述预设标记信息用于表示所述需恢复内存块的各个比特位都为零。
  8. 根据权利要求7所述的方法,所述方法还包括:
    若所述元数据中包括压缩数据的存储位置信息,根据所述存储位置信息获取所述压缩数据,将所述压缩数据解压并存储至所述目标内存块后,在页表数据中添加所述目标内存 块的页表项信息后,完成所述目标内存块的恢复。
  9. 根据权利要求7所述的方法,所述获取需恢复内存块的元数据,包括:
    根据所述内存恢复请求携带的虚拟地址,从页表数据中获取与所述虚拟地址对应的页表项的地址;
    从预设的目标查找树中查找记录所述页表项的地址的节点,利用查找出的节点获取所述需恢复内存块的元数据;其中,所述目标查找树中每个节点对应一个内存块的元数据,节点对应的内存块的元数据中页表项的地址作为所述节点的唯一标识。
  10. 根据权利要求8所述的方法,所述方法应用于计算机设备的多个CPU的当前CPU;
    所述将所述压缩数据解压,包括:
    由所述当前CPU对所述压缩数据进行解压;或,
    从其他CPU中选取目标CPU,创建用于对所述压缩数据进行解压的进程;其中,所述进程与所述目标CPU绑定,以使操作***调度所述目标CPU执行所述进程;所述目标CPU的运行信息满足预设空闲条件,和/或,所述目标CPU与所述当前CPU之间的通信效率满足预设通信条件。
  11. 一种内存释放装置,所述内存包括多个内存块,所述装置包括:
    内存块确定模块,用于:确定所述内存中的待释放内存块;
    比特位确定模块,用于:确定所述待释放内存块的各个比特位是否都为零;
    释放模块,用于:在确定所述待释放内存块的各个比特位都为零的情况下,将所述待释放内存块对应的页表项信息清空,以及创建与所述待释放内存块对应的元数据后,释放所述待释放内存块;其中,所述元数据包括预设标记信息,所述预设标记信息用于表示所述待释放内存块的各个比特位都为零。
  12. 一种内存恢复装置,所述装置包括:
    获取模块,用于:响应于内存恢复请求,获取需恢复内存块的元数据,所述元数据包括预设标记信息或压缩数据的存储位置信息;
    分配模块,用于:从内存中分配目标内存块;
    恢复模块,用于:在所述元数据包括所述预设标记信息的情况下,在页表数据中添加所述目标内存块的页表项信息,完成所述目标内存块的恢复;其中,所述预设标记信息用于表示所述需恢复内存块的各个比特位都为零。
  13. 一种计算机可读存储介质,其上存储有计算机程序,所述计算机程序被处理器执行时实现权利要求1至10任一所述方法的步骤。
  14. 一种计算机设备,包括存储器、处理器及存储在存储器上并可在处理器上运行的计算机程序,其中,所述处理器执行所述计算机程序时实现权利要求1至10任一所述方法的步骤。
PCT/CN2023/131110 2022-11-10 2023-11-10 内存释放、内存恢复方法、装置、计算机设备及存储介质 WO2024099448A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202211409950.0A CN115712500A (zh) 2022-11-10 2022-11-10 内存释放、内存恢复方法、装置、计算机设备及存储介质
CN202211409950.0 2022-11-10

Publications (1)

Publication Number Publication Date
WO2024099448A1 true WO2024099448A1 (zh) 2024-05-16

Family

ID=85232881

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2023/131110 WO2024099448A1 (zh) 2022-11-10 2023-11-10 内存释放、内存恢复方法、装置、计算机设备及存储介质

Country Status (2)

Country Link
CN (1) CN115712500A (zh)
WO (1) WO2024099448A1 (zh)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115712500A (zh) * 2022-11-10 2023-02-24 阿里云计算有限公司 内存释放、内存恢复方法、装置、计算机设备及存储介质
CN116107925B (zh) * 2023-04-10 2023-09-26 阿里云计算有限公司 数据存储单元处理方法

Citations (6)

Publication number Priority date Publication date Assignee Title
US20090276602A1 (en) * 2006-03-31 2009-11-05 Olivier Chedru Memory management system for reducing memory fragmentation
CN103052945A (zh) * 2010-08-06 2013-04-17 阿尔卡特朗讯 管理计算机存储器的方法、程序产品及数据存储设备
US20130205111A1 (en) * 2012-02-02 2013-08-08 Fujitsu Limited Virtual storage device, controller, and computer-readable recording medium having stored therein a control program
US20170004069A1 (en) * 2014-03-20 2017-01-05 Hewlett Packard Enterprise Development Lp Dynamic memory expansion by data compression
CN110023906A (zh) * 2017-10-13 2019-07-16 华为技术有限公司 一种压缩和解压处理器所占内存的方法及装置
CN115712500A (zh) * 2022-11-10 2023-02-24 阿里云计算有限公司 内存释放、内存恢复方法、装置、计算机设备及存储介质

Also Published As

Publication number Publication date
CN115712500A (zh) 2023-02-24


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23888133

Country of ref document: EP

Kind code of ref document: A1