CN116680083A - Memory processing method, device, equipment and storage medium - Google Patents

Memory processing method, device, equipment and storage medium

Info

Publication number
CN116680083A
CN116680083A
Authority
CN
China
Prior art keywords
memory
background application
target background
memory space
swap page
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310783685.0A
Other languages
Chinese (zh)
Inventor
江志国
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd filed Critical Vivo Mobile Communication Co Ltd
Priority to CN202310783685.0A
Publication of CN116680083A
Legal status: Pending

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00: Arrangements for program control, e.g. control units
    • G06F 9/06: Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46: Multiprogramming arrangements
    • G06F 9/50: Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F 9/5005: Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F 9/5011: Allocation of resources, e.g. of the central processing unit [CPU] to service a request, the resources being hardware resources other than CPUs, servers and terminals
    • G06F 9/5016: Allocation of resources, e.g. of the central processing unit [CPU] to service a request, the resources being hardware resources other than CPUs, servers and terminals, the resource being the memory
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00: Arrangements for program control, e.g. control units
    • G06F 9/06: Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46: Multiprogramming arrangements
    • G06F 9/48: Program initiating; Program switching, e.g. by interrupt
    • G06F 9/4806: Task transfer initiation or dispatching
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 10/00: Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Memory System Of A Hierarchy Structure (AREA)

Abstract

The application discloses a memory processing method, apparatus, device and storage medium, which belong to the technical field of computers. The memory processing method may include: in the case that a memory reclamation operation is performed by exiting a target background application, storing a swap page identifier corresponding to the target background application from a first memory space to a second memory space, where the swap page identifier corresponding to the target background application is used to indicate the position, in the first memory space, of the swap page corresponding to the target background application; and in the case that a swap page release condition is met, obtaining the swap page identifier corresponding to the target background application from the second memory space and performing a release operation on the swap page corresponding to the swap page identifier.

Description

Memory processing method, device, equipment and storage medium
Technical Field
The application belongs to the technical field of computers, and in particular relates to a memory processing method, a memory processing device, memory processing equipment and a storage medium.
Background
To meet users' growing demands for application functionality in electronic devices, the number of keep-alive applications running in the background of the operating system keeps increasing, which places higher requirements on the memory space of the operating system.
When too many background applications are open and large-memory applications such as games, cameras and instant messaging applications are launched one after another, the available memory space can shrink rapidly and trigger the closing of background applications. However, the processor cores in the electronic device are largely occupied by the memory traffic of the foreground application and by the memory reclamation threads of the operating system, so exiting the background applications takes longer, processor resources are preempted for a long time, and the foreground application stutters.
Disclosure of Invention
The embodiment of the application aims to provide a memory processing method, a memory processing device, memory processing equipment and a storage medium, which can reduce the time consumed in exiting a background application in an electronic device and prevent the operation of the foreground application from being blocked.
In a first aspect, an embodiment of the present application provides a memory processing method, including:
in the case that a memory reclamation operation is performed by exiting a target background application, storing a swap page identifier corresponding to the target background application from a first memory space to a second memory space, where the swap page identifier corresponding to the target background application is used to indicate the position of the swap page corresponding to the target background application in the first memory space;
and in the case that a swap page release condition is met, obtaining the swap page identifier corresponding to the target background application from the second memory space and performing a release operation on the swap page corresponding to the swap page identifier.
In a second aspect, an embodiment of the present application provides a memory processing apparatus, including:
a storage module, configured to store, in the case that a memory reclamation operation is performed by exiting a target background application, a swap page identifier corresponding to the target background application from a first memory space to a second memory space, where the swap page identifier corresponding to the target background application is used to indicate the position of the swap page corresponding to the target background application in the first memory space;
and a release module, configured to obtain, in the case that a swap page release condition is met, the swap page identifier corresponding to the target background application from the second memory space, and to perform a release operation on the swap page corresponding to the swap page identifier.
In a third aspect, an embodiment of the present application provides an electronic device, where the electronic device includes a processor, a memory, and a program or instructions stored in the memory and executable on the processor, and the program or instructions, when executed by the processor, implement the steps of the memory processing method according to the first aspect.
In a fourth aspect, an embodiment of the present application provides a readable storage medium, where a program or instructions are stored, and the program or instructions, when executed by a processor, implement the steps of the memory processing method according to the first aspect.
In a fifth aspect, an embodiment of the present application provides a chip, where the chip includes a processor and a display interface, the display interface is coupled to the processor, and the processor is configured to run a program or instructions to implement the steps of the memory processing method according to the first aspect.
In a sixth aspect, an embodiment of the present application provides a computer program product stored in a storage medium, where the program product is executable by at least one processor to implement the steps of the memory processing method according to the first aspect.
In the embodiments of the application, in the case that a memory reclamation operation is performed by exiting a target background application, the swap page identifier corresponding to the target background application is stored from the first memory space to the second memory space, where the swap page identifier is used to indicate the position of the corresponding swap page in the first memory space; and in the case that a swap page release condition is met, the swap page identifier corresponding to the target background application is obtained from the second memory space and a release operation is performed on the corresponding swap page. In this way, by changing where the swap page identifier corresponding to the target background application is stored, the identifier cannot be obtained from the first memory space during the exit of the target background application, so the corresponding swap page is not released at that time; instead, the identifier is obtained from the second memory space once the swap page release condition is met, and the corresponding swap page is released then. This reduces the time consumed in exiting the target background application and prevents the foreground application from stuttering.
Drawings
FIG. 1 is a schematic diagram of a data storage relationship;
FIG. 2 is a flowchart of a memory processing method according to an embodiment of the present application;
FIG. 3 is a schematic diagram of a global linked list and a cache relationship in a memory processing method according to an embodiment of the present application;
FIG. 4 is one of the schematic flowcharts of a memory processing method according to an embodiment of the present application;
FIG. 5 is a second schematic flowchart of a memory processing method according to an embodiment of the present application;
FIG. 6 is a schematic structural diagram of a memory processing apparatus according to an embodiment of the present application;
FIG. 7 is a schematic structural diagram of an electronic device according to an embodiment of the present application;
FIG. 8 is a schematic diagram of a hardware structure of an electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions of the embodiments of the present application will be clearly described below with reference to the drawings in the embodiments of the present application, and it is apparent that the described embodiments are some embodiments of the present application, but not all embodiments. All other embodiments, which are obtained by a person skilled in the art based on the embodiments of the present application, fall within the scope of protection of the present application.
The terms "first", "second" and the like in the description and claims are used to distinguish between similar objects and are not necessarily used to describe a particular order or sequence. It should be understood that the terms so used are interchangeable under appropriate circumstances, so that the embodiments of the application can be implemented in orders other than those illustrated or described herein. In addition, objects distinguished by "first", "second", etc. are generally of one type, and the number of objects is not limited; for example, the first object may be one or more than one. Furthermore, in the description and claims, "and/or" means at least one of the connected objects, and the character "/" generally indicates an "or" relationship between the associated objects.
When too many background applications are open in the operating system and large-memory applications such as games, cameras and instant messaging applications are launched one after another, problems may occur such as the background applications failing to exit quickly, processor resources being occupied for a long time, the anonymous page and file page memory occupied by the background applications not being released quickly, the foreground application stuttering, and the power consumption of the electronic device increasing.
The inventor has found that, taking the Linux system as an example, a memory page (page) is the basic unit of memory management, and each memory page is 4 KB in size. The physical page types occupied by an application mainly include anonymous pages, file pages and swap pages. An anonymous page is a memory page that is not associated with any file; it can be used for an application's process stacks, data segments, anonymous mappings and the like, and when it is reclaimed its data is swapped out and stored to a swap page. A file page is a memory page associated with a file; it can be used for an application's process code segments, file read/write operations, shared memory and the like, and its data is not stored to a swap page when it is released. A swap page may be a swapped-out (swapout) swap page, and swapout swap pages may include zram memory pages and zram ufs pages. A zram memory page is a unit memory block that, when an anonymous page occupied by an application is reclaimed from system memory, holds the page's data after it has been compressed and preferentially swapped into a newly allocated memory space; a zram ufs page is a unit memory block that is swapped out and stored to a UFS partition when the amount of stored zram memory pages reaches a certain threshold. The data held by a swapout swap page is discarded when that swap page is released.
As shown in fig. 1, the page table entries (PTEs) corresponding to the anonymous pages and file pages occupied by an application are stored in memory, and on release the associated anonymous page or file page can be found directly by traversing the page table entries mapped by the application's virtual address space (vma). The page table entry corresponding to a swap page occupied by the application, by contrast, does not point to a page stored in memory; it points to a swap page identifier swp_entry_t stored in the swap address space, where swp_entry_t indicates the position number of the corresponding swap page in the swap partition. In other words, swp_entry_t is the index used to find the corresponding swap page in the swap partition. On release, the corresponding swap page has to be looked up through swp_entry_t, which involves lock resource contention and block I/O in the swap mechanism, so the time cost of releasing a swap page is significantly higher than that of releasing anonymous pages and file pages.
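As a rough illustration of this lookup difference, the userspace sketch below treats swp_entry_t as nothing more than an opaque slot index into a swap-partition array; the swap_partition array, the PAGE_SIZE/SWAP_SLOTS constants and the lookup_swap_page() helper are assumptions for illustration only, and the real kernel path additionally goes through the swap cache, per-entry locks and block I/O.

```c
#include <stddef.h>
#include <stdint.h>

/* Illustrative model only: swp_entry_t reduced to an opaque slot index. */
typedef struct { uint64_t val; } swp_entry_t;

#define SWAP_SLOTS 4096          /* assumed size of the swap partition    */
#define PAGE_SIZE  4096          /* 4 KB memory pages, as described above */

/* Hypothetical stand-in for the swap partition: an array of 4 KB slots. */
static unsigned char swap_partition[SWAP_SLOTS][PAGE_SIZE];

/* Anonymous/file pages: the PTE maps the physical page directly,
 * so release needs no extra lookup. */
static inline void *page_from_pte(void *pte_mapped_page)
{
    return pte_mapped_page;
}

/* Swap pages: the PTE slot holds a swp_entry_t, which must first be
 * resolved against the swap partition (locking and block I/O omitted). */
static inline void *lookup_swap_page(swp_entry_t entry)
{
    if (entry.val >= SWAP_SLOTS)
        return NULL;
    return swap_partition[entry.val];
}
```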
Because the system memory reclamation thread keeps reclaiming anonymous pages and file pages during memory reclamation, and the data of the reclaimed anonymous pages is swapped out and stored in swapout swap pages, the number of anonymous pages and file pages occupied by a background application keeps decreasing while the number of swapout swap pages it occupies keeps increasing. Therefore, one of the main contributors to the time consumed in exiting a background application is releasing the swapout swap pages it occupies.
To solve the problems in the related art, the embodiments of the application provide a memory processing method in which, during the memory reclamation operation performed by exiting a background application, the swap page identifiers of the swap pages it occupies are cached in isolation and the release of the corresponding swap pages is deferred.
Based on this, the memory processing method provided by the embodiment of the present application is described in detail below with reference to fig. 2 to 6 through specific embodiments and application scenarios thereof.
First, a memory processing method according to an embodiment of the present application is described in detail with reference to fig. 2.
Fig. 2 is a flowchart of a memory processing method according to an embodiment of the present application.
As shown in fig. 2, the memory processing method provided by the embodiment of the present application may be applied to an electronic device, and based on this, the memory processing method may include the following steps:
step 210, storing the swap page identifier corresponding to the target background application from the first memory space to the second memory space under the condition that the target background application is exited to execute the memory reclamation operation; the exchange page identifier corresponding to the target background application is used for indicating the position of the exchange page corresponding to the target background application in the first memory space; step 220, under the condition that the swap page release condition is met, acquiring a swap page identifier corresponding to the target background application from the second memory space, and executing a release operation on the swap page corresponding to the swap page identifier.
In this way, by changing where the swap page identifier corresponding to the target background application is stored, the identifier cannot be obtained from the first memory space during the exit of the target background application, so the corresponding swap page is not released at that time; instead, the identifier is obtained from the second memory space once the swap page release condition is met, and the corresponding swap page is released then. This reduces the time consumed in exiting the target background application and prevents the foreground application from stuttering. A minimal sketch of the two steps is given below.
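Read as pseudocode, steps 210 and 220 amount to the interface sketched below; the function and type names (isolate_swap_entries, release_isolated_entries, swap_release_condition_met, isolation_store) are hypothetical and only mirror the wording of the two steps, not an actual implementation.

```c
#include <stdbool.h>

struct task;             /* the target background application being exited       */
struct isolation_store;  /* the "second memory space" holding swp_entry_t values */

/* Step 210: on exit of the target background application, move its swap
 * page identifiers from the first memory space (swap address space)
 * into the second memory space instead of releasing the swap pages.   */
void isolate_swap_entries(struct task *target, struct isolation_store *store);

/* Step 220: once the swap page release condition holds, walk the
 * isolated identifiers and release the swap pages they point to.      */
void release_isolated_entries(struct isolation_store *store);

/* Hypothetical release condition combining the two triggers described
 * later in the text (mounted cache count / free space of the first
 * memory space).                                                       */
bool swap_release_condition_met(const struct isolation_store *store);
```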
The above steps are described in detail below.
Referring first to step 210, memory reclamation in the embodiments of the application refers to reclaiming system memory in the electronic device, where the system memory may include the physical memory of the operating system.
The swap page may be a swapout swap page among the physical page types, and the swap page identifier may be the swp_entry_t corresponding to the swapout swap page, which is actually stored in the swap address space.
The first memory space in the embodiments of the application may be located in disk storage; specifically, the first memory space may be the swap partition, and the swap partition includes the swap address space, which is the structural space associated with the swap mechanism. The second memory space in the embodiments of the application may be the system memory.
In one or more embodiments, the second memory space in the embodiments of the application includes a global linked list. Based on this, step 210 may specifically include:
under the condition that the memory recycling operation is executed by exiting the target background application, storing a swap page identifier corresponding to the target background application from the first memory space to a global linked list in the second memory space;
based on this, the step 220 may specifically include:
and under the condition that the swap page release condition is met, acquiring a swap page identifier corresponding to the target background application based on the global linked list, and executing release operation on the swap page corresponding to the swap page identifier.
In an exemplary embodiment, when the memory reclamation operation is performed by exiting the target background application, the page table entry (PTE) corresponding to a swap page may be converted into the swap page identifier swp_entry_t, which indicates the position of the swap page in the first memory space. At this time, in order to isolate the swap page identifiers corresponding to the target background application, each swp_entry_t is mounted on the global linked list, so that it is not released immediately during the exit of the target background application but instead waits on the global linked list for deferred release. Meanwhile, the anonymous page and file page memory occupied by the target background application can be released first to relieve the pressure on the current system memory, ensuring that the target background application exits quickly and avoiding long-term contention with the foreground application for processor resources. A minimal sketch of this deferral is shown below.
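A minimal userspace sketch of this simple variant, in which identifiers are deferred one by one onto a global linked list; the deferred_entry node, the mutex and the malloc-based allocation are assumptions standing in for the kernel's own list and locking primitives, and the cache-structure refinement described next replaces this one-identifier-at-a-time insertion.

```c
#include <pthread.h>
#include <stdint.h>
#include <stdlib.h>

typedef struct { uint64_t val; } swp_entry_t;

/* One deferred swap page identifier waiting for delayed release. */
struct deferred_entry {
    swp_entry_t entry;
    struct deferred_entry *next;
};

/* Global linked list kept in the "second memory space" (system memory). */
static struct deferred_entry *global_list;
static pthread_mutex_t global_list_lock = PTHREAD_MUTEX_INITIALIZER;

/* Called from the exit path instead of releasing the swap page inline. */
static void defer_swap_entry(swp_entry_t entry)
{
    struct deferred_entry *node = malloc(sizeof(*node));
    if (!node)
        return;      /* in practice the exit path would fall back to inline release */
    node->entry = entry;

    pthread_mutex_lock(&global_list_lock);
    node->next  = global_list;    /* O(1) insertion keeps the exit path short */
    global_list = node;
    pthread_mutex_unlock(&global_list_lock);
}
```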
Moreover, adding only one swp_entry_t to the global linked list at a time would cause lock resource contention due to frequent access to the global linked list. To avoid this, the embodiments of the application introduce a buffer storage (cache) structure; and to avoid internal fragmentation, the cache structure may occupy an integer multiple of a memory page in system memory. Therefore, in one or more possible embodiments, step 210 may specifically include:
step 2101, storing a swap page identifier corresponding to the target background application to a buffer storage structure in the second memory space under the condition that the target background application is exited to execute the memory reclamation operation;
step 2102, mounting the buffer storage structure to a global linked list in the second memory space under the condition that the storage capacity of the buffer storage structure is greater than a first storage capacity threshold;
step 2103, continuing to perform the storing and mounting steps until the storing of the swap page identifiers corresponding to the target background application is completed;
where, once the swap page identifiers corresponding to the target background application have been stored, the global linked list includes N buffer storage structures, with N greater than 0.
For example, as shown in fig. 3, the swap page identifiers swp_entry_t corresponding to the target background application may be stored into the cache structure, and when the cache structure is full, that is, when no more swp_entry_t can be stored in it, the cache structure may be attached through a list node and mounted onto the global linked list.
At this time, it may be determined whether the storing of the swap page identifiers corresponding to the target background application is completed. If it is determined that the storing is completed, step 220 may be performed. Otherwise, since the global linked list may include a plurality of list nodes, each of which may be used to attach a cache structure, a plurality of cache structures can be attached in the order of the list nodes; so if it is determined that the storing is not completed, the above process of storing swp_entry_t into a cache structure and mounting the cache structure onto the global linked list when it becomes full continues to be performed, until the storing of the swap page identifiers corresponding to the target background application is completed.
It should be noted that, as shown in fig. 3, the cache structure may include a fixed memory area and an active memory area. The system memory for the fixed memory area is allocated when the cache structure is created, and the fixed memory area includes at least two first sub-memory areas, each of which is used to store one swap page identifier. The system memory occupied by the active memory area is allocated dynamically when needed, and the active memory area may include at least two active memory spaces, each active memory space including at least two second sub-memory areas, each of which is used to store one swap page identifier.
Thus, if the number of first sub-memory areas in the fixed memory area is M, the number of active memory spaces in the active memory area is P, and the number of second sub-memory areas included in each active memory space is X, then the maximum number of swp_entry_t that can be cached in one cache structure is M + P × X, where M, P and X are integers greater than or equal to 2. This maximum of M + P × X may be used as the first storage amount threshold in the embodiments of the application. A struct capturing this layout is sketched below.
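The layout and the M + P × X capacity can be captured in a small struct, as in the sketch below; the field names and the concrete values chosen for M, P and X are assumptions for illustration, and a real implementation would additionally size the structure to an integer multiple of a 4 KB page as the text requires.

```c
#include <stddef.h>
#include <stdint.h>

typedef struct { uint64_t val; } swp_entry_t;

#define M_FIXED_SLOTS      64   /* first sub-memory areas (M), reserved at creation  */
#define P_ACTIVE_SPACES     4   /* active memory spaces (P), applied for on demand   */
#define X_SLOTS_PER_SPACE 128   /* second sub-memory areas per active space (X)      */

struct active_space {
    swp_entry_t slots[X_SLOTS_PER_SPACE];
    size_t used;
};

struct swap_entry_cache {
    /* Fixed memory area: allocated together with the cache structure. */
    swp_entry_t fixed[M_FIXED_SLOTS];
    size_t fixed_used;

    /* Active memory area: spaces allocated lazily as the fixed area fills up. */
    struct active_space *active[P_ACTIVE_SPACES];
    size_t active_count;

    struct swap_entry_cache *next;  /* list linkage used when mounting onto the global list */
};

/* First storage amount threshold: the most identifiers one cache can hold. */
enum { CACHE_CAPACITY = M_FIXED_SLOTS + P_ACTIVE_SPACES * X_SLOTS_PER_SPACE };
```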
Further, based on the cache structure, in a possible example, the step 2101 may specifically include:
step 21011, in the case that the memory reclamation operation is performed by exiting the target background application, storing the swap page identifier to the fixed memory area of the buffer storage structure in the second memory space;
step 21012, applying for an active memory area in the case that the storage amount of the fixed memory area is greater than the second storage capacity threshold;
step 21013, continuing to store the swap page identifiers to the active memory area.
For example, as shown in fig. 3, in the case that the memory reclamation operation is performed by exiting the target background application, swp_entry_t may first be stored to the fixed memory area of the buffer storage structure. In the case that the storage amount of the fixed memory area is greater than the second storage capacity threshold, an active memory area may be applied for according to the number of swap page identifiers corresponding to the target background application, and the swap page identifiers may continue to be stored to the active memory area. Here, the second storage capacity threshold may specifically be the number of first sub-memory areas included in the fixed memory area.
In another possible example, the active memory area includes at least two active memory spaces, each active memory space includes at least two second sub-memory areas, and each second sub-memory area is used to store one swap page identifier; step 21013 may specifically include:
storing the swap page identification to a first active memory space of the at least two active memory spaces;
and storing the swap page identification to the next active memory space in the at least two active memory spaces in the case that the storage amount of the first active memory space is greater than the third storage amount threshold.
For example, swp_entry_t may be stored to the first active memory space of the at least two active memory spaces. Since the first active memory space may include at least two second sub-memory areas, if one of those second sub-memory areas does not yet store a swap page identifier, the swp_entry_t may be stored into that second sub-memory area; if the storage amount of the first active memory space is greater than the third storage amount threshold, that is, every second sub-memory area in it already stores a swap page identifier, the swp_entry_t may be stored to the first second sub-memory area in the next active memory space of the at least two active memory spaces. Here, the third storage amount threshold may be the number of second sub-memory areas in the first active memory space. A store routine along these lines is sketched below.
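Putting steps 21011 to 21013 together, a store routine might look like the sketch below. It continues the illustrative swap_entry_cache layout from the earlier sketch (so those definitions are assumed to be in scope), and the policy of reporting failure when the cache is completely full is likewise an assumption.

```c
#include <stdbool.h>
#include <stdlib.h>

/* Continues the swap_entry_cache sketch above. Returns false when the
 * cache has reached its M + P*X capacity and should be mounted onto
 * the global linked list so that a new cache can be started.          */
static bool cache_store(struct swap_entry_cache *c, swp_entry_t entry)
{
    /* Step 21011: fill the fixed memory area first. */
    if (c->fixed_used < M_FIXED_SLOTS) {
        c->fixed[c->fixed_used++] = entry;
        return true;
    }

    /* Step 21013: keep filling the current active memory space. */
    if (c->active_count > 0) {
        struct active_space *sp = c->active[c->active_count - 1];
        if (sp->used < X_SLOTS_PER_SPACE) {
            sp->slots[sp->used++] = entry;
            return true;
        }
    }

    /* Step 21012 / third threshold: apply for the next active memory space. */
    if (c->active_count < P_ACTIVE_SPACES) {
        struct active_space *sp = calloc(1, sizeof(*sp));
        if (!sp)
            return false;
        c->active[c->active_count++] = sp;
        sp->slots[sp->used++] = entry;
        return true;
    }

    return false;   /* capacity reached: mount this cache, allocate a fresh one */
}
```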
In addition, in one or more possible embodiments, as shown in fig. 4, when the target background application has finished exiting, the Nth cache structure of the N cache structures used this time may not yet be full; the swp_entry_t cached in it still needs to be mounted onto the global linked list. Based on this, the memory processing method may further include:
mounting the Nth buffer storage structure to the global linked list if the storage amount of the Nth buffer storage structure is less than or equal to the first storage amount threshold.
Then, referring to step 220, the swap page release condition in the embodiment of the present application includes at least one of the following:
the number of the buffer storage structures mounted on the global linked list is larger than a first threshold;
the amount of free memory in the first memory space is smaller than a second threshold and a buffer storage structure is mounted on the global linked list.
Based on this, this step 220 may include:
creating an exchange page release thread under the condition that the exchange page release condition is met;
and obtaining, through the swap page release thread, the swap page identifier corresponding to the target background application from the second memory space, and performing a release operation on the swap page corresponding to the swap page identifier.
For the first swap page release condition, as shown in fig. 5, after cache structures are mounted onto the global linked list, it is detected whether the number of cache structures mounted on the global linked list exceeds the specified first threshold. If it does, the creation of a swap page release thread, kernel thread A, is triggered; the swap page identifiers swp_entry_t corresponding to the target background application are obtained from the global linked list in the second memory space, and the swap pages corresponding to those swp_entry_t are then released.
For the second swap page release condition, as shown in fig. 5, when anonymous pages are reclaimed during system memory reclamation, if the current amount of free memory in the first memory space is smaller than the second threshold and a buffer storage structure is mounted on the global linked list, the creation of the swap page release thread kernel thread A is triggered; the swap page identifiers swp_entry_t corresponding to the target background application are obtained from the global linked list in the second memory space, and the swap pages corresponding to those swp_entry_t are then released. A sketch of this condition check is given below.
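The sketch below shows one way the two release conditions might be checked before spawning the release thread ("kernel thread A" in the text); the counters, the threshold values and the pthread-based thread creation are userspace assumptions standing in for the corresponding kernel mechanisms, and the thread body is only stubbed here (the drain loop is sketched further below).

```c
#include <pthread.h>
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical global state mirroring the two conditions in the text. */
static size_t mounted_cache_count;   /* cache structures currently on the global list */
static size_t free_swap_pages;       /* free amount of the first memory space          */

#define CACHE_COUNT_THRESHOLD 8      /* "first threshold"  (assumed value) */
#define FREE_SWAP_THRESHOLD   1024   /* "second threshold" (assumed value) */

static bool swap_release_condition(void)
{
    if (mounted_cache_count > CACHE_COUNT_THRESHOLD)
        return true;                                   /* condition 1 */
    if (free_swap_pages < FREE_SWAP_THRESHOLD && mounted_cache_count > 0)
        return true;                                   /* condition 2 */
    return false;
}

static void *release_thread_fn(void *arg)
{
    (void)arg;
    /* Drain loop sketched in the next code block. */
    return NULL;
}

static void maybe_create_release_thread(void)
{
    if (!swap_release_condition())
        return;
    pthread_t tid;
    if (pthread_create(&tid, NULL, release_thread_fn, NULL) == 0)
        pthread_detach(tid);         /* fire-and-forget, like a kernel thread */
}
```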
Furthermore, when the swap page identifiers include a target swap page identifier, step 220 may specifically include:
in the case that the swap page release condition is met, preferentially performing the release operation on the target swap page corresponding to the target swap page identifier;
where the process corresponding to the target swap page is at least one of the following: an empty thread, a caching process, a backup process, or a service process that has not run for more than a preset time length.
In this way, empty threads, caching processes, backup processes and service processes that have not run for more than a preset time length can be released preferentially, which relieves the pressure on the current system memory, ensures that the target background application exits quickly, and avoids long-term contention with the foreground application for processor resources.
It should be noted that releasing the swapout swap pages in the embodiments of the application refers to releasing the swap pages corresponding to the swp_entry_t mounted on the global linked list. Kernel thread A carries the task of releasing the swap pages corresponding to the swp_entry_t mounted on the global linked list, and in order to avoid the lock resource contention caused by kernel thread A frequently traversing the global linked list, kernel thread A also takes one cache structure from the global linked list at a time and releases, in sequence, the swap pages corresponding to the swp_entry_t cached in it. A sketch of such a drain loop follows.
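One way such a drain loop might look, continuing the earlier swap_entry_cache sketch (those definitions are assumed to be in scope); taking a whole cache structure per lock acquisition reflects the point made in the paragraph above, while free_swap_slot() is a hypothetical stand-in for the real swap page release path with its locking and block I/O.

```c
#include <pthread.h>
#include <stddef.h>
#include <stdlib.h>

/* Continues the earlier sketches: a global list of swap_entry_cache nodes. */
static struct swap_entry_cache *global_cache_list;
static pthread_mutex_t global_cache_lock = PTHREAD_MUTEX_INITIALIZER;

/* Hypothetical stand-in for resolving a swp_entry_t and freeing its swap page. */
static void free_swap_slot(swp_entry_t entry)
{
    (void)entry;
}

static void release_thread_body(void)
{
    for (;;) {
        /* Take one whole cache structure per lock acquisition to limit contention. */
        pthread_mutex_lock(&global_cache_lock);
        struct swap_entry_cache *c = global_cache_list;
        if (c)
            global_cache_list = c->next;
        pthread_mutex_unlock(&global_cache_lock);

        if (!c)
            break;                       /* global linked list drained */

        /* Release the identifiers cached in the fixed and active areas in order. */
        for (size_t i = 0; i < c->fixed_used; i++)
            free_swap_slot(c->fixed[i]);
        for (size_t i = 0; i < c->active_count; i++) {
            for (size_t j = 0; j < c->active[i]->used; j++)
                free_swap_slot(c->active[i]->slots[j]);
            free(c->active[i]);
        }
        free(c);
    }
}
```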
In addition, in one or more possible embodiments, the memory processing method may further include:
transferring, in the case that the memory reclamation operation is performed by exiting the target background application, the data of the anonymous pages corresponding to the target background application to the swap pages corresponding to the target background application, and determining the amount of free memory in the first memory space.
In another or more possible embodiments, the memory processing method may further include:
and under the condition that the memory reclaiming operation is executed by exiting the target background application, releasing the file page and the anonymous page corresponding to the target background application.
Here, this step may be performed before, after, or simultaneously with the step of mounting the swap page identifiers corresponding to the target background application onto the global linked list, which is not limited here.
In this way, in the exit flow of the background application, the swp_entry_t corresponding to the swap pages occupied by the background application are cached in isolation in the cache structure and, once the cache structure is full, mounted on the global linked list to wait for deferred release instead of being released immediately in the exit flow. This ensures that the background application can exit quickly and that the anonymous page and file page memory it occupies can be released, avoiding problems such as the foreground application stuttering and device power consumption increasing due to long-term contention with the foreground application for processor resources.
In summary, during the exit of the background application, the anonymous pages and file pages whose page table entries (PTEs) are mapped in the virtual address space vma in memory are released, while the swp_entry_t corresponding to the swapout swap pages the application occupies are cached in isolation in the cache structure and are not released immediately in the exit flow of the target background application, which ensures that the target background application can exit quickly. The cache structures are mounted onto the global linked list in sequence and wait for a suitable time to be released. This allows the target background application to exit quickly, and effectively addresses, in the multi-background-application scenario where launching a large-memory application triggers the closing of background applications, the problems that exiting a background application occupies CPU resources for a long time, that the anonymous page and file page memory it occupies cannot be released quickly, that the foreground application stutters, and that device power consumption increases.
The memory processing method provided by the embodiments of the application may be executed by a memory processing apparatus. In the embodiments of the application, the memory processing apparatus executing the memory processing method is taken as an example to describe the memory processing apparatus provided by the embodiments of the application.
Based on the same inventive concept, the application also provides a memory processing device. This is described in detail with reference to fig. 6.
Fig. 6 is a schematic structural diagram of a memory processing device according to an embodiment of the present application.
As shown in fig. 6, the memory processing apparatus 60 may be applied to an electronic device, and the memory processing apparatus 60 may specifically include:
the storage module 601 is configured to store, when the memory reclamation operation is performed by exiting the target background application, a swap page identifier corresponding to the target background application from the first memory space to the second memory space; the exchange page identifier corresponding to the target background application is used for indicating the position of the exchange page corresponding to the target background application in the first memory space;
and the release module 602 is configured to obtain, in the case that the swap page release condition is met, the swap page identifier corresponding to the target background application from the second memory space, and to perform a release operation on the swap page corresponding to the swap page identifier.
The memory processing apparatus 60 according to the embodiments of the application is described in detail below.
In one or more possible embodiments, the storage module 601 in the embodiment of the present application is specifically configured to store, in a case where the memory reclamation operation is performed by exiting the target background application, the swap page identifier corresponding to the target background application from the first memory space to the global linked list in the second memory space;
The release module 602 is specifically configured to, when the swap page release condition is satisfied, obtain, based on the global linked list, a swap page identifier corresponding to the target background application, and execute a release operation on a swap page corresponding to the swap page identifier.
In another one or more possible embodiments, the memory processing apparatus 60 in the embodiments of the application further includes a mounting module and a processing module; wherein
the storage module 601 is specifically configured to store, in a buffer storage structure in the second memory space, a swap page identifier corresponding to the target background application when the memory reclamation operation is performed by exiting the target background application;
the mounting module is used for mounting the buffer storage structure to the global linked list in the second memory space under the condition that the storage capacity of the buffer storage structure is larger than the first storage capacity threshold value;
the processing module is used for continuously executing the steps of storing and mounting until the storage of the exchange page identification corresponding to the target background application is completed;
under the condition that the exchange page identification corresponding to the target background application is stored, the global linked list comprises N buffer storage structures, and N is larger than 0.
In still another or more possible embodiments, the storage module in the embodiment of the present application may be specifically configured to store the swap page identifier to a fixed memory area of the buffer storage structure in the second memory space in the case of performing the memory reclamation operation by exiting the target background application;
apply for an active memory area in the case that the storage amount of the fixed memory area is greater than the second storage capacity threshold;
and continue to store the swap page identifiers to the active memory area.
In yet another or more possible embodiments, the fixed memory region includes at least two first sub-memory regions, each for storing a swap page identification.
In yet another one or more possible embodiments, in the case that the active memory area includes at least two active memory spaces, each active memory space includes at least two second sub-memory areas, and each second sub-memory area is used to store one swap page identifier, the storage module in the embodiments of the application may be specifically configured to store the swap page identifier to a first active memory space of the at least two active memory spaces;
and storing the swap page identification to the next active memory space in the at least two active memory spaces in the case that the storage amount of the first active memory space is greater than the third storage amount threshold.
In still another or more possible embodiments, the mounting module may be further configured to mount the nth buffer storage structure to the global linked list if the storage amount of the nth buffer storage structure is less than or equal to the first storage amount threshold.
In yet another or more possible embodiments, the swap page release condition in an embodiment of the present application includes at least one of:
the number of the buffer storage structures mounted on the global linked list is larger than a first threshold;
the free memory quantity of the first memory space is smaller than a second threshold value, and the buffer storage structure is mounted on the global linked list.
In still another one or more possible embodiments, the memory processing apparatus 60 in the embodiments of the application further includes a creation module; wherein
the creation module is used for creating an exchange page release thread under the condition that the exchange page release condition is met;
the release module 602 is specifically configured to obtain, by the swap page release thread, a swap page identifier corresponding to the target background application from the second memory space, and execute a release operation on a swap page corresponding to the swap page identifier.
In still another one or more possible embodiments, the processing module in the embodiments of the application may be further configured to, in the case that the memory reclamation operation is performed by exiting the target background application, transfer the data of the anonymous pages corresponding to the target background application to the swap pages corresponding to the target background application, and determine the amount of free memory in the first memory space.
The memory processing apparatus in the embodiments of the application may be an electronic device, or may be a component in an electronic device, such as an integrated circuit or a chip. The electronic device may be a terminal, or may be a device other than a terminal. By way of example, the electronic device may be a mobile phone, a tablet computer, a notebook computer, a palmtop computer, a vehicle-mounted electronic device, a mobile internet device (MID), an augmented reality (AR)/virtual reality (VR) device, a robot, a wearable device, an ultra-mobile personal computer (UMPC), a netbook or a personal digital assistant (PDA), and may also be a server, a network attached storage (NAS), a personal computer (PC), a television (TV), a teller machine, a self-service machine or the like; the embodiments of the application are not specifically limited in this respect.
The memory processing device in the embodiment of the present application may be a device having an operating system. The operating system may be an Android operating system, an IOS operating system, or other possible operating systems, and the embodiment of the present application is not limited specifically.
The memory processing apparatus provided by the embodiments of the application can implement each process implemented by the embodiments of the memory processing method shown in fig. 1 to fig. 5 and achieve the same technical effects, which are not described in detail here again to avoid repetition.
Based on this, in the memory processing apparatus provided by the embodiments of the application, in the case that the memory reclamation operation is performed by exiting the target background application, the swap page identifier corresponding to the target background application is stored from the first memory space to the second memory space, where the swap page identifier is used to indicate the position of the corresponding swap page in the first memory space; and in the case that the swap page release condition is met, the swap page identifier corresponding to the target background application is obtained from the second memory space and a release operation is performed on the corresponding swap page. In this way, by changing where the swap page identifier corresponding to the target background application is stored, the identifier cannot be obtained from the first memory space during the exit of the target background application, so the corresponding swap page is not released at that time; instead, the identifier is obtained from the second memory space once the swap page release condition is met, and the corresponding swap page is released then. This reduces the time consumed in exiting the target background application and prevents the foreground application from stuttering.
Optionally, as shown in fig. 7, the embodiment of the present application further provides an electronic device 70, which includes a processor 701 and a memory 702, where the memory 702 stores a program or an instruction that can be executed on the processor 701, and the program or the instruction implements each step of the above-mentioned memory processing method embodiment when executed by the processor 701, and the steps achieve the same technical effects, so that repetition is avoided, and no further description is given here.
The electronic device in the embodiment of the application includes the mobile electronic device and the non-mobile electronic device.
Fig. 8 is a schematic hardware structure of an electronic device according to an embodiment of the present application.
The electronic device 800 includes, but is not limited to: radio frequency unit 801, network module 802, audio output unit 803, input unit 804, sensor 805, display unit 806, user input unit 807, interface unit 808, memory 809, processor 810, and the like.
Those skilled in the art will appreciate that the electronic device 800 may also include a power source (e.g., a battery) for powering the various components, which may be logically connected to the processor 810 by a power management system to perform functions such as managing charge, discharge, and power consumption by the power management system. The electronic device structure shown in fig. 8 does not constitute a limitation of the electronic device, and the electronic device may include more or less components than shown, or may combine certain components, or may be arranged in different components, which are not described in detail herein.
In this embodiment of the present application, the memory 809 is configured to store, in a case where the memory reclamation operation is performed by exiting the target background application, the swap page identifier corresponding to the target background application from the first memory space to the second memory space; the swap page identifier corresponding to the target background application is used for indicating the position of the swap page corresponding to the target background application in the first memory space. The processor 810 may be further configured to, when the swap page release condition is met, obtain, from the second memory space, a swap page identifier corresponding to the target background application, and perform a release operation on a swap page corresponding to the swap page identifier.
The electronic device 800 is described in detail below:
In one or more possible embodiments, the memory 809 may be further configured to store, in the case that the memory reclamation operation is performed by exiting the target background application, the swap page identifier corresponding to the target background application from the first memory space to a global linked list in the second memory space;
and the processor 810 is further configured to, when the swap page release condition is met, obtain, based on the global linked list, the swap page identifier corresponding to the target background application, and perform a release operation on the swap page corresponding to the swap page identifier.
In yet another or more possible embodiments, the memory 809 is further configured to store, in a buffer storage structure in the second memory space, a swap page identifier corresponding to the target background application in a case where the memory reclamation operation is performed by exiting the target background application;
the processor 810 may also be configured to mount the buffer memory structure to the global linked list if the memory amount of the buffer memory structure is greater than a first memory amount threshold;
the processor 810 may be further configured to continue performing the storing and mounting steps until storing the swap page identifier corresponding to the target background application is completed; under the condition that the exchange page identification corresponding to the target background application is stored, the global linked list comprises N buffer storage structures, and N is larger than 0.
In another or more possible embodiments, the memory 809 in the embodiment of the present application may be specifically configured to store the swap page identifier to the fixed memory area of the buffer storage structure in the second memory space in the case of performing the memory reclamation operation by exiting the target background application; applying for the active memory area when the memory capacity of the fixed memory area is greater than the second memory capacity threshold; the swap page identification is continued to be stored to the active memory area.
In yet another or more possible embodiments, the fixed memory region includes at least two first sub-memory regions, each for storing a swap page identification.
In still another or more possible embodiments, the memory 809 in the embodiments of the present application may be specifically configured to store the swap page identifier to a first active memory space of the at least two active memory spaces when the active memory areas include at least two active memory spaces, each active memory space including at least two second sub-memory areas, each second sub-memory area being configured to store one swap page identifier;
and storing the swap page identification to the next active memory space in the at least two active memory spaces in the case that the storage amount of the first active memory space is greater than the third storage amount threshold.
In yet another or more possible embodiments, the processor 810 may be further configured to mount the nth buffer memory structure to the global linked list if the amount of memory of the nth buffer memory structure is less than or equal to the first memory threshold.
In yet another or more possible embodiments, the swap page release condition in an embodiment of the present application includes at least one of:
The number of the buffer storage structures mounted on the global linked list is larger than a first threshold;
the free memory quantity of the first memory space is smaller than a second threshold value, and the buffer storage structure is mounted on the global linked list.
In yet another or more possible embodiments, the processor 810 of an embodiment of the application may also be configured to create a swap page release thread if a swap page release condition is met;
and acquiring the swap page identifier corresponding to the target background application from the second memory space through a swap page release thread, and executing release operation on the swap page corresponding to the swap page identifier.
In still another or more possible embodiments, the processor 810 in an embodiment of the present application may be further configured to, in a case where the memory reclamation operation is performed by exiting the target background application, transfer data of an anonymous page corresponding to the target background application to a swap page corresponding to the target background application, and determine a free memory amount of the first memory space.
It should be appreciated that the input unit 804 may include a graphics processing unit (GPU) 8041 and a microphone 8042, where the graphics processor 8041 processes image data of still images or video obtained by an image capture device (such as a camera) in a video capture mode or an image capture mode. The display unit 806 may include a display panel, which may be configured in the form of a liquid crystal display, an organic light-emitting diode, or the like. The user input unit 807 includes at least one of a touch panel 8071 and other input devices 8072. The touch panel 8071, also referred to as a touch screen, may include two parts: a touch detection device and a touch controller. Other input devices 8072 may include, but are not limited to, a physical keyboard, function keys (such as volume control keys and switch keys), a trackball, a mouse and a joystick, which are not described in detail here.
The memory 809 may be used to store software programs and various data. The memory 809 may mainly include a first storage area for storing programs or instructions and a second storage area for storing data, where the first storage area may store an operating system, and application programs or instructions required for at least one function (such as a sound playing function and an image playing function), and the like. Furthermore, the memory 809 may include a volatile memory or a non-volatile memory, or the memory 809 may include both volatile and non-volatile memories. The non-volatile memory may be a read-only memory (ROM), a programmable ROM (PROM), an erasable PROM (EPROM), an electrically erasable PROM (EEPROM), or a flash memory. The volatile memory may be a random access memory (RAM), a static RAM (SRAM), a dynamic RAM (DRAM), a synchronous DRAM (SDRAM), a double data rate SDRAM (DDR SDRAM), an enhanced SDRAM (ESDRAM), a synchlink DRAM (SLDRAM), or a direct rambus RAM (DRRAM). The memory 809 in the embodiments of the application includes, but is not limited to, these and any other suitable types of memory.
The processor 810 may include one or more processing units; optionally, the processor 810 integrates an application processor and a modem processor, where the application processor mainly handles operations involving the operating system, the user interface, application programs and the like, and the modem processor mainly handles wireless communication signals, for example a baseband processor. It can be understood that the modem processor may also not be integrated into the processor 810.
The embodiment of the application also provides a readable storage medium, and the readable storage medium stores a program or an instruction, which when executed by a processor, implements each process of the memory processing method embodiment, and can achieve the same technical effects, so that repetition is avoided, and no further description is provided herein.
The processor is the processor in the electronic device of the above embodiments. The readable storage medium includes a computer-readable storage medium, such as a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
In addition, an embodiment of the present application further provides a chip. The chip comprises a processor and a display interface coupled to the processor, and the processor is configured to run programs or instructions to implement the processes of the above memory processing method embodiments and achieve the same technical effects; to avoid repetition, details are not described herein again.
It should be understood that the chip referred to in the embodiments of the present application may also be referred to as a system-on-chip, a chip system, or the like.
An embodiment of the present application provides a computer program product stored in a storage medium. The program product is executed by at least one processor to implement the processes of the above memory processing method embodiments and can achieve the same technical effects; to avoid repetition, details are not described herein again.
It should be noted that, in this document, the terms "comprises", "comprising", or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other like elements in the process, method, article, or apparatus that comprises that element.
Furthermore, it should be noted that the scope of the methods and apparatuses in the embodiments of the present application is not limited to performing the functions in the order shown or discussed; depending on the functions involved, the functions may also be performed in a substantially simultaneous manner or in the reverse order. For example, the described methods may be performed in an order different from that described, and various steps may be added, omitted, or combined. In addition, features described with reference to certain examples may be combined in other examples.
From the above description of the embodiments, it will be clear to those skilled in the art that the methods of the above embodiments may be implemented by means of software plus a necessary general-purpose hardware platform, or of course by means of hardware, although in many cases the former is the preferred implementation. Based on such an understanding, the technical solution of the present application may be embodied essentially, or in part, in the form of a computer software product stored on a storage medium (e.g., ROM/RAM, magnetic disk, optical disk) and comprising instructions for causing a terminal (which may be a mobile phone, a computer, a server, a network device, or the like) to perform the methods of the embodiments of the present application.
The embodiments of the present application have been described above with reference to the accompanying drawings, but the present application is not limited to the above specific embodiments, which are merely illustrative and not restrictive. In light of the teachings of the present application, those of ordinary skill in the art may devise many other forms without departing from the spirit of the present application and the scope of the claims, all of which fall within the protection of the present application.

Claims (15)

1. A memory processing method, comprising:
under the condition that a memory reclamation operation is performed by exiting a target background application, storing a swap page identifier corresponding to the target background application from a first memory space to a second memory space, wherein the swap page identifier corresponding to the target background application is used for indicating the position, in the first memory space, of a swap page corresponding to the target background application;
and under the condition that a swap page release condition is met, acquiring the swap page identifier corresponding to the target background application from the second memory space, and executing a release operation on the swap page corresponding to the swap page identifier.
2. The method according to claim 1, wherein storing the swap page identifier corresponding to the target background application from the first memory space to the second memory space in the case of performing the memory reclamation operation by exiting the target background application comprises:
under the condition that the memory reclamation operation is performed by exiting the target background application, storing the swap page identifier corresponding to the target background application from the first memory space to a global linked list in the second memory space;
and wherein, under the condition that the swap page release condition is met, the acquiring of the swap page identifier corresponding to the target background application from the second memory space and the executing of the release operation on the swap page corresponding to the swap page identifier comprise:
and under the condition that the swap page release condition is met, acquiring the swap page identifier corresponding to the target background application based on the global linked list, and executing a release operation on the swap page corresponding to the swap page identifier.
3. The method according to claim 2, wherein storing the swap page identifier corresponding to the target background application from the first memory space to a global linked list in the second memory space in the case of performing a memory reclamation operation by exiting the target background application, comprises:
under the condition that the memory reclamation operation is performed by exiting the target background application, storing the swap page identifier corresponding to the target background application into a buffer storage structure in the second memory space;
under the condition that the storage amount of the buffer storage structure is greater than a first storage amount threshold, mounting the buffer storage structure to a global linked list in the second memory space;
and continuing to execute the storing and mounting steps until the storing of the swap page identifier corresponding to the target background application is completed,
wherein, under the condition that the swap page identifier corresponding to the target background application has been stored, the global linked list comprises N buffer storage structures, N being greater than 0.
4. The method according to claim 3, wherein storing the swap page identifier corresponding to the target background application to the buffer storage structure in the second memory space in the case of performing the memory reclamation operation by exiting the target background application includes:
storing the swap page identifier to a fixed memory area of the buffer storage structure in the second memory space under the condition that the memory reclamation operation is performed by exiting the target background application;
applying for an active memory area when the memory capacity of the fixed memory area is greater than a second memory capacity threshold;
and continuing to store the swap page identifier to the active memory area.
5. The method of claim 4, wherein the fixed memory area comprises at least two first sub-memory areas, each first sub-memory area being used for storing one swap page identifier.
6. The method of claim 4, wherein the active memory area comprises at least two active memory spaces, each active memory space comprising at least two second sub-memory areas, each second sub-memory area for storing a swap page identifier;
the storing of the swap page identifier to the active memory area includes:
storing the swap page identifier to a first active memory space of the at least two active memory spaces; and
storing the swap page identifier to a next active memory space of the at least two active memory spaces under the condition that the storage amount of the first active memory space is greater than a third storage amount threshold.
7. The method according to claim 3, characterized in that the method further comprises:
and if the storage amount of the Nth buffer storage structure is smaller than or equal to the first storage amount threshold value, mounting the Nth buffer storage structure to the global linked list.
8. The method according to claim 3, wherein the swap page release condition comprises at least one of:
the number of the buffer storage structures mounted on the global linked list is larger than a first threshold;
the free memory amount of the first memory space is smaller than a second threshold, and a buffer storage structure is mounted on the global linked list.
9. The method of claim 8, wherein the method further comprises:
and under the condition that the memory reclamation operation is performed by exiting the target background application, transferring the data of the anonymous page corresponding to the target background application to the swap page corresponding to the target background application, and determining the free memory amount of the swap memory space.
10. The method according to claim 1, wherein, in the case that the swap page release condition is satisfied, acquiring, from the second memory space, a swap page identifier corresponding to the target background application, and executing a release operation on a swap page corresponding to the swap page identifier, includes:
creating a swap page release thread under the condition that the swap page release condition is met;
and acquiring, through the swap page release thread, the swap page identifier corresponding to the target background application from the second memory space, and executing a release operation on the swap page corresponding to the swap page identifier.
11. A memory processing apparatus, comprising:
a storage module, used for storing a swap page identifier corresponding to a target background application from a first memory space to a second memory space under the condition that a memory reclamation operation is performed by exiting the target background application, wherein the swap page identifier corresponding to the target background application is used for indicating the position, in the first memory space, of a swap page corresponding to the target background application; and
a release module, used for acquiring, under the condition that a swap page release condition is met, the swap page identifier corresponding to the target background application from the second memory space, and executing a release operation on the swap page corresponding to the swap page identifier.
12. The apparatus of claim 11, wherein the storage module is specifically configured to store, in a case where the memory reclamation operation is performed by exiting the target background application, the swap page identifier corresponding to the target background application from the first memory space to a global linked list in the second memory space;
the release module is specifically configured to, when the swap page release condition is satisfied, acquire, based on the global linked list, a swap page identifier corresponding to the target background application, and execute a release operation on a swap page corresponding to the swap page identifier.
13. The apparatus of claim 11, wherein the memory processing apparatus further comprises a mounting module and a processing module, wherein:
the storage module is specifically configured to store, in a buffer storage structure in the second memory space, a swap page identifier corresponding to the target background application when the memory reclamation operation is performed by exiting the target background application;
the mounting module is configured to mount the buffer storage structure to a global linked list in the second memory space when the storage capacity of the buffer storage structure is greater than a first storage capacity threshold;
the processing module is used for continuing to execute the storing and mounting steps until the storing of the swap page identifier corresponding to the target background application is completed,
wherein, under the condition that the swap page identifier corresponding to the target background application has been stored, the global linked list comprises N buffer storage structures, N being greater than 0.
14. An electronic device, comprising: a processor, a memory and a program or instruction stored on the memory and executable on the processor, the program or instruction when executed by the processor implementing the steps of the memory processing method as claimed in any one of claims 1 to 10.
15. A readable storage medium, wherein a program or an instruction is stored on the readable storage medium, and the program or instruction, when executed by a processor, implements the steps of the memory processing method according to any one of claims 1 to 10.
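For readers who want a concrete picture of the bookkeeping recited in claims 3 to 8 above, the following self-contained C sketch shows one possible arrangement: swap page identifiers are first written into a buffer storage structure consisting of a fixed memory area followed by dynamically allocated active memory spaces, the buffer is mounted onto a global linked list once its storage amount exceeds a threshold, and a release pass is triggered when too many buffers are mounted or the free memory amount drops below a threshold. Every name, constant, and threshold here is an assumption chosen for illustration; the claims do not prescribe this particular layout.

#include <stdlib.h>

#define FIXED_CAPACITY   64           /* assumed size of the fixed memory area     */
#define ACTIVE_CAPACITY  256          /* assumed size of one active memory space   */
#define MAX_BUFFERS      8            /* assumed first threshold: mounted buffers  */
#define MIN_FREE_BYTES   (64UL << 20) /* assumed second threshold: free memory     */

struct active_space {                 /* one active memory space (cf. claim 6) */
    unsigned long ids[ACTIVE_CAPACITY];
    size_t used;
    struct active_space *next;
};

struct buffer_struct {                /* buffer storage structure (cf. claims 3-4) */
    unsigned long fixed[FIXED_CAPACITY];   /* fixed memory area */
    size_t fixed_used;
    struct active_space *active;           /* most recently applied-for active space */
    size_t total;                          /* identifiers stored so far */
    struct buffer_struct *next;            /* link on the global linked list */
};

struct buffer_struct *global_list;    /* global linked list in the second memory space */
size_t mounted_buffers;

/* Store one swap page identifier: fill the fixed memory area first, then
 * apply for (allocate) active memory spaces as each one fills up. */
void buffer_store(struct buffer_struct *b, unsigned long id)
{
    if (b->fixed_used < FIXED_CAPACITY) {
        b->fixed[b->fixed_used++] = id;
    } else {
        if (!b->active || b->active->used == ACTIVE_CAPACITY) {
            struct active_space *as = calloc(1, sizeof(*as));
            as->next = b->active;     /* keep earlier active spaces reachable */
            b->active = as;
        }
        b->active->ids[b->active->used++] = id;
    }
    b->total++;
}

/* Mount a filled (or final, partially filled) buffer onto the global linked list. */
void mount_buffer(struct buffer_struct *b)
{
    b->next = global_list;
    global_list = b;
    mounted_buffers++;
}

/* Swap page release condition: too many mounted buffers, or free memory
 * below a threshold while at least one buffer is mounted (cf. claim 8). */
int release_condition_met(size_t free_bytes)
{
    return mounted_buffers > MAX_BUFFERS ||
           (free_bytes < MIN_FREE_BYTES && mounted_buffers > 0);
}

A real implementation would of course also need locking and a pass that walks the global list and frees the recorded swap pages, along the lines of the release-thread sketch shown earlier in the description.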
CN202310783685.0A 2023-06-28 2023-06-28 Memory processing method, device, equipment and storage medium Pending CN116680083A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310783685.0A CN116680083A (en) 2023-06-28 2023-06-28 Memory processing method, device, equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310783685.0A CN116680083A (en) 2023-06-28 2023-06-28 Memory processing method, device, equipment and storage medium

Publications (1)

Publication Number Publication Date
CN116680083A true CN116680083A (en) 2023-09-01

Family

ID=87790899

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310783685.0A Pending CN116680083A (en) 2023-06-28 2023-06-28 Memory processing method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN116680083A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination